How to Containerise the Model
The DAFNI platform does not execute code directly; it uses containers to set up and execute the code. In this section, we will containerise the examples we discussed in the previous section using Docker. We assume that Docker is installed on your machine and that you have a basic understanding of it. If you are new to Docker, refer to the Docker documentation for detailed guidance.
Example 1: Containerising the Python Model
In this example, we will use the same Python model as in the previous section to generate Fibonacci numbers.
The model takes the sequence length as input from environment variables and writes the generated sequence to an output file in the folder /data/outputs.
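As a reminder of what the model looks like, here is a minimal sketch of fibonacci.py. The environment variable names (SEQUENCE_LENGTH, SEQUENCE_F0, SEQUENCE_F1) and the output file name sequence.json match those used later in this section; the default values and the exact JSON layout are illustrative assumptions. The local outputs path is active here so the script can be tested outside Docker; swap the commented line when building the image.

```python
import json
import os
from pathlib import Path

# OUTPUT_FOLDER = Path("/data/outputs/")  # use while running in docker
OUTPUT_FOLDER = Path("./outputs/")  # use while running locally

# read parameters from environment variables (variable names as used
# later in this section; the default values here are assumptions)
length = int(os.getenv("SEQUENCE_LENGTH", "10"))
f0 = int(os.getenv("SEQUENCE_F0", "0"))
f1 = int(os.getenv("SEQUENCE_F1", "1"))

# generate the Fibonacci sequence iteratively
sequence = [f0, f1]
for _ in range(length - 2):
    sequence.append(sequence[-2] + sequence[-1])
sequence = sequence[:length]

# write the result as JSON to the outputs folder
OUTPUT_FOLDER.mkdir(parents=True, exist_ok=True)
(OUTPUT_FOLDER / "sequence.json").write_text(json.dumps({"sequence": sequence}, indent=2))
```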
Below is a Dockerfile to containerise the Python Fibonacci model:
# base image with Python installed
FROM python:3.9-slim
# set the working directory in the container
WORKDIR /src
# copy the current directory contents into the container
COPY . .
# create the outputs directory
RUN mkdir -p /data/outputs
# define the command to run the script
CMD ["python", "fibonacci.py"]
Save this Dockerfile in the same directory as fibonacci.py. The folder should look like:
.
├── Dockerfile
└── fibonacci.py
Also make sure to change the output file path to /data/outputs/, e.g.,
# OUTPUT_FOLDER = Path("./outputs/") # use while running locally
OUTPUT_FOLDER = Path("/data/outputs/") # use while running in docker
Run the following command to build the Docker image:
docker build -t fibonacci-py .
There are several ways to run the container and access the outputs. To run the model using default input variables and save the outputs in the local outputs folder:
docker run -v $(pwd)/outputs:/data/outputs fibonacci-py
This command will generate the sequence in the /data/outputs folder inside the Docker container and map it to the local folder ./outputs, where we can find the output file sequence.json.
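Once the container has finished, the result can be inspected directly from the host, for example with a few lines of Python (the exact structure of sequence.json depends on how your model writes it):

```python
import json
from pathlib import Path

# host-side path where the container's /data/outputs volume was mounted
output_file = Path("outputs") / "sequence.json"

if output_file.exists():
    data = json.loads(output_file.read_text())
    print(data)
else:
    print(f"{output_file} not found - run the container first")
```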
We can customise the sequence length, starting values, or other parameters by passing environment variables (SEQUENCE_LENGTH, SEQUENCE_F0, and SEQUENCE_F1) as needed.
docker run -v $(pwd)/outputs:/data/outputs -e SEQUENCE_LENGTH=30 -e SEQUENCE_F0=2 -e SEQUENCE_F1=3 fibonacci-py
On Linux, the -v $(pwd)/outputs:/data/outputs option of the docker run command maps the /data/outputs folder inside the container to the outputs folder in the current working directory ($(pwd)).
If working on Windows, replace $(pwd)/outputs with %cd%\outputs.
Example 2: Containerising the C++ Model
Below is a Dockerfile to containerise the C++ Fibonacci model from the previous section:
# use an official GCC image as the base image
FROM gcc:latest
# install required libraries (e.g., jsoncpp for JSON handling)
RUN apt-get update && apt-get install -y \
libjsoncpp-dev \
&& rm -rf /var/lib/apt/lists/*
# set the working directory in the container
WORKDIR /src
# copy the current directory contents into the container
COPY . .
# compile the C++ code
RUN g++ -std=c++17 -o fibonacci fibonacci.cpp -I/usr/include/jsoncpp -L/usr/lib/x86_64-linux-gnu -ljsoncpp
# create output directory
RUN mkdir -p /data/outputs
# define the command to run the compiled application
CMD ["./fibonacci"]
Save this Dockerfile in the same directory as fibonacci.cpp. The folder should look like:
.
├── Dockerfile
└── fibonacci.cpp
Also make sure to change the output file path to /data/outputs/, e.g.,
// const std::string OUTPUT_FOLDER = "./outputs/"; // use while running locally
const std::string OUTPUT_FOLDER = "/data/outputs/"; // use while running in docker
Run the following command to build the Docker image:
docker build -t fibonacci-cpp .
To run the model and save the outputs locally:
docker run -v $(pwd)/outputs:/data/outputs fibonacci-cpp
You can customise the sequence length and starting values:
docker run -v $(pwd)/outputs:/data/outputs -e SEQUENCE_LENGTH=30 -e SEQUENCE_F0=2 -e SEQUENCE_F1=3 fibonacci-cpp
After execution, the Fibonacci sequence will be created in the /data/outputs folder of the container and mapped to ./outputs on the local machine.
In the next section, we will demonstrate how to make these containerised models compatible with the DAFNI platform. This includes creating metadata files, setting up input and output interfaces, and deploying the models on the platform.