Using Docker Containers with Podman on ERISXdl

ERISXdl Containers

The ERISXdl container runtime is provided by a tool called podman. The podman program provides access to a significant portion of the Docker container API without requiring privilege escalation or admin rights to run podman commands. Researchers can pull and use containers built and distributed by their colleagues or by container registries such as DockerHub, for the purpose of running GPU-based analysis on the ERISXdl GPU nodes.

The GPU nodes do not have access to the internet, so they cannot run code that requires internet access. Researchers will need to prepare and update their containers and code before jobs are submitted to the GPU nodes for analysis. Computational jobs should not be run on the login nodes; they should be submitted through the SLURM scheduler. For more information on SLURM and using containers in submitted jobs, see the Using SLURM Scheduler article.

Containers prepared for analysis need to be pushed to the Harbor registry service. Each researcher is provided their own slice of the Harbor registry service, which Harbor calls a ‘project.’ A Harbor project's name is the researcher's user-id (the same as their ERIS cluster login), and each project has an initial 50GB limit of container storage that can be expanded upon request. To request a Harbor account to use containers, or to request more Harbor storage, email for support.

All ERISXdl jobs also have access to /PHShome, /data, and /apps folders for direct access to research data and to the ERIS application modules tree. 
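For illustration, these shared directories can be bind-mounted into a container with podman's -v flag. This is a minimal sketch: the image reference is a hypothetical placeholder, and the command is echoed rather than executed so it can be inspected without a podman installation.

```shell
#!/bin/sh
# Hypothetical image reference in a user's Harbor project.
IMAGE="harbor.example.org/abc123/my-analysis:latest"

# Bind-mount the cluster's shared directories so containerized code can
# reach /PHShome, /data (research data), and /apps (ERIS modules tree).
CMD="podman run --rm -v /PHShome:/PHShome -v /data:/data -v /apps:/apps $IMAGE"

# Echoed for inspection; drop the echo to actually run the container.
echo "$CMD"
```

The same -v flags work for any podman run invocation, including those used in scripts submitted to the scheduler.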

Containers Provided by ERIS HPC Team

The containers we provide can be found in the Harbor projects called:

  • : graphical containers like JupyterHub
  • : several curated NVIDIA NVCR containers like TensorFlow and CUDA

During the pilot phase the containers we provide will be minimal. In future phases of the ERISXdl cluster, we will provide more containers to support GPU-based workflows in many of the programming languages and frameworks that we support for CPU-based workflows on the other ERIS clusters. Requests for graphical containers running sessions for Matlab, RStudio, and Freesurfer have already been noted and are being discussed by the HPC team.

Presently the ERISXdl cluster provides access to a JupyterHub container and JupyterHub job-wrapper which provides private session credentials and a custom URL for each JupyterHub job.

The ERIS HPC team has also purchased subscriptions to NVIDIA's Container Registry in order to provide private access to NVIDIA containers. These images are not authorized for distribution outside of the MGB network, as they contain proprietary software belonging to NVIDIA. Any attempt to copy these images could result in termination of services for the broader MGB research groups that rely on them for the ERISXdl cluster.

How to Manage Containers: Examples

Example 1: Pulling container images from outside registries

Users can pull container images from registries outside of ERISXdl/Harbor, such as DockerHub. Logging in may be necessary for some registries and may require an account with that registry. Once logged in, you will be able to pull a container image from the registry, tag the image as your own copy, and push that copy to your Harbor project. To view all the images you currently have available in your local storage, run the podman images command. This does not reflect the container images you may have in your Harbor project.

For example, the steps below show how an alpine Linux container would be pulled from DockerHub and stored in the hypothetical 'abc123' username's Harbor project.

  1. Login to the registry/registries you are pulling from and pushing to

    Note: your login credentials for the ERISXdl Harbor registry should be the same as your cluster credentials.

    $ podman login

    Username: abc123
    Password: *************
    Login Succeeded!

    $ podman login

    Username: abc123
    Password: *************
    Login Succeeded!

  2. Search for a container

    $ podman search

    INDEX       NAME            DESCRIPTION                                       STARS   OFFICIAL   AUTOMATED
                                A minimal Docker image based on Alpine Linux...   7670    [OK]
  3. Pull the container image

    $ podman pull

    Trying to pull
    Getting image source signatures
    Copying blob 5843afab3874 done
    Copying config d4ff818577 done
    Writing manifest to image destination
    Storing signatures
    $ podman images

    REPOSITORY                  TAG        IMAGE ID       CREATED        SIZE
                                latest     d4ff818577bc   4 weeks ago    5.87 MB
  4. Tag the container image*

    $ podman images

    REPOSITORY                  TAG         IMAGE ID       CREATED        SIZE
                                latest      d4ff818577bc   4 weeks ago    5.87 MB
                                demo-copy   d4ff818577bc   4 weeks ago    5.87 MB

    For the alpine example, we are tagging the alpine image with demo-copy

    $ podman tag d4ff818577bc

    *Once a container has been pulled, you must tag the image so that podman knows the registry location to which you intend to push the container. While the image ID (found in the podman images output) and registry URL must be correct, the tag itself can be anything. Standard convention is to tag the latest version of an image with latest. Tagging your image can also be a helpful versioning and organization method, although it's not necessary to use it as such.

  5. Push the container image 

    $ podman push

    Once it is successfully pushed to your Harbor project, you can now pull your copy to your podman runtime at any time, as well as access it in scripts submitted to the job scheduler.

    Optional: to confirm that it was pushed successfully, remove the locally stored image (this will not affect your Harbor project) and pull it again.

    $ podman rmi -f d4ff818577bc

    $ podman pull
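The steps above can be combined into a short pull-tag-push script. The Harbor host name below is a hypothetical placeholder (use the actual ERISXdl Harbor address), 'abc123' is the hypothetical project from the example, and the commands are echoed rather than executed so the sketch can be read without a podman installation.

```shell
#!/bin/sh
# Sketch of the pull -> tag -> push workflow from the steps above.
HARBOR="harbor.example.org"      # hypothetical Harbor registry host
PROJECT="abc123"                 # your Harbor project (same as your user-id)
SRC="docker.io/library/alpine:latest"
DEST="$HARBOR/$PROJECT/alpine:demo-copy"

# Drop the leading 'echo' on each line to execute for real
# (after logging in with 'podman login' as shown above).
echo podman pull "$SRC"
echo podman tag "$SRC" "$DEST"
echo podman push "$DEST"
```

Tagging with the full registry/project/name:tag reference is what tells podman where the subsequent push should go.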

Example 2: Pulling provided containers from Harbor

Once in full production, ERISXdl users will be able to choose from several curated, pre-built containers provided through Harbor. In the following example, the hypothetical ‘abc123’ username pulls the public CUDA image and stores a copy of it in their Harbor project.

  1.  Login to Harbor

    Note: your login credentials for the ERISXdl Harbor registry should be the same as your cluster credentials.
    $ podman login

    Username: abc123
    Password: *************
    Login Succeeded!
  2.  Pull the container image from Harbor

    Note: depending on the size of the container, this step may take several minutes

    $ podman pull
    $ podman images

    REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
                                latest   979cd1f9e2c8   2 weeks ago   4.24 GB
  3.  Tag the container

    $ podman tag 979cd1f9e2c8
    $ podman images

    REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
                                latest   979cd1f9e2c8   2 weeks ago   4.24 GB
                                latest   979cd1f9e2c8   2 weeks ago   4.24 GB
  4.  Push the container

    $ podman push

Example 3: Running and customizing containers

One of the key features of using containers is that the user who runs the container has root permissions inside the running image. This means users can run package managers and make system changes freely within their container. To save changes made to a container, you will need to run the container image, make modifications, and then commit those changes with podman before pushing the latest version to your Harbor project.

Note: some containers have extra security layers that prevent users from making certain changes even with root permissions. This may prevent users from using package managers and installing applications within the container.

In the following example, the hypothetical ‘abc123’ username runs and updates their copy of the CUDA image and then stores this updated image in their Harbor project.

  1.  Pull the container from Harbor
    $ podman pull
    $ podman images

    REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
                                latest   979cd1f9e2c8   2 weeks ago   4.24 GB
  2.  Run the container and make any changes in the container, like installing additional packages *

    $ podman run -it 979cd1f9e2c8 /bin/bash

    ## NOTE : once you run the container, you will have root privileges within the container's filesystem
    ## In this example, we install the OpenGL runtime library (the libgl1 package) using the package manager

    root@54116e44f656:/# apt-get update
    root@54116e44f656:/# apt-get install -y libgl1
    root@54116e44f656:/# exit

    * Container images can be run interactively by using the podman run command. Users cannot run computational jobs on the ERISXdl login nodes, and should only run containers on the login nodes when making modifications.

  3.  Commit the changes made as a new container image

    $ podman ps -a

    CONTAINER ID  IMAGE      COMMAND    CREATED             STATUS                       PORTS  NAMES
    58b3f6a7ede2             /bin/bash  About a minute ago  Exited (130) 10 seconds ago
    $ podman commit 58b3f6a7ede2 
  4.  Push the modified container image to Harbor

    $ podman images

    REPOSITORY                  TAG           IMAGE ID       CREATED          SIZE
                                with-opengl   a7932ec48e13   37 seconds ago   4.27 GB
                                latest        979cd1f9e2c8   2 weeks ago      4.24 GB
    $ podman push
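The run-modify-commit-push cycle above can likewise be sketched end to end. The Harbor host, project, and image names are hypothetical placeholders, and the container ID would come from the podman ps -a output as shown in step 3; commands are echoed rather than executed so the sketch stands on its own.

```shell
#!/bin/sh
# Sketch of the run -> modify -> commit -> push cycle from Example 3.
HARBOR="harbor.example.org"              # hypothetical registry host
PROJECT="abc123"                         # your Harbor project
BASE="$HARBOR/$PROJECT/cuda:latest"      # image pulled in step 1
NEW="$HARBOR/$PROJECT/cuda:with-opengl"  # tag for the committed image

echo podman run -it "$BASE" /bin/bash    # make changes interactively, then exit
echo podman ps -a                        # find the exited container's ID
echo podman commit CONTAINER_ID "$NEW"   # substitute the real ID from 'podman ps -a'
echo podman push "$NEW"
```

Committing creates a new local image from the container's changed filesystem; the push then stores that new image in your Harbor project alongside the unmodified base.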

Go to KB0038877 in the IS Service Desk

Related articles