
Conda Environments

Before using Conda, the following two steps are required: Conda must be initialized, and a .condarc file must be created.

Do not initialize Conda on login

Make sure that the Conda environment is initialized within a job, not on the login nodes. Initializing and working with Conda on the login nodes creates additional load on the systems.

Initialize the Environment and Create a .condarc File

By default, Conda stores downloaded packages in the $HOME directory. If pkgs_dirs is not set, these packages will fill the $HOME quota, even though they are only needed temporarily and should not take up space in permanent directories. To change the default location to $WORK, use a text editor to create a file called .condarc at $HOME/.condarc and add the path to the alternative location, e.g.:

pkgs_dirs:
  - /mnt/system/work/$GROUP/$USER/conda/pkgs
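
The package cache directory can be created ahead of time and the setting verified from the shell. A minimal sketch, assuming $WORK expands to /mnt/system/work/$GROUP/$USER as used elsewhere on this page:

mkdir -p $WORK/conda/pkgs        # create the package cache directory on $WORK (assumes $WORK points at /mnt/system/work/$GROUP/$USER)
conda config --show pkgs_dirs    # confirm that Conda picks up the setting from .condarc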
In addition, many packages require adding a 'channel'. Common channels may be added before creating environments by editing the .condarc. For example, the bioconda and conda-forge channels may be added by appending the following lines to ~/.condarc:

channels:
  - bioconda
  - conda-forge
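
Equivalently, channels can be added from the command line; conda config writes to ~/.condarc by default. A short sketch (each --add places the channel at the top of the list, so the order below reproduces the file shown above):

conda config --add channels conda-forge   # added first, ends up below bioconda
conda config --add channels bioconda      # added last, ends up at the top of the list
conda config --show channels              # verify the resulting channel order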

Installing and Activating a Conda Environment

To install a Conda environment, specify a prefix: the path where the environment will be installed. Choose a descriptive name for the environment; Conda will create the directory, which should not already exist.

conda create -p $WORK/.conda/py-311-pytorch python=3.11
conda activate $WORK/.conda/py-311-pytorch
The first command creates the environment in your $WORK directory. The second command activates the environment.
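
Once the environment is active, packages can be installed into it as usual. A brief sketch; the package name below is only an example, not part of the required setup:

conda install numpy    # install an example package into the active environment
conda env list         # prefix-based environments are listed by their full path
python --version       # should report Python 3.11.x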

Running Conda with a batch script

#!/bin/bash
#SBATCH -J changeme                # Job name
#SBATCH --ntasks=1                 # Number of tasks
#SBATCH --cpus-per-task=8          # Number of CPU cores per task
#SBATCH --nodes=1                  # Ensure that all cores are on the same machine with nodes=1
#SBATCH --partition=2080-galvani   # Which partition will run your job
#SBATCH --time=0-00:05             # Allowed runtime in D-HH:MM
#SBATCH --gres=gpu:2               # (optional) Requesting type and number of GPUs
#SBATCH --mem=50G                  # Total memory pool for all cores (see also --mem-per-cpu); exceeding this number will cause your job to fail.
#SBATCH --output=/CHANGE/THIS/PATH/TO/WORK/myjob-%j.out       # File to which STDOUT will be written - make sure this is not on $HOME
#SBATCH --error=/CHANGE/THIS/PATH/TO/WORK/myjob-%j.err        # File to which STDERR will be written - make sure this is not on $HOME
#SBATCH --mail-type=ALL            # Type of email notification- BEGIN,END,FAIL,ALL
#SBATCH --mail-user=ENTER_YOUR_EMAIL   # Email to which notifications will be sent

source ~/.bashrc                               # make the conda command available inside the job
conda activate $WORK/.conda/py-311-pytorch     # activate the environment created above
mycode                                         # replace with the command that runs your program
conda deactivate                               # deactivate the environment when the job is done
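
Assuming the script above is saved as, for example, conda_job.sbatch (the filename is only an example), it can be submitted and monitored with the usual Slurm commands:

sbatch conda_job.sbatch    # submit the batch script; Slurm prints the job ID
squeue -u $USER            # check the status of your pending and running jobs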

If you need to use a specific package such as PyTorch or TensorFlow, please see the tutorials on using it with Conda.

Remove old environments

If you have already been using Conda environments and would like to switch to the recommended procedure above, clean out the remnants of the old Conda setup by doing the following.

cd ~
more .bashrc

If this file contains an initialization block from an old Conda setup, edit the file and delete this section:

# >>> conda initialize >>>
(stuff)
# <<< conda initialize <<<
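
Old environments and cached package files can also be removed to free up quota. A sketch, where the environment path is only an illustrative example:

conda env remove -p ~/old-conda-env    # remove an old environment installed under $HOME (example path)
conda clean --all                      # delete cached package files, tarballs, and index caches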

