Setup for Open Catalyst Project versatile force field

Setting up Python

This page outlines the setup required to use the versatile force field from LAMMPS, following the installation procedure published by the Open Catalyst Project.

Install the software on the machine that will be used for the calculation (e.g. if you plan to submit jobs to a calculation server, the installation should be done on that server).

Hint

Please note that the Open Catalyst Project repository is continuously updated, and as a result, the installation procedure described on this page may not always be up to date. If you encounter difficulties during installation or execution, consult the original installation procedure for the most current guidance.

  1. (Optional) Preparing CUDA

    To utilize GPU for the calculation, it is essential to install CUDA Toolkit.

    You can download CUDA Toolkit 11.6 from the NVIDIA download page and proceed with the installation.

    If the NVIDIA driver is not already installed, please ensure to install it as well.
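
    To check that the driver and toolkit are visible before proceeding, you can run commands such as the following (this assumes the standard NVIDIA command-line tools are on your PATH).

    nvidia-smi       # shows the driver version and the GPUs it can see
    nvcc --version   # shows the installed CUDA Toolkit version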

  2. Preparing conda environment

    Conda is used as the Python environment. If you are not already using conda, it is recommended to install Miniconda, which provides the minimum necessary environment.

    Download the Python 3.9 installer from the Miniconda page and proceed with its installation.

    If you are using Windows and installed Miniconda without altering the PATH variable, launch Anaconda Prompt from the Start menu and use it for the subsequent steps.

    If you need to use a proxy for internet connection, make sure to adjust the environment variable settings accordingly.

    Example: Windows, without authentication
    set HTTP_PROXY=http://host:port
    set HTTPS_PROXY=http://host:port
    
    Example: Linux, with authentication
    export HTTP_PROXY=http://user:pass@host:port
    export HTTPS_PROXY=http://user:pass@host:port
    
  3. Preparing ocp-models repository

    If the git command is available, run the following command:

    git clone https://github.com/Open-Catalyst-Project/ocp.git
    

    Otherwise, download and extract https://github.com/Open-Catalyst-Project/ocp/archive/refs/heads/main.zip .
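
    If git is not available, the same archive can also be fetched and extracted from the command line, for example as follows (this assumes curl and unzip are installed).

    curl -L -o ocp-main.zip https://github.com/Open-Catalyst-Project/ocp/archive/refs/heads/main.zip
    unzip ocp-main.zip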

  4. Installation

    Navigate to the repository folder and start by installing the packages required for the installation.

    conda install mamba conda-merge -n base -c conda-forge
    

    Next, export the list of required packages for creating the ocp-models virtual environment to a file named env.yml .

    Use CPU for calculation (do not use GPU)
     conda-merge env.common.yml env.cpu.yml > env.yml
    
    Use GPU for calculation
     conda-merge env.common.yml env.gpu.yml > env.yml
    

    Depending on your environment, open the env.yml and make modifications if needed.

    On Windows, edit the following 2 lines, as some of the specified package versions may not be available.

    Before
    - pyg=2.2.0
    - pytorch=1.13.1
    
    Use CPU for calculation
    - pyg=*=*cpu*
    - pytorch=1.12
    
    Use GPU for calculation
    - pyg
    - pytorch=1.12
    

    If you intend to use CPU for calculations, add the following 4 lines to the dependencies: list to explicitly specify the CPU versions of the related libraries (add them next to the - pytorch= line to keep things clear).

    - pytorch-cluster=*=*cpu*
    - pytorch-scatter=*=*cpu*
    - pytorch-sparse=*=*cpu*
    - pytorch-spline-conv=*=*cpu*
    

    Then, proceed to create the virtual environment.

    mamba env create -f env.yml
    

    Please be aware that this process may take some time as it involves downloading and installing packages.

    Once the process is successful, activate the ‘ocp-models’ virtual environment, and proceed to install the repository content as a package.

    conda activate ocp-models
    pip install -e .
    

    Hint

    If you encounter the following error, it can often be resolved by adjusting the package version specified in the env.yml.

    Could not solve for environment specs
    Encountered problems while solving:
      - nothing provides requested (package name) (version)
    

    To find available package versions, please run the following command.

    mamba search -c pytorch -c nvidia -c pyg -c conda-forge -c defaults (package name)
    

    After you’ve updated the package version in the env.yml, try creating the environment again.

    mamba env update -f env.yml
    

    Please note that creating the environment with a different package version may lead to issues, so it’s essential to thoroughly test its functionality.
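
    As a quick sanity check of the new environment, you can verify, for example, that PyTorch and PyTorch Geometric import correctly and (for a GPU setup) that CUDA is detected. The one-liner below is only an illustration.

    conda activate ocp-models
    python -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__, torch.cuda.is_available())"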

    Hint

    To delete the ocp-models virtual environment created here, execute the following command:

    conda deactivate
    mamba remove -n ocp-models --all
    

    There is no need to download the pretrained models (pt files) separately, as they are included in the NanoLabo Tool.

Configuring NanoLabo

  • To run locally (on the machine running NanoLabo)

    Set the path to the Python executable in Properties ‣ Python from the main menu icon located in the upper left corner of the screen (or the gear icon button in the Force Field setting screen).

    It is located at (conda installation destination)\envs\ocp-models\python.exe on Windows, and (conda installation destination)/envs/ocp-models/bin/python on Linux/macOS.
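
    If you are unsure of the conda installation destination, the following command lists each conda environment together with its full path, including ocp-models.

    conda env list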

  • To run remotely (on calculation server etc.)

    If Conda is installed in either ~/anaconda3 or ~/miniconda3, the LD_LIBRARY_PATH is updated automatically, so no additional configuration is required in this case.

    If Conda is installed in a different location, click the main menu icon located in the upper left corner, open Network ‣ SSH server, and then add the LD_LIBRARY_PATH setting to your job script.

    export LD_LIBRARY_PATH=(conda installation destination)/envs/ocp-models/lib:$LD_LIBRARY_PATH
    

Using LAMMPS directly

This guidance is for running LAMMPS in standalone mode, not through NanoLabo.

Use the executable file lammps_oc20 provided in the NanoLabo Tool. Note that it does not support MPI parallel execution or virial stress calculation (NPT/NPH ensemble, cell optimization).

On Linux/macOS, it is necessary to set the environment variable LD_LIBRARY_PATH . This ensures that the program can access the Python dynamic library when executed.

$ export LD_LIBRARY_PATH=(conda installation destination)/envs/ocp-models/lib:$LD_LIBRARY_PATH

Additionally, on Linux, you should set the environment variable OPAL_PREFIX .

If installed in the default location
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi

As LAMMPS calls oc20_driver.py during its operation, ensure that the oc20driver folder in the NanoLabo Tool installation directory is added to the Python module search path. You can achieve this by adding it to the PYTHONPATH environment variable, for example.

Linux example
$ export PYTHONPATH=(NanoLabo Tool installation destination)/oc20driver:$PYTHONPATH
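
As a reference, a full sequence of environment settings followed by a standalone run on Linux might look like the following (the input file name in.lammps is only an example; invoke lammps_oc20 with its full path or from the folder where it is installed).

$ export LD_LIBRARY_PATH=(conda installation destination)/envs/ocp-models/lib:$LD_LIBRARY_PATH
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi
$ export PYTHONPATH=(NanoLabo Tool installation destination)/oc20driver:$PYTHONPATH
$ ./lammps_oc20 -in in.lammps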

In the LAMMPS input file, set the pair_style as follows.

Use CPU for calculation (do not use GPU)
pair_style oc20
pair_coeff * * <model> <element1 element2 ...>
Use GPU for calculation
pair_style oc20/gpu
pair_coeff * * <model> <element1 element2 ...>

Parameters

model

Specify the graph neural network model to be used.
Specify one of DimeNet++, GemNet-dT_OC20, or GemNet-dT_OC22.
GemNet-dT_OC20 is used if GemNet-dT is specified.

element

List the element names in the same order as the atom types used in LAMMPS.
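
For example, if LAMMPS atom types 1, 2, and 3 correspond to Pt, O, and H (the element list here is purely illustrative), the CPU settings would be as follows.

pair_style oc20
pair_coeff * * GemNet-dT_OC20 Pt O H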