Setup for Open Catalyst Project versatile force field¶
Setting Python¶
This page outlines the setup required to use the versatile force field from LAMMPS, following the installation procedure published by the Open Catalyst Project.
Install the software on the machine that will be used for the calculation (e.g. if you plan to submit the job to a calculation server, the installation should be done on that server).
Hint
Please note that the Open Catalyst Project repository is continuously updated, and as a result, the installation procedure described on this page may not always be up-to-date. If you encounter difficulties during installation or execution, it is advisable to consult the original installation procedure for the most current guidance.
(Optional) Preparing CUDA
To use a GPU for the calculation, you must install the CUDA Toolkit.
You can download CUDA Toolkit 11.6 from the NVIDIA download page and proceed with the installation.
If the NVIDIA driver is not already installed, be sure to install it as well.
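After installation, you can quickly confirm that the driver and toolkit are visible before proceeding; nvcc is only present if the full toolkit was installed.
nvidia-smi
nvcc --version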
Preparing conda environment
Conda is used as the Python environment. If you are not already using conda, it is recommended to install Miniconda, which provides the minimum necessary components.
Download the Python 3.9 installer from the Miniconda page and proceed with its installation.
If you are using Windows and install Miniconda without altering the PATH variable, launch Anaconda Prompt from the Start menu and use it for the subsequent steps. If you need to use a proxy for the internet connection, make sure to adjust the environment variable settings accordingly.
On Windows:
set HTTP_PROXY=http://host:port
set HTTPS_PROXY=http://host:port
On Linux/macOS:
export HTTP_PROXY=http://user:pass@host:port
export HTTPS_PROXY=http://user:pass@host:port
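If conda itself still cannot reach the internet through the proxy, the proxy can also be set directly in conda's configuration; this step is optional and the host/port values below are placeholders.
conda config --set proxy_servers.http http://host:port
conda config --set proxy_servers.https http://host:port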
Preparing ocp-models repository
If the git command is available, please run the following command:
git clone https://github.com/Open-Catalyst-Project/ocp.git
Otherwise, download and extract https://github.com/Open-Catalyst-Project/ocp/archive/refs/heads/main.zip.
Installation
Navigate to the repository folder and start by installing the packages required for the setup.
conda install mamba conda-merge -n base -c conda-forge
Next, export the list of packages required to create the ocp-models virtual environment to a file named env.yml.
For CPU:
conda-merge env.common.yml env.cpu.yml > env.yml
For GPU:
conda-merge env.common.yml env.gpu.yml > env.yml
Depending on your environment, open env.yml and make modifications if needed. On Windows, please edit the following 2 lines, as some of the specified package versions may not be available.
Original:
- pyg=2.2.0
- pytorch=1.13.1
For CPU:
- pyg=*=*cpu*
- pytorch=1.12
For GPU:
- pyg
- pytorch=1.12
If you intend to use the CPU for calculations, please add the following 4 lines to the dependencies: list to explicitly specify the CPU versions of the related libraries (add them next to the - pytorch= line to keep things clear).
- pytorch-cluster=*=*cpu*
- pytorch-scatter=*=*cpu*
- pytorch-sparse=*=*cpu*
- pytorch-spline-conv=*=*cpu*
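For reference, the relevant part of env.yml might then look roughly like the excerpt below; the surrounding entries are only illustrative and will differ depending on the merged files and your edits.
dependencies:
  - pyg=*=*cpu*
  - pytorch=1.12
  - pytorch-cluster=*=*cpu*
  - pytorch-scatter=*=*cpu*
  - pytorch-sparse=*=*cpu*
  - pytorch-spline-conv=*=*cpu*
  # ...the other entries merged from env.common.yml / env.cpu.yml remain unchanged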
Then, proceed to create the virtual environment.
mamba env create -f env.yml
Please be aware that this process may take some time as it involves downloading and installing packages.
Once the process is successful, activate the ‘ocp-models’ virtual environment, and proceed to install the repository content as a package.
conda activate ocp-models
pip install -e .
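To check that the environment works, you can try a couple of quick imports from inside the activated environment; the top-level package name has typically been ocpmodels for this repository, so adjust it if your checkout differs.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import ocpmodels"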
Hint
If you encounter the following error, it can often be resolved by adjusting the package version specified in env.yml.
Could not solve for environment specs
Encountered problems while solving:
 - nothing provides requested (package name) (version)
To find available package versions, please run the following command.
mamba search -c pytorch -c nvidia -c pyg -c conda-forge -c defaults (package name)
After you’ve updated the package version in env.yml, try creating the environment again.
mamba env update -f env.yml
Please note that creating the environment with a different package version may lead to issues, so it’s essential to thoroughly test its functionality.
Hint
To delete the ocp-models virtual environment created here, execute the following commands:
conda deactivate
mamba remove -n ocp-models --all
There is no need to download the pretrained models (pt files) separately, as they are included in the NanoLabo Tool.
Configuring NanoLabo¶
To run locally (on the machine running NanoLabo)
Set the path for the Python executable in the setting opened from the icon located in the upper left corner of the screen (or the button in the Force Field setting screen). It is located at
conda installation destination\envs\ocp-models\python.exe
on Windows, and
conda installation destination/envs/ocp-models/bin/python
on Linux/macOS.
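If you are unsure of the conda installation destination, one way to print the exact executable path without activating the environment is the following (assuming conda is on your PATH):
conda run -n ocp-models python -c "import sys; print(sys.executable)"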
To run remotely (on calculation server etc.)
If Conda is installed in either ~/anaconda3 or ~/miniconda3, the LD_LIBRARY_PATH is automatically updated, so no additional configuration is required in this case. If Conda is installed in a different location, click the icon located in the upper left corner, open the relevant settings, and then add the LD_LIBRARY_PATH to your job script.
export LD_LIBRARY_PATH=(conda installation destination)/envs/ocp-models/lib:$LD_LIBRARY_PATH
Using LAMMPS directly¶
This guidance is for running LAMMPS in standalone mode, not through NanoLabo.
Use the executable file lammps_oc20 provided in the NanoLabo Tool. Please note that it does not support MPI parallel execution or virial stress calculation (NPT/NPH ensemble, cell optimization).
Setting Environment Variables¶
Since Python dynamic libraries are used at runtime, set the environment variable LD_LIBRARY_PATH on Linux/macOS, or the environment variable PATH on Windows. On Linux, OpenMPI dynamic libraries are also required, so add their path to LD_LIBRARY_PATH as well.
$ export LD_LIBRARY_PATH=(conda installation destination)/envs/ocp-models/lib:(NanoLabo Tool installation destination)/exec.LINUX/mpi/lib:$LD_LIBRARY_PATH
> set PATH=(conda installation destination)\envs\ocp-models;%PATH%
Additionally, on Linux, you should set the environment variable OPAL_PREFIX.
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi
As LAMMPS calls oc20_driver.py during its operation, please ensure that the oc20driver folder in the NanoLabo Tool installation directory is added to the Python module search path. You can achieve this by adding it to the PYTHONPATH environment variable, for example.
$ export PYTHONPATH=(NanoLabo Tool installation destination)/oc20driver:$PYTHONPATH
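With these variables set, the bundled executable can be launched like any LAMMPS binary; the executable location and input file name below are placeholders for this sketch.
$ (path to lammps_oc20 in the NanoLabo Tool)/lammps_oc20 -in in.oc20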
Setting Input File¶
In the LAMMPS input file, set the pair_style as follows.
For CPU:
pair_style oc20
pair_coeff * * <model> <element1 element2 ...>
For GPU:
pair_style oc20/gpu
pair_coeff * * <model> <element1 element2 ...>
Parameter
model
    Specify the graph neural network model to be used. Specify one of DimeNet++, GemNet-dT_OC20, GemNet-dT_OC22. GemNet-dT_OC20 is used if GemNet-dT is specified.
element
    List the element names in the same order as the atom types used in LAMMPS.
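As a concrete illustration, a minimal input fragment might look like the following; the data file name, the two-element Pt/Cu system, the choice of GemNet-dT_OC20, and the use of metal units are all assumptions made for this sketch, so adapt them to your own system.
units       metal
atom_style  atomic
boundary    p p p

# hypothetical data file with 2 atom types (1 = Pt, 2 = Cu)
read_data   data.slab

# Open Catalyst Project force field (CPU variant); element order must match the atom types
pair_style  oc20
pair_coeff  * * GemNet-dT_OC20 Pt Cu

# simple geometry relaxation with a fixed cell (cell optimization is not supported)
minimize    1.0e-6 1.0e-8 100 1000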