Setup for CHGNet versatile force field¶
Setting up Python¶
This section describes the setup required to use the CHGNet versatile force field combined with Simple DFT-D3 dispersion corrections in LAMMPS.
Install the software on the machine that will be used for the calculation (e.g. if you plan to submit jobs to a calculation server, the installation should be done on that server).
Preparing conda environment
Conda is used as the Python environment. If you are not already using conda, it is recommended to install Miniconda, which provides a minimal conda installation.
Download the Python 3.9 installer from the Miniconda page and proceed with its installation.
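For example, on Linux x86_64 the download and installation are typically done as follows; the exact installer filename (including the version string shown here as a placeholder) should be taken from the Miniconda page.
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_(version)-Linux-x86_64.sh
bash Miniconda3-py39_(version)-Linux-x86_64.sh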
If you are using Windows and install Miniconda without altering the PATH variable, launch Anaconda Prompt from the Start menu and use it for the subsequent steps. If you need a proxy for the internet connection, adjust the environment variable settings accordingly.
On Windows:
set HTTP_PROXY=http://host:port
set HTTPS_PROXY=http://host:port

On Linux/macOS:
export HTTP_PROXY=http://user:pass@host:port
export HTTPS_PROXY=http://user:pass@host:port
(Optional) Configuration to use GPU
If the NVIDIA driver is not already installed on your system, install it beforehand.
Install the GPU version of PyTorch. If you have the latest version of CUDA installed, refer to the Get Started section on the PyTorch website. If you have an older version of CUDA, look under Previous Versions on the website and run the pip install command that corresponds to your CUDA version.
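For example, the command currently suggested for CUDA 11.8 has the following form; check the PyTorch website for the command matching your own CUDA version.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118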
After completing the installation, you can verify whether the GPU is available from the Python interactive environment.
python
>>> import torch
>>> print(torch.cuda.is_available())  # Check GPU availability
True
>>> exit()  # Exit Python environment
Installation
Install chgnet and the packages required for the DFT-D3 correction.
pip install chgnet
conda install simple-dftd3 dftd3-python -c conda-forge
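As an optional sanity check, you can try importing the packages in Python; this is only illustrative, and loading the pretrained model may trigger a one-time download.
python
>>> from chgnet.model import CHGNet
>>> model = CHGNet.load()  # load the bundled pretrained model
>>> import dftd3           # Python bindings provided by dftd3-python
>>> exit()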
Configuring NanoLabo¶
To run locally (on the machine running NanoLabo)
Set the path to the Python executable in the settings opened from the icon located in the upper left corner of the screen (or from the button in the Force Field setting screen). On Windows, the Python executable is located at (conda installation destination)\python.exe; on Linux or macOS, it is located at (conda installation destination)/bin/python.
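If you are unsure of the exact path, the following commands print it (run them with the conda environment active, e.g. in Anaconda Prompt on Windows); this is only a convenience check.
where python     # Windows
which python     # Linux/macOS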
To run remotely (on a calculation server, etc.)
If conda is installed in either ~/anaconda3 or ~/miniconda3, the LD_LIBRARY_PATH is updated automatically, so no additional configuration is required in this case. If conda is installed in a different location, click the icon located in the upper left corner of the screen, open the settings, and add the LD_LIBRARY_PATH to your job script:
export LD_LIBRARY_PATH=(conda installation destination)/lib:$LD_LIBRARY_PATH
Troubleshooting¶
When using a GPU on Windows, you may encounter certain errors that can prevent execution.
FileNotFoundError: Could not find module 'C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll' (or one of its dependencies). Try using the full path with constructor syntax.
pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

If you encounter such issues, try copying the file C:\Windows\System32\nvml.dll to C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll (if the folder doesn’t exist, create it first) and then execute the process again.
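For reference, the copy described above can be performed from a Command Prompt opened as administrator; the commands below are a sketch of those steps.
mkdir "C:\Program Files\NVIDIA Corporation\NVSMI"
copy C:\Windows\System32\nvml.dll "C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll"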
The following error may occur during runtime when DFT-D3 is enabled.
version `GOMP_5.0' not found

If you encounter this issue, set the following environment variable.
export LD_PRELOAD=(conda installation destination)/lib/libgomp.so
We are aware that an error occurs with PyTorch version 1.13.1 and earlier versions:
IndexError: tensors used as indices must be long, byte or bool tensors

If you encounter this issue, check your current version of PyTorch and install PyTorch version 2.0 or later.
# Check the installed version
pip list
# Uninstall PyTorch
pip uninstall torch torchvision torchaudio
# Display installable PyTorch versions
pip install torch==
# Install PyTorch 2 or later
pip install 'torch>=2' torchvision torchaudio
Using LAMMPS directly¶
This guidance is for running LAMMPS in standalone mode, not through NanoLabo.
Use the executable file lammps_chgnet included in NanoLabo Tool. Note that MPI parallel execution is not supported with this method.
Setting Environment Variables¶
Since Python dynamic libraries are used at runtime, set the environment variable LD_LIBRARY_PATH on Linux/macOS, or the environment variable PATH on Windows (if not set during installation). On Linux, OpenMPI dynamic libraries are also required, so add their path to LD_LIBRARY_PATH as well.
$ export LD_LIBRARY_PATH=(conda installation destination)/lib:(NanoLabo Tool installation destination)/exec.LINUX/mpi/lib:$LD_LIBRARY_PATH
> set PATH=(conda installation destination);%PATH%
Additionally, on Linux, you should set the environment variable OPAL_PREFIX.
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi
Since LAMMPS calls chgnet_driver.py during its operation, you need to add the chgnet folder, located in the NanoLabo Tool installation directory, to Python’s module search path. You can do this by adding it to the PYTHONPATH environment variable, which ensures that Python can find the module when LAMMPS runs.
$ export PYTHONPATH=(NanoLabo Tool installation destination)/chgnet:$PYTHONPATH
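Putting the above together, a standalone run on Linux might look like the following sketch; the installation paths, the input file name, and the location of the lammps_chgnet executable are placeholders to adapt to your environment.
$ export LD_LIBRARY_PATH=(conda installation destination)/lib:(NanoLabo Tool installation destination)/exec.LINUX/mpi/lib:$LD_LIBRARY_PATH
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi
$ export PYTHONPATH=(NanoLabo Tool installation destination)/chgnet:$PYTHONPATH
$ (path to lammps_chgnet) -in lammps.in     # -in specifies the LAMMPS input file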
Setting Input File¶
In the LAMMPS input file, set the pair_style as follows.
pair_style chgnet
pair_coeff * * <modelname> <element1 element2 ...> # Example specifying model name
pair_style chgnet/d3
pair_coeff * * path <modelfile> <element1 element2 ...> # Example specifying model file
pair_style chgnet/gpu
pair_coeff * * <model> <element1 element2 ...>
pair_style chgnet/d3/gpu
pair_coeff * * <model> <element1 element2 ...>
Parameter
modelname / modelfile
  Specify the graph neural network model to be used. Specify “MPtrj-efsm” to use the bundled pretrained model. To use a custom model saved in a file, specify a file path after “path”.
element
  List the element names in the same order as the atom types used in LAMMPS.
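For illustration, a minimal input file that performs a single-point calculation with the bundled pretrained model might look like the following sketch; the data file name, the element list (Li O), and the use of metal units are assumptions to adapt to your own system.
# Minimal illustrative input: single-point calculation with the bundled CHGNet model
units        metal
atom_style   atomic
boundary     p p p
read_data    data.lammps             # placeholder structure file

pair_style   chgnet
pair_coeff   * * MPtrj-efsm Li O     # element names in the same order as the atom types

thermo_style custom step temp pe press
run          0                       # evaluate energy and forces once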