Setup for CHGNet versatile force field

Setting up Python

This section describes the setup required to use the CHGNet versatile force field with Simple DFT-D3 dispersion corrections in LAMMPS.

Install the software on the machine that will run the calculation (e.g. if you plan to submit jobs to a calculation server, install it on that server).

  1. Preparing conda environment

    Conda is used as the Python environment. If you are not already using conda, it is recommended to install Miniconda, which provides the minimum necessary components.

    Download the Python 3.9 installer from the Miniconda page and proceed with its installation.

    If you are using Windows and install Miniconda without altering the PATH variable, launch Anaconda Prompt from the Start menu and use it for the subsequent steps.

    If you need to use a proxy for internet connection, make sure to adjust the environment variable settings accordingly.

    Example: Windows, without authentication
    set HTTP_PROXY=http://host:port
    set HTTPS_PROXY=http://host:port
    
    Example: Linux, with authentication
    export HTTP_PROXY=http://user:pass@host:port
    export HTTPS_PROXY=http://user:pass@host:port
    
  2. (Optional) Configuration to use GPU

    If the NVIDIA driver is not already installed on your system, install it beforehand.

    Install the GPU version of PyTorch. If you have the latest version of CUDA installed, follow the Get Started section on the PyTorch website. If you have an older version of CUDA, look under Previous Versions on the website and run the pip install command that corresponds to your CUDA version.

    After completing the installation, you can verify that the GPU is available from the Python interactive environment.

    python
    
    >>> import torch
    >>> print(torch.cuda.is_available())   # Check GPU availability
    True
    >>> exit()                             # Exit Python environment
    
  3. Installation

    Install chgnet and the packages required for the DFT-D3 correction.

    pip install chgnet
    conda install simple-dftd3 dftd3-python -c conda-forge
    

Configuring NanoLabo

  • To run locally (on the machine running NanoLabo)

    Set the path to the Python executable in Properties ‣ Python from the main menu icon located in the upper left corner of the screen (or the gear icon button in the Force Field settings screen).

    On Windows, the Python executable is located at (conda installation destination)\python.exe . On Linux or macOS, it is located at (conda installation destination)/bin/python .
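    If you are unsure of the exact path, you can ask Python itself. This is a small sketch, assuming you run it from inside the conda environment you set up above; the printed path is what goes into Properties ‣ Python.

    ```python
    # Print the full path of the running Python interpreter.
    # Run inside the conda environment you created above.
    import sys

    print(sys.executable)
    ```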

  • To run remotely (on calculation server etc.)

    If Conda is installed in either ~/anaconda3 or ~/miniconda3, the LD_LIBRARY_PATH is updated automatically, so no additional configuration is required in this case.

    If Conda is installed in a different location, click the main menu icon located in the upper left corner, open Network ‣ SSH server, and add the LD_LIBRARY_PATH setting to your job script.

    export LD_LIBRARY_PATH=(conda installation destination)/lib:$LD_LIBRARY_PATH
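    The "(conda installation destination)" part can be read off from the environment itself. The following sketch (our own helper, not part of NanoLabo) prints the lib directory to use in LD_LIBRARY_PATH when run with the environment's Python on the server:

    ```python
    # Print the lib directory of the active conda environment.
    # sys.prefix points at the environment root (e.g. ~/miniconda3).
    import os
    import sys

    lib_dir = os.path.join(sys.prefix, "lib")
    print(lib_dir)
    ```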
    

Troubleshooting

  • When using a GPU on Windows, you may encounter the following errors, which prevent execution.

FileNotFoundError: Could not find module 'C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll' (or one of its dependencies). Try using the full path with constructor syntax.
pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

If you encounter such issues, try copying the file C:\Windows\System32\nvml.dll to C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll (if the folder doesn’t exist, create it first) and then execute the process again.

  • The following error may occur during runtime when DFT-D3 is enabled.

version `GOMP_5.0' not found

If you encounter this issue, set the following environment variable.

export LD_PRELOAD=(conda installation destination)/lib/libgomp.so

  • We are aware of an error that occurs with PyTorch version 1.13.1 and earlier:

IndexError: tensors used as indices must be long, byte or bool tensors

If you encounter this issue, check your current version of PyTorch and install version 2.0 or later.

# Check the installed version
pip list
# Uninstall PyTorch
pip uninstall torch torchvision torchaudio
# Display installable PyTorch versions
pip install torch==
# Install specifying PyTorch 2 or later
pip install 'torch>=2' torchvision torchaudio
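As a quick programmatic alternative to pip list, the installed version string can be compared against 2.0. This is a minimal sketch; the helper name is ours, and with PyTorch installed you would pass it torch.__version__:

```python
# Check whether a version string like "1.13.1" is at least 2.0.
# Usage with PyTorch: is_torch_2_or_later(torch.__version__)
def is_torch_2_or_later(version: str) -> bool:
    # Strip local build suffixes such as "+cu118" before parsing.
    release = version.split("+")[0]
    major = int(release.split(".")[0])
    return major >= 2

print(is_torch_2_or_later("1.13.1"))       # False: needs upgrading
print(is_torch_2_or_later("2.1.0+cu118"))  # True: OK
```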

Using LAMMPS directly

This guidance is for running LAMMPS in standalone mode, not through NanoLabo.

Utilize the executable file lammps_chgnet included in NanoLabo Tool. Note that MPI parallel execution is not supported with this method.

On Linux/macOS, it is necessary to set the environment variable LD_LIBRARY_PATH . This ensures that the program can access the Python dynamic library when executed.

$ export LD_LIBRARY_PATH=(conda installation destination)/lib:$LD_LIBRARY_PATH

Additionally, on Linux, you should set the environment variable OPAL_PREFIX .

Example: if installed in the default location
$ export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi

Since LAMMPS calls chgnet_driver.py during its operation, you need to add the chgnet folder, located in the NanoLabo Tool installation directory, to Python's module search path. You can do this by adding it to the PYTHONPATH environment variable. This ensures that Python can find and use the module when LAMMPS runs.

Linux example
$ export PYTHONPATH=(NanoLabo Tool installation destination)/chgnet:$PYTHONPATH
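To confirm the PYTHONPATH setting took effect before launching LAMMPS, you can check whether Python can locate a module by name. This is a minimal importlib-based sketch; the helper name is ours, and after exporting PYTHONPATH it should report True when passed "chgnet_driver":

```python
# Return True if the named module can be found on Python's search path.
import importlib.util


def is_importable(module_name: str) -> bool:
    return importlib.util.find_spec(module_name) is not None


print(is_importable("os"))  # True: standard library is always found
```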

In the LAMMPS input file, set the pair_style as follows.

CHGNet
pair_style chgnet
pair_coeff * * <modelname> <element1 element2 ...>       # Example specifying model name
CHGNet + DFT-D3 correction
pair_style chgnet/d3
pair_coeff * * path <modelfile> <element1 element2 ...>  # Example specifying model file
CHGNet, using GPU
pair_style chgnet/gpu
pair_coeff * * <model> <element1 element2 ...>
CHGNet + DFT-D3 correction, using GPU
pair_style chgnet/d3/gpu
pair_coeff * * <model> <element1 element2 ...>

Parameters

  modelname / modelfile

    Specify the graph neural network model to be used.
    Specify "MPtrj-efsm" to use the bundled pretrained model.
    To use a custom model saved in a file, specify the file path after "path".

  element

    List the element names in the same order as the atom types used in LAMMPS.