LightForge Commandline Usage

Setup

To allow for commandline execution, export the Nanomatch directory and source the lightforge config file:

export NANOMATCH=/path/to/your/nanomatch
export NANOVER=V4 # or your respective release
source $NANOMATCH/$NANOVER/configs/lightforge.config
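
To check that the environment is set up, you can verify that the variables are defined and that lightforge.py is found on your PATH (assuming the config file adds it, as the local commands below call it without an explicit path):

echo $NANOMATCH $NANOVER
which lightforge.py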

As a side note, it is strongly advised to export the NANOMATCH directory and NANOVER in your .bashrc file and to source the config file during the batch job.
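
For example, the exports could live in your .bashrc while the config file is sourced inside the job script (path and release are placeholders for your installation):

# in ~/.bashrc
export NANOMATCH=/path/to/your/nanomatch
export NANOVER=V4

# in the batch job script
source $NANOMATCH/$NANOVER/configs/lightforge.config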

Generating input files

LightForge requires a settings file in yaml format. Check out the tutorials for examples. Alternatively, follow the GUI documentation or the GUI-based use cases that involve LightForge to generate sample settings via the SimStack Client.
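
The full set of keys depends on the device and measurement being simulated; consult the tutorials for complete files. Purely as an illustration of the yaml structure, a fragment using the names referenced in this document (the experiments block, max_iterations, iv_fluctuation) might look as follows; key placement and values are assumptions, not a template:

# scl_settings -- illustrative fragment only; see the tutorials for complete, valid files
max_iterations: 5000       # hard iteration limit used as a termination criterion
iv_fluctuation: 0.05       # placeholder convergence threshold on the current
experiments:               # measurement definitions, e.g. a current measurement
  - measurement: current   # hypothetical key/value; actual names may differ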

Running lightforge locally

You can run lightforge in serial mode or in parallelized mode using MPI. In parallel mode, calculations of independent data points, e.g. for statistics or for different fields, are distributed over the available CPUs via MPI. For testing purposes it is recommended to use serial mode.

To run lightforge in serial mode, type:

lightforge.py -s scl_settings -n 5000

This will construct the system and then propagate the charges defined in scl_settings for 5000 iterations. A picture showing the energy levels of the system can be found in the material folder. The average charge density and the final particle positions are written to the experiments folder. To perform a current measurement specified in the experiments block of the scl_settings file, type:

lightforge.py -s scl_settings

For this example the simulation will run until the convergence criterion specified by max_iterations or iv_fluctuation is reached. Termination criteria may differ between measurement modes. Depending on your system environment, some libraries may be missing when invoking the serial mode directly. In this case, try starting lightforge through the MPI launcher instead, as in the example below.
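
A minimal sketch of such a call, mirroring the serial submission command from the section below (launcher path and arguments as for the V4 release):

$OPENMPI_PATH/bin/mpirun -n 1 python -m mpi4py $LFPATH/lightforge.py -s scl_settings -n 5000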

Running a simulation on remote resources

After setting up your settings file, use one of the following commands to run lightforge.

Parallel submission:

# For V4:
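# -n sets the number of MPI ranks (32 here); adjust it to the cores available to your job.
# -x OMP_NUM_THREADS forwards the thread count to all ranks, --bind-to none avoids pinning
# the process to a single core, and --mca btl self,vader,tcp selects the byte-transfer layers.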
$OPENMPI_PATH/bin/mpirun -x OMP_NUM_THREADS --bind-to none -n 32 --mca btl self,vader,tcp python -m mpi4py $LFPATH/lightforge.py -s scl_settings -n 5000 >> progress.txt 2> lf_mpi_stderr
# For V3 and below:
$MPI_PATH/bin/mpirun -n 32 python -m mpi4py $LFPATH/lightforge.py -s scl_settings >> progress.txt 2> lf_mpi_stderr

Serial submission:

# For V4:
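# -n 1 starts a single MPI rank, which corresponds to the serial mode; the remaining flags match the parallel call above.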
$OPENMPI_PATH/bin/mpirun -x OMP_NUM_THREADS --bind-to none -n 1 --mca btl self,vader,tcp python -m mpi4py $LFPATH/lightforge.py -s scl_settings -n 5000
# For V3 and below:
$MPI_PATH/bin/mpirun -np 1 python -m mpi4py $LFPATH/lightforge.py -s scl_settings
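
Both variants are typically wrapped in a batch script. Below is a minimal sketch assuming a SLURM-managed cluster and the V4 release; job name, resource numbers, and walltime are placeholders that must be adapted to your cluster:

#!/bin/bash
#SBATCH --job-name=lightforge
#SBATCH --ntasks=32
#SBATCH --time=24:00:00

export NANOMATCH=/path/to/your/nanomatch
export NANOVER=V4
source $NANOMATCH/$NANOVER/configs/lightforge.config

$OPENMPI_PATH/bin/mpirun -x OMP_NUM_THREADS --bind-to none -n 32 --mca btl self,vader,tcp \
    python -m mpi4py $LFPATH/lightforge.py -s scl_settings -n 5000 >> progress.txt 2> lf_mpi_stderr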

Monitoring the calculation

In the case of a serial calculation, the status of the calculation is printed to the terminal. In the case of a parallel calculation, the status of the simulation of each data point is written to the output redirected by the submission command (e.g. progress.txt in the commands above).
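
With the redirection used in the submission commands above, the progress of a parallel run can be followed, for example, by tailing the output and error files:

tail -f progress.txt
tail -f lf_mpi_stderr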
