Hi everybody,
I am trying to install:
- fftw-2.1.5.tar.gz
- openmpi-1.4.5.tar.gz
- liggghts-2.0.6, cloned from LIGGGHTS-PUBLIC.git
The results are as follows:
- The installation of FFTW went smoothly.
- The installation of Open MPI went smoothly too. A simple "hello_world" test gave the expected results, as did the "mpirun" command (see the sketch after this list).
- The compilation of LIGGGHTS went through without any major problems, apart from these three lines near the start:
grep: angle_*.h: No such file or directory
grep: dihedral_*.h: No such file or directory
grep: improper_*.h: No such file or directory
- Running the executable on the "conveyor" example gave the following error message:
ERROR on proc 0: Cannot open log.liggghts (lammps.cpp:225)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them.
--------------------------------------------------------------------------
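For reference, the MPI sanity test mentioned above was along these lines (a minimal sketch, assuming mpicc and mpirun are on the PATH):

cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

/* Each rank reports its number: a quick check that both compilation
   and process launch work. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc hello.c -o hello
mpirun -np 2 ./hello    # expect one "hello" line per rank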
This "Cannot open log.liggghts" error is the reason I am asking for help. Can anybody suggest anything to make LIGGGHTS work properly?
Best regards, Olivier BONNEFOY
For debugging purposes, here is some additional information:
- Before compiling, I appended the following two lines to /home/bonnefoy/.bashrc:
export PATH=/home/bonnefoy/myopenmpi/files/bin:$PATH
export LD_LIBRARY_PATH=/home/bonnefoy/myopenmpi/files/lib:$LD_LIBRARY_PATH
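A quick way to verify that the shell actually picks up this installation (standard Open MPI checks):

source ~/.bashrc
which mpic++ mpirun    # both should resolve to /home/bonnefoy/myopenmpi/files/bin
mpirun --version       # should report the freshly installed Open MPI version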
- After compilation, I added a symbolic link:
sudo ln -s /home/bonnefoy/myliggghts/src/lmp_myubuntu /usr/bin/liggghts
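To confirm the link resolves as intended:

which liggghts            # should print /usr/bin/liggghts
ls -l /usr/bin/liggghts   # should point to /home/bonnefoy/myliggghts/src/lmp_myubuntu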
- In order to fill in MPI_INC, MPI_PATH and MPI_LIB in the Makefile, I successively typed:
mpicc --showme:compile
mpicc --showme:link
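Their output maps onto the Makefile variables roughly as follows (the flags shown are only illustrative for a --prefix install under /home/bonnefoy/myopenmpi/files; the exact list varies between Open MPI builds):

mpicc --showme:compile
# prints e.g.: -I/home/bonnefoy/myopenmpi/files/include -pthread
# the -I... part goes into MPI_INC
mpicc --showme:link
# prints e.g.: -pthread -L/home/bonnefoy/myopenmpi/files/lib -lmpi ...
# -L... goes into MPI_PATH, the -l... libraries into MPI_LIB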
- I also tried with openmpi-1.6 instead of openmpi-1.4.5. Same result.
- FFTW 3.x does not seem to be compatible with LIGGGHTS, hence FFTW 2.1.5.
- The Makefile used is:

# LIGGGHTS = Ubuntu 12.04, mpic++, openmpi-1.4.5, FFTW2

SHELL = /bin/bash

CC =        /home/bonnefoy/myopenmpi/files/bin/mpic++
CCFLAGS =   -g -O
DEPFLAGS =  -M
LINK =      /home/bonnefoy/myopenmpi/files/bin/mpic++
LINKFLAGS = -O
LIB =       -lstdc++
ARCHIVE =   ar
ARFLAGS =   -rc
SIZE =      size

LMP_INC =   -DLAMMPS_GZIP
MPI_INC =   -I/home/bonnefoy/myopenmpi/files/include -DMPICH_IGNORE_CXX_SEEK
MPI_PATH =  -L/home/bonnefoy/myopenmpi/files/lib
MPI_LIB =   -lmpi -lpthread -ldl -lm -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
FFT_INC =   -I/home/bonnefoy/myfftw/files/include -DFFT_FFTW
FFT_PATH =  -L/home/bonnefoy/myfftw/files/lib
FFT_LIB =   -lfftw
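For completeness, assuming this file is saved as MAKE/Makefile.myubuntu (the machine name inferred from the lmp_myubuntu executable mentioned above), the build step would be:

cd /home/bonnefoy/myliggghts/src
make myubuntu    # produces the executable lmp_myubuntu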
AGl | Wed, 08/22/2012 - 16:18
PPA
Hi,
Did you try installing it from the PPA?
https://launchpad.net/~liggghts-dev/+archive/ppa
bonnefoy | Thu, 08/23/2012 - 00:24
Hello, thank you for your
Hello,
Thank you for your idea. I found my mistake in the meantime (see the exchange below).
Have a good evening too.
Olivier
msbentley | Wed, 08/22/2012 - 16:43
Do you have write permission?
As a first sanity check, do you have write permission wherever you are executing LIGGGHTS from? Generally, the log file is written to the directory you are in when you execute the command, unless specified with the -log option.
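Something like the following should show it (run from the example directory; the input-file name in.conveyor is just a placeholder):

ls -ld .                  # check for write ('w') permission on the current directory
touch log.liggghts        # mimic LIGGGHTS creating its log file here
liggghts -log /tmp/log.liggghts < in.conveyor    # or send the log somewhere writable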
Cheers, Mark
bonnefoy | Thu, 08/23/2012 - 00:23
Hi Mark, Thanks for your
Hi Mark,
Thanks for your suggestion. That was indeed the problem. Everything went well afterwards, and I could run, post-process, and visualize the conveyor example with ParaView.
Have a good evening, Olivier
tianya4088 | Sun, 06/30/2013 - 08:50
How do you solve this problem
Hi bonnefoy
I have encountered the same problem. Could you describe in detail how you solved it?
Have a nice day. Meng QX
bobsun | Tue, 02/05/2019 - 09:08
similar problem when compiling LIGGGHTS
Dear CFDEM Programmer
I compiled CFDEM and LIGGGHTS together using the command 'cfdemCompCFDEMall', and I encountered a similar problem when I tried to run the ErgunTestMPI_cgs example. The MPI_ABORT error is shown below.
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
If you have any suggestions for solving this problem, please let me know. Thank you very much.
Best regards,
Bob
jiaziqi | Mon, 06/02/2025 - 07:06
ERROR for help
Hello everyone, I get the following error when I run the bubblingFluidizedBed_Smoothing case. Can anyone help me solve it? Thanks a lot.
mesh was built before - using old mesh
// run_liggghts_init_DEM //
/home/jiaziqi/NEWJOBS/bubblingFluidizedBed_Smoothing/DEM
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Decomposing case.
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 5.x |
| \\ / A nd | Web: www.OpenFOAM.org |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : 5.x-7f7d351b741b
Exec : decomposePar -force
Date : Jun 02 2025
Time : 12:47:23
Host : "jiaziqi-virtual-machine"
PID : 24093
I/O : uncollated
Case : /home/jiaziqi/NEWJOBS/bubblingFluidizedBed_Smoothing/CFD
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
Decomposing mesh region0
Removing 4 existing processor directories
Create mesh
Calculating distribution of cells
Selecting decompositionMethod simple
Finished decomposition in 0.06 s
Calculating original mesh data
Distributing cells to processors
Distributing faces to processors
Distributing points to processors
Constructing processor meshes
Processor 0
Number of cells = 23760
Number of faces shared with processor 1 = 304
Number of processor patches = 1
Number of processor faces = 304
Number of boundary faces = 7548
Processor 1
Number of cells = 23760
Number of faces shared with processor 0 = 304
Number of faces shared with processor 2 = 288
Number of processor patches = 2
Number of processor faces = 592
Number of boundary faces = 7260
Processor 2
Number of cells = 23760
Number of faces shared with processor 1 = 288
Number of faces shared with processor 3 = 304
Number of processor patches = 2
Number of processor faces = 592
Number of boundary faces = 7260
Processor 3
Number of cells = 23760
Number of faces shared with processor 2 = 304
Number of processor patches = 1
Number of processor faces = 304
Number of boundary faces = 7548
Number of processor faces = 896
Max number of cells = 23760 (0% above average 23760)
Max number of processor patches = 2 (33.3333% above average 1.5)
Max number of faces between processors = 592 (32.1429% above average 448)
Time = 0
Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
End
do nothing.
// run_parallel_cfdemSolverPimple_bubblingFluidizedBed_CFDDEM //
/home/jiaziqi/NEWJOBS/bubblingFluidizedBed_Smoothing/CFD
rm: cannot remove 'couplingFiles/*': No such file or directory
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process
rank 0; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command
line parameter option (remember that mpirun interprets the first
unrecognized command line token as the executable).
Node: jiaziqi-virtual-machine
Executable: cfdemSolverPimple
--------------------------------------------------------------------------
4 total processes failed to start
done