Hi,
I'm new to LIGGGHTS and CFDEM. I've just installed both following setup_LIGGGHTS__OpenFoamR_CFDEM_2p0_on_Ubuntu1004_24052011.txt, and the installation went fine.
I then tried to run the example case settlingTestMPI, but it failed with a "FOAM FATAL ERROR":
[1] --> FOAM FATAL ERROR:
[1] index -1 out of range 0 ... 3999
[1]
[1] From function UList::checkIndex(const label)
[1] in file /home/cfd/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H at line 109.
[1]
FOAM parallel run aborting
[1]
[1] #0 Foam::error::printStack(Foam::Ostream&) at ~/OpenFOAM/OpenFOAM-1.7.1/src/OSspecific/POSIX/printStack.C:202
[1] #1 Foam::error::abort() at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/error.C:230
[1] #2 Foam::Ostream& Foam::operator<< (Foam::Ostream&, Foam::errorManip) at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/errorManip.H:86
[1] #3 Foam::UList::checkIndex(int) const at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H:113
[1] #4 Foam::UList::operator[](int) const at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H:172
[1] #5 Foam::Archimedes::setForce(double** const&, double**&, double**&, double**&) const at ~/OpenFOAM/cfd-1.7.1/src/lagrangian/cfdemParticle/subModels/forceModel/Archimedes/Archimedes.C:118
[1] #6 Foam::cfdemCloud::evolve(Foam::GeometricField&, Foam::GeometricField, Foam::fvPatchField, Foam::volMesh>&, Foam::GeometricField, Foam::fvPatchField, Foam::volMesh>&) at ~/OpenFOAM/cfd-1.7.1/src/lagrangian/cfdemParticle/cfdemCloud/cfdemCloud.C:371
[1] #7
[1] at ~/OpenFOAM/cfd-1.7.1/applications/solvers/cfdemSolverPiso_shared/cfdemSolverPiso.C:80
[1] #8 __libc_start_main in "/lib/tls/i686/cmov/libc.so.6"
[1] #9
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[1] in "/home/cfd/OpenFOAM/cfd-1.7.1/applications/bin/linuxGccDPDebug/cfdemSolverPiso_shared"
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 27832 on
node ubuntu exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
The complete screen output is also attached.
What should I do then?
cgoniva | Tue, 02/28/2012 - 12:39
Hi!
Do you still encounter this error?
Could you please run the case with the debug flag on (in parCFDEMrun.sh) and post the error statement?
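For reference, the debug switch is normally just a shell variable near the top of parCFDEMrun.sh that gets handed on to the run function; a minimal sketch, assuming the usual CFDEM run-script layout (variable names may differ in your version):
#- excerpt from a typical parCFDEMrun.sh (illustrative only, adapt to your own script)
solverName="cfdemSolverPiso_shared"
nrProcs="2"
debugMode="on"      # set to "on" instead of "off" to get the full debug output
#- the script then passes these settings on to the run function, roughly:
#parCFDEMrun $logpath $logfileName $casePath $headerText $solverName $nrProcs $machineFileName $debugMode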
Cheers,
Chris
flamelet | Tue, 02/28/2012 - 13:37
Hi Chris,
I've run the case again with debug on. The error statement is listed below.
Thanks,
Lee
------------------------------------------------------------------------------------
cfd@ubuntu:~/OpenFOAM/cfd-1.7.1/run/cfdemSolverPiso_shared/cfdemSolverPisoCase_shared/settlingTestMPI/CFD$ decomposePar
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 1.7.1 |
| \\ / A nd | Web: www.OpenFOAM.com |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : 1.7.1-03e7e056c215
Exec : decomposePar
Date : Feb 28 2012
Time : 20:21:34
Host : ubuntu
PID : 27574
Case : /home/cfd/OpenFOAM/cfd-1.7.1/run/cfdemSolverPiso_shared/cfdemSolverPisoCase_shared/settlingTestMPI/CFD
nProcs : 1
SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
Time = 0
Create mesh
Calculating distribution of cells
Selecting decompositionMethod simple
Finished decomposition in 0.3 s
Calculating original mesh data
Distributing cells to processors
Distributing faces to processors
Calculating processor boundary addressing
Distributing points to processors
Constructing processor meshes
Processor 0
Number of cells = 4000
Number of faces shared with processor 1 = 400
Number of processor patches = 1
Number of processor faces = 400
Number of boundary faces = 1200
Processor 1
Number of cells = 4000
Number of faces shared with processor 0 = 400
Number of processor patches = 1
Number of processor faces = 400
Number of boundary faces = 1200
Number of processor faces = 400
Max number of processor patches = 1
Max number of faces between processors = 400
Processor 0: field transfer
Processor 1: field transfer
End.
cfd@ubuntu:~/OpenFOAM/cfd-1.7.1/run/cfdemSolverPiso_shared/cfdemSolverPisoCase_shared/settlingTestMPI/CFD$ mpirun -np 2 cfdemSolverPiso_shared -parallel
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 1.7.1 |
| \\ / A nd | Web: www.OpenFOAM.com |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : 1.7.1-03e7e056c215
Exec : cfdemSolverPiso_shared -parallel
Date : Feb 28 2012
Time : 20:22:09
Host : ubuntu
PID : 27581
Case : /home/cfd/OpenFOAM/cfd-1.7.1/run/cfdemSolverPiso_shared/cfdemSolverPisoCase_shared/settlingTestMPI/CFD
nProcs : 2
Slaves :
1
(
ubuntu.27582
)
Pstream initialized with:
floatTransfer : 0
nProcsSimpleSum : 0
commsType : nonBlocking
SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
Create mesh for time = 0
Reading field p
Reading physical velocity field U
Note: only if voidfraction at boundary is 1, U is superficial velocity!!!
Reading momentum exchange field Ksl
Reading voidfraction field voidfraction = (Vgas/Vparticle)
Creating dummy density field rho = 1
Reading particle velocity field Us
Reading/calculating face flux field phi
Selecting incompressible transport model Newtonian
Selecting turbulence model type RASModel
Selecting RAS turbulence model laminar
Reading g
Selecting locateModel standard
Selecting dataExchangeModel twoWayMPI
Starting up LIGGGHTS for first time execution
Executing input script '../DEM/in.liggghts_init'
LIGGGHTS 1.5 based on lammps-10Mar10
# Pour granular particles into chute container, then induce flow
atom_style granular
atom_modify map array sort 0 0
communicate single vel yes
#processors 1 1 2
boundary f f f
newton off
units si
region reg block 0 0.1 0 0.1 0 0.1 units box
create_box 1 reg
Created orthogonal box = (0 0 0) to (0.1 0.1 0.1)
2 by 1 by 1 processor grid
neighbor 0.003 bin
neigh_modify delay 0 binsize 0.01
#Material properties required for new pair styles
fix m1 all property/global youngsModulus peratomtype 5.e6
fix m2 all property/global poissonsRatio peratomtype 0.45
fix m3 all property/global coefficientRestitution peratomtypepair 1 0.3
fix m4 all property/global coefficientFriction peratomtypepair 1 0.5
fix m5 all property/global characteristicVelocity scalar 2.0
#pair style
pair_style gran/hooke 1 0 #Hookean without cohesion
pair_coeff * *
#timestep, gravity
timestep 0.00001
fix gravi all gravity 9.81 vector 0.0 -1.0 0.0
#walls
fix xwalls all wall/gran/hooke 1 0 xplane 0.0 0.1 1
fix ywalls all wall/gran/hooke 1 0 yplane 0 0.1 1
fix zwalls all wall/gran/hooke 1 0 zplane 0 0.01 1
#-import mesh from cad:
#fix cad1 all mesh/gran hopperGenauerSALOME.stl 1 1.0 0. 0. 0. 0. 180. 0.
#-use the imported mesh as granular wall
#fix bucket_wall all wall/gran/hertz/history 1 0 mesh/gran 1 cad1
#particle insertion
#- distributions for insertion using pour
#fix pts1 all particletemplate/sphere 1 atom_type 1 density constant 2500 radius constant 0.001
#fix pts2 all particletemplate/sphere 1 atom_type 1 density constant 2500 radius constant 0.002
#fix pdd1 all particledistribution/discrete 1. 2 pts1 0.3 pts2 0.7
#variable alphastart equal 0.05
#region bc block 0.01 0.09 0.05 0.09 0.001 0.009 units box
#fix ins all pour/dev/packing 1 distributiontemplate pdd1 vol ${alphastart} 200 region bc
#- create single partciles
#create_atoms 1 single 0.05 0.04 0.05 units box
create_atoms 1 single 0.05 0.04 0.046 units box
Created 1 atoms
set group all diameter 0.0001 density 3000
Setting atom values ...
1 settings made for diameter
1 settings made for density
#cfd coupling
fix cfd all couple/cfd/force couple_every 100 mpi
INFO: nevery as specified in LIGGGHTS is overriden by calling external program
#fix cfd all couple/cfd/force couple_every 100 file ../CFD/couplingFiles/
variable vx equal vx[1]
variable vy equal vy[1]
variable vz equal vz[1]
variable time equal step*dt
fix extra all print 100 "${time} ${vx} ${vy} ${vz}" file ../DEM/post/velocity.txt title "%" screen no
#apply nve integration to all particles that are inserted as single particles
fix integr all nve/sphere
#screen output
compute 1 all erotate/sphere
thermo_style custom step atoms ke c_1 vol
thermo 1000
thermo_modify lost ignore norm no
compute_modify thermo_temp dynamic yes
#insert the first particles so that dump is not empty
#dump myDump all stl 1 post/dump.stl
run 1
Setting up run ...
Memory usage per processor = 2.54746 Mbytes
Step Atoms KinEng 1 Volume
0 1 0 0 0.001
1 1 0 0 0.001
Loop time of 0.00010748 on 2 procs for 1 steps with 1 atoms
Pair time (%) = 2.06754e-06 (1.92365)
Neigh time (%) = 0 (0)
Comm time (%) = 3.50224e-05 (32.585)
Outpt time (%) = 2.08946e-05 (19.4405)
Other time (%) = 4.94955e-05 (46.0509)
Nlocal: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
dump dmp all custom 1000 ../DEM/post/dump.liggghts_init id type type x y z ix iy iz vx vy vz fx fy fz omegax omegay omegaz radius
#undump myDump
#force : f_couple_cfd[0] f_couple_cfd[1] f_couple_cfd[2]
#node : f_couple_cfd[6]
#cell id : f_couple_cfd[7]
run 1 upto #MPI coupling
Setting up run ...
Memory usage per processor = 2.77634 Mbytes
Step Atoms KinEng 1 Volume
1 1 0 0 0.001
Loop time of 8.44628e-06 on 2 procs for 0 steps with 1 atoms
Pair time (%) = 0 (0)
Neigh time (%) = 0 (0)
Comm time (%) = 0 (0)
Outpt time (%) = 0 (0)
Other time (%) = 8.44628e-06 (100)
Nlocal: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
#run 15000 upto #file coupling
#write_restart liggghts.restart
Selecting voidFractionModel centre
centreVoidFraction constructor done
Selecting averagingModel dilute
Selecting regionModel allRegion
Selecting meshMotionModel noMeshMotion
cfdem version: cfdem-2.3.0-beta-2011-12-05-13:51
If BC are important, please provide volScalarFields -imp/expParticleForces-
Selecting forceModel DiFeliceDrag
Selecting forceModel Archimedes
accounting for Archimedes on DEM and CFD side!
Selecting momCoupleModel implicitCouple
Selecting liggghtsCommandModel execute ,provide dicts, numbered from 0 to n
liggghtsCommand run 100
firstCouplingStep = 1
lastCouplingStep = 1000000000
couplingStepInterval = 1
solving volume averaged Navier Stokes equations of type B
Starting time loop
Time = 0.001
Courant Number mean: 0 max: 0
- evolve()
Starting up LIGGGHTS
Executing command: 'run 100 '
run 100 Setting up run ...
Memory usage per processor = 2.77634 Mbytes
Step Atoms KinEng 1 Volume
1 1 0 0 0.001
101 1 0 0 0.001
Loop time of 0.015447 on 2 procs for 100 steps with 1 atoms
Pair time (%) = 9.25697e-05 (0.599275)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000393073 (2.54466)
Outpt time (%) = 2.0978e-05 (0.135807)
Other time (%) = 0.0149403 (96.7203)
Nlocal: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0.5 ave 1 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
LIGGGHTS finished
timeStepFraction() = 1
[1]
[1]
[1] --> FOAM FATAL ERROR:
[1] index -1 out of range 0 ... 3999
[1]
[1] From function UList::checkIndex(const label)
[1] in file /home/cfd/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H at line 109.
[1]
FOAM parallel run aborting
[1]
[1] #0 Foam::error::printStack(Foam::Ostream&) at ~/OpenFOAM/OpenFOAM-1.7.1/src/OSspecific/POSIX/printStack.C:202
[1] #1 Foam::error::abort() at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/error.C:230
[1] #2 Foam::Ostream& Foam::operator<< (Foam::Ostream&, Foam::errorManip) at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/errorManip.H:86
[1] #3 Foam::UList::checkIndex(int) const at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H:113
[1] #4 Foam::UList::operator[](int) const at ~/OpenFOAM/OpenFOAM-1.7.1/src/OpenFOAM/lnInclude/UListI.H:172
[1] #5 Foam::Archimedes::setForce(double** const&, double**&, double**&, double**&) const at ~/OpenFOAM/cfd-1.7.1/src/lagrangian/cfdemParticle/subModels/forceModel/Archimedes/Archimedes.C:118
[1] #6 Foam::cfdemCloud::evolve(Foam::GeometricField&, Foam::GeometricField, Foam::fvPatchField, Foam::volMesh>&, Foam::GeometricField, Foam::fvPatchField, Foam::volMesh>&) at ~/OpenFOAM/cfd-1.7.1/src/lagrangian/cfdemParticle/cfdemCloud/cfdemCloud.C:371
[1] #7
[1] at ~/OpenFOAM/cfd-1.7.1/applications/solvers/cfdemSolverPiso_shared/cfdemSolverPiso.C:80
[1] #8 __libc_start_main in "/lib/tls/i686/cmov/libc.so.6"
[1] #9
[1] in "/home/cfd/OpenFOAM/cfd-1.7.1/applications/bin/linuxGccDPDebug/cfdemSolverPiso_shared"
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 27582 on
node ubuntu exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
cgoniva | Fri, 03/02/2012 - 11:56
Hi!
Did you also try the installation with another OpenFOAM version? You could try 1.7.x or 2.0.x.
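If you do rebuild against another version, a quick way to check which OpenFOAM installation your shell is currently using is via the standard OpenFOAM environment variables:
echo $WM_PROJECT_VERSION        # e.g. 1.7.1, 1.7.x or 2.0.x
echo $WM_PROJECT_DIR            # path of the sourced OpenFOAM installation
which cfdemSolverPiso_shared    # confirms which solver build is picked up from the PATH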
Cheers,
Chris