LIGGGHTS could not find property � to write data from calling program to.

Submitted by rahulsoni on Fri, 12/11/2015 - 13:21

Hello everyone

For a few days I have been trying to install and run CFDEM coupling. After some troubleshooting, and with the help of Christian Richter, I managed to install CFDEM, OpenFOAM and LIGGGHTS properly and to run the tutorials partially. When I run the tutorial cfdemSolverPisoScalar/packedBedTemp, I get peculiar errors related to LIGGGHTS. The log file for this run is attached; part of the terminal output is pasted below.
[Can anyone please give me a hint on how to get rid of this issue?]

I get a similar error for the cases cfdemSolverIB/twoSpheresGlowinskiMPI and cfdemSolverPiso/ErgunTestMPI.

FYI, I installed everything by following the procedures outlined here:
OpenFOAM: https://www.evernote.com/shard/s214/sh/f4f1184f-e344-4c4a-958f-296edd4b4...
CFDEM: https://www.evernote.com/shard/s214/sh/ed775dfc-0ac0-46e4-b09c-c98698816...

\*---------------------------------------------------------------------------*/
Build : 2.3.x-4d6f4a3115ff
Exec : decomposePar
Date : Dec 11 2015
Time : 17:23:56
Host : "rahul-HP-Z600"
PID : 10131
Case : /home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

Create time

Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod simple

Finished decomposition in 0 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
Number of cells = 66
Number of faces shared with processor 1 = 6
Number of processor patches = 1
Number of processor faces = 6
Number of boundary faces = 136

Processor 1
Number of cells = 66
Number of faces shared with processor 0 = 6
Number of processor patches = 1
Number of processor faces = 6
Number of boundary faces = 136

Number of processor faces = 6
Max number of cells = 66 (0% above average 66)
Max number of processor patches = 1 (0% above average 1)
Max number of faces between processors = 6 (0% above average 6)

Time = 0

Processor 0: field transfer
Processor 1: field transfer

End.

// run_parallel_cfdemSolverPisoScalar_packedBedTemp_CFDDEM //

/home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD

rm: cannot remove ‘couplingFiles/*’: No such file or directory
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 2.3.x |
| \\ / A nd | Web: www.OpenFOAM.org |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : 2.3.x-4d6f4a3115ff
Exec : cfdemSolverPisoScalar -parallel
Date : Dec 11 2015
Time : 17:23:57
Host : "rahul-HP-Z600"
PID : 10151
Case : /home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD
nProcs : 2
Slaves : 1("rahul-HP-Z600.10152")
Pstream initialized with:
floatTransfer : 0
nProcsSimpleSum : 0
commsType : nonBlocking
polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading field p

Reading physical velocity field U
Note: only if voidfraction at boundary is 1, U is superficial velocity!!!

Reading momentum exchange field Ksl

Reading voidfraction field voidfraction = (Vgas/Vparticle)

Creating density field rho

Reading particle velocity field Us

Creating dummy density field rho = 1

Creating fluid-particle heat flux field

Reading/calculating face flux field phi

Selecting incompressible transport model Newtonian
Selecting turbulence model type RASModel
Selecting RAS turbulence model laminar

Reading g
Selecting locateModel engine
Selecting dataExchangeModel twoWayMPI
Starting up LIGGGHTS for first time execution
Executing input script '../DEM/in.liggghts_run'
LIGGGHTS (Version LIGGGHTS-PUBLIC 3.1.0, compiled 2015-12-10-20:39:31 by rahul, git commit b3030a8fd1f722c77f69150c7b2d24b386eab848 based on LAMMPS 23 Nov 2013)
units si
processors * * *

# read the restart file
read_restart ../DEM/post/restart/liggghts.restart
Reading restart file ...
WARNING: Restart file version does not match LIGGGHTS version (../read_restart.cpp:470)
--> restart file = Version LIGGGHTS-PUBLIC 3.1.0, compiled 2015-12-10-20:29:35 by rahul, git commit b3030a8fd1f722c77f69150c7b2d24b386eab848 based on LAMMPS 23 Nov 2013
--> LIGGGHTS = Version LIGGGHTS-PUBLIC 3.1.0, compiled 2015-12-10-20:39:31 by rahul, git commit b3030a8fd1f722c77f69150c7b2d24b386eab848 based on LAMMPS 23 Nov 2013
orthogonal box = (0 0 0) to (0.1 0.1 1.1)
1 by 1 by 2 MPI processor grid
1005 atoms

neighbor 0.003 bin
neigh_modify delay 0 binsize 0.01

# Material properties required for granular pair styles

fix m1 all property/global youngsModulus peratomtype 5.e6
fix m2 all property/global poissonsRatio peratomtype 0.45
fix m3 all property/global coefficientRestitution peratomtypepair 1 0.3
fix m4 all property/global coefficientFriction peratomtypepair 1 0.5

# pair style
pair_style gran model hertz tangential history # Hertzian without cohesion
pair_coeff * *

# timestep, gravity
timestep 0.00001
fix gravi all gravity 9.81 vector 0. 0. -1.

# walls
fix xwalls1 all wall/gran model hertz tangential history primitive type 1 xplane 0.0
Resetting global state of Fix history_xwalls1 Style property/atom from restart file info
Resetting per-atom state of Fix history_xwalls1 Style property/atom from restart file info
fix xwalls2 all wall/gran model hertz tangential history primitive type 1 xplane 0.1
Resetting global state of Fix history_xwalls2 Style property/atom from restart file info
Resetting per-atom state of Fix history_xwalls2 Style property/atom from restart file info
fix ywalls1 all wall/gran model hertz tangential history primitive type 1 yplane 0.0
Resetting global state of Fix history_ywalls1 Style property/atom from restart file info
Resetting per-atom state of Fix history_ywalls1 Style property/atom from restart file info
fix ywalls2 all wall/gran model hertz tangential history primitive type 1 yplane 0.1
Resetting global state of Fix history_ywalls2 Style property/atom from restart file info
Resetting per-atom state of Fix history_ywalls2 Style property/atom from restart file info
fix zwalls1 all wall/gran model hertz tangential history primitive type 1 zplane 0.0
Resetting global state of Fix history_zwalls1 Style property/atom from restart file info
Resetting per-atom state of Fix history_zwalls1 Style property/atom from restart file info
fix zwalls2 all wall/gran model hertz tangential history primitive type 1 zplane 1.1
Resetting global state of Fix history_zwalls2 Style property/atom from restart file info
Resetting per-atom state of Fix history_zwalls2 Style property/atom from restart file info

# heat transfer
fix ftco all property/global thermalConductivity peratomtype 5. # lambda in [W/(K*m)]
fix ftca all property/global thermalCapacity peratomtype 0.1 # cp in [J/(kg*K)]
fix heattransfer all heat/gran initial_temperature 600.
Resetting global state of Fix Temp Style property/atom from restart file info
Resetting per-atom state of Fix Temp Style property/atom from restart file info
Resetting global state of Fix heatFlux Style property/atom from restart file info
Resetting per-atom state of Fix heatFlux Style property/atom from restart file info
Resetting global state of Fix heatSource Style property/atom from restart file info
Resetting per-atom state of Fix heatSource Style property/atom from restart file info

# set particle temperature for the bed
run 0
Resetting global state of Fix contacthistory Style contacthistory from restart file info
Resetting per-atom state of Fix contacthistory Style contacthistory from restart file info
Setting up run ...
Memory usage per processor = 9.38752 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
150001 3.016652e+18 0 0 0.062724024 6309.8517 0.011
Loop time of 1.90735e-06 on 2 procs for 0 steps with 1005 atoms

Pair time (%) = 0 (0)
Neigh time (%) = 0 (0)
Comm time (%) = 0 (0)
Outpt time (%) = 0 (0)
Other time (%) = 1.90735e-06 (100)

Nlocal: 502.5 ave 556 max 449 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 25.5 ave 28 max 23 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 1684.5 ave 1859 max 1510 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 3369
Ave neighs/atom = 3.35224
Neighbor list builds = 0
Dangerous builds = 0
region total block INF INF INF INF INF INF units box
set region total property/atom Temp 600.
Setting atom values ...
1005 settings made for property/atom

# cfd coupling
fix cfd all couple/cfd couple_every 100 mpi
nevery as specified in LIGGGHTS is overriden by calling external program (../cfd_datacoupling_mpi.cpp:61)
fix cfd2 all couple/cfd/force

# this one invokes heat transfer calculation, transfers per-particle temperature and adds convective heat flux to particles
fix cfd3 all couple/cfd/convection T0 600

# apply nve integration to all particles that are inserted as single particles
fix integr all nve/sphere

# screen output
compute rke all erotate/sphere
thermo_style custom step atoms ke c_rke vol
thermo 1000
thermo_modify lost ignore norm no
compute_modify thermo_temp dynamic yes

dump dmp all custom 100 ../DEM/post/dump.liggghts_run id type x y z ix iy iz vx vy vz fx fy fz omegax omegay omegaz radius f_Temp[0] f_heatFlux[0]

run 1
Setting up run ...
Memory usage per processor = 10.0318 Mbytes
Step Atoms KinEng rke Volume
150001 1005 0.062724024 6.7625435e-08 0.011
150002 1005 0.062728126 6.7639024e-08 0.011
Loop time of 0.000483513 on 2 procs for 1 steps with 1005 atoms

Pair time (%) = 0.000219464 (45.3895)
Neigh time (%) = 0 (0)
Comm time (%) = 5.81741e-05 (12.0316)
Outpt time (%) = 2.55108e-05 (5.27613)
Other time (%) = 0.000180364 (37.3028)

Nlocal: 502.5 ave 556 max 449 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 25.5 ave 28 max 23 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 1684.5 ave 1859 max 1510 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 3369
Ave neighs/atom = 3.35224
Neighbor list builds = 0
Dangerous builds = 0
Selecting IOModel basicIO
Selecting probeModel off
Selecting voidFractionModel divided
Selecting averagingModel dense
Selecting clockModel off
start clock measurement at t >0.005
Selecting smoothingModel off
Selecting meshMotionModel noMeshMotion

CFDEMcoupling version: cfdem-2.9.0
, compatible to LIGGGHTS version: 3.1.0
, compatible to OF version and build: 2.3.x-commit-4d6f4a3115ff76ec4154c580eb041bc95ba4ec09

You are currently using:
OF version: 2.3.x
OF build: 2.3.x-4d6f4a3115ff
CFDEM build: 64f5-dirty

If BC are important, please provide volScalarFields -imp/expParticleForces-
ignoring ddt(voidfraction)
Selecting forceModel KochHillDrag
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for treatForceExplicit ...
treatForceExplicit = false
looking for implForceDEM ...
implForceDEM = false
looking for verbose ...
verbose = false
looking for interpolation ...
interpolation = false
looking for implForceDEMaccumulated ...
implForceDEMaccumulated = false
looking for scalarViscosity ...
scalarViscosity = false

Selecting forceModel LaEuScalarTemp
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for verbose ...
verbose = false
looking for interpolation ...
interpolation = false
looking for scalarViscosity ...
scalarViscosity = false

Selecting forceModel Archimedes
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for treatForceExplicit ...
treatForceExplicit = false
looking for treatForceDEM ...
treatForceDEM = false

Selecting momCoupleModel implicitCouple

implicit momentum exchange field calculate if alphaP larger than : 1
Selecting liggghtsCommandModel runLiggghts ,provide dicts, numbered from 0 to n
Selecting liggghtsCommandModel execute ,provide dicts, numbered from 0 to n
liggghtsCommand set region total property/atom Temp 600.000000
cg is set to: 1
solving volume averaged Navier Stokes equations of type B

Starting time loop

Time = 0.005

Courant Number mean: 0.00133125 max: 0.0412913

Coupling...
Starting up LIGGGHTS
Executing command: 'run 500 '
run 500
Setting up run ...
Memory usage per processor = 10.0318 Mbytes
Step Atoms KinEng rke Volume
150002 1005 0.062728126 6.7639024e-08 0.011
CFD Coupling established at step 150100
CFD Coupling established at step 150200
CFD Coupling established at step 150300
CFD Coupling established at step 150400
CFD Coupling established at step 150500
150502 1005 0.064796 6.9012981e-08 0.011
Loop time of 0.253163 on 2 procs for 500 steps with 1005 atoms

Pair time (%) = 0.109784 (43.3649)
Neigh time (%) = 0.00937116 (3.70163)
Comm time (%) = 0.0286627 (11.3218)
Outpt time (%) = 0.0225199 (8.89543)
Other time (%) = 0.0828253 (32.7162)

Nlocal: 502.5 ave 556 max 449 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 25.5 ave 28 max 23 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 1684.5 ave 1859 max 1510 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 3369
Ave neighs/atom = 3.35224
Neighbor list builds = 10
Dangerous builds = 0
Executing command: 'set region total property/atom Temp 600.000000 '
set region total property/atom Temp 600.000000
Setting atom values ...
1005 settings made for property/atom
LIGGGHTS finished

LIGGGHTS could not find property � to write data from calling program to.
ERROR on proc 1: This is fatal (../cfd_datacoupling_mpi.h:173)
LIGGGHTS could not find property , to write data from calling program to.
ERROR on proc 0: This is fatal (../cfd_datacoupling_mpi.h:173)

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 10152 on
node rahul-HP-Z600 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[rahul-HP-Z600:10148] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[rahul-HP-Z600:10148] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
rm: cannot remove ‘*.eps’: No such file or directory

calc Ergun eqn:
muG = 0.0017820
dpErgun = 2561.0
final pressure drop = 2560.977345 Pa
loading ../postProcessing/probes/0/p ... done.
error: totalPressureDropAndNusselt.m: A(I,J): column index out of bounds; value 2 out of bound 0
error: called from
totalPressureDropAndNusselt.m at line 39 column 8

** (evince:10169): WARNING **: Error when getting information for file '/home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD/octave/cfdemSolverPisoScalar_pressureDrop.eps': No such file or directory

** (evince:10168): WARNING **: Error when getting information for file '/home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD/octave/cfdemSolverPisoScalar_Nusselt.eps': No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10169): WARNING **: Error setting file metadata: No such file or directory

** (evince:10168): WARNING **: Error setting file metadata: No such file or directory
cp: cannot stat ‘cfdemSolverPisoScalar_Nusselt.eps’: No such file or directory
cp: cannot stat ‘cfdemSolverPisoScalar_pressureDrop.eps’: No such file or directory
deleting data at: /home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp : ???\n
Cleaning /home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD case
rm: cannot remove ‘/home/rahul/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD/clockData’: No such file or directory
done
rahul@rahul-HP-Z600:~/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp$ ^C
rahul@rahul-HP-Z600:~/CFDEM/CFDEMcoupling-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp$

rahulsoni | Sun, 12/13/2015 - 08:24

When I recompile CFDEM with cfdemCompCFDEM I see messages similar to http://www.cfdem.com/forums/cfdem-coupling
I believe CFDEM is not compiling properly, based on the following messages that I found in the log files in CFDEM_SRC_DIR/lagrangian/cfdemParticle/etc/log:

could not open file RASModel.H for source file cfdemPostproc.C due to No such file or directory
could not open file meshToMeshNew.H for source file cfdemSolverIB.C due to No such file or directory
could not open file omp.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file ompi/mpi/cxx/pmpicxx.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file ompi/mpi/cxx/pop_inln.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file ompi/mpi/cxx/pgroup_inln.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file ompi/mpi/cxx/pstatus_inln.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file ompi/mpi/cxx/prequest_inln.h for source file derived/cfdemCloudIB/cfdemCloudIB.C due to No such file or directory
could not open file stdio.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file string.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file limits.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file stdint.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file inttypes.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file erf.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file direct.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file math.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file sleep.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file omp.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ompi/mpi/cxx/pmpicxx.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ompi/mpi/cxx/pop_inln.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ompi/mpi/cxx/pgroup_inln.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ompi/mpi/cxx/pstatus_inln.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ompi/mpi/cxx/prequest_inln.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
could not open file ctype.h for source file subModels/forceModel/forceModel/forceModel.C due to No such file or directory
Making dependency list for source file subModels/forceModel/forceModel/newForceModel.C
could not open file RASModel.H for source file subModels/forceModel/forceModel/newForceModel.C due to No such file or directory

--
Regards

Rahul Kumar Soni
Scientist, CSIR - IMMT, India

rahulsoni | Mon, 12/14/2015 - 05:10

I believe most of the errors were caused by the previous compilation of OpenFOAM. I had OpenFOAM-2.3.x but later pinned it to a particular commit (i.e. git checkout 4d6f4a3115ff76ec4154c580eb041bc95ba4ec09) without doing a clean installation.
So, I did
wcleanAll
./Allwmake

Now when I compile the solvers, the issue for cfdemSolverPiso and cfdemSolverPisoScalar is:
could not open file RASModel.H for source file cfdemSolverPiso.C due to No such file or directory
and for cfdemSolverIB, it is:
could not open file RASModel.H for source file cfdemSolverIB.C due to No such file or directory
could not open file meshToMeshNew.H for source file cfdemSolverIB.C due to No such file or directory

--
Regards

Rahul Kumar Soni
Scientist, CSIR - IMMT, India

rahulsoni | Mon, 12/14/2015 - 10:50

I am getting messages about several missing header files. Some of them are system or OS headers. The log_file_cfdem_compilation_messages attached to the parent post shows these messages.

Can anyone tell me how to resolve this? Is it related to the LD_LIBRARY_PATH declaration? My OS is Ubuntu 15.10.

--
Regards

Rahul Kumar Soni
Scientist, CSIR - IMMT, India


richti83 | Thu, 12/17/2015 - 15:10

Did you notice that git clones CFDEM into $HOME/CFDEM/CFDEMcoupling-PUBLIC while the bashrc exports CFDEMcoupling-PUBLIC-2.3.x? I needed to rename the CFDEMcoupling folder (e.g. mv $HOME/CFDEM/CFDEMcoupling-PUBLIC $HOME/CFDEM/CFDEMcoupling-PUBLIC-2.3.x) to make it compile.
my bashrc:

source $HOME/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc
# ...
#================================================#
#- source cfdem env vars
export CFDEM_VERSION=PUBLIC
export CFDEM_PROJECT_DIR=$HOME/CFDEM/CFDEMcoupling-$CFDEM_VERSION-$WM_PROJECT_VERSION
export CFDEM_SRC_DIR=$CFDEM_PROJECT_DIR/src
export CFDEM_SOLVER_DIR=$CFDEM_PROJECT_DIR/applications/solvers
export CFDEM_DOC_DIR=$CFDEM_PROJECT_DIR/doc
export CFDEM_UT_DIR=$CFDEM_PROJECT_DIR/applications/utilities
export CFDEM_TUT_DIR=$CFDEM_PROJECT_DIR/tutorials
export CFDEM_PROJECT_USER_DIR=$HOME/CFDEM/$LOGNAME-$CFDEM_VERSION-$WM_PROJECT_VERSION
export CFDEM_bashrc=$CFDEM_SRC_DIR/lagrangian/cfdemParticle/etc/bashrc
export CFDEM_LIGGGHTS_SRC_DIR=$HOME/LIGGGHTS/LIGGGHTS-PUBLIC/src
export CFDEM_LIGGGHTS_MAKEFILE_NAME=fedora_fpic
#export CFDEM_LPP_DIR=$HOME/LIGGGHTS/mylpp/src
#export CFDEM_PIZZA_DIR=$HOME/LIGGGHTS/PIZZA/gran_pizza_17Aug10/src
. $CFDEM_bashrc
#================================================#

I'm not an associate of DCS GmbH and not a core developer of LIGGGHTS®

rahulsoni | Fri, 12/18/2015 - 13:42

While cloning CFDEM I used
git clone git://github.com/CFDEMproject/CFDEMcoupling-PUBLIC.git CFDEMcoupling-PUBLIC-$WM_PROJECT_VERSION
which automatically created CFDEMcoupling-PUBLIC-2.3.x inside $HOME/CFDEM/.
Moreover, cfdemSysTest does not flag any error; it is able to find the right directory.

--
Regards

Rahul Kumar Soni
Scientist, CSIR - IMMT, India


j-kerbl | Mon, 01/18/2016 - 14:06

Hi Rahul,

can you please paste the output of cfdemSysTest? Which mpi-version are you using?

Cheers,
Josef

heliana60 | Tue, 03/01/2016 - 17:03

Hi there,

I have more or less the same problem; I try to run the tutorials, but in my case it doesn't get as far as Rahul's runs did.

I get the following:

Create time

Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod simple

Finished decomposition in 0 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
Number of cells = 66
Number of faces shared with processor 1 = 6
Number of processor patches = 1
Number of processor faces = 6
Number of boundary faces = 136

Processor 1
Number of cells = 66
Number of faces shared with processor 0 = 6
Number of processor patches = 1
Number of processor faces = 6
Number of boundary faces = 136

Number of processor faces = 6
Max number of cells = 66 (0% above average 66)
Max number of processor patches = 1 (0% above average 1)
Max number of faces between processors = 6 (0% above average 6)

Time = 0

Processor 0: field transfer
Processor 1: field transfer

End.

// run_parallel_cfdemSolverPisoScalar_packedBedTemp_CFDDEM //

/home/heliana/CFDEM/heliana-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD

rm: cannot remove ‘couplingFiles/*’: No such file or directory
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: 2.3.x |
| \\ / A nd | Web: www.OpenFOAM.org |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : 2.3.x-2f9138f6f49f
Exec : cfdemSolverPisoScalar -parallel
Date : Mar 01 2016
Time : 16:58:44
Host : "mp-oldroyd"
PID : 12319
Case : /home/heliana/CFDEM/heliana-PUBLIC-2.3.x/tutorials/cfdemSolverPisoScalar/packedBedTemp/CFD
nProcs : 2
Slaves : 1("mp-oldroyd.12320")
Pstream initialized with:
floatTransfer : 0
nProcsSimpleSum : 0
commsType : nonBlocking
polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading field p

Reading physical velocity field U
Note: only if voidfraction at boundary is 1, U is superficial velocity!!!

Reading momentum exchange field Ksl

Reading voidfraction field voidfraction = (Vgas/Vparticle)

Creating density field rho

Reading particle velocity field Us

Creating dummy density field rho = 1

Creating fluid-particle heat flux field

Reading/calculating face flux field phi

Selecting incompressible transport model Newtonian
Selecting turbulence model type RASModel
Selecting RAS turbulence model laminar

Reading g
Selecting locateModel engine
Selecting dataExchangeModel twoWayMPI
Starting up LIGGGHTS for first time execution
Executing input script '../DEM/in.liggghts_run'
LIGGGHTS (Version LIGGGHTS-PUBLIC 3.2.1, compiled 2016-03-01-13:42:21 by heliana, git commit f1f4118076fd92a2ac5275cf56e1862988d61434 based on LAMMPS 23 Nov 2013)
units si
processors 1 1 2

# read the restart file
read_restart ../DEM/post/restart/liggghts.restart
Reading restart file ...
ERROR on proc 0: Cannot open restart file ../DEM/post/restart/liggghts.restart (../read_restart.cpp:148)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 12319 on
node mp-oldroyd exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
rm: cannot remove ‘*.eps’: No such file or directory

calc Ergun eqn:
muG = 0.0017820
dpErgun = 2561.0
final pressure drop = 2560.977345 Pa
loading ../postProcessing/probes/0/p ...
*** error: could not open "../postProcessing/probes/0/p" ...
error: totalPressureDropAndNusselt.m: A(I,J): column index out of bounds; value 2 out of bound 0

I have no clue what I have to do to make this run :(

help!

Heliana

Daniel Queteschiner | Wed, 03/02/2016 - 17:18

>> ERROR on proc 0: Cannot open restart file ../DEM/post/restart/liggghts.restart (../read_restart.cpp:148)

Did the initial DEM simulation run fine and produce the restart file in question?

heliana60 | Fri, 03/04/2016 - 12:58

Yeah, I fixed that problem. Apparently when I compiled the lagrangian libraries the link to LIGGGHTS was not set up right, so I made the link manually. LIGGGHTS now runs; the problem comes with the OpenFOAM run, the CFD part (I presume), because I get the following:

Setting up run ...
Memory usage per processor = 10.0318 Mbytes
Step Atoms KinEng rke Volume
150001 1005 0.062724024 6.7625435e-08 0.011
150002 1005 0.062728126 6.7639024e-08 0.011
Loop time of 0.000216961 on 2 procs for 1 steps with 1005 atoms

Pair time (%) = 0.000100493 (46.3187)
Neigh time (%) = 0 (0)
Comm time (%) = 2.69413e-05 (12.4176)
Outpt time (%) = 9.89437e-06 (4.56044)
Other time (%) = 7.96318e-05 (36.7033)

Nlocal: 502.5 ave 556 max 449 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 25.5 ave 28 max 23 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 1684.5 ave 1859 max 1510 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 3369
Ave neighs/atom = 3.35224
Neighbor list builds = 0
Dangerous builds = 0
Selecting IOModel basicIO
Selecting probeModel off
Selecting voidFractionModel divided
Selecting averagingModel dense
Selecting clockModel off
start clock measurement at t >0.005
Selecting smoothingModel off
Selecting meshMotionModel noMeshMotion

CFDEMcoupling version: cfdem-2.9.0
, compatible to LIGGGHTS version: 3.1.0
, compatible to OF version and build: 2.3.x-commit-4d6f4a3115ff76ec4154c580eb041bc95ba4ec09

You are currently using:
OF version: 2.3.x
OF build: 2.3.x-2f9138f6f49f
CFDEM build: 64f5-dirty

If BC are important, please provide volScalarFields -imp/expParticleForces-
ignoring ddt(voidfraction)
Selecting forceModel KochHillDrag
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for treatForceExplicit ...
treatForceExplicit = false
looking for implForceDEM ...
implForceDEM = false
looking for verbose ...
verbose = false
looking for interpolation ...
interpolation = false
looking for implForceDEMaccumulated ...
implForceDEMaccumulated = false
looking for scalarViscosity ...
scalarViscosity = false

Selecting forceModel LaEuScalarTemp
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for verbose ...
verbose = false
looking for interpolation ...
interpolation = false
looking for scalarViscosity ...
scalarViscosity = false

Selecting forceModel Archimedes
nrForceSubModels()=1
Selecting forceSubModel ImEx

reading switches for forceSubModel:ImEx
looking for treatForceExplicit ...
treatForceExplicit = false
looking for treatForceDEM ...
treatForceDEM = false

Selecting momCoupleModel implicitCouple

implicit momentum exchange field calculate if alphaP larger than : 1
Selecting liggghtsCommandModel runLiggghts ,provide dicts, numbered from 0 to n
Selecting liggghtsCommandModel execute ,provide dicts, numbered from 0 to n
liggghtsCommand set region total property/atom Temp 600.000000
cg is set to: 1
solving volume averaged Navier Stokes equations of type B

Starting time loop

Time = 0.005

Courant Number mean: 0.00133125 max: 0.0412913

Coupling...
Starting up LIGGGHTS
Executing command: 'run 500 '
run 500
Setting up run ...
Memory usage per processor = 10.0318 Mbytes
Step Atoms KinEng rke Volume
150002 1005 0.062728126 6.7639024e-08 0.011
CFD Coupling established at step 150100
CFD Coupling established at step 150200
CFD Coupling established at step 150300
CFD Coupling established at step 150400
CFD Coupling established at step 150500
150502 1005 0.064796 6.9012981e-08 0.011
Loop time of 0.114101 on 2 procs for 500 steps with 1005 atoms

Pair time (%) = 0.0495391 (43.4169)
Neigh time (%) = 0.00453043 (3.97055)
Comm time (%) = 0.0131272 (11.5049)
Outpt time (%) = 0.0108374 (9.49811)
Other time (%) = 0.0360668 (31.6095)

Nlocal: 502.5 ave 556 max 449 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 25.5 ave 28 max 23 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 1684.5 ave 1859 max 1510 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 3369
Ave neighs/atom = 3.35224
Neighbor list builds = 10
Dangerous builds = 0
Executing command: 'set region total property/atom Temp 600.000000 '
set region total property/atom Temp 600.000000
Setting atom values ...
1005 settings made for property/atom
LIGGGHTS finished
LIGGGHTS could not find property \F6x\FF to write data from calling program to.
ERROR on proc 0: This is fatal (../cfd_datacoupling_mpi.h:189)
LIGGGHTS could not find property X\8Fo\FF to write data from calling program to.
ERROR on proc 1: This is fatal (../cfd_datacoupling_mpi.h:189)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

//////////////////////////////

So there are some properties that cannot be found, perhaps because something is wrong with the installation/compilation? Has anyone seen something like this before?

Heliana


Nucleophobe | Thu, 03/30/2017 - 14:57

Hi everyone,

Sorry to bring up an old thread, but I am having the same problem as OP Rahul here in his first post. When I attempt to run a simulation (cfdemSolverIB), the program crashes at the "push" function in LIGGGHTS 'cfd_datacoupling_mpi.h':
LIGGGHTS could not find property � to write data from calling program to.

Note the "�" is different every time, and it is always nonsense. So, it appears the strings 'name' and 'type' passed to this function are referencing the wrong memory address?

I have tried multiple versions of OpenMPI (1.5.4, 1.8.5, 1.10.2), methodically recompiling OpenFOAM (2.3.1), LIGGGHTS (3.1.0), and CFDEM (2.7.1) each time, but I still get the error. One thing I haven't tried is changing compilers (currently using gcc 5.1).

Anyway, I know I should just upgrade my software, but I'm attempting to resurrect some old code specific to this combination of OpenFOAM/CFDEM versions.

Could anyone shed some light on the cause of this error, in general? It seems that a pointer is being dereferenced at the wrong address and pulling garbage out of memory... Could this be caused by compiler flags or something? Did something change between GCC 4 and GCC 5?

Any hints appreciated. Thanks!
-Nuc


Nucleophobe | Thu, 03/30/2017 - 19:50

Eureka!

It looks like there are some tricky things going on with the conversion between C++ and C strings. I narrowed down the problem to the function calls in "twoWayMPI.C", and then looked at the source code in the most recent version of CFDEMcoupling. It looks like the cast from C++ to C strings has since been updated, I assume to make the code more robust (and to avoid the problem I was having).

For posterity:

Old:
char* twoWayMPI::wordToChar(word& inWord) const
{
return const_cast(inWord.c_str());
//string HH = string(inWord);
//return const_cast(HH.c_str());
}

Fixed:
char* twoWayMPI::wordToChar(word& inWord) const
{
return const_cast(inWord.c_str());
//string HH = string(inWord);
//return const_cast(HH.c_str());
}
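
For anyone who wants to reproduce the failure mode outside CFDEM, here is a minimal standalone sketch (illustrative only, not CFDEM code). c_str() of a local std::string points into storage that is freed on return, so whatever later occupies that memory is printed as the "property name":

#include <cstdio>
#include <string>

// Illustrative only -- mimics the dangling-pointer bug, NOT actual CFDEM code.
char* badWordToChar(const std::string& inWord)
{
    std::string HH = inWord;               // local copy, destroyed on return
    return const_cast<char*>(HH.c_str());  // dangling pointer!
}

char* okWordToChar(std::string& inWord)
{
    return const_cast<char*>(inWord.c_str());  // valid while inWord lives
}

int main()
{
    std::string name("voidfraction");
    char* bad = badWordToChar(name);  // points at freed memory
    char* ok  = okWordToChar(name);
    std::printf("ok:  %s\n", ok);     // prints "voidfraction"
    std::printf("bad: %s\n", bad);    // undefined behaviour: usually garbage
    return 0;
}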

Regards--
-Nuc


Nucleophobe | Thu, 03/30/2017 - 20:35

In case this applies to anyone else: you should also update the other relevant references to "c_str()" (e.g., in liggghtsCommandModel)
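
If updating every call site one by one is error-prone, a more defensive pattern (a general C idiom, sketched here as an assumption, not CFDEM's actual fix) is to hand the C side a copy that owns its own storage:

#include <cstring>   // strdup (POSIX)
#include <string>

// General idiom, not CFDEM's actual fix: strdup() allocates a fresh buffer,
// so the returned pointer stays valid regardless of inWord's lifetime.
// The caller must eventually free() it, or each call leaks a small buffer.
char* toOwnedChar(const std::string& inWord)
{
    return strdup(inWord.c_str());
}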


j-kerbl | Wed, 04/05/2017 - 13:40

Hi Nuc,

thanks for sharing :)

Cheers,
Josef

Nathan | Thu, 12/13/2018 - 06:21

Hi Nuc and Josef,

I got exactly the same problem as Nuc after upgrading my OpenMPI. Something changed with MPI and that caused the problem. However, I could not get Nuc's solution to work: I changed the twoWayMPI.C file as Nuc suggested, but the problem still occurs. I also do not understand what to change in liggghtsCommandModel.

Please kindly advise.

Thank you,

regards,