Dear community,
I am simulating the packing of a chromatography column at the Technical University of Munich. For this I use CFDEMcoupling 2.7.1, LIGGGHTS 3.0.2 and OpenFOAM 2.3.x. I have already built some working small-scale models (40,000 particles, 4% of the volume of the full-scale experiment) with a discrete particle size distribution based on the Ergun Test Case (reversed flow).
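For context, the particle setup in my LIGGGHTS input script follows roughly the pattern below. This is only a sketch with placeholder seeds, fix IDs and values (the region name "bed" is hypothetical), not my actual script:

# two sphere templates making up the discrete size distribution (placeholder values)
fix pts1 all particletemplate/sphere 15485863 atom_type 1 density constant 1190 radius constant 0.00004
fix pts2 all particletemplate/sphere 15485867 atom_type 1 density constant 1190 radius constant 0.00006
# combine the two templates with equal weights into one distribution
fix pdd1 all particledistributiontemplate 32452843 2 pts1 0.5 pts2 0.5
# pack 40,000 particles into the column region in a single insertion step
fix ins all insert/pack seed 49979687 distributiontemplate pdd1 insert_every once overlapcheck yes all_in yes particles_in_region 40000 region bed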
Now I want to scale up the geometry and increase the number of particles proportionally (the CFD cell size and the particle size are kept constant). With a smaller number of particles in the scaled-up geometry the simulation runs fine, but when I increase the number of particles to 1,000,000 the simulation crashes during coupling (with 500,000 particles it still works):
LIGGGHTS finished
Coupling...
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 14209 on node lxa6 exited on signal 9 (Killed).
--------------------------------------------------------------------------
It seems to be an issue with parallelization. I am working on a cluster with 16 cores per node. If I use 20 processors for the simulation, it works with 1,000,000 particles, but if I increase the number of processors, the simulation only works with 500,000 particles.
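For reference, on the LIGGGHTS side the processor grid can be set explicitly with the processors command, and the OpenFOAM case has to be decomposed for the same number of subdomains. A minimal sketch for the 20-processor run (the 5 x 2 x 2 grid is illustrative, not my exact layout):

# sketch: 5*2*2 = 20 MPI ranks on the DEM side; this must match
# numberOfSubdomains in the OpenFOAM decomposeParDict
processors 5 2 2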
Does anyone have an idea how I could approach this problem?
Thanks in advance for your help.
Cheers
Philipp