Hi all,
I am running simulations on a Red Hat cluster and found that the number of particles added during a run depends on the number of CPUs I use.
For example, with 8 cores I get 44K inserted particles, but with 16 cores and the exact same in-file I only get 32K. I also get an error about the timestep suddenly being too large, which I never encountered running on my local machine with 8 cores.
Is this a known problem? How can this be fixed?
Thanks for your suggestions!
mschramm | Thu, 05/06/2021 - 05:42
Processor boundaries
You are running into processor (subdomain) boundaries.
When inserting a sphere, LIGGGHTS will not allow the sphere to overlap a subdomain border (unless you disable this check: see check_dist_from_subdomain_border in https://www.cfdem.com/media/DEM/docu/fix_insert_pack.html). With more cores you have more subdomains, and therefore more border region in which insertion attempts are rejected, which is why fewer particles get inserted on 16 cores than on 8. An example (greatly exaggerated due to bonded particles) can be seen at the end of the following pdf:
https://github.com/schrummy14/LIGGGHTS_Flexible_Fibers/blob/master/examp...
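If you want to allow insertion right up to the borders, something like the following should work. This is only a minimal sketch, not a drop-in fix: pdd1 and ins_reg are placeholder names for a particle distribution template and an insertion region that would be defined elsewhere in your in-file, and the seed/particle count are just examples.

    # placeholders: pdd1 = particle distribution template, ins_reg = insertion region
    fix ins all insert/pack seed 5333 distributiontemplate pdd1 &
        insert_every once overlapcheck yes all_in yes &
        particles_in_region 44000 region ins_reg &
        check_dist_from_subdomain_border no

Be aware that with the check disabled, insertions near a processor border are no longer rejected, which (per the docs) can lead to overlapping particles at subdomain borders, so use it with care.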
Off the top of my head, I don't know how to solve the timestep error/warning.
Could you post the error message?