Dear friends,
I am running a simulation with one big moving particle and about 8000 bed particles. Different runs give different results for the big particle's position in the z direction, even when the total number of processors is the same and only the processor grid changes, e.g. px py pz = 1 2 3, 2 1 3, or 3 1 2. I therefore suspect the differences are related to the parallelization. I have attached my scripts and the run results. What is wrong with my scripts? How should I choose the number of cores to get the most accurate result, and how should I set up the parallel run?
In addition, I tried to run the in.conveyor example shipped with LIGGGHTS, and there too the results differ depending on the number of processors.
I have been struggling with this problem for a month. Does anyone have any idea what is going on? Thanks in advance for your replies,
Best regards,
Ping
Daniel Queteschiner | Wed, 06/13/2018 - 13:01
What do you mean by "One or
What exactly do you mean by "different results for the big particle's position in the z direction" — different between repeated runs with the same processor grid, or only between different grids? Please clarify.
Typically, differences between parallel runs stem from particle insertion. In a parallel run, particles cannot be inserted close to the sub-domain boundaries: the distance to any sub-domain boundary must be at least the maximum particle radius. Of course, the sub-domain boundaries change when you change how the domain is subdivided.
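To at least make runs with the same core count comparable, one option (a minimal sketch of mine, not from the original posts) is to pin the MPI process grid explicitly with the `processors` command, so the sub-domain layout, and hence the insertion-excluded zones near the boundaries, is identical from run to run. The script name `in.bigparticle` and the executable name `liggghts` below are assumptions about your setup:

```
# Minimal sketch: fix the domain decomposition to a 2 x 1 x 3 process grid
# so that every 6-core run uses the same sub-domain boundaries.
processors      2 1 3

# ... remainder of the usual input script (units, atom_style, pair_style,
# insertion fixes, run) goes here unchanged ...
```

launched e.g. as `mpirun -np 6 liggghts -in in.bigparticle` (the binary name depends on your build). Note that even then, bit-identical trajectories across *different* grids are generally not to be expected: the order of floating-point summation changes with the decomposition, and granular systems amplify such round-off differences over time.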
NvNR | Sat, 10/13/2018 - 08:45
Different results with different number of processors
I am running simulations with 8000 particles on different machines with different numbers of processors, and I get different results every time. This happens with a coarser fluid grid, whereas for the same case with the same number of particles, runs on different machines and processor counts give almost the same results with a fine grid. Any suggestions as to why the results differ when I use a coarser grid for the fluid part?