Particle agglomeration in front of processor patch in parallel run

Submitted by ulrich on Tue, 08/28/2012 - 14:33

Hi,
I have a problem running some cases in parallel with cfdemSolverPiso. I distributed particles in a tube and ran the case on 8 mesh partitions, using a simple domain decomposition along the cylinder axis. The particles show unphysical behavior when they pass a processor patch: they agglomerate in front of it. Any ideas what might be wrong in my case setup? I am running this case with the latest LIGGGHTS version.
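For reference, my decomposition looks roughly like this (a sketch; the exact values are from my case, z being the cylinder axis):

  // system/decomposeParDict
  numberOfSubdomains 8;
  method          simple;
  simpleCoeffs
  {
      n           (1 1 8);   // 8 slices along the cylinder axis
      delta       0.001;
  }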
Thanks in advance.
Best regards
Ulrich

ulrich | Tue, 08/28/2012 - 22:40

Hi Chris,

thanks for the offer. I think I'm one step further: for the partitioning of the fluid domain I used 8 segments in z-direction, while in LIGGGHTS I used processors 8 1 1, which corresponds to a partitioning in x-direction. With that setup I see a sticking effect at the processor patches. If I also use 1 1 8 in LIGGGHTS, I get a smooth movement across the patches (but still some pressure jumps). Do you think it is likely that this mismatch causes the problem, or should the two partitionings be completely independent? I looked at the Ergun tutorial once more and found (1 1 2) in decomposeParDict but processors 2 1 1 for LIGGGHTS. Still, based on my own observations, I tend to believe that OpenFOAM enforces its partitioning on the particles as well.
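For clarity, the combination that gives me smooth particle motion across the patches is this one (sketch, both decompositions sliced along z):

  // system/decomposeParDict
  simpleCoeffs
  {
      n           (1 1 8);
      delta       0.001;
  }

  # LIGGGHTS input script
  processors 1 1 8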

I can prepare the case for an upload but I can also try to dig deeper and share my experience here.

Best regards
Ulrich

cgoniva | Thu, 08/30/2012 - 09:12

Hi Ulrich,

the partitioning of CFDEMcoupling and LIGGGHTS should be completely independent. It would be great if you could prepare a test case; then we can have a closer look.

Cheers,
Chris

ulrich | Thu, 08/30/2012 - 14:17

Hi Chris,

it seems you are right. Running the case further, disturbances still occur when the particles pass the processor patches.
I added the case to my first post; I think it is not possible to add new attachments to a reply. I hope I prepared everything so that you only have to execute the Allrun script. For post-processing, I loaded the decomposed case into ParaView, added the particles, and generated sphere glyphs scaled to the particle size. I hope this isn't a post-processing issue.

Thanks a lot.

Best regards
Ulrich

cgoniva | Thu, 09/06/2012 - 11:20

Hi Ulrich,

unfortunately I cannot reproduce your problem, as the case you posted blows up after 0.0003... seconds?

Cheers,
Chris

ulrich | Thu, 09/06/2012 - 14:45

Hi Chris,
sorry for the problem. This is very strange; the case runs here.
Do we use the same versions?
I did the following:
- I downloaded again my case from your server.
- I renamed the package and unpacked it.
- I ran it with Allrun.

Please find the attached log file (test.dat) at the first message.

Best regards

Ulrich

cheng1988sjtu | Sat, 01/28/2017 - 23:51

Hi

I was searching for a solution to my problem and found this topic; I have the exact same problem. When we simulate particles (say, neutrally buoyant) and decompose the domain in the vertical direction into 4 processors, I get 3 concentration jumps, right at the processor interfaces; if I decompose into 8 processors, I get 7, and so on. This means the problem is caused by the parallel run. Did you ever find out why, and is there a solution for this?
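To make the setup concrete, the decomposition I mean is of this kind (a sketch; here y is my vertical direction):

  // system/decomposeParDict
  numberOfSubdomains 4;
  method          simple;
  simpleCoeffs
  {
      n           (1 4 1);   // 4 slices along the vertical -> 3 jumps
      delta       0.001;
  }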

Thanks!

C.Z. U of D

Ruturaj Deshpande | Thu, 02/16/2017 - 14:06

Dear C.Z. U of D,

were you able to solve the "JUMP" issue when particles cross the processor boundary?

Thank you
Ruturaj

Ruturaj Deshpande | Mon, 02/13/2017 - 23:01

Recently, I also encountered a similar problem. In my case, the velocity of the particles decreases after they cross the processor boundary.

There is no physical reason for this to happen.

I am sure this is due to the communication across the processor boundaries in the fluid solver.

Can someone guide me to the part of the CFDEM code that deals with parallel communication?

Alternatively, any direct solution would be highly appreciated.
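For what it's worth, my current understanding (taken from the tutorial cases, so please correct me if I am wrong) is that the MPI exchange between the CFD side and LIGGGHTS is selected via the dataExchangeModel entry in constant/couplingProperties:

  // constant/couplingProperties (excerpt; the liggghtsPath below is
  // just an example from my setup)
  dataExchangeModel twoWayMPI;

  twoWayMPIProps
  {
      liggghtsPath "../DEM/in.liggghts_run";
  }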

Thank you
Ruturaj