Hi,
I have a problem running some cases in parallel with cfdemSolverPiso. I distributed particles in a tube and ran the case on 8 mesh partitions, using a simple domain decomposition along the cylinder axis. The particles show unphysical behavior when they pass the processor patch: they agglomerate in front of it. Any ideas what might be wrong in my case setup? I am running this case with the latest LIGGGHTS version.
Thanks in advance.
Best regards
Ulrich
Attachment | Size
---|---
(attachment) | 425.29 KB
(attachment) | 12.68 MB
(attachment) | 1.27 MB
cgoniva | Tue, 08/28/2012 - 16:18
Hi Ulrich,
could you please post the case, then we can have a look at it.
Cheers,
Chris
ulrich | Tue, 08/28/2012 - 22:40
Hi Chris,
thanks for the offer. I think I'm one step further: in the partitioning of the fluid domain I used 8 segments in z-direction, while in LIGGGHTS I used "processors 8 1 1", which corresponds to a partitioning in x-direction. With this setup I see a sticking effect at the processor patches. If I also use "1 1 8" in LIGGGHTS, I get a smooth movement across the patches (but still some pressure jumps). Do you think it is likely that this mismatch causes the problem, or should the partitioning be completely independent? I looked at the Ergun tutorial once more and found (1 1 2) in decomposeParDict but "processors 2 1 1" for LIGGGHTS. So I tend to believe that OpenFOAM enforces its partitioning on the particles as well.
I can prepare the case for an upload but I can also try to dig deeper and share my experience here.
Best regards
Ulrich
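For reference, matching the two decompositions along z would look roughly like this. This is only a sketch: the delta value and the dictionary layout are standard OpenFOAM boilerplate, not taken from Ulrich's actual case files.

```
// system/decomposeParDict (OpenFOAM/CFDEMcoupling side):
// slice the mesh into 8 segments along z
numberOfSubdomains 8;

method          simple;

simpleCoeffs
{
    n       (1 1 8);   // (nx ny nz)
    delta   0.001;
}
```

```
# LIGGGHTS input script: use the same 1 x 1 x 8 grid for the DEM side
processors 1 1 8
```

Whether the two grids must match is exactly the open question in this thread; keeping them identical at least removes one variable.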
cgoniva | Thu, 08/30/2012 - 09:12
Hi Ulrich,
the partitioning of CFDEMcoupling and LIGGGHTS should be completely independent. It would be great if you could prepare a test case, then we can have a closer look.
Cheers,
Chris
ulrich | Thu, 08/30/2012 - 14:17
Hi Chris,
it seems you are right. Running the case further, there are still some disturbances when the particles pass the processor patches.
I added the case to my first post; I think it is not possible to add new attachments to a reply. I hope I prepared everything so that you only have to execute the Allrun script. For post-processing, I loaded the decomposed case into ParaView, added the particles and generated glyph spheres of particle size. I hope this isn't a post-processing issue.
Thanks a lot.
Best regards
Ulrich
cgoniva | Thu, 09/06/2012 - 11:20
Hi Ulrich,
unfortunately I cannot reproduce your problem, as the case you posted blows up after 0.0003... seconds?
Cheers,
Chris
ulrich | Thu, 09/06/2012 - 14:45
Hi Chris,
sorry for the trouble. This is very strange; the case runs fine here.
Do we use the same versions?
I did the following:
- I downloaded my case again from your server.
- I renamed the package and unpacked it.
- I ran it with Allrun.
Please find the attached log file (test.dat) at the first message.
Best regards
Ulrich
cheng1988sjtu | Sat, 01/28/2017 - 23:51
I have the same problem, do you have a solution?
Hi
I searched for a solution to my problem and found this topic. I have the exact same problem: when we simulate particles (say neutrally buoyant) and decompose the domain in the vertical direction onto 4 processors, I get 3 concentration jumps right at the processor interfaces; with 8 processors I get 7, and so on. This means the problem is caused by the parallel run. Did you happen to find out why, and is there a solution for this?
Thanks!
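A quick way to quantify such jumps is to histogram the particle positions along the decomposition axis and flag bins whose count is far above the median. The sketch below does this on synthetic data; reading the actual LIGGGHTS dump file is left out, and the pile-up location, bin count and threshold factor are all assumptions chosen for illustration, not values from this thread.

```python
import numpy as np

def concentration_profile(z, z_min, z_max, nbins=40):
    """Particle counts per bin along the decomposition axis z."""
    counts, edges = np.histogram(z, bins=nbins, range=(z_min, z_max))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts

def find_jumps(counts, factor=2.0):
    """Indices of bins holding `factor` times more particles than the
    median bin -- a crude flag for agglomeration at a patch."""
    med = np.median(counts)
    return [i for i, c in enumerate(counts) if c > factor * med]

# Synthetic check: uniformly distributed particles plus a pile-up at
# z = 0.5, mimicking agglomeration in front of a processor patch.
rng = np.random.default_rng(0)
z = np.concatenate([rng.uniform(0.0, 1.0, 2000),
                    rng.normal(0.5, 0.005, 400)])
centers, counts = concentration_profile(z, 0.0, 1.0)
jumps = find_jumps(counts)
print([round(float(centers[i]), 3) for i in jumps])
```

If the flagged bin centers line up with the known processor-patch positions, that confirms the jumps are decomposition artifacts rather than physics.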
Ruturaj Deshpande | Thu, 02/16/2017 - 14:06
Dear C.Z. U of D,
were you able to solve the "jump" issue when particles cross the processor boundary?
Thank you
Ruturaj
cgoniva | Mon, 01/30/2017 - 16:40
Particle agglomeration in front of processor patch in parallel
Dear C.Z. U of D,
can you provide a test case showing the problem with max. 1000 particles and attach an image of the problem?
Without that it won't be possible to give proper advice.
Best regards,
Christoph
Ruturaj Deshpande | Mon, 02/13/2017 - 23:01
Recently, I also encountered a similar problem. In my case the velocity of the particles decreases after they cross the processor boundary.
There is no physical reason for this to happen.
I am sure this is due to the communication across the processor boundaries in the fluid solver.
Can someone point me to the part of the CFDEM code that deals with parallel communication?
Any direct solution is also highly appreciated.
Thank you
Ruturaj