Particle not leaving the domain

Submitted by shawnwuch on Thu, 04/23/2015 - 11:47

Hi all,
I am trying to simulate particulate flow in a cylindrical pipe that has one inlet, one outlet, and two face patches on the side to which I assigned a constant velocity. I am using cfdemSolverPiso for this problem. When running my DEM code on its own, I had no trouble seeing particles leave the domain at the outlet (fixed boundary) or return to the inlet (periodic boundary), and my CFD code also ran fine on its own. But in the coupled simulation the particles always slow down and accumulate at the outlet; none of them leaves the domain as intended. It seems to me that the particles are blocked by some surface in the coupled simulation, but I could not find out what causes this.
Has anyone had a similar experience and is willing to share some thoughts? Thank you very much!

shawnwuch | Thu, 04/23/2015 - 13:03

OK, I found a way to get around that problem by using a smaller DEM domain, but the reason is still unclear to me. There seems to be an interplay between the DEM and CFD domain sizes. I would really appreciate it if anyone could explain this, or suggest a better way to have particles leave the domain or recirculate back to the inlet.

Nucleophobe | Wed, 05/13/2015 - 20:16

Shawnwuch,

I am experiencing the same behavior using CFD coupling with a continuous insertion of particles. Were you able to identify the source of this problem? It seems like some non-physical drag develops as the particles start to leave the domain, preventing them from fully exiting the outlet...

I am running with a smaller DEM domain now, as you suggested, to see if that helps.

Thanks,
-Nuc

Nucleophobe | Fri, 05/15/2015 - 08:49

In case anyone else has this problem, running with a smaller DEM domain also worked for me. I used the fixed boundary in the x-direction for my case (boundary f m m) to set limits on the x-position of the particles (i.e., if a particle leaves the range between xmin and xmax, it is deleted).
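For reference, here is a minimal sketch of the relevant DEM input lines for this approach. The region name, box extents, and atom-type count are placeholders rather than values from my actual case:

# non-periodic, fixed boundary in x; shrink-wrapped (minimum) boundaries in y and z
boundary        f m m

# DEM box; the x-limits are chosen to sit just inside the CFD inlet/outlet (illustrative values)
region          reg block -0.05 0.45 -0.05 0.05 -0.05 0.05 units box
create_box      1 reg

# continue the run instead of aborting when atoms cross the fixed x-boundary and are lost
thermo_modify   lost ignore norm no

With a fixed boundary, particles that move past xmin or xmax are dropped at the next reneighboring, which is what removes them once they exit.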

Another note: for some reason, I needed to increase 'maxNumberOfParticles' (under 'twoWayMPIProps') in the 'constant/couplingProperties' file. I had it set to 10 while testing and the first two particles were inserted successfully, but when the third particle was inserted, the second particle was deleted (?). Each additional particle insertion always resulted in a particle deletion regardless of the position of the deleted particle. Anyway, I increased 'maxNumberOfParticles' to 1000 and it seems to be working now. Very odd.
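For anyone searching later, this is roughly the part of constant/couplingProperties I changed; treat it as a sketch, since the exact entries (for example the liggghtsPath value) depend on your case setup and CFDEM version:

dataExchangeModel   twoWayMPI;

twoWayMPIProps
{
    // upper bound used to size the coupling arrays (older CFDEM versions)
    maxNumberOfParticles    1000;

    // path to the LIGGGHTS input script (illustrative)
    liggghtsPath            "../DEM/in.liggghts_run";
}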
Thanks,
-Nuc

Daniel Queteschiner | Mon, 05/18/2015 - 12:45

>> Anyway, I increased 'maxNumberOfParticles' to 1000 and it seems to be working now. Very odd.

That's indeed very strange, especially since the parameter maxNumberOfParticles isn't used anymore in the current version of the twoWayMPI data exchange model ...

Nucleophobe | Tue, 05/19/2015 - 19:25

Dan,

After investigating further, it looks like the behavior is still occurring. I mistakenly thought that changing the 'maxNumberOfParticles' parameter had fixed the problem, but the behavior is more random than I originally thought.

Anyway, I started a new thread in the bug reporting section with more details. The only pattern I have identified is that "small" particles are erroneously deleted more frequently than larger particles.
-Nuc

shawnwuch | Wed, 06/03/2015 - 06:47

Hi Nuc and CFDEM users,

I also experienced some problems with the maxNumberOfParticles setting. Now I am using 1M, an arbitrarily chosen large number, to avoid them, and the particles seem to persist until they flow out of the domain.
However, because I am simulating a dense particulate flow, I found out that even after the particles have left the domain, they still seem to occupy some of the hardware memory. Hence, eventually there is an error that looks like this:
...
insertion: proc 28 at 90 %
insertion: proc 28 at 100 %
CFD Coupling established at step 6002800
Step Atoms KinEng RotE ts[1] ts[2] Volume
6002800 605394 151634.1 81.544819 0.025126228 0.0093965634 1596.771
6002801 605394 151634.49 81.542911 0.025126228 0.0093965634 1596.771
Loop time of 2.61396 on 36 procs for 100 steps with 605394 atoms
LIGGGHTS finished
ERROR on proc 20: Failed to reallocate 12493232 bytes for array CfdDatacouplingMPI:data (../memory.cpp:70)
ERROR on proc 21: Failed to reallocate 12493232 bytes for array CfdDatacouplingMPI:data (../memory.cpp:70)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 22 in communicator MPI COMMUNICATOR 3 SPLIT FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

I haven't found a solution to this one yet. I guess I will need to check what the array CfdDatacouplingMPI:data stores first, and perhaps find more memory for the simulation. Please share your experience if you've seen this. Thanks in advance.

Shawn