Problem with lpp command

Submitted by nicolasoviedoc on Mon, 06/05/2017 - 16:05

Greetings, I recently finished a simulation in LIGGGHTS 3.6, but when I try to post-process the output using "lpp", the process freezes. I used the cpunum option to reduce the number of CPU cores, but it still freezes and no error message is shown. I am still waiting for the process to resume, but more than 7 hours have passed and nothing has happened.

I have pasted the lines from my console below.

simulaciones@simulaciones-virtual-machine:~/Descargas/MPO/post$ lpp dump*.particles
starting LIGGGHTS memory optimized parallel post processing
chunksize: 8 --> 8 files are processed per chunk. If you run out of memory reduce chunksize.
Working with 16 processes...
calculating chunks 1 - 16 of 170
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled
dump is already unscaled

I hope someone can help me out.

Best regards

Nicolás.


j-kerbl | Tue, 06/06/2017 - 14:46

Hi Nicolás,

that is unusual. Could you please try a few things:
1.) If the simulation is rather big, memory usage can get very high during the lpp run when many files are processed. Please track this during the post-processing with a system monitor or top.
2.) You can try to avoid this by reducing the chunksize. This option is similar to ncpus, but it sets the number of files processed per chunk. You can also check what happens if you process only a single file, e.g. dump10000.particles, instead of dump*.
3.) If these reductions do not help, please tell me how many particles were in your simulation and which Python version you are using.
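For point 1.), a quick way to take a one-shot memory reading is with ps. This is just a sketch: it uses the shell's own PID ($$) as a stand-in for the lpp process ID, which you would look up with something like `pgrep -f lpp`.

```shell
# Print the resident memory (RSS, in kB) of a process.
# $$ is the current shell's PID, used here only as a stand-in;
# replace it with the PID of the running lpp process,
# e.g. obtained via: pgrep -f lpp
ps -o rss= -p $$
```

Repeating this while lpp runs (or watching the same columns in top) shows whether memory keeps growing until the machine starts swapping, which would explain the apparent freeze.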

Cheers,
Josef

nicolasoviedoc | Thu, 06/08/2017 - 21:29

Hi Josef,

I reduced the chunk size: by default the chunksize is 8, and I reduced it to 1. I also reduced the number of CPUs (I have 16 and set it to 8), and now it works perfectly.
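For anyone finding this thread later, the working invocation would look something like the sketch below. The option names --chunksize and --cpunum are assumed from the discussion above; the exact spelling may differ between lpp versions, so check lpp --help first.

```shell
# Process one file per chunk on 8 cores instead of the defaults
# (1 instead of chunksize 8, 8 workers instead of all 16 cores).
# Option names are an assumption -- verify with: lpp --help
lpp --cpunum 8 --chunksize 1 dump*.particles
```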

Thank you very much for the help!