From weeks to hours – how GPU technology is changing the face of Discrete Element Modeling

It is well known across the CAE industry that DEM is a computationally expensive method. Simulation times can range from a few hours to a few weeks, and there is no secret behind this: real-life bulk materials can consist of an almost unlimited number of what we call “particles”. When we build a numerical model of these materials, the particle count is usually reduced to make the simulation affordable to run. Even so, most of the time we are still talking about hundreds of thousands or several million particles, and for every one of them the potential contacts must be searched for and, where they exist, resolved, at every timestep and for as long as we want to simulate.
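To see where that cost comes from, here is a minimal serial sketch of one DEM timestep in C++ (hypothetical names and a simple linear-spring contact law, not EDEM's actual internals):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal serial DEM timestep sketch (hypothetical, not EDEM's internals).
struct Particle {
    double x[3], v[3], f[3]; // position, velocity, accumulated force
    double radius, mass;
};

void resolveContact(Particle& a, Particle& b, double stiffness) {
    double d[3], dist2 = 0.0;
    for (int k = 0; k < 3; ++k) { d[k] = b.x[k] - a.x[k]; dist2 += d[k] * d[k]; }
    double dist = std::sqrt(dist2);
    double overlap = a.radius + b.radius - dist;
    if (overlap <= 0.0 || dist == 0.0) return;           // spheres not touching
    for (int k = 0; k < 3; ++k) {
        double fn = stiffness * overlap * (d[k] / dist); // normal force component
        a.f[k] -= fn;
        b.f[k] += fn;
    }
}

void timestep(std::vector<Particle>& ps, double dt, double stiffness) {
    for (auto& p : ps) p.f[0] = p.f[1] = p.f[2] = 0.0;
    // All-pairs contact search, O(n^2); production codes cut the search
    // down with spatial grids, but the per-contact work remains.
    for (std::size_t i = 0; i < ps.size(); ++i)
        for (std::size_t j = i + 1; j < ps.size(); ++j)
            resolveContact(ps[i], ps[j], stiffness);
    for (auto& p : ps)                                   // explicit integration
        for (int k = 0; k < 3; ++k) {
            p.v[k] += dt * p.f[k] / p.mass;
            p.x[k] += dt * p.v[k];
        }
}
```

With millions of particles, millions of contact checks like these repeat at every timestep, which is why runtimes stretch into days and weeks.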

Luckily, it is also well known that the GPU industry is moving at a very fast pace, releasing cards every year capable of faster and better image and video processing. This progress mostly benefits the gaming industry and other sectors in need of powerful visualization, areas in constant evolution and a major driver of graphics card development. With all the remarkable progress GPU cards have been showing, it was only a matter of time before the technology reached the world of engineering software and scientific computing.

Harness the power of GPU for your DEM simulations!

EDEM, as the leading commercial DEM code on the market, has witnessed first-hand the evolution of the tools available for heavy computation. It all started 11 years ago with the first EDEM release, which ran DEM simulations using multiprocessor technology: all the CPU cores available in the computer would share the simulation tasks (particle generation, contact detection, movement of geometries, contact calculation and so on) across several processing units. Of course, within each core these tasks still had to be handled one after the other, for every particle and for every timestep.
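As an illustration of this task spreading (a hypothetical sketch, not EDEM's actual scheduler), the contact-solving task can be distributed across CPU cores with OpenMP; each core still works through its share of the contacts serially:

```cpp
#include <omp.h>
#include <vector>

struct Contact { int i, j; double overlap; }; // a detected contact pair

// Hypothetical sketch: the timestep's detected contacts are split across
// the available CPU cores. Within each core, contacts are still solved
// one after the other.
void solveContacts(const std::vector<Contact>& contacts,
                   std::vector<double>& normalForce, double stiffness) {
    #pragma omp parallel for
    for (long c = 0; c < (long)contacts.size(); ++c)
        normalForce[c] = stiffness * contacts[c].overlap; // linear spring law
}
```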

Over the years, EDEM’s solver engine has been continuously refined to improve how the different simulation tasks are distributed across processors, delivering faster performance and much better scalability, meaning simulations speed up as you increase the number of processors.

With the release of EDEM 2017 in the summer of 2016, GPU technology was incorporated into the EDEM solver. This meant that all the aforementioned tasks could now be spread across hundreds or thousands of compute units rather than a handful of CPU cores. Take the particle contact calculation, for instance: on the CPU, a certain number of cores are assigned to solve the forces for all the contacts existing at each timestep, so a long queue of detected contacts builds up on every assigned processor and is worked through one contact after another. On the GPU the idea is the same, but with many more processing units working on the task simultaneously, which translates into faster simulation progress.
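In CUDA terms the contrast looks roughly like this: instead of each core working through a queue, one GPU thread handles one detected contact, so thousands of contacts are resolved at once. This is a hypothetical sketch with an invented data layout and a simple force law, not EDEM's actual kernel:

```cuda
// Hypothetical CUDA sketch (not EDEM's actual kernel): one thread
// resolves one detected contact pair, so thousands of contacts are
// processed in parallel instead of queuing on a few CPU cores.
__global__ void resolveContacts(const int2* pairs, const double3* pos,
                                const double* radius, double3* force,
                                double stiffness, int numContacts) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= numContacts) return;

    int i = pairs[c].x, j = pairs[c].y;
    double dx = pos[j].x - pos[i].x;
    double dy = pos[j].y - pos[i].y;
    double dz = pos[j].z - pos[i].z;
    double dist = sqrt(dx * dx + dy * dy + dz * dz);
    double overlap = radius[i] + radius[j] - dist;
    if (overlap <= 0.0) return;                 // pair not actually touching

    double fn = stiffness * overlap / dist;     // linear-spring normal force
    // Atomic adds because several contacts can involve the same particle;
    // double-precision atomicAdd needs a Pascal-class GPU such as the GP100.
    atomicAdd(&force[i].x, -fn * dx);
    atomicAdd(&force[i].y, -fn * dy);
    atomicAdd(&force[i].z, -fn * dz);
    atomicAdd(&force[j].x,  fn * dx);
    atomicAdd(&force[j].y,  fn * dy);
    atomicAdd(&force[j].z,  fn * dz);
}
// Launched with e.g. resolveContacts<<<(numContacts + 255) / 256, 256>>>(...)
```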

The best way to see the power of GPU technology applied to DEM is through an example. A blast furnace transforms inputs such as fuel, ores and limestone into iron. How these materials are distributed inside the furnace affects the quality of the smelting process and of the final material. Predicting the trajectories of the components through the transfer points of the system is therefore key, and it is also a very suitable DEM application in terms of material type and flow behavior. The silo below is part of the charging system of a furnace in the heart of China.

EDEM simulation of silo discharge: 11 million particles (33 million spheres)

The material handled by this equipment requires a very large number of particles in a DEM model: we are talking several million. This has implications for both the number of calculations and the RAM required, meaning the CPU simulation would take several weeks to run. However, with the powerful EDEM 2017.2 GPU solver engine and the new NVIDIA GP100 card, the application has been simulated and post-processed with what looks like almost no effort.

For a total of 20 seconds of real time, this silo was filled with 11 million particles (33 million spheres) of 4 different sizes and then partly discharged, with the whole process simulated in 70 hours. You can see the post-processed animation, measuring mass flow rate at the outlet, in the following video:

11 million particles simulated in only 70 hours using the EDEM 2017.2 GPU solver engine and an NVIDIA GP100 card

Not only is the calculation speed of the 3,584 CUDA cores in the GP100 key; its 16 GB of memory also allows simulations with a large number of particles and a big calculation domain. Furthermore, one of the most powerful features of the GP100 is its ability to process double-precision calculations at high speed, since it has been specifically designed for that purpose.

Single versus double precision determines how many digits are carried in values such as force, displacement or overlap. In most cases single precision is enough to provide accurate simulations, but in some scenarios double precision may be required to keep the results stable. Imagine, for example, extremely small particles: when they come into contact, their overlap can be a very small number that has to be rounded when stored in single precision. Double precision keeps roughly twice as many digits, allowing EDEM to maintain even more precision in the calculations.
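A quick standalone C++ example (not EDEM code) shows the effect: a tiny per-timestep displacement applied to a particle sitting a few metres from the domain origin simply vanishes in single precision:

```cpp
#include <cstdio>

int main() {
    // A particle ~10 m from the origin receives a 1e-8 m displacement
    // per timestep (tiny particle, tiny timestep).
    float  xf = 10.0f;
    double xd = 10.0;
    for (int step = 0; step < 1000; ++step) {
        xf += 1.0e-8f; // below float's resolution at this magnitude: lost
        xd += 1.0e-8;  // double carries enough digits to keep it
    }
    std::printf("float : %.9f\n", (double)xf); // prints 10.000000000
    std::printf("double: %.9f\n", xd);         // prints 10.000010000
    return 0;
}
```

The single-precision particle never moves at all, while the double-precision one accumulates the expected 1e-5 m of displacement.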

While EDEM 2017.2 uses a hybrid of single and double precision, EDEM 2018 (released in October 2017) includes a full double-precision GPU solver engine, giving maximum stability to the simulations and extracting even more power from cards such as the NVIDIA GP100. This takes an already game-changing combination further: with EDEM GPU and an NVIDIA GP100 you can set your 20-second, 11-million-particle simulation running on a Friday before leaving the office and pick up the results on Monday!

