When viewing the Technical Program schedule, the far right-hand column
is labeled "PLANNER." Use this planner to build your own schedule. To
add an event to your personal schedule, click the calendar icon of your
choice (Outlook, iCal or Google Calendar) and the event will be stored
there. As you select events in this way, you will build your own
schedule to guide you through the week.
You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan. Continue until you have built your complete schedule; all of your selected events will appear under "My Event Planner" on your smartphone.
Passing The Three Trillion Particle Limit With An Error-Controlled Fast Multipole Method
SESSION: Research Poster Reception
EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster
TIME: 5:15PM - 7:00PM
SESSION CHAIR: Bernd Mohr
AUTHOR(S): Ivo Kabadshow, Holger Dachsel, Jeff Hammond
ROOM: WSCC North Galleria 2nd/3rd Floors
ABSTRACT: We present an error-controlled, highly scalable FMM implementation for
long-range interactions of particle systems with open, 1D, 2D and 3D periodic boundary
conditions. We highlight three aspects of fast summation codes that are not fully addressed
in most articles: memory consumption, error control and runtime minimization. The aim of this poster is to contribute to all three of these points in the context of modern large-scale parallel machines. In particular, we discuss the data structures used, the parallelization approach and the precision-dependent parameter optimization.
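As a toy illustration of the error-control aspect (this is not the poster's code or algorithm; the one-dimensional setup and function names are hypothetical), a truncated far-field multipole expansion shows how the chosen expansion order bounds the approximation error, with p = 2 corresponding to keeping terms up to quadrupole order:

```python
# Minimal sketch: truncated multipole expansion of the 1/r potential for
# charges on a line, evaluated at a well-separated point. The expansion order
# p controls the error (p = 0 monopole, p = 1 dipole, p = 2 quadrupole, ...).
import random

def direct_potential(R, charges, positions):
    """Exact potential at distance R from the expansion center."""
    return sum(q / (R - x) for q, x in zip(charges, positions))

def multipole_potential(R, charges, positions, p):
    """Far-field expansion 1/(R - x) = sum_l x**l / R**(l+1), truncated at order p."""
    moments = [sum(q * x**l for q, x in zip(charges, positions)) for l in range(p + 1)]
    return sum(M / R**(l + 1) for l, M in enumerate(moments))

random.seed(0)
charges = [random.choice((-1.0, 1.0)) for _ in range(100)]
positions = [random.uniform(-0.5, 0.5) for _ in range(100)]  # particles near the center

R = 10.0  # well-separated evaluation point
exact = direct_potential(R, charges, positions)
for p in (0, 1, 2, 4, 8):
    approx = multipole_potential(R, charges, positions, p)
    print(f"p = {p}: relative error = {abs(approx - exact) / abs(exact):.2e}")
```

Raising p drives the relative error down geometrically, which is the general trade-off between precision and runtime that the precision-dependent parameter optimization exploits.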
The current code is able to compute all mutual long-range interactions of more than three
trillion particles on 294,912 BG/P cores within a few minutes for an expansion up to quadrupoles. The maximum memory footprint of such a computation has been reduced to less than 45 bytes per particle. The code employs a one-sided, non-blocking parallelization approach with a small communication overhead.
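To put the reported footprint in perspective, a rough back-of-the-envelope check (the cores-per-node and per-node memory figures below are assumptions about a JUGENE-class BG/P system, not taken from the abstract):

```python
# Aggregate memory implied by 45 bytes per particle at the reported scale.
particles = 3e12          # "more than three trillion particles"
bytes_per_particle = 45   # reported maximum memory footprint
cores = 294912
cores_per_node = 4        # assumed BG/P node configuration (not stated in the abstract)

total_bytes = particles * bytes_per_particle      # roughly 135 TB in aggregate
nodes = cores / cores_per_node                    # 73,728 nodes
per_node_gib = total_bytes / nodes / 2**30        # roughly 1.7 GiB per node
print(f"total: {total_bytes / 2**40:.0f} TiB, per node: {per_node_gib:.2f} GiB")
```

Under these assumptions the particle data alone fills most of a 2 GB BG/P node, which is why keeping the per-particle footprint small matters at this scale.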
Chair/Author Details:
Bernd Mohr (Chair) - Juelich Supercomputing Centre