SC is the International Conference for High Performance Computing, Networking, Storage and Analysis



SCHEDULE: NOV 12-18, 2011

When viewing the Technical Program schedule, the far right-hand column is labeled "PLANNER." Use this planner to build your own schedule. Once you have selected an event you want to add to your personal schedule, click the calendar icon of your choice (Outlook, iCal, or Google Calendar) and the event will be stored there. As you select events in this manner, you will build your own schedule to guide you through the week.

You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and add it to your plan; repeat for each session you want to include. All your selected events will appear under "My Event Planner" on your smartphone.

Enabling and Scaling Biomolecular Simulations of 100 Million Atoms on Petascale Machines with a Multicore-optimized Message-driven Runtime

SESSION: Molecular Dynamics and Computational Physics

EVENT TYPE: Paper

TIME: 11:00AM - 11:30AM

AUTHOR(S): Chao Mei, Yanhua Sun, Gengbin Zheng, James C. Phillips, Eric J. Bohm, Chris Harrison, Laxmikant V. Kale

ROOM: TCC 304

ABSTRACT:
A 100-million-atom biomolecular simulation with NAMD is one of the three benchmarks for the NSF-funded sustainable petascale machine. Simulating this large molecular system on a petascale machine presents great challenges, including handling I/O, managing the large memory footprint, and obtaining good strong-scaling results. In this paper, we present parallel I/O techniques to enable the simulation. A new SMP model is designed to efficiently utilize ubiquitous wide multicore clusters by extending the Charm++ asynchronous message-driven runtime. We exploit node-aware techniques to optimize both the application and the underlying SMP runtime. Hierarchical load balancing is further exploited to scale NAMD to the full Jaguar PF Cray XT5 (224,076 cores) at Oak Ridge National Laboratory, both with and without PME full electrostatics, achieving 93% parallel efficiency (vs. 6,720 cores) at 9 ms per step for a simple cutoff calculation. Excellent scaling is also obtained on 65,536 cores of the Intrepid Blue Gene/P at Argonne National Laboratory.
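
For context on the scaling figure quoted above, the short sketch below works out strong-scaling parallel efficiency under the common definition E = (T_ref * P_ref) / (T_P * P). This is not code from the paper; the 6,720-core baseline, the 224,076-core run, the 9 ms/step time, and the 93% efficiency come from the abstract, while the implied baseline time per step is an estimate derived here, not a number reported by the authors.

# Minimal sketch (not from the paper): strong-scaling parallel efficiency
# under the common definition E = (T_ref * P_ref) / (T_P * P).

def parallel_efficiency(t_ref, p_ref, t_p, p):
    """Efficiency of a run on p cores relative to a baseline run on p_ref cores."""
    return (t_ref * p_ref) / (t_p * p)

# Abstract figures: 93% efficiency on 224,076 Jaguar PF cores vs. a 6,720-core
# baseline, at 9 ms/step for the cutoff-only calculation. Solving E = 0.93 for
# the implied baseline time per step (an estimate, not a reported number):
p_ref, p, t_p, eff = 6720, 224076, 9e-3, 0.93
t_ref = eff * t_p * p / p_ref            # roughly 0.28 s/step at 6,720 cores
print(f"implied baseline: {t_ref * 1e3:.0f} ms/step")
print(f"check efficiency: {parallel_efficiency(t_ref, p_ref, t_p, p):.2%}")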

Chair/Author Details:

Chao Mei - University of Illinois at Urbana-Champaign

Yanhua Sun - University of Illinois at Urbana-Champaign

Gengbin Zheng - University of Illinois at Urbana-Champaign

James C. Phillips - University of Illinois at Urbana-Champaign

Eric J. Bohm - University of Illinois at Urbana-Champaign

Chris Harrison - University of Illinois at Urbana-Champaign

Laxmikant V. Kale - University of Illinois at Urbana-Champaign


The full paper can be found in the ACM Digital Library

Sponsors: ACM, IEEE