SC is the International Conference for
High Performance Computing, Networking,
Storage and Analysis



SCHEDULE: NOV 12-18, 2011


Fast One-Sided Communication on Supercomputers and Application to Three Scientific Codes

SESSION: Research Poster Reception

EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster

TIME: 5:15PM - 7:00PM

SESSION CHAIR: Bernd Mohr

AUTHOR(S): Jeff R. Hammond, Sreeram Potluri, Zheng (Cynthia) Gu, Alex Dickson, James Dinan, Ivo Kabadshow, Pavan Balaji, Vinod Tipparaju

ROOM: WSCC North Galleria, 2nd/3rd Floors

ABSTRACT:
We have developed a new library for one-sided communication and applied it to three different scientific codes: the NWChem computational chemistry application, the ScaFaCoS (Scalable Fast Coulomb Solvers) library, and the NEUS (non-equilibrium umbrella sampling) application. All three codes rely upon asynchronous communication, such as one-sided put, get, accumulate, and remote atomics (e.g., fetch-and-add). Our library was designed to meet the requirements of current and next-generation interconnects found in Blue Gene/P and Q, PERCS, and Cray XE architectures, such as dynamic routing, DMA engines, and hardware remote atomics. The synchronization semantics and implementation have been designed for scalability to more than one million ranks. We demonstrate the scaling of all three applications to thousands of cores. As a scaling exemplar, the fast multipole method (FMM) in ScaFaCoS is demonstrated to scale to 300K ranks of JUGENE, which is impossible with MPI and ARMCI for both semantic and implementation reasons.
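The poster's library API is not shown here, but the operations the abstract names map onto standard MPI RMA calls. Below is a minimal sketch, assuming an MPI-3 implementation (MPI_Fetch_and_op is an MPI-3 addition), that performs a remote atomic fetch-and-add and a one-sided accumulate against a counter exposed by rank 0. It illustrates the one-sided semantics only; it is not the API of the authors' library, and the window, counter, and displacements are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 exposes a single long counter through an RMA window. */
    long counter = 0;
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(long), sizeof(long),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Passive-target epoch: rank 0 does not participate explicitly. */
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);

    /* Remote atomic fetch-and-add: atomically add 1 to rank 0's
       counter and fetch its previous value. */
    long one = 1, previous;
    MPI_Fetch_and_op(&one, &previous, MPI_LONG, 0, 0, MPI_SUM, win);

    /* One-sided accumulate: fold this rank's id into the same counter
       with no matching receive on the target. */
    long val = (long)rank;
    MPI_Accumulate(&val, 1, MPI_LONG, 0, 0, 1, MPI_LONG, MPI_SUM, win);

    MPI_Win_unlock(0, win); /* completes both operations at the target */

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        /* Read the local window memory inside an access epoch. */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        printf("counter = %ld\n", counter); /* size + 0+1+...+(size-1) */
        MPI_Win_unlock(0, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

On interconnects with DMA engines and hardware remote atomics, such as the Blue Gene and Cray XE systems the abstract targets, operations like these can complete without interrupting the target process, which is what lets asynchronous access patterns in codes like NWChem scale.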

Chair/Author Details:

Bernd Mohr (Chair) - Juelich Supercomputing Centre

Jeff R. Hammond - Argonne National Laboratory

Sreeram Potluri - Ohio State University

Zheng (Cynthia) Gu - Florida State University

Alex Dickson - University of Chicago

James Dinan - Argonne National Laboratory

Ivo Kabadshow - Juelich Supercomputing Centre

Pavan Balaji - Argonne National Laboratory

Vinod Tipparaju - Oak Ridge National Laboratory


Sponsors: ACM and IEEE