When viewing the Technical Program schedule, the far right-hand column is labeled "PLANNER." Use this planner to build your own schedule. Once you select an event you want to add to your personal schedule, just click the calendar icon of your choice (Outlook Calendar, iCal Calendar, or Google Calendar) and that event will be stored there. As you select events in this manner, you will build your own schedule to guide you through the week.
You can also create your personal schedule on the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan. Continue in this manner until you have created your own personal schedule. All your events will appear under "My Event Planner" on your smartphone.
A scalable two-phase parallel I/O library with application to a large-scale subsurface simulator
SESSION: Research Poster Reception
EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster
TIME: 5:15PM - 7:00PM
SESSION CHAIR: Bernd Mohr
AUTHOR(S): Sarat Sreepathi, Vamsi Sripathi, Glenn Hammond, Richard Mills, Kumar Mahinthakumar
ROOM: WSCC North Galleria 2nd/3rd Floors
ABSTRACT: This poster describes the development of a highly scalable application-layer parallel I/O library (ASCEM-IO) for scientific applications. The library was envisioned to leverage our earlier I/O optimization experience to build a scalable, general-purpose parallel I/O capability for any application. It provides a higher-level API (Application Programming Interface) to read and write large scientific datasets in parallel at very large processor counts. Specifically, the goal is to take advantage of existing parallel I/O libraries, such as HDF5, that are widely used by scientific applications, and to modify their algorithms to scale better to larger numbers of processors. This is accomplished by dividing the traditional I/O operations (read/write) into two phases: a communication phase and an I/O phase. Results with a real application on the Cray XT5 indicate significant performance improvement at large processor counts when compared to default HDF5 collective I/O operations.
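The two-phase scheme in the abstract can be illustrated with a minimal sketch. This is not the ASCEM-IO API; it uses plain in-memory lists as a stand-in for MPI communication and file writes, and all function and variable names (`two_phase_write`, `num_aggregators`) are illustrative assumptions.

```python
# Hypothetical sketch of two-phase collective I/O: funnel many small
# per-rank buffers onto a few aggregator ranks (communication phase),
# then issue a small number of large writes (I/O phase).

def two_phase_write(local_data, num_aggregators):
    """local_data[r] holds rank r's buffer; returns (written, per-aggregator chunks)."""
    nprocs = len(local_data)
    # Phase 1 (communication): each aggregator gathers a contiguous
    # range of ranks' buffers, so its later write is one large request.
    ranks_per_agg = (nprocs + num_aggregators - 1) // num_aggregators
    aggregated = []
    for agg in range(num_aggregators):
        lo = agg * ranks_per_agg
        hi = min(lo + ranks_per_agg, nprocs)
        chunk = [x for rank in range(lo, hi) for x in local_data[rank]]
        aggregated.append(chunk)
    # Phase 2 (I/O): only the aggregators "write", each a single large
    # contiguous block instead of many small per-rank requests.
    out = []
    for chunk in aggregated:
        out.extend(chunk)  # stand-in for one large file write
    return out, aggregated

# 16 ranks, each holding 4 values; funnel through 4 aggregators,
# so the file system sees 4 large writes instead of 16 small ones.
data = [[r * 4 + i for i in range(4)] for r in range(16)]
written, per_agg = two_phase_write(data, 4)
```

The point of the restructuring is that the file system sees `num_aggregators` large contiguous requests rather than one small request per process, which is what lets the approach scale to very large processor counts.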
Bernd Mohr (Chair) - Juelich Supercomputing Centre
Sarat Sreepathi - North Carolina State University
Vamsi Sripathi - North Carolina State University
Glenn Hammond - Pacific Northwest National Laboratory
Richard Mills - Oak Ridge National Laboratory
Kumar Mahinthakumar - North Carolina State University