When viewing the Technical Program schedule, the far right-hand column is labeled "PLANNER." Use this planner to build your own schedule. Once you have selected an event you want to add to your personal schedule, just click the calendar icon of your choice (Outlook Calendar, iCal, or Google Calendar) and that event will be stored there. As you select events in this manner, you will build your own schedule to guide you through the week.
You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan. Continue in this manner until you have created your own personal schedule. All your events will appear under "My Event Planner" on your smartphone.
Scalable Infrastructure to Support Supercomputer Resiliency-Aware Applications and Load Balancing
SESSION: Research Poster Reception
EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster
TIME: 5:15PM - 7:00PM
SESSION CHAIR: Bernd Mohr
AUTHOR(S): Yoav Tock, Benjamin Mandler, Jose Moreira, Terry Jones
ROOM: WSCC North Galleria 2nd/3rd Floors
ABSTRACT: Current trends dictate increasing complexity and component counts on supercomputers and mainstream commercial systems alike. This trend exposes weaknesses in the underlying clustering infrastructure needed for continuous availability, maximizing utilization, and efficient administration of such systems.
We propose a step to alleviate this problem by providing a highly scalable clustering infrastructure, based on peer-to-peer technologies, for supporting resiliency-aware applications as well as efficient monitoring and load balancing.
Supported services include: Membership, which supports resiliency-aware applications; publish-subscribe messaging; Convergecast, which supports load-balancing schemes; attribute dissemination, for propagating slowly changing information; and a DHT, which facilitates service discovery and integration.
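As an illustrative sketch only (not the authors' API, and all names here are hypothetical), a minimal publish-subscribe service of the kind listed above can be modeled as a broker mapping topics to subscriber callbacks:

```python
# Hypothetical minimal publish-subscribe broker: subscribers register a
# callback per topic, and publish() delivers a message to every callback
# registered for that topic. Names are illustrative, not the paper's API.
from collections import defaultdict

class PubSub:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this topic.
        for callback in self.subscribers[topic]:
            callback(message)

# Usage: a component subscribes to membership events and collects them.
bus = PubSub()
received = []
bus.subscribe("membership", received.append)
bus.publish("membership", {"event": "join", "node": "node-42"})
```

A real clustering infrastructure would add delivery guarantees, failure handling, and network transport; this sketch shows only the topic-routing idea.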
We employ a flexible two-layer hierarchical topology, composed of base zones federated by a management zone. Our design aims to identify membership changes quickly and with minimal overall system disruption, enabling efficient exascale-size deployments. We present an experimental evaluation on an IBM BlueGene/P, demonstrating scalability up to ~256K nodes.
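To make the two-layer structure concrete, here is a hedged sketch (assumed class and method names, not the authors' implementation): base zones track their own members and report membership changes to a federating management zone, which maintains a global view without contacting every node directly.

```python
# Hypothetical sketch of a two-layer zone hierarchy. Each BaseZone holds
# its local membership; joins and leaves are reported upward so the
# ManagementZone can keep an aggregate view of the whole system.

class ManagementZone:
    def __init__(self):
        self.view = {}  # zone_id -> set of member node names

    def notify(self, zone_id, event, node):
        members = self.view.setdefault(zone_id, set())
        if event == "join":
            members.add(node)
        else:
            members.discard(node)

    def total_members(self):
        # Aggregate view across all federated base zones.
        return sum(len(m) for m in self.view.values())

class BaseZone:
    def __init__(self, zone_id, manager):
        self.zone_id = zone_id
        self.members = set()
        self.manager = manager  # the federating management zone

    def join(self, node):
        self.members.add(node)
        self.manager.notify(self.zone_id, "join", node)

    def leave(self, node):
        self.members.discard(node)
        self.manager.notify(self.zone_id, "leave", node)

# Usage: four base zones, two nodes each, then one node departs.
mgmt = ManagementZone()
zones = [BaseZone(i, mgmt) for i in range(4)]
for i, z in enumerate(zones):
    z.join(f"node-{i}-a")
    z.join(f"node-{i}-b")
zones[0].leave("node-0-a")
total = mgmt.total_members()  # 8 joins minus 1 leave
```

The point of the hierarchy is that a membership change is absorbed locally in one base zone and propagated as a single update, rather than rippling through all nodes.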
Bernd Mohr (Chair) - Juelich Supercomputing Centre