SC is the International Conference for High Performance Computing, Networking, Storage and Analysis



SCHEDULE: NOV 12-18, 2011

When viewing the Technical Program schedule, the far right-hand column is labeled "PLANNER." Use this planner to build your own schedule: once you have selected an event you want to add, just click the calendar icon of your choice (Outlook, iCal, or Google Calendar) and that event will be stored there. As you select events in this manner, you will build a personal schedule to guide you through the week. (A sketch of the calendar-file format those icons produce follows below.)
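For reference, here is a minimal Python sketch of the kind of iCalendar (.ics) entry those download links produce, using only the standard library. The output file name, the PRODID and UID strings, and the placeholder date are assumptions for illustration; the actual files come from the links on each event page.

```python
from datetime import datetime

def make_ics(summary, location, start, end):
    """Build a minimal RFC 5545 iCalendar entry for one session."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//SC11//Schedule Planner//EN",  # hypothetical product ID
        "BEGIN:VEVENT",
        f"DTSTAMP:{start.strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        f"LOCATION:{location}",
        "UID:sc11-data-superconductor@example.invalid",  # placeholder UID
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Placeholder date -- substitute the session's actual date from the program.
start = datetime(2011, 11, 17, 10, 45)
end = datetime(2011, 11, 17, 11, 0)
with open("sc11_session.ics", "w") as f:
    f.write(make_ics("The Data Superconductor", "TCC LL2", start, end))
```

Opening the resulting file with Outlook, iCal, or Google Calendar adds the session to your personal schedule.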

You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan; repeat for each session to build your schedule. All of your selected events will appear under "My Event Planner" on your smartphone.

The Data Superconductor: Demonstrating 100Gb WAN Lustre Storage Using OpenFlow-Enabled Traffic Engineering

SESSION: SCinet Research Sandbox Experiment Results

EVENT TYPE: Research Sandbox

TIME: 10:45AM - 11:00AM

Presenter(s): Stephen C. Simms, Matthew Davy, Matthew Link, Robert Henschel, David Hancock, Kurt Seiffert

ROOM: TCC LL2

ABSTRACT:
The rate at which scientific instruments and simulations produce data is constantly increasing; next-generation gene sequencers, climate models, and telescopes are just a few examples. High-bandwidth networks are being deployed more widely in an attempt to meet the requirements these challenges present. For example, it is not uncommon to find 10 Gigabit connections in the laboratory, linking researchers to centralized computational and storage resources. The Global Research Network Operations Center (GRNOC) at Indiana University has extensive network expertise as the NOC for many of the high-bandwidth research networks in the US and abroad, and it is the home of the recently announced Network Development and Deployment Initiative based on OpenFlow. IU's Data Capacitor team has successfully pioneered the use of the Lustre filesystem to distribute and compute on data across wide area networks. Combining this network and Lustre experience, IU will deploy two Lustre filesystems at either end of a 100Gb network spanning the distance between Bloomington, Indiana and the SC11 show floor in Seattle, Washington. Using compute and storage resources at both ends of the connection, we will execute real-world scientific applications that saturate the 100Gb link. As the link becomes saturated, we will dynamically route application traffic over a separate shared network using OpenFlow, allowing us to tune application traffic based on need, priority, and capacity. The demonstration will showcase how IU operates HPC resources and networks today, and it can serve as a model as we move into the future.
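To make the traffic-engineering step concrete, below is a minimal Python sketch of the threshold-based rerouting policy the abstract describes. The link names, the 90% saturation threshold, and the read_utilization() helper are hypothetical stand-ins; an actual deployment would poll port statistics from the OpenFlow controller and install flow entries on the switches rather than simulate readings.

```python
import random

# Link capacities in Gb/s -- hypothetical names for the dedicated 100Gb
# path and the separate shared network mentioned in the abstract.
CAPACITY_GBPS = {"dedicated-100g": 100.0, "shared-wan": 10.0}

# Assumed policy: reroute new flows once the dedicated link passes 90%.
SATURATION_THRESHOLD = 0.90

def read_utilization(link):
    """Stand-in for polling the switch's port counters via OpenFlow;
    returns the link's current load in Gb/s."""
    return random.uniform(0.0, 1.0) * CAPACITY_GBPS[link]

def choose_path():
    """Keep new application flows on the dedicated 100Gb link until it
    saturates, then direct them over the shared WAN path."""
    used = read_utilization("dedicated-100g")
    if used / CAPACITY_GBPS["dedicated-100g"] < SATURATION_THRESHOLD:
        return "dedicated-100g"
    return "shared-wan"

if __name__ == "__main__":
    # Simulate path selection for a handful of new flows.
    for flow in range(5):
        print(f"flow {flow} -> {choose_path()}")
```

The same decision logic extends naturally to weighting flows by need, priority, and capacity, as the demonstration proposes.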

Chair/Presenter Details:

Stephen C. Simms - Indiana University

Matthew Davy - Indiana University

Matthew Link - Indiana University

Robert Henschel - Indiana University

David Hancock - Indiana University

Kurt Seiffert - Indiana University


Sponsors: ACM, IEEE