When viewing the Technical Program schedule, the column on the far
right-hand side is labeled "PLANNER." Use this planner to build your own
schedule. To add an event to your personal schedule, click the calendar
icon of your choice (Outlook, iCal, or Google Calendar) and that event
will be stored there. As you select events in this manner, you will have
your own schedule to guide you through the week.
You can also create your personal schedule on the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan. Continue in this manner until you have created your own personal schedule. All your events will appear under "My Event Planner" on your smartphone.
Large-Scale Computational Epidemiology Modeling using Charm++
SESSION: Research Poster Reception
EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster
TIME: 5:15PM - 7:00PM
SESSION CHAIR: Bernd Mohr
AUTHOR(S): Ashwin Aji, Tariq Kamal, Jae-Seung Yeom, Keith Bisset
ROOM: WSCC North Galleria 2nd/3rd Floors
ABSTRACT: Controlling outbreaks of infectious diseases such as pandemic
influenza is a top public health priority. EpiSimdemics is an
implementation of a scalable parallel algorithm to simulate the spread
of contagion in large (10^8 individuals), realistic social contact
networks using individual-based models. It includes a rich language
for describing public policy and agent behavior. We describe
CharmSimdemics and evaluate its performance on national-scale
populations. Charm++ is a machine-independent parallel programming
system, providing high-level mechanisms and strategies to facilitate
the task of developing highly complex parallel applications. Our
design includes mapping of application entities to tasks and
leveraging scalable communication, synchronization and load balancing
strategies of Charm++. Our experimental results show that the Charm++
version achieves up to a 4-fold increase in strong scaling on 796 PEs
and a 3-fold increase in weak scaling on 512 PEs when compared to the
MPI version.
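
To make the "mapping of application entities to tasks" concrete, the sketch below shows the generic Charm++ chare-array pattern the abstract alludes to: a hypothetical Location entity type is placed in a chare array, a simulation step is broadcast to all elements, and completion is synchronized with a reduction. The module name epi, the Location chare, and its methods are illustrative assumptions, not the authors' CharmSimdemics code.

// epi.ci -- Charm++ interface file (hypothetical module name)
mainmodule epi {
  readonly CProxy_Main mainProxy;
  mainchare Main {
    entry Main(CkArgMsg *m);
    entry [reductiontarget] void stepDone();
  };
  array [1D] Location {
    entry Location();
    entry void computeInteractions();
  };
};

// epi.C -- chare definitions (a sketch, not the CharmSimdemics source)
#include "epi.decl.h"

/*readonly*/ CProxy_Main mainProxy;

class Main : public CBase_Main {
  CProxy_Location locations;
public:
  Main(CkArgMsg *m) {
    delete m;
    mainProxy = thisProxy;
    // Map application entities (here: locations) onto a chare array;
    // the Charm++ runtime places and load-balances the elements.
    locations = CProxy_Location::ckNew(1024);
    locations.computeInteractions();   // broadcast one simulation step
  }
  void stepDone() {                    // invoked when the reduction completes
    CkPrintf("simulation step finished\n");
    CkExit();
  }
};

class Location : public CBase_Location {
public:
  Location() {}
  void computeInteractions() {
    // ... per-location contagion computation would go here ...
    // Scalable synchronization: every element contributes to one reduction.
    contribute(CkCallback(CkReductionTarget(Main, stepDone), mainProxy));
  }
};

#include "epi.def.h"

With a standard Charm++ installation such a sketch would typically be built with the charmc wrapper (processing the .ci file and then compiling epi.C) and launched under charmrun, though the exact commands depend on the local toolchain.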
Chair/Author Details:
Bernd Mohr (Chair) - Juelich Supercomputing Centre