SC is the International Conference for High Performance Computing, Networking, Storage and Analysis



SCHEDULE: NOV 12-18, 2011

When viewing the Technical Program schedule, the far right-hand column is labeled "PLANNER." Use this planner to build your own schedule: once you have selected an event you want to attend, click the calendar icon of your choice (Outlook, iCal, or Google Calendar) and the event will be stored there. As you select events in this manner, you will build a personal schedule to guide you through the week.

You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone. Simply select a session you want to attend and "add" it to your plan; continue in this manner until your schedule is complete. All your events will appear under "My Event Planner" on your smartphone.

High-Performance Lattice QCD for Multi-core Based Parallel Systems Using a Cache-Friendly Hybrid Threaded-MPI Approach

SESSION: QCD and DFT

EVENT TYPE: Paper

TIME: 1:30PM - 2:00PM

AUTHOR(S): Mikhail Smelyanskiy, Karthikeyan Vaidyanathan, Jee Choi, Balint Joo, Jatin Chhugani, Michael A. Clark, Pradeep Dubey

ROOM: TCC 304

ABSTRACT:
QCD is a computationally challenging problem that requires solving the discretized Dirac equation. Its key operation is a matrix-vector product (the Dslash operator). We have developed a novel, multi-core-architecture-friendly Wilson-Dslash operator which delivers 75 Gflops (single precision) on an Intel Xeon processor, achieving 60% computational efficiency for datasets that fit in the last-level cache. For larger datasets, performance drops to 50 Gflops. Our performance is 2-3x higher than that of the well-known Chroma implementation running on the same hardware platform. The implementation reported here is based on recently published 3.5D spatial and 4.5D temporal tiling schemes. Both schemes significantly reduce the external memory bandwidth requirements of QCD, delivering a more compute-bound implementation. The performance advantage of these schemes will become more significant as the gap between compute and memory bandwidth continues to grow. We further demonstrate very good cluster-level scalability, achieving 4 Tflops on 128 nodes for a 32×32×32×256 lattice and 3 Tflops for the full CG solver.
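
For readers wanting a feel for the solver layer the abstract mentions: a conjugate-gradient (CG) solver repeatedly applies the matrix-vector product (in lattice QCD, the Dslash-based operator) plus a handful of vector updates and reductions. The sketch below is a minimal, illustrative CG skeleton, not the authors' implementation; apply_A is a hypothetical stand-in for the Dslash-based operator (here a simple symmetric positive-definite stencil so the example is self-contained), threading is shown with OpenMP, and the MPI halo exchange of the real hybrid threaded-MPI scheme is omitted.

// Minimal conjugate-gradient (CG) sketch. Illustrative only: apply_A is a
// hypothetical stand-in for the paper's Wilson-Dslash-based operator (here
// a simple symmetric positive-definite 1D stencil so the example runs
// self-contained). Threading shown via OpenMP; MPI halo exchange omitted.
#include <cstddef>
#include <cstdio>
#include <vector>

using Vec = std::vector<float>;

// Stand-in operator y = A*x (symmetric positive definite by construction).
static void apply_A(const Vec& x, Vec& y) {
    const std::ptrdiff_t n = (std::ptrdiff_t)x.size();
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < n; ++i) {
        float v = 4.0f * x[i];
        if (i > 0)     v -= x[i - 1];
        if (i + 1 < n) v -= x[i + 1];
        y[i] = v;
    }
}

// Threaded dot product (global reduction; in the real hybrid code this
// would also involve an MPI_Allreduce across nodes).
static float dot(const Vec& a, const Vec& b) {
    float s = 0.0f;
    const std::ptrdiff_t n = (std::ptrdiff_t)a.size();
    #pragma omp parallel for reduction(+:s)
    for (std::ptrdiff_t i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

int main() {
    const std::size_t n = 1u << 16;
    Vec x(n, 0.0f), b(n, 1.0f), r = b, p = r, Ap(n);

    float rr = dot(r, r);
    const float tol2 = 1e-10f * rr;  // squared relative tolerance

    for (int iter = 0; iter < 1000 && rr > tol2; ++iter) {
        apply_A(p, Ap);                      // dominant cost: the operator
        const float alpha = rr / dot(p, Ap);
        #pragma omp parallel for
        for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const float rr_new = dot(r, r);
        const float beta = rr_new / rr;
        rr = rr_new;
        #pragma omp parallel for
        for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)n; ++i)
            p[i] = r[i] + beta * p[i];
    }
    std::printf("squared residual: %g\n", rr);
    return 0;
}

In the paper's setting, nearly all solver time is spent inside the operator apply, which is why the cache-friendly tiling of Dslash determines end-to-end CG performance.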

Chair/Author Details:

Mikhail Smelyanskiy - Intel Corporation

Karthikeyan Vaidyanathan - Intel Corporation

Jee Choi - Georgia Tech

Balint Joo - Jefferson Lab

Jatin Chhugani - Intel Corporation

Michael A. Clark - Harvard-Smithsonian Center for Astrophysics

Pradeep Dubey - Intel Corporation


The full paper can be found in the ACM Digital Library.

Sponsors: ACM and IEEE