SC is the International Conference for
High Performance Computing, Networking,
Storage and Analysis



SCHEDULE: NOV 12-18, 2011


Parallel Reduction to Condensed Forms for Symmetric Eigenvalue Problems using Aggregated Fine-Grained and Memory-Aware Kernels

SESSION: Dense Linear Algebra

EVENT TYPE: Paper

TIME: 11:30AM - 12:00PM

AUTHOR(S): Azzam Haidar, Hatem Ltaief, Jack Dongarra

ROOM: TCC 305

ABSTRACT:
This paper introduces a novel implementation for reducing a symmetric dense matrix to tridiagonal form, which is the preprocessing step toward solving symmetric eigenvalue problems. Based on tile algorithms, the reduction follows a two-stage approach, where the tile matrix is first reduced to symmetric band form prior to the final condensed structure. The challenging trade-off between algorithmic performance and task granularity is tackled through a grouping technique, which consists of aggregating fine-grained and memory-aware computational tasks during both stages while sustaining the application's overall high performance. A dynamic runtime system then schedules the different tasks in an out-of-order fashion. The performance of the tridiagonal reduction reported in this paper is unprecedented: our implementation achieves up to a 50-fold improvement (125 Gflop/s) over the equivalent routines from LAPACK and Intel MKL on an eight-socket, hexa-core AMD Opteron shared-memory multicore system for a matrix of size 24000x24000.
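
For context, the baseline the abstract compares against is the classical one-stage Householder tridiagonalization, which is what routines such as LAPACK's DSYTRD implement. The sketch below is a minimal, illustrative NumPy version of that textbook one-stage reduction; the function name and structure are our own, and this is not the authors' tile-based, two-stage implementation described in the paper.

    import numpy as np

    def householder_tridiagonalize(A):
        """Classical one-stage Householder reduction of a symmetric matrix
        to tridiagonal form (illustrative baseline, not the paper's method).
        Returns a tridiagonal matrix T similar to A."""
        A = np.array(A, dtype=float, copy=True)
        n = A.shape[0]
        for k in range(n - 2):
            x = A[k + 1:, k]
            # Householder reflector that maps x onto a multiple of e_1;
            # the sign choice avoids cancellation.
            alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
            v = x.copy()
            v[0] -= alpha
            norm_v = np.linalg.norm(v)
            if norm_v == 0.0:
                continue  # column is already in tridiagonal form
            v /= norm_v
            # Two-sided update of the trailing block: B <- H B H with
            # H = I - 2 v v^T, expanded as B - 2 v w^T - 2 w v^T + 4 (v.w) v v^T.
            B = A[k + 1:, k + 1:]
            w = B @ v
            A[k + 1:, k + 1:] = (B - 2.0 * np.outer(v, w) - 2.0 * np.outer(w, v)
                                 + 4.0 * (v @ w) * np.outer(v, v))
            # Annihilate the column below the subdiagonal and mirror it.
            A[k + 2:, k] = 0.0
            A[k, k + 2:] = 0.0
            A[k + 1, k] = alpha
            A[k, k + 1] = alpha
        return A

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 6))
        A = (A + A.T) / 2.0                  # symmetrize the test matrix
        T = householder_tridiagonalize(A)
        # T is similar to A, so their spectra coincide.
        print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(T)))

The paper's contribution, by contrast, splits this reduction into two stages (dense to band, then band to tridiagonal) over a tiled matrix, with aggregated memory-aware kernels dispatched out of order by a dynamic runtime.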

Chair/Author Details:

Azzam Haidar - University of Tennessee, Knoxville

Hatem Ltaief - KAUST Supercomputing Laboratory

Jack Dongarra - University of Tennessee, Knoxville


The full paper can be found in the ACM Digital Library.

Sponsors: ACM and IEEE