Georgia Tech at
SIAM PP20

Feb. 12-15, Seattle, Washington

Georgia Tech Dominates at Premier Conference for Parallel Processing


The Society for Industrial and Applied Mathematics (SIAM) is a leading international community that aims to integrate mathematics with science and technology, creating solutions to real-world problems through conferences, publications, and workshops. Its premier venue for exchanging updates and best practices in parallel processing research, the SIAM Conference on Parallel Processing for Scientific Computing 2020 (PP20), begins today in Seattle, Washington, and runs through Saturday, February 15.
 
Georgia Tech has a leading presence at this year’s conference, with 28 engagements from 25 researchers across units including the School of Computational Science and Engineering (CSE), the School of Computer Science (SCS), and the Georgia Tech Research Institute (GTRI).
 
Georgia Tech’s presence includes an invited plenary talk by SCS Associate Professor Hyesoon Kim on different ways to apply and evaluate modeling techniques for heterogeneous computing systems; a poster presentation by GTRI researchers Micah E. Halter, Kun Cao, and James Fairbanks that proposes a theory-based framework to facilitate a more ideal workflow in scientific development processes; and a presentation by CSE Professor Ümit Çatalyürek and Ph.D. student Abdurrahman Yasar at the SIAM Workshop on Combinatorial Scientific Computing, which is co-located with SIAM PP.
 
“The SIAM conference series as a whole is fantastic because its content is focused on people's latest work rather than published technical papers. Because of this content focus, SIAM PP's sessions can create more interaction and spawn new ideas,” said Senior Research Scientist Jason Riedy, who is set to present at several sessions throughout the week, including one focused on providing updates from the Rogues Gallery.
 
The Rogues Gallery is a test bed established by Georgia Tech’s Center for Research into Novel Computing Hierarchies (CRNCH). The project was initiated in an effort to develop understanding of next-generation hardware, with an emphasis on unorthodox and uncommon technologies. 
 
Georgia Tech researchers also serve as both organizers and presenters in several other notable tracks throughout the conference. Scroll below to view Georgia Tech's complete participation at SIAM PP20.

SESSION SPOTLIGHTS


 
SemanticModels.jl: A Framework for Automatic Composition of Scientific Models Across Domains
Micah E. Halter, Kun Cao, James Fairbanks

SemanticModels.jl is built around representing models as category theory-based mathematical structures that facilitate meta-modeling tasks, such as model augmentation and composition, while maintaining domain-level semantic validity. By utilizing this universal representation of scientific models, we can expand and combine models of different domains and implementations seamlessly and automatically. We demonstrate how the SemanticModels.jl framework can use existing scientific models as building blocks to create new, more complex simulations through a working case study that combines a simple predator-prey interaction with the spread of malaria.
Join us for the poster session of this work Thursday, February 13, from 6-8 PM in the Foyer on the 5th floor.
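To give a feel for what such a composition produces, below is a minimal, hand-written sketch in Python (not the SemanticModels.jl API, which is a Julia package that performs compositions automatically over its category-theoretic representations). It couples a Lotka-Volterra predator-prey model with a simple SIR-style infection model; the function names, parameter values, and coupling choices are illustrative assumptions, not material from the poster.

    # Minimal illustration (Python; NOT the SemanticModels.jl API): hand-coupling a
    # Lotka-Volterra predator-prey model with an SIR-style infection model.
    # All parameter values and coupling choices below are illustrative assumptions.
    from scipy.integrate import solve_ivp

    def predator_prey_rates(H, P, birth=1.0, predation=0.1, efficiency=0.075, death=1.5):
        """Lotka-Volterra rates for the host (prey) population H and predators P."""
        dH = birth * H - predation * H * P
        dP = efficiency * H * P - death * P
        return dH, dP

    def infection_rates(S, I, beta=0.02, gamma=0.1):
        """SIR-style rates for susceptible S and infected I hosts."""
        new_infections = beta * S * I
        return -new_infections, new_infections - gamma * I

    def composed_rhs(t, y):
        """Composed model: demography and predation act on the total host pool
        H = S + I, while infection spreads within that pool."""
        S, I, P = y
        H = max(S + I, 1e-12)
        dH, dP = predator_prey_rates(H, P)
        dS_inf, dI_inf = infection_rates(S, I)
        # Apportion the net demographic change across S and I proportionally,
        # then add the infection dynamics (a deliberate simplification).
        dS = dH * (S / H) + dS_inf
        dI = dH * (I / H) + dI_inf
        return [dS, dI, dP]

    # Integrate the coupled system from an illustrative initial state.
    sol = solve_ivp(composed_rhs, (0.0, 50.0), [9.0, 1.0, 5.0], max_step=0.1)
    print("final (S, I, P):", sol.y[:, -1])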


IP4 Modeling of Heterogeneous Computing Systems and Their Usages
Hyesoon Kim

The last decade has seen a paradigm shift in the architecture of computing platforms, with a trend toward combining general-purpose processors and specialized accelerators. For example, GPUs have made a significant impact on both the hardware industry and the application domain, as seen in the recent growth of machine learning applications. Other platforms, such as processing-in-memory and FPGA-based reconfigurable architectures, have regained attention, and with these specialized accelerators, computing platforms are becoming more heterogeneous. Heterogeneous architectures are especially attractive because they can provide high performance and energy efficiency for both general-purpose and high-throughput applications. Thus, from IoT devices to server processors, heterogeneous architectures have become increasingly popular. However, they introduce several new challenges, including programmability issues and the problem of designing hardware that maximally exploits the underlying heterogeneity. To address these issues, a wide variety of modeling techniques has been used, including analytical, regression-based, and cycle-level models. In this talk I will discuss how different modeling techniques have been applied and how they can help guide architecture studies and software optimizations.
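As one concrete flavor of the analytical modeling mentioned in the abstract, a roofline-style model bounds a kernel's attainable throughput by the device's peak compute rate and by memory bandwidth multiplied by the kernel's arithmetic intensity. The sketch below is a generic illustration with placeholder hardware numbers and kernel intensities; none of it is drawn from the talk itself.

    # Roofline-style analytical bound (illustration only; peak rate, bandwidth,
    # and kernel intensities are placeholder numbers, not measured values).
    def attainable_gflops(arithmetic_intensity, peak_gflops=15000.0, bandwidth_gbs=900.0):
        """Return the roofline bound (GFLOP/s) for a kernel with the given
        arithmetic intensity, measured in FLOPs per byte moved to/from memory."""
        return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

    # A memory-bound sparse kernel versus a compute-bound dense kernel.
    for name, intensity in [("SpMV-like", 0.25), ("GEMM-like", 60.0)]:
        bound = attainable_gflops(intensity)
        print(f"{name:10s} {intensity:6.2f} FLOP/byte -> {bound:8.1f} GFLOP/s")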
 




MS51, MS62, MS72: Novel Computational Algorithms for Future Computing Platforms

In the early 2000s, constraints on economical heat dissipation meant that clock speeds of single-core CPUs could no longer be increased, which marked the adoption of multi-core CPUs and a paradigm shift toward algorithms specifically designed for multi-core architectures. About 15 years into this architectural cycle, and on its way to exascale performance, the computing industry finds itself at the confluence of technical difficulties that cast doubt on its ability to sustain this architectural model beyond the exascale capability. These difficulties are driving the hardware industry to develop application-specific chips and to look beyond silicon-based chips (e.g., quantum computing, physical annealing, neuromorphics), with a continued emphasis on raw processing power and emerging concerns about energy efficiency. This minisymposium provides a forum for sharing innovative ideas on algorithm development for leveraging future computing platforms.


 
Georgia Tech Papers
 
MS18: Exploiting Task Parallelism in Exascale Computing Era
H. Metin Aktulga, Ümit V. Çatalyürek 

 
DeepSparse: A Task-Parallel Framework for Sparse Solvers on Deep Memory Architectures
Md Afibuzzaman, Fazlay Rabbi, M. Yusuf Ozkaya, Ümit V. Çatalyürek, H. Metin Aktulga

Using the PETSc/TAO ADMM Methods on GPUs
Hansol David Suh, Tobin Isaac, Todd Munson
 
MS24 Parallel Matrix Factorization Algorithms - Part I of III 
Piyush Sao, Xiaoye S. Li, Ramakrishnan Kannan, Richard Vuduc
 

Communication Avoiding Sparse Direct Solvers for Linear Systems & Graph Problems 
Piyush Sao, Ramakrishnan Kannan, Prasun Gera, Richard Vuduc

SEMANTICMODELS.JL: A Framework for Automatic Composition of Scientific Models Across Domains 
Micah E. Halter, Kun Cao, James Fairbanks
 

IP4 Modeling of Heterogeneous Computing Systems and Their Usages
Hyesoon Kim, Richard Vuduc (Chair)
 

Fine-Grained Parallel Incomplete Factorizations 
Edmond Chow
 

ParILUT - a Parallel Threshold Incomplete Factorization Preconditioner for Multicore and GPU 
Hartwig Anzt, Edmond Chow, Tobias Ribizel, Goran Flegar, Jack J. Dongarra
 

Low-Latency Mesh-Refinement Cycle Algorithms for Octrees 
Hansol David Suh, Tobin Isaac
 

AMR During PDE-Constrained Optimization using PETSc
Matthew G. Knepley, Tobin Isaac
 

An Asynchronous Algorithm for 2:1 Octree Balance
Hansol David Suh, Tobin Isaac
 

MS51 Novel Computational Algorithms for Future Computing Platforms - Part I of III
Arash Fathi, Dimitar Trenev, Jason Riedy, Jeffrey Young 
 

MS55 High-Performance Tensor Computation and Applications - Part I of III
Jee W. Choi, Richard Vuduc, Eric Phipps

MS62 Novel Computational Algorithms for Future Computing Platforms - Part II of III
Arash Fathi, Dimitar Trenev, Jason Riedy, Jeffrey Young, Laurent White
 

MS65 High-Performance Tensor Computation and Applications - Part II of III
Jee W. Choi, Richard Vuduc, Eric Phipps
 

Hpc_td_tbd_battaglino 
Casey Battaglino
 

Performance Portable and Productive Resilience using Kokkos 
Jeffery Miles, Nicholas Morales, Carson Mould, Bogdan Nicolae, Keita Teranishi
 

Composing Asynchrony, Communication and Resilience 
Sri Raj Paul, Akihiro Hayashi, Nicole Slattengren, Hemanth Kolla, Seonmyeong Bak, Matthew Whitlock, Jackson Mayo, Keita Teranishi, Vivek Sarkar, Max Grossman
 

MS72 Novel Computational Algorithms for Future Computing Platforms - Part III of III
Arash Fathi, Dimitar Trenev, Jason Riedy, Jeffrey Young, Laurent White
 

The Rogues Gallery as a Testbed for Novel Algorithm Design for Future Architectures
Jeffrey Young, Jason Riedy, Thomas M. Conte, Vivek Sarkar
 

Design of New Streaming and Graph Analytics Algorithms for the Strider Architecture
Sriseshan Srikanth, Thomas M. Conte
 

MS74 High-Performance Tensor Computation and Applications - Part III of III
Jee W. Choi, Richard Vuduc, Eric Phipps 
 

Reproducible Linear Algebra from Application to Architecture 
Jason Riedy, James W. Demmel, Peter Ahrens
 
Revisiting the Jacobi Method for Eigen Problems in Computational Chemistry 
Hua Huang
 

CP14 HPC for Data Science and Large Graphs
Oded Green (Chair)
 

HashGraph - Scalable Hash Tables using A Sparse Graph Data Structure 
Oded Green
 
Scalable Triangle Counting on Distributed-Memory Systems 
Seher Acer, Abdurrahman Yasar, Sivasankaran Rajamanickam, Michael Wolf, Ümit V. Çatalyürek

PD1: Is AI Transforming HPC or HPC Transforming AI?
Srinivas Aluru, Tamara G. Kolda, Dong Li, Torsten Hoefler
 
Interested in more information about CSE research at this conference or others? Follow us on Twitter @GTCSE, Facebook @GTcomputing, and Instagram @GTcomputing.
 
Content developed by:
  • Kristen Perez, Communications Officer, School of Computational Science and Engineering