There is no registration fee for the workshop, for either presenters or attendees.
If you are presenting at or attending the CnC'21 workshop remotely, please use this Zoom link (password: cnc2021).
The workshop will be held at the SUNY Global Center.
Dr. Vivek Sarkar is Chair of the School of Computer Science and the Stephen Fleming Chair for Telecommunications in the College of Computing at Georgia Institute of Technology. He conducts research in multiple aspects of programmability and productivity in parallel computing, including programming languages, compilers, runtime systems, and debuggers for parallel, heterogeneous, and high-performance computer systems.
Dr. Sarkar started his career at IBM Research after obtaining his Ph.D. from Stanford University, supervised by John Hennessy. His research projects at IBM included the PTRAN automatic parallelization system led by Fran Allen, the ASTI optimizer for IBM’s XL Fortran product compilers, the open-source Jikes Research Virtual Machine for the Java language, and the X10 programming language developed in the DARPA HPCS program. He was a member of the IBM Academy of Technology from 1995 to 2007. Since moving to academia, Sarkar has mentored over 30 Ph.D. students and postdoctoral researchers in the Habanero Extreme Scale Software Research Laboratory, first at Rice University (starting in 2007) and now at Georgia Tech (since 2017). Researchers in his lab have developed the Habanero-C/C++ and Habanero-Java programming systems for parallel, heterogeneous, and distributed platforms, and have introduced a wide range of extensions to the Concurrent Collections (CnC) programming model and its implementations. While at Rice, Sarkar held the E.D. Butcher Chair in Engineering, served as Chair of the Department of Computer Science, created a new sophomore-level course on the fundamentals of parallel programming, and developed a three-course Coursera specialization on parallel, concurrent, and distributed programming.
Dr. Sarkar is an ACM Fellow and an IEEE Fellow. He has been serving as a member of the US Department of Energy’s Advanced Scientific Computing Advisory Committee (ASCAC) since 2009, and on CRA’s Board of Directors since 2015. He is also the recipient of the 2020 ACM-IEEE CS Ken Kennedy Award.
With the advent of modern computer architectures characterized by -- amongst other things -- many-core nodes, deep and complex memory hierarchies, heterogeneous subsystems, and power-aware components, it is becoming increasingly difficult to achieve the best possible application scalability and satisfactory parallel efficiency. The community is experimenting with new programming models that rely on finer-grain parallelism and on flexible, lightweight synchronization, combined with work-queue-based, message-driven computation. The growing interest in the C++ programming language in industry and in the wider community increases the demand for libraries that implement these programming models for the language.
In this talk, we present a new asynchronous C++ parallel programming model that is built around lightweight tasks and mechanisms to orchestrate massively parallel (and -- if needed -- distributed) execution. This model uses the concept of (Standard C++) futures to make data dependencies explicit, employs explicit and implicit asynchrony to hide latencies and to improve utilization, and manages finer-grain parallelism with a work-stealing scheduling system enabling automatic load balancing of tasks.
Dr. Hartmut Kaiser is a member of the faculty at Louisiana State University (LSU) and a senior research scientist at LSU's Center for Computation and Technology (CCT). He is probably best known through his involvement in open source software projects, mainly as the author of several C++ libraries he has contributed to Boost, which are in use by thousands of developers worldwide. His current research is focused on leading the STE||AR group at CCT working on the practical design and implementation of future execution models and programming methods. He is a voting member of the C++ standardization committee.
Asynchronous task-based programming has become a popular parallel programming model, offering advantages such as better extraction of available parallelism, dynamic load balancing, and improved scalability. For productivity, performance, and code sustainability reasons, there is an increasing demand for auto-parallelizing and optimizing compilers to generate code using task-based models. In this talk, we present our work that delivers an end-to-end path for optimization and code generation for asynchronous task-based systems using R-Stream, an auto-parallelizing polyhedral compiler. We also discuss the key aspects of our compiler in generating task-based parallelism and data management for a broad class of task-based runtimes, including (but not limited to) Legion, Kokkos, OpenMP, and PaRSEC. We give an overview of the benefits of our compiler support in terms of improved developer productivity and application performance.
Dr. Muthu Baskaran, Fellow and Managing Engineer at Reservoir Labs, is an expert in compilers and high-performance computing (HPC). He is one of the principal leads of Reservoir Labs' R-Stream compiler. He has been developing advanced compiler techniques for modern high-performance parallel systems, Exascale systems, and low-power embedded systems. He is the primary inventor of several patented and patent-pending techniques in R-Stream. His work has been published in several top-tier peer-reviewed conferences and journals, and he has served on the technical and program committees of several top-tier HPC conferences and forums. He has won Best Paper Awards at the IEEE HPEC Conference for his innovative work on high-performance optimizations.
Prior workshops have served as a forum for users and potential users of Concurrent Collections (CnC) to discuss experiences with CnC and a range of topics, including developments for the language, applications, usability, performance, semantics, and teaching of CnC: https://icnc.github.io and https://habanero.rice.edu/cnc