2021 Speakers

IEEE ICRC 2021 Keynote Speakers


IEEE ICRC 2021 Invited Talks


Prof. Fred Chong, Seymour Goodman Professor, University of Chicago

Fred Chong

Talk Title: Resource-Efficient Quantum Computing by Breaking Abstractions

Talk Abstract: Quantum computing is at an inflection point, where 53-qubit (quantum bit) machines are deployed, 100-qubit machines are just around the corner, and even 1000-qubit machines are perhaps only a few years away. These machines have the potential to fundamentally change our concept of what is computable and demonstrate practical applications in areas such as quantum chemistry, optimization, and quantum simulation.

Yet a significant resource gap remains between practical quantum algorithms and real machines. A promising approach to closing this gap is to selectively expose to programming languages and compilers some of the key physical properties of emerging quantum technologies. I will describe some of our recent work that focuses on compilation techniques that break traditional abstractions, including compiling directly to analog control pulses, compiling for machine variations, and compiling with ternary quantum bits.
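The abstract does not spell out any particular compilation pass, but the idea behind "compiling for machine variations" can be illustrated with a minimal sketch: use a device's published calibration data to map a program onto its best-behaved physical qubits. The function names and error rates below are hypothetical placeholders and are not taken from the EPiQC toolchain or any real device.

```python
# Hedged sketch of variation-aware qubit selection: rank physical qubits by
# their calibrated error rates and use the best ones. All names and numbers
# here are illustrative placeholders, not data from a real machine.

def pick_best_qubits(error_rate_by_qubit, n_needed):
    """Return the n_needed physical qubits with the lowest reported error."""
    ranked = sorted(error_rate_by_qubit, key=error_rate_by_qubit.get)
    return ranked[:n_needed]

# Fictitious calibration snapshot: physical qubit id -> single-qubit error rate
calibration = {0: 3.2e-3, 1: 1.1e-3, 2: 5.0e-3, 3: 0.9e-3, 4: 2.4e-3}

print(pick_best_qubits(calibration, 2))  # -> [3, 1]
```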

Bio: Fred Chong is the Seymour Goodman Professor in the Department of Computer Science at the University of Chicago and the Chief Scientist at Super.tech. He is also Lead Principal Investigator for the EPiQC Project (Enabling Practical-scale Quantum Computing), an NSF Expedition in Computing. Chong received his Ph.D. from MIT in 1996 and was a faculty member and Chancellor’s Fellow at UC Davis from 1997 to 2005. He was also a Professor of Computer Science, Director of Computer Engineering, and Director of the Greenscale Center for Energy-Efficient Computing at UCSB from 2005 to 2015. He is a recipient of the NSF CAREER Award, the Intel Outstanding Researcher Award, and 11 best paper awards. His research interests include emerging technologies for computing, quantum computing, multicore and embedded architectures, computer security, and sustainable computing. Prof. Chong has been funded by NSF, DOE, Intel, Google, AFOSR, IARPA, DARPA, Mitsubishi, Altera, and Xilinx. He has led or co-led over $40M in awarded research and has been co-PI on an additional $41M.


Dr. Catherine “Katie” Schuman, Oak Ridge National Laboratory and University of Tennessee, Knoxville

Catherine Schuman

Talk Title: Neuromorphic Computing for Real-Time Control at the Edge

Talk Abstract: Neuromorphic computing is a promising future computing technology for a variety of applications. There is tremendous opportunity to apply neuromorphic computing for real-time control applications at the edge because of its native low power operation and temporal processing capabilities. In this talk, a neuromorphic computing approach that includes hardware, software, and an evolutionary optimization-based algorithm is described and several real-time control applications at the edge are demonstrated.
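As a rough illustration of the evolutionary optimization-based approach, the loop below evolves a small parameter vector against a toy fitness function. The system described in the talk evolves spiking neural networks for neuromorphic hardware and control tasks; the plain parameter vector, fitness function, and hyperparameters here are stand-ins chosen only for readability.

```python
# Minimal sketch of an evolutionary optimization loop (illustrative only; the
# network model, fitness function, and hyperparameters are placeholders).
import random

def fitness(params):
    # Placeholder for "run the controller on the task and score it":
    # reward parameters close to an arbitrary target vector.
    target = [0.5, -0.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=20, generations=50, mutation=0.1):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]            # keep the best quarter
        children = [[p + random.gauss(0, mutation) for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children                  # next generation
    return max(population, key=fitness)

print(evolve())
```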

Bio: Catherine (Katie) Schuman is a research scientist at Oak Ridge National Laboratory (ORNL). She received her Ph.D. in Computer Science from the University of Tennessee (UT) in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. She is continuing her study of algorithms for neuromorphic computing at ORNL. Katie has an adjunct faculty appointment with the Department of Electrical Engineering and Computer Science at UT, where she co-leads the TENNLab neuromorphic computing research group. Katie will be joining UT full time as an assistant professor in 2022. She received the U.S. Department of Energy Early Career Award in 2019.


Prof. John Paul Strachan, Peter Grünberg Institute (PGI-14), Forschungszentrum Jülich, Jülich, Germany and RWTH Aachen University, Aachen, Germany

John Paul Strachan

Talk Title: Analog and In-Memory Computing for Power-Efficient Hardware

Talk Abstract: After approximately 100 years of engineering computers, humans have reached performance that rivals biological brains in many ways, while exceeding them in sheer number-crunching capacity. Yet we took a very different path from biology, and biological information-processing systems retain a huge advantage in energy efficiency. I will discuss our work exploring brain-inspired non-von Neumann computing systems for machine learning and optimization problems. We leverage emerging non-volatile and analog devices (e.g., memristors) combined with mature CMOS technology to construct novel circuits and architectures. We have developed non-volatile associative memory circuits for storing and retrieving complex patterns at low area and power consumption. These enable computing applications in network security, genomics, and tree-based machine learning. We use analog operations and intrinsic stochasticity to speed up the solution of intractable optimization problems, forecasting significant improvements over traditional and emerging compute technologies. Finally, I will describe approaches to addressing the precision challenges of analog systems through novel error-correcting codes.
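To make the in-memory computing idea concrete: in a crossbar of analog non-volatile devices, Ohm's law and Kirchhoff's current law carry out a matrix-vector multiplication in a single step, which is the core primitive behind many of the accelerators mentioned above. The sketch below is a purely numerical analogy; the conductances and voltages are arbitrary illustrative values, not from any device described in the talk.

```python
# Numerical analogy for analog in-memory multiply-accumulate: each device's
# conductance G stores a weight, applied voltages V encode the input, and the
# summed currents I = G @ V realize y = W x in one step (values are arbitrary).
import numpy as np

G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],   # conductance matrix in siemens,
              [0.8e-6, 1.5e-6, 2.2e-6]])  # one weight per crossbar device
V = np.array([0.2, 0.1, 0.3])             # input voltages on the crossbar lines

I = G @ V                                  # output-line currents = weighted sums
print(I)                                   # read out by ADCs in a real system
```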

Bio: John Paul Strachan directs the Peter Grünberg Institute on Neuromorphic Compute Nodes (PGI-14) at Forschungszentrum Jülich and is a Professor at RWTH Aachen University. Previously he led the Emerging Accelerators team as a Distinguished Technologist at Hewlett Packard Labs, HPE. His teams explore novel types of hardware accelerators using emerging device technologies, with expertise spanning materials, device physics, circuits, architectures, benchmarking, and building prototype systems. Applications of interest include machine learning, network security, and optimization. John Paul holds degrees in physics and electrical engineering from MIT and a PhD in applied physics from Stanford University. He has over 50 patents, has authored or co-authored over 90 peer-reviewed papers, and has been the PI on many U.S. government research grants. He previously worked on nanomagnetic devices for memory, for which he was awarded the Falicov Award from the American Vacuum Society, and has developed sensing systems for precision agriculture at a company he co-founded.


Rene Celis-Cordova, University of Notre Dame

Talk Title: Adiabatic CMOS Design for Adiabatic Reversible Computing

Talk Abstract: Adiabatic reversible computing can dramatically reduce heat dissipation by introducing a tradeoff between energy and speed. Adiabatic circuits operate slowly, relative to their RC time constants, and use reversible logic to implement energy-efficient computing. Adiabatic CMOS is an immediate implementation of reversible computing that operates CMOS circuits adiabatically by replacing their DC power supplies with ramping clocks. Complex adiabatic CMOS designs have been demonstrated, such as a 16-bit adiabatic microprocessor; however, automated design tools are needed to implement larger adiabatic circuits. Furthermore, energy savings using CMOS circuits are ultimately limited by leakage. A new approach, adiabatic capacitive logic (ACL), implements reversible computing by using variable capacitors as pull-up and pull-down networks. ACL uses MEMS relay-like devices that eliminate leakage current and are not limited by passive power. A complete reversible computing system, using adiabatic CMOS or ACL logic gates, must also consider the implementation of the adiabatic ramping clocks. We propose aluminum nitride MEMS resonators as an efficient approach to drive reversible logic gates and recycle the bit energies from the logic. Such resonators can be used to implement a complete reversible computing system that includes both energy-efficient clocks and logic and can dramatically reduce heat dissipation.
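The energy-speed tradeoff in the abstract can be made concrete with the textbook estimate for ramped (adiabatic) charging: driving a capacitance C through a resistance R with a ramp of duration T dissipates roughly (RC/T)*C*V^2 when T >> RC, compared with (1/2)*C*V^2 for a conventional switching event. The component values below are arbitrary illustrative numbers, not taken from the 16-bit microprocessor or the ACL devices described above.

```python
# Back-of-the-envelope illustration of the adiabatic energy/speed tradeoff.
# Slower ramps (larger T relative to RC) dissipate proportionally less energy.
R, C, V = 10e3, 10e-15, 1.0           # 10 kOhm, 10 fF, 1 V (illustrative values)

def adiabatic_energy(T):
    return (R * C / T) * C * V**2     # valid in the slow-ramp limit T >> RC

conventional = 0.5 * C * V**2         # energy lost per conventional switching event
for T in (1e-9, 10e-9, 100e-9):
    ratio = adiabatic_energy(T) / conventional
    print(f"ramp T = {T:.0e} s: {ratio:.3f} x conventional dissipation")
```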

Bio: Rene Celis-Cordova is a Ph.D. candidate in electrical engineering at the University of Notre Dame. Before his graduate studies he was a hardware engineer for Intel Corporation. He has over 5 years of experience in the design, verification, and fabrication of integrated circuits. His research interests are in the areas of device physics, novel electronic devices, and nanofabrication.