INC 2019 Speakers

IEEE International Nanodevices and Computing Conference (INC)
3-5 April 2019 • Grenoble, France

Keynote Speakers

  • William Chappell, Director, DARPA Microsystems Technology Office
  • Wen-mei W. Hwu, Professor, University of Illinois; Chief Scientist, UIUC Parallel Computing Institute; Director, IMPACT research group

Speakers

  • Francis Balestra, CNRS Research Director, IMEP-LAHC
  • Enrique Blair, Assistant Professor, Electrical and Computer Engineering Department, Baylor University
  • Sorin Christian Cheran
  • Marius V. Costache, Researcher, The Catalan Institute of Nanoscience and Nanotechnology (ICN2), Barcelona, Spain
  • Laurent Daudet, CTO and co-founder, LightOn
  • Laurent Fesquet, Deputy Director, CIME Nanotech
  • Raphaël Frisch, PhD student, University of Grenoble Alpes (UGA)
  • Natesh Ganesh, PhD student, Department of Electrical and Computer Engineering, University of Massachusetts, Amherst
  • Paolo A. Gargini, IEEE Fellow and JSAP Fellow
  • David Holden, Cooperative Programs Manager, CEA
  • Paul K. Hurley, Senior Research Scientist, Tyndall National Institute; Research Professor, Department of Chemistry, University College Cork
  • Alice Mizrahi, Researcher, CNRS/Thales lab, Thales Research and Technology
  • Damien Querlioz, CNRS Researcher, Centre de Nanosciences et de Nanotechnologies, Université Paris-Sud
  • Carsten Schuck, Assistant Professor, Physics Institute of the University of Münster, Germany, and the Center for NanoTechnology (CeNTech)
  • Luca Selmi, Professor, University of Modena and Reggio Emilia
  • Olivier Sentieys, Professor, University of Rennes
  • Sandip Tiwari, Cornell University
  • Stefano Vassanelli, Professor of Neurophysiology, University of Padova, Dept. of Biomedical Sciences and Padua Neuroscience Center
  • J. Joshua Yang, Professor, Department of Electrical and Computer Engineering, University of Massachusetts, Amherst

 

William Chappell, Director, DARPA Microsystems Technology Office

Dr. Chappell is leading the new Electronics Resurgence Initiative (ERI), which aims to ensure far-reaching improvements in electronics performance well beyond the limits of traditional scaling. The ERI plans to forge forward-looking collaborations among the commercial electronics community, the defense industrial base, university researchers, and the DoD. Before joining DARPA, Dr. Chappell served as a professor in the ECE Department of Purdue University, where he led the Integrated Design of Electromagnetically-Applied Systems (IDEAS) Laboratory. Dr. Chappell received his BS, MS, and PhD degrees in Electrical Engineering, all from the University of Michigan.

 

 

 

 

 

Wen-mei W. Hwu, Professor, University of Illinois; Chief Scientist, UIUC Parallel Computing Institute; Director, IMPACT research group

Wen-mei W. Hwu is a Professor and holds the Sanders-AMD Endowed Chair in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. He is the chief scientist of the UIUC Parallel Computing Institute and director of the IMPACT research group (http://impact.crhc.illinois.edu). He is the chairman of the IEEE Computer Society Technical Committee on Microarchitecture (TCuARCH) and the chairman of the IEEE Computer Society Research Advisory Board (RAB). He co-directs the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) and serves as one of the principal investigators of the NSF Blue Waters petascale supercomputer. For his contributions, he received the ACM SIGARCH Maurice Wilkes Award, the ACM Grace Murray Hopper Award, the IEEE Computer Society Charles Babbage Award, the ACM/IEEE ISCA Influential Paper Award, the ACM/IEEE MICRO Test-of-Time Award, the ACM/IEEE CGO Test-of-Time Award, the IEEE Computer Society B. R. Rau Award and the Distinguished Alumni Award in Computer Science of the University of California, Berkeley. He is a fellow of IEEE and ACM. Dr. Hwu received his Ph.D. degree in Computer Science from the University of California, Berkeley.

 

 

Francis Balestra, CNRS Research Director, IMEP-LAHC

Francis Balestra, CNRS Research Director at IMEP-LAHC, has been Director of several laboratories, IMEP and LPCS, for a total of 10 years, and Director of the European Sinano Institute for 6 years. Within FP6, FP7 and H2020, he coordinated several European projects (SINANO, NANOSIL, NANOFUNCTION and NEREID “European Nanoelectronics Roadmap”) that have represented unprecedented collaborations in Europe in the field of nanoelectronics. He is a member of the AENEAS Scientific Council, of the European Academy of Sciences, of the Advisory Committee of several international journals, and of European working groups for roadmapping activities. He has founded or organized many international conferences and has co-authored a large number of books and publications. He is currently Vice President of Grenoble INP, in charge of European activities.

Abstract – Overview of NEREID

The objective of NEREID was to elaborate a new roadmap for nanoelectronics focused on the requirements of European semiconductor and application industries, covering the following domains: automotive, health, energy, security, IoT, mobile convergence, and digital manufacturing. It addresses societal challenges by following advanced concepts developed by research centres and universities, in order to identify promising novel technologies early and to cover R&D needs all along the innovation chain. The final roadmap for European micro- and nanoelectronics highlights medium- to long-term recommendations for strengthening European competitiveness in this domain, opening the doors to many new markets. The NEREID roadmap is intended as input for future research programmes at the European and national levels. It also targets better coordination between academic and industrial research in equipment, semiconductors and application developments. The NEREID roadmap is divided into several main technology sectors: Advanced Logic (including Nanoscale FETs and Memories) and Connectivity, Functional Diversification (Smart Sensors, Smart Energy, Energy for Autonomous Systems), Beyond-CMOS (Emerging Devices and Computing Paradigms), Heterogeneous Integration and System Design, Equipment, and Materials and Manufacturing Science.

 

 

Enrique Blair, Assistant Professor, Electrical and Computer Engineering Department, Baylor University

Dr. Blair joined Baylor University’s Electrical and Computer Engineering Department in August 2015 as an assistant professor. He is also a veteran of the U.S. Navy, having served as a submarine officer and as a military faculty member at the U.S. Naval Academy in Annapolis, Maryland.

Dr. Blair completed his Ph.D. at the University of Notre Dame, where he developed theory and models for power dissipation and quantum decoherence in energy-efficient, high-speed molecular computing devices known as Quantum-dot Cellular Automata (QCA). His work includes modeling molecule-environment interactions by numerically solving Schrödinger’s equation or by using reduced dynamics (the Lindblad equation or the operator-sum equation). Results demonstrated that environmentally-driven quantum decoherence stabilizes QCA bits, and current work includes the modeling of field-driven electron transfer and power dissipation for clocked QCA molecules.

Dr. Blair teaches courses on electrical circuits, quantum mechanics, quantum computing, and electronic communication systems.

Abstract – Generating Stochastic Bits using Tunable Quantum Systems

Stochastic computing performs fundamental mathematical operations on stochastic numbers using logic as simple as AND and OR gates. This simplicity comes at the cost of generating random bit strings whose mean represents a number in the range [0,1]. Stochastic numbers must be uncorrelated, as correlation introduces error into the calculation. We present a method for producing stochastic bits using coupled double quantum dots (DQDs). A measurement yields a random bit, and a string of measurements provides a stochastic number. The mean is set by adjusting the bias on the DQD, and DQDs are biased independently, minimizing correlation between stochastic numbers. We describe two implementations for these DQDs: a lithographic implementation and a molecular implementation.
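As a minimal numerical illustration of the stochastic-computing principle described above (my own sketch, not a model of the DQD hardware), the snippet below multiplies two numbers encoded as Bernoulli bit streams with a single AND operation, and shows the error that appears when the streams are correlated:

```python
# Illustrative sketch: multiplying two numbers in [0, 1] with stochastic
# computing. Each number is encoded as the mean of a random bit stream;
# a bitwise AND of two *uncorrelated* streams has mean p*q.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000          # stream length; accuracy improves roughly as 1/sqrt(N)
p, q = 0.3, 0.6      # the two stochastic numbers to multiply

x = rng.random(N) < p    # Bernoulli(p) bit stream
y = rng.random(N) < q    # independent Bernoulli(q) bit stream

product = np.mean(x & y)        # AND gate -> estimates p*q
correlated = np.mean(x & x)     # reusing the same stream -> estimates p, not p*p

print(f"AND of independent streams: {product:.3f} (exact {p*q:.3f})")
print(f"AND of correlated streams:  {correlated:.3f} (shows the correlation error)")
```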

 

 

Sorin Christian Cheran

Bio coming soon

Abstract – System Software Stack for Memristor-based Accelerators

The deceleration of transistor feature-size scaling has motivated growing adoption of specialized accelerators implemented as GPUs, FPGAs, and ASICs, and more recently of new types of computing such as neuromorphic, bio-inspired, ultra-low-energy, reversible, stochastic, optical, quantum, combinations of these, and others yet unforeseen. There is a tension between specialization and generalization, with the current state trending toward master-slave models where accelerators (slaves) are instructed by a general-purpose system (master) running an Operating System (OS). This talk presents a system software stack for memristor-based accelerators. We explore one accelerator implementation, the Dot Product Engine (DPE), for a select pattern of applications in machine learning. We demonstrate that making an accelerator such as the DPE more general will result in broader adoption and better utilization.
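For readers unfamiliar with what a Dot Product Engine does at its core, here is a hedged sketch (my own illustration, not HPE's software stack or device model) of how an analog memristor crossbar evaluates a matrix-vector product in one step, using a hypothetical conductance range g_max and a differential G+/G- weight encoding:

```python
# Minimal numerical sketch: an analog memristor crossbar computes a
# matrix-vector product in one step, because the column output currents
# are I = G @ V (Ohm's law plus Kirchhoff's current law).
import numpy as np

rng = np.random.default_rng(1)
W = rng.uniform(-1.0, 1.0, size=(4, 8))      # weights of one neural-network layer
x = rng.uniform(0.0, 1.0, size=8)            # input activations, applied as voltages

# Map signed weights onto two non-negative conductance arrays (G+ and G-),
# a common differential encoding for crossbars.
g_max = 1e-4                                  # siemens, hypothetical device range
G_pos = np.clip(W, 0, None) * g_max
G_neg = np.clip(-W, 0, None) * g_max

I = (G_pos - G_neg) @ x                       # difference of the two crossbar currents
y_analog = I / g_max                          # rescale back to weight units
print(np.allclose(y_analog, W @ x))           # True: the crossbar "computed" W @ x
```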

 

 

Marius V. Costache, Researcher, The Catalan Institute of Nanoscience and Nanotechnology (ICN2), Barcelona, Spain

Marius V. Costache obtained a Ph.D. degree in physics from the University of Groningen, the Netherlands, working on high-frequency spin transport and spin pumping in the group of Prof. B.J. van Wees. He was then a postdoctoral researcher at the Massachusetts Institute of Technology (MIT), working with Prof. J.S. Moodera on MBE-grown MgB2-based Josephson junctions. He joined the ICN2 Physics and Engineering of Nanodevices Group as a postdoc and is currently a Research Staff Member. He is the recipient of the 2015 IUPAP Young Scientist medal in the field of magnetism.

Abstract – Magnon Computing

This talk will provide an introduction to the field of magnon spintronics and will review recent progress in the various approaches to magnonic logic circuits and the basic elements required for circuit structure. Because magnons propagate without the use of electrical charge, it is possible to exploit them for efficient data transfer and novel logic devices. Compared with traditional approaches, magnonic logic circuits allow scaling down to the deep sub-micrometer range and THz-frequency operation. In addition, recent experiments that demonstrate the potential of magnons as quantum information carriers in the microwave domain will be discussed.

 

 

Laurent Daudet, CTO and co-founder, LightOn

Laurent is CTO and co-founder at LightOn, currently on leave from his position as Professor of Physics at Paris Diderot University, France. He is a recognized expert in signal processing for the physics of waves. At LightOn, Laurent manages cross-disciplinary R&D projects involving machine learning, optics, signal processing, electronics, and software engineering. Before that, and in parallel, he has held various academic positions: fellow of the Institut Universitaire de France, associate professor at Université Pierre et Marie Curie, Visiting Senior Lecturer at Queen Mary University of London, UK, and Visiting Professor at the National Institute of Informatics in Tokyo, Japan. Laurent has authored or co-authored nearly 200 scientific publications, has been a consultant to various small and large companies, and is a co-inventor on several patents. He is a graduate of the Ecole Normale Supérieure, Paris, and holds a PhD in Applied Mathematics from Marseille University.

Abstract – An optical co-processor for large-scale machine learning based on random features

The propagation of coherent light through a thick layer of scattering material is an extremely complex physical process. However, it remains linear and, under certain conditions, if the incoming beam is spatially modulated to encode some data, the output as measured on a sensor can be modeled as a random projection of the input, i.e. its multiplication by an iid random matrix. One can leverage this principle for compressive imaging, and more generally for any data processing pipeline involving large-scale random projections. This talk will discuss recent technological developments of optical co-processors within the startup LightOn, and present a series of proof-of-concept experiments in machine learning, such as transfer learning, change-point detection, recommender systems, time-series analysis, and natural language processing.
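The random-features idea the talk relies on can be sketched in a few lines of plain NumPy (a toy simulation under my own assumptions; the optical processor itself, and the exact nonlinearity of its sensor, are not modeled):

```python
# Random features: project data through a fixed iid random matrix plus a
# nonlinearity, then train only a cheap linear readout on the result.
import numpy as np

rng = np.random.default_rng(42)
n_samples, d_in, d_proj = 200, 50, 300

X = rng.normal(size=(n_samples, d_in))                  # toy data
y = (X[:, 0] * X[:, 1] > 0).astype(float)               # nonlinear target

R = rng.normal(size=(d_in, d_proj)) / np.sqrt(d_proj)   # fixed iid random matrix
features = np.abs(X @ R) ** 2                           # intensity-like nonlinearity (assumed)

# Ridge-regression readout on the random features (closed form).
lam = 1e-2
A = features.T @ features + lam * np.eye(d_proj)
w = np.linalg.solve(A, features.T @ y)
accuracy = np.mean((features @ w > 0.5) == y)
print(f"training accuracy with random features: {accuracy:.2f}")
```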

 

 

Laurent Fesquet, Deputy Director, CIME Nanotech

Laurent Fesquet (IEEE M’99, S’09) received the Ph.D. degree in electrical engineering from Paul Sabatier University, Toulouse, France, in 1997. In 1995, he was a Lecturer in charge of electronics and inertial navigation systems with the French Navy Instruction Center. In 1999, he joined the Grenoble Institute of Technology, Grenoble, France, as an Associate Professor. Since 2008, he has been Deputy Director of CIME Nanotech, an academic center that supports microelectronic teaching and research activities. He was also an invited professor at EPFL in Switzerland in 2018 and is currently a team leader at the TIMA Laboratory. His research, focused on asynchronous circuits, has been extended to non-uniform sampling and processing techniques, which are well suited to event-driven circuits. His current research covers asynchronous circuit design, computer-aided design (CAD) for event-based systems, and non-uniform signal processing.

Abstract – Asynchronous design for new device development

Asynchronous circuits today offer an alternative to the all-synchronous circuit paradigm, which tends to process useless data and to unnecessarily activate flip-flops and the clock tree. Indeed, asynchronous circuits are data-driven and only process on demand, which is particularly beneficial for enhancing circuit power efficiency. Among the techniques for reducing power consumption, brain-inspired techniques are seeing renewed interest because they can now benefit from silicon integration. As the brain is not clock-synchronized, an asynchronous design of artificial neural networks is preferable. Intel recently fabricated, in 14 nm FinFET technology, a fully asynchronous spiking neural network chip implementing 130,000 neurons to evaluate the concept. Stochastic computing is another technique that could benefit from the asynchronous approach. The presentation will give a short overview of asynchronous circuit design and how we can exploit it with emerging computation techniques.

 

 

Raphaël Frisch, PhD student, University of Grenoble Alpes (UGA)

After receiving his B.Sc. in computer science at the Karlsruhe Institute of Technology (KIT) in Germany, Raphaël Frisch pursued his studies with a double master’s degree (M.Sc.) between KIT and ENSIMAG in Grenoble, France, where he wrote his master’s thesis at INRIA. In November 2016, Mr. Frisch started a PhD at the University of Grenoble Alpes (UGA), working on the topic “Design of stochastic machines for source localization and separation”.

Abstract – Stochastic sampling machine for Bayesian inference

Compared to conventional processors, stochastic computing architectures have strong potential to speed up computation and to reduce power consumption. In this talk, such an architecture, called the Bayesian Machine (BM) and dedicated to solving Bayesian inference problems, will be presented. The BM uses stochastic computing and Bayesian models to compute the inference of a given probabilistic model, calculating the inference over a searched variable in parallel. The machine will be explained using the example of SSL (sound source localization). Different optimizations made to the BM to speed up computation, and hence reduce power consumption, will also be presented.
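As a rough illustration of the principle (my own toy example; the real BM is a hardware architecture with dedicated stochastic arithmetic, and its actual design may differ), the following sketch fuses several likelihood terms per candidate source position by ANDing Bernoulli bit streams, which is how stochastic computing turns a product of probabilities into simple gate operations:

```python
# Toy stochastic Bayesian fusion: for each candidate position, the product
# of per-sensor likelihoods is estimated by ANDing one Bernoulli bit stream
# per likelihood term and taking the mean of the fused stream.
import numpy as np

rng = np.random.default_rng(3)
N = 50_000                                   # bit-stream length
positions = np.arange(8)                     # discretized candidate positions

# Hypothetical likelihoods P(sensor_i reading | position) for 3 sensors.
likelihoods = rng.uniform(0.1, 0.9, size=(3, len(positions)))

posterior_est = np.empty(len(positions))
for j, pos_probs in enumerate(likelihoods.T):        # one candidate position at a time
    streams = rng.random((len(pos_probs), N)) < pos_probs[:, None]
    fused = np.logical_and.reduce(streams)           # AND-gate chain across sensors
    posterior_est[j] = fused.mean()                  # estimates the product of likelihoods

posterior_est /= posterior_est.sum()                 # normalize over positions
exact = likelihoods.prod(axis=0) / likelihoods.prod(axis=0).sum()
print("estimated MAP position:", posterior_est.argmax(), "exact:", exact.argmax())
```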

 

 

Natesh Ganesh, PhD student, Department of Electrical and Computer Engineering, University of Massachusetts, Amherst

Natesh Ganesh is a PhD student in the Electrical and Computer Engineering Dept. at the Univ. of Massachusetts, Amherst, working under Prof. Neal Anderson. His research interests include physical intelligence, information theory, non-equilibrium thermodynamics, brain-inspired hardware and artificial consciousness. He has been working on the fundamental non-equilibrium conditions for intelligence in physical systems (A Thermodynamic Treatment of Intelligent Systems) and proposed the novel framework of Thermodynamic Intelligence (Thermodynamic Intelligence, A Heretical Theory). He is currently working on thermodynamic computing, a new engineering paradigm that puts thermodynamics at the heart of computing.

Abstract – Engineering Energy Efficient Intelligence

The shift in the focus of the computing industry towards learning applications, and the inevitable end of Moore’s law, have spurred the search for novel substrates and architectures to build computing systems that can “learn like the human brain.” Machine learning techniques have made tremendous progress on narrow tasks using large datasets and compute power, yet these systems remain a far cry from the human brain in terms of energy efficiency and performance across several domains. In order to build energy-efficient AI systems, we need to identify the optimal devices, architectures and design techniques. To achieve this, it is necessary to have a solid theoretical foundation for intelligence and computing, and what they both entail. In this talk we will review some of the fundamental ideas that underlie the current computational approach to intelligence, and explore the foundational question: is computing the optimal path forward to artificial intelligence? A thermodynamic framework of intelligence will be proposed as an alternative option, one that describes intelligence as a physical process in terms of homeostasis, entropy flow and energy dissipation. I will discuss the exciting path moving forward, examining novel test-bed substrates and the changes to design philosophies needed to engineer a ‘computer’ on thermodynamic principles.

 

 

Paolo A. Gargini, IEEE Fellow and JSAP Fellow

Dr. Paolo Gargini returned to the world of research in 2012 after having worked for 34 years (1978-2012) at Intel Corporation. During his tenure at Intel Dr. Gargini was Director of Technology Strategy in Santa Clara, California. While at Intel, Dr. Gargini was also responsible for worldwide research activities conducted by universities and consortia for the benefit of the Technology and Manufacturing Group.

Dr. Gargini was born in Florence, Italy, and received a doctorate in Electrical Engineering in 1970 and a doctorate in Physics in 1975 from the Università di Bologna, Italy, both with full marks and honors.

In the early 1970s, he did research at Stanford University and at Fairchild Camera and Instrument Research and Development in Palo Alto.

After joining Intel in 1978, he was responsible for developing the building blocks of the HMOS III and CHMOS III technologies used in the 1980s for the 80286 and 80386 processors. In 1985 he headed the first submicron process development team at Intel. He was also responsible for all equipment selections from 1994 to 2007.

In 1996, Dr. Gargini was elevated to Director of Technology Strategy, Intel Fellow.

From 1998 to 2015, Dr. Gargini was the Chairman of the International Technology Roadmap for Semiconductors (ITRS). Since 2016 he has been the Chairman of the IRDS, and since 2001 he has been the chairman of the IEUVI.

Dr. Gargini became the first Chairman of the Governing Council of the Nanoelectronics Research Initiative (NRI) funded in June 2005 by SIA.

Dr. Gargini was inducted in the VLSI Research Hall of Fame in 2009.

Dr. Gargini was elevated to IEEE Fellow in 2009 and to International Fellow of the Japan Society of Applied Physics in 2014.

Dr. Gargini is chairman of the ETAB of E3S (UCB) and is also a member of the NEREID Advisory Board for the European Roadmap.

He is a member of the leadership committee of the IEEE initiative on the 5G and Beyond Roadmap.

Abstract – IRDS Overview

“Geometrical Scaling” characterized the 1970s, 1980s and 1990s. The NTRS identified major transistor material and structural limitations. To solve these problems the ITRS introduced strained silicon, high-κ/metal gate, FinFETs, and other semiconductor materials under “Equivalent Scaling”.

Horizontal (2D) features will reach a limit beyond 2020. Flash producers have adopted the vertical dimension. Logic producers will follow. IRDS assessed that “3D Power Scaling” will extend Moore’s Law for at least another 15 years. Furthermore, computing performance will be substantially improved by monolithically integrating several new heterogeneous memory layers on top of logic layers powered by a combination of CMOS and “new switch” transistors.

Abstract – International Network Generations Roadmap (INGR)

Over the past century the telecommunications industry sequentially introduced corded telephones and radios; black-and-white television was followed by color television, and eventually the Internet and wireless cell phones became popular. Since 1980 a new generation of wireless cell phone has been introduced every 10 years. Email and Internet usage by means of personal computers connected via telephone lines were popularized in the 90s. Shortly afterwards the PC went wireless, and smart phones capable of multiple functions were introduced over the past 10 years. These two wireless worlds came together with the introduction of 4G/LTE, as the communication worlds fully converged into the digital domain; this confluence opened the way for the Internet of Things, where everything can be connected to anything, anywhere, anytime. The continuous demand for higher performance led to the quest for new and higher electromagnetic frequencies to increase the amount of information carried and to reduce latency. 5G is the latest generation under development, but in order to provide a faster path for seamless upgrades to the whole network, it has become clear that constructing a 10-year roadmap to plan in advance for 6G and any future generation is absolutely necessary. The new INGR will be presented for the first time at this conference.

 

 

David Holden, Cooperative Programs Manager, CEA

David Holden is Cooperative Programs Manager at CEA, where he is responsible for building successful collaborations in the fields of nanoelectronics, photonics, sensors and design technologies. He manages relations with partner organizations as well as with public funding agencies at the national and European levels for a portfolio of projects with a combined annual budget exceeding €20M. From 2002 to 2011, David was involved in technology transfer and strategic marketing for CEA. Prior to joining CEA, he was active in technology commercialization with General Electric, Thomson, Bosch and Xerox. He also played key roles related to technology development and licensing at the start-ups PixTech and Advanced Vision Technologies. David holds an engineering degree from Cornell University and an MBA from INSEAD. He has been active on the Governing Board of ECSEL, the Support Group of the European nanoelectronics industry association AENEAS, and the Executive Committee of EPOSS. He has also served as an expert evaluator for the European Commission, for the Eureka cluster Euripides and for the National Research Council of Canada.

Abstract – More than Moore Roadmap – Smart Energy for power applications & Autonomous IOT Systems

This talk presents the NEREID Roadmap for energy-related components and systems, covering a wide range from power electronics to autonomous sensing and energy harvesting. By aggregating the predictions of several expert focus groups, the NEREID technology forecast released at the end of 2018 presents the expected evolution of More-than-Moore components, including comparisons of competing technologies and expectations for the coming years. A brief comparison with other roadmaps, as well as an overview of the driving factors for implementation, will provide context for discussion.

 

 

Paul K. Hurley, Senior Research Scientist, Tyndall National Institute; Research Professor, Department of Chemistry, University College Cork

Paul K. Hurley is a Senior Research Scientist at the Tyndall National Institute (www.tyndall.ie) and a Research Professor in the Department of Chemistry at University College Cork (www.ucc.ie). Paul leads a research team exploring alternative semiconductor materials and device structures aimed at improving the energy efficiency of the next generation of logic devices. In particular, the group is working on III-V and 2D (e.g., MoS2, WSe2) semiconductors and their interfaces with metals and oxides, which will form the heart of logic devices incorporating these materials. The group is also researching the use of metal-oxide-semiconductor (MOS) systems for the creation of solar fuels through water-splitting reactions. He may be reached at: paul.hurley@tyndall.ie.

Abstract – 2D Nanodevices: Investigating the Electronic Properties of Oxide/MoS2 Interfaces

One of the principal factors motivating the study of two-dimensional semiconductors in insulated-gate electron devices is the potential for 2D semiconductor systems to yield near-ideal semiconductor/oxide interface properties. This presentation will focus on the application of impedance spectroscopy (100 Hz to 1 MHz) to the analysis of interface states and border traps in the oxide/MoS2 system using MOS and MOSFET structures. The experimental capacitance-voltage (CV) and conductance-voltage (GV) responses over frequency and applied bias are analyzed in conjunction with physics-based ac simulations to probe the density and energy distribution of oxide/MoS2 interface states and border traps in the oxide.
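For context, the textbook conductance-method relations often used to extract interface-state densities from such GV data are reproduced below (an assumption for illustration; the talk relies on physics-based ac simulations, which may differ). Here G_m and C_m are the measured conductance and capacitance, C_ox the oxide capacitance, ω the angular frequency, and q the elementary charge:

```latex
% Textbook conductance-method relations (for illustration only; not taken
% from the talk, whose extraction uses physics-based ac simulations).
\frac{G_p}{\omega} \;=\; \frac{\omega\, G_m\, C_{ox}^{2}}{G_m^{2} + \omega^{2}\left(C_{ox}-C_m\right)^{2}},
\qquad
D_{it} \;\approx\; \frac{2.5}{q}\left(\frac{G_p}{\omega}\right)_{\max}
\quad \text{(continuum of interface traps).}
```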

The CV and GV response of back-gated p⁺-Si/Al2O3/MoS2/Au capacitor structures, with relatively thick MoS2 (>200 nm), exhibit a near-ideal multi-frequency CV response in the depletion region. This provides experimental results to support low/negligible interface states in oxide/MoS2 structures for the case of MoS2 layers transferred to an amorphous Al2O3 surface. Results will also be presented for CV/GV gate-to-channel analysis of top-gated n-type MoS2 MOSFET structures, based on thin channels (5-10 layers) and Al2O3 or HfO2 gate oxides formed on the MoS2 surface by atomic layer deposition. For the top-gated MoS2 MOSFETs, multi-frequency CV/GV responses yield variable interface state density values over the range 5×10¹¹ to 1×10¹³ cm⁻²eV⁻¹, with the response of border traps in the Al2O3/HfO2 also evident for bias conditions corresponding to accumulation of the MoS2/oxide interface. Finally, results will be presented illustrating how oxide/MoS2 interface state density values in top-gated MOSFETs are significantly reduced by forming gas (H2/N2) annealing at 300 °C.

The authors acknowledge the financial support of Science Foundation Ireland under the IvP project INVEST (SFI-15/IA/3131), the US-Ireland R&D Partnership Programme (SFI/13/US/I2862) and the NSF UNITE US/Ireland R&D Partnership for support under NSF-ECCS–1407765.

 

 

Damien Querlioz, CNRS Researcher, Centre de Nanosciences et de Nanotechnologies, Université Paris-Sud

Damien Querlioz is a CNRS Researcher at the Centre de Nanosciences et de Nanotechnologies of Université Paris-Sud. He focuses on novel uses of emerging non-volatile memory and other nanodevices, in particular drawing inspiration from biology and machine learning. He received his predoctoral education at the Ecole Normale Supérieure, Paris, and his PhD from Université Paris-Sud in 2009. Before his appointment at CNRS, he was a Postdoctoral Scholar at Stanford University and at the Commissariat à l’Energie Atomique. Damien Querlioz coordinates the interdisciplinary INTEGNANO research group, with colleagues working on all aspects of nanodevice physics and technology, from materials to systems. He is a member of the bureau of the French Biocomp research network and a management committee member of the European MEMOCIS COST action. He has co-authored one book, four book chapters, and more than 100 journal articles and conference proceedings, and has given more than 50 invited talks at national and international workshops and conferences. In 2016, he was the recipient of an ERC Starting Grant to develop the concept of natively intelligent memory. In 2017, he received the CNRS Bronze Medal. He has also been a co-recipient of the 2017 IEEE Guillemin-Cauer Best Paper Award and of the 2018 IEEE Biomedical Circuits and Systems Best Paper Award.

Abstract – Memory Centric Artificial Intelligence

When performing artificial intelligence tasks, central and graphics processing units consume considerably more energy moving data between processing units and memory than doing actual arithmetic. Brains, by contrast, achieve vastly superior energy efficiency by fusing logic and memory entirely, performing a form of “in-memory” computing. In this talk, we will look to neuroscience for lessons on the design of in-memory computing systems. First, we will study the reliance of brains on approximate memory strategies, which can be reproduced in deep learning hardware; we will give the example of a binarized neural network relying on resistive memory. Second, we will see that brains use the physics of their memory devices in a way that is much richer than mere storage. This can inspire radical electronic designs where memory devices become a core part of computing. We will illustrate this concept with works using spin-torque memories as artificial neurons.
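The binarized neural network mentioned above rests on a simple arithmetic identity, sketched here (my own illustration, not the speaker's design): when weights and activations are restricted to {-1, +1}, a dot product reduces to an XNOR followed by a popcount, operations that map naturally onto memory arrays:

```python
# Binary dot product via XNOR + popcount, the core operation a binarized
# neural network can push into a resistive-memory array.
import numpy as np

rng = np.random.default_rng(7)
n = 256
w = rng.choice([-1, 1], size=n)            # binarized weights
x = rng.choice([-1, 1], size=n)            # binarized activations

# Bit encoding: +1 -> 1, -1 -> 0.
wb = (w > 0)
xb = (x > 0)

xnor = ~(wb ^ xb)                          # 1 where the signs agree
dot_via_popcount = 2 * np.count_nonzero(xnor) - n   # agreements minus disagreements

print(dot_via_popcount == int(np.dot(w, x)))        # True: identical results
```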

 

 

Carsten Schuck, Assistant Professor, Physics Institute of the University of Münster, Germany, and the Center for NanoTechnology (CeNTech)

Dr. Schuck is an Assistant Professor at the Physics Institute of the University of Münster, Germany, and the Center for NanoTechnology (CeNTech). His work focuses on integrated quantum technology, in particular the integration of quantum emitters and superconducting single-photon detectors with nanophotonic circuits. Before his appointment in Münster, he was a postdoctoral fellow in the Nanodevices Laboratory at Yale University, and he worked as an architect in the Sensors, Metrology and Computational Modeling division of ASML Research (Netherlands). In 2016 he was awarded a fellowship of the Ministry of Science and Research (North Rhine-Westphalia) for returning to Germany. He studied physics in Hamburg, Munich and Uppsala (Sweden) and obtained his PhD degree for work on quantum networking with single trapped ions at the Institute of Photonic Sciences, ICFO (Barcelona).

Abstract – Nanophotonic devices for quantum information processing

A wide range of quantum information processing and communication schemes can be implemented with single-photons as information carriers. However, scaling such schemes to large system size is an outstanding problem in quantum technology. Here we envision a versatile photonic quantum information processing system on a silicon chip, which integrates non-classical light sources and single-photon detectors with a network of nanophotonic devices. Scalability of our approach emerges from leveraging modern nanofabrication routines to straightforwardly replicate nanoscale integrated optical devices with high reproducibility. Single-photons are generated on-chip via nonlinear optical processes and efficient interfaces allow for supplying photonic networks with such individual quantum systems. We realize building blocks of these networks that combine optical, electrical and mechanical functionality by exploring novel material systems as well as non-traditional design approaches. Waveguide-coupled superconducting nanowire single-photon detectors integrate seamlessly with such photonic circuitry and offer high detection efficiency, low noise and excellent timing performance at the level of individual quanta. We present progress towards integrating sources, circuits and detectors on a silicon chip to match the demands of future large-scale implementations of quantum technologies.

 

 

Luca Selmi, Professor, University of Modena and Reggio Emilia

Luca Selmi (M’01–SM’09–F’15) received the Ph.D. degree in Electronic Engineering from the University of Bologna, Bologna, Italy, in 1992. From 2000 he was Professor of Electronics at the University of Udine, Italy, and he recently moved to the University of Modena and Reggio Emilia. He has served as a TPC member of various conferences, including IEEE IEDM and the IEEE VLSI Technology Symposium, and as an Editor of IEEE EDL. His research interests include simulation, modeling, and characterization of nanoscale CMOS transistors and NVM, with emphasis on hot-carrier effects, quasi-ballistic transport, and Monte Carlo simulation techniques, and with a recent twist toward nanoelectronic (bio)sensors and the simulation and characterization of ISFET and impedance spectroscopy sensors.

Abstract – Advanced Simulation of Nanodevices

 

 

Olivier Sentieys, Professor, University of Rennes

Olivier Sentieys is a Professor at the University of Rennes, holding an Inria Research Chair on Energy-Efficient Computing Systems. He leads the Cairn team, joint between Inria (the French research institute dedicated to computational sciences) and the IRISA Laboratory. He is also the head of the “Computer Architecture” department of IRISA. His research interests are in the area of computer architectures, embedded systems and signal processing, with a focus on energy efficiency, reconfigurable systems, hardware acceleration, approximate computing, and energy-harvesting sensor networks.

He authored or co-authored more than 250 journal or conference papers, holds 6 patents, and served in the technical committees of several international IEEE/ACM/IFIP conferences.

Abstract – Playing with number representations for energy efficiency: an introduction to approximate computing

Energy consumption is one of the major issues in computing today, shared by all domains of computer science from high-performance computing to embedded systems. The two main factors that influence energy consumption are execution time and data volume. In recent years, approximation has been receiving renewed interest as a way to improve both speed and energy consumption in embedded systems. Many embedded applications do not require high precision or accuracy, and both software and hardware designers often seek a sweet spot in the compromise between accuracy, speed, energy, and area cost, at several layers ranging from the application and software levels down to the architecture and circuit levels. Various techniques for approximate computing (AC) augment the design space by providing another set of design knobs for the performance-accuracy trade-off. In this talk, we will review the main techniques for operator-level approximation using various number representations, playing with data word-length and types of operators, to show their benefits and drawbacks in terms of energy efficiency.
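A small, self-contained sketch of the word-length trade-off discussed here (my own toy example with a hypothetical `quantize` helper, not a tool from the talk) shows how each bit removed from a fixed-point representation costs accuracy in a simple FIR filtering task:

```python
# Toy operator-level approximation: quantize data and coefficients to a
# reduced fixed-point word length and measure the resulting error.
import numpy as np

def quantize(values, frac_bits):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(values / step) * step

rng = np.random.default_rng(11)
x = rng.uniform(-1, 1, size=10_000)        # signal samples
h = rng.uniform(-1, 1, size=32)            # FIR filter coefficients
reference = np.convolve(x, h)              # double-precision reference

for bits in (12, 8, 6, 4):
    approx = np.convolve(quantize(x, bits), quantize(h, bits))
    err = np.sqrt(np.mean((approx - reference) ** 2))
    print(f"{bits:2d} fractional bits -> RMS error {err:.5f}")
```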

 

 

Alice Mizrahi, Researcher, CNRS/Thales lab, Thales Research and Technology

Alice Mizrahi is a researcher at the joint CNRS/Thales lab within Thales Research and Technology. She received her PhD in Physics in 2017 from the Université Paris-Saclay. Her thesis, done partly at the Unité Mixte de Physique CNRS/Thales and partly at the Center for Nanoscience and Nanotechnology, focused on the use of stochastic magnetic tunnel junctions as artificial neurons. She broadened her interests to swarm intelligence and memristors during her joint appointment as a postdoctoral researcher at the National Institute of Standards and Technology and the University of Maryland, in the United States. In late 2018, she joined Thales Research and Technology to develop hardware for artificial intelligence.

Abstract – Bio-inspired computing with stochastic nanomagnets

Artificial neural networks are performing tasks, such as image recognition and natural language processing, that offer great promise for artificial intelligence. However, these algorithms run on traditional computers and consume orders of magnitude more energy than the brain does on the same task. One promising path to reducing the energy consumption is to build dedicated hardware for artificial intelligence. Nanodevices are particularly interesting because they allow for complex functionality with low energy consumption and small size. I discuss two nanodevices. First, I focus on stochastic magnetic tunnel junctions, which can emulate the spike trains emitted by neurons, with a switching rate that can be controlled by an input. Networks of these tunnel junctions can be combined with CMOS circuitry to implement population coding, building low-power computing systems capable of processing sensory input and controlling output behavior. Second, I turn to different nanodevices, memristors, to implement a different type of computation occurring in nature: swarm intelligence. A broad class of algorithms inspired by the behavior of swarms has proven successful at solving optimization problems (for example, an ant colony can solve a maze). Networks of memristors can perform swarm intelligence and find the shortest paths in mazes, without any supervision or training. These results are striking illustrations of how matching the functionalities of nanodevices with relevant properties of natural systems opens the way to low-power hardware implementations of difficult computing problems.
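A hedged toy model of the first device family (my own phenomenological sketch, not the actual junction physics): a stochastic magnetic tunnel junction emitting Bernoulli "spikes" whose rate follows a sigmoid of the input, with a small population of junctions encoding the input through its mean activity:

```python
# Phenomenological stochastic "neuron": switching probability per time step
# grows sigmoidally with the drive; a population's mean rate encodes the input.
import numpy as np

rng = np.random.default_rng(5)

def mtj_spike_train(drive, steps=2000):
    """Bernoulli spikes with a sigmoid-shaped rate (a common simplified model)."""
    p = 1.0 / (1.0 + np.exp(-drive))       # switching probability per step
    return rng.random(steps) < p

inputs = np.linspace(-3, 3, 7)
# Population coding: average the firing rates of 10 junctions per input value.
population_rates = [np.mean([mtj_spike_train(u).mean() for _ in range(10)]) for u in inputs]

for u, rate in zip(inputs, population_rates):
    print(f"input {u:+.1f} -> mean population rate {rate:.2f}")
```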

 

 

Sandip Tiwari, Cornell University

Sandip Tiwari, a native of India, was educated first in physics and then in electrical engineering, attending IIT Kanpur, RPI, and Cornell, and, after working at IBM Research, joined Cornell in 1999. He has been a visiting faculty member at Michigan, Columbia, and Harvard, was the founding editor-in-chief of the IEEE Transactions on Nanotechnology, and authored a popular textbook on device physics. He is currently the Charles N. Mellowes Professor in Engineering and the director of the USA’s National Nanotechnology Infrastructure Network. His research has spanned the engineering and science of semiconductor electronics and optics, and has been honored with the Cledo Brunetti Award of the Institute of Electrical and Electronics Engineers (IEEE), the Distinguished Alumnus Award from IIT Kanpur, the Young Scientist Award from the Institute of Physics, and Fellowships of the American Physical Society and the IEEE.

Particularly joyful to him is discovering scientific explanations, uncovering new phenomena, inventing new devices and technologies, and moving in directions that are of broader societal use. His current research interests are in the challenging questions that arise when connecting large scales, such as those of massively integrated electronic systems (a complex system), to small scales, such as those of the small devices and structures that come about from the use of the nanoscale, bringing together knowledge from engineering and the physical and computing sciences.
Through the National Nanotechnology Infrastructure Network (NNIN) and in his personal life, he is also active in promoting broader education, openness, understanding and cooperation across the world.

Abstract – Deus ex machina: Energy and effectiveness in stimulated implementations of machine learning

The confluence of probabilistic methods and machine learning in information mechanics, and its implementation in hardware, leads to some very intriguing and challenging questions. Machine learning presupposes, through pretraining, a model of the world. What does not fit the model then leads to a failure in inference. Even a fit may be an erroneous model conclusion that happens to be right. Statistically, these have a linkage to Jeffrey’s notion of sufficient statistics, mutual information and information aggregation, and linkages over domains.

Introduction of randomness in this inference style improves effectiveness for a variety of reasons, including the ones that make compressive sensing succeed. So probabilistic methods and machine learning together make an unusual combination that leads to effectiveness, but also to possibilities for improving energy efficiency.

I would like to combine the information mechanics notions with stimulated examples of hardware and models to illustrate some of these conclusions. Examples include (a) the introduction of von Economo (layer-bypassing) neurons that improve accuracy where higher-order correlations are important, (b) the use of probabilistic elements in improving computational efficacy and energy consumption in noisy conditions (stochastic, deterministic and Bayesian approaches combined in hidden Markov model applications for inference and traditional mathematical problems), (c) exploring how close one can get to the thermodynamic limits constrained by Boltzmann and Fisher, and (d) an example of a recursive approach that captures complexity (in music).

This work points to a design optimization inherent in the accuracy of probabilities, which are subject to the information input and the accuracy of the resources, the scale of the network and its ability to capture correlations, and the sufficiency of statistics.

 

 

Stefano Vassanelli, Professor of Neurophysiology, University of Padova, Dept. of Biomedical Sciences and Padua Neuroscience Center

S. Vassanelli is Professor of Neurophysiology at the University of Padova, Dept. of Biomedical Sciences and Padua Neuroscience Center, and leader of the NeuroChip laboratory. His main research focus is the development of novel nanotechnologies for brain-computer interfacing and for the investigation of information processing in brain circuits. He is the coordinator of the SYNCH project, funded by the European Commission under H2020, with the objective of creating a hybrid biological-artificial neural architecture with memristive plasticity in vivo.

Abstract – Connecting silicon and brain neurons with memristive synapses

Development of neural interfaces is pushing towards large-scale, high-density implementations that represent an ideal gateway to micro- and nanoelectronic devices emulating fundamental properties of neurons, such as action-potential firing and synaptic plasticity. A new concept of brain-machine interfacing emerges where brain and silicon neurons are physically connected for seamless spike-based computation, in contrast to signal-processing approaches based on von Neumann machines. As a first step in this direction, we show how thin-film nanoscale electrodes and memristors can be used as synapse-like connections between biological and very-large-scale-integration (VLSI) spiking neurons. We show that memristors can mimic synapses in compressing information on spike occurrence and emulate plasticity across an elementary biohybrid network.
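To make the memristive-plasticity idea concrete, here is a deliberately simplified sketch (my own toy, not the SYNCH implementation) of a synapse-like conductance that is potentiated or depressed according to the relative timing of pre- and post-synaptic spikes, an STDP-like rule:

```python
# Toy memristive synapse between a "pre" neuron and a "post" neuron:
# conductance increases when pre fires before post, and decreases otherwise,
# with an exponential dependence on the spike-timing difference.
import numpy as np

g, g_min, g_max = 0.5, 0.0, 1.0            # normalized synaptic conductance
A_plus, A_minus, tau = 0.05, 0.05, 20e-3   # learning rates and time constant (s)

pre_spikes  = [0.010, 0.050, 0.120]        # spike times in seconds
post_spikes = [0.015, 0.045, 0.150]

for t_pre, t_post in zip(pre_spikes, post_spikes):
    dt = t_post - t_pre
    if dt >= 0:                            # pre before post -> potentiation
        g += A_plus * np.exp(-dt / tau)
    else:                                  # post before pre -> depression
        g -= A_minus * np.exp(dt / tau)
    g = float(np.clip(g, g_min, g_max))    # keep conductance within device limits
    print(f"dt = {dt*1e3:+5.1f} ms -> conductance {g:.3f}")
```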

 

 

J. Joshua Yang, Professor, Department of Electrical and Computer Engineering, University of Massachusetts, Amherst

Dr. J. Joshua Yang is a professor in the Department of Electrical and Computer Engineering at the University of Massachusetts, Amherst. Before joining UMass in 2015, he spent eight years at HP Labs, where he led the Memristive Materials and Devices team from 2012. His current research interests are nanoelectronics and nanoionics for computing and artificially intelligent systems, an area in which he has authored or co-authored over 140 technical papers and holds 110 granted and 55 pending US patents. He was named a Spotlight Scholar of UMass Amherst in 2017. He obtained his PhD from the University of Wisconsin-Madison in the Materials Science Program in 2007.

Abstract – In Situ Learning With Memristive Neural Networks: Supervised, Unsupervised, Reinforcement

Memristive devices have become a promising candidate for energy-efficient and high-throughput bio-inspired computing, which is a key enabler for artificially intelligent systems in the big-data and IoT era. In-situ learning is a critical and challenging step for such computing with memristive neural networks. There are three major machine learning paradigms, namely supervised, reinforcement and unsupervised learning, with different levels of bio-inspiration. I will present our recent experimental demonstrations of these learning paradigms using memristive neural networks: first, deep learning accelerators with supervised online learning; second, reinforcement learning for game play and decision making; third, neuromorphic computing for pattern classification with unsupervised learning.
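As a minimal illustration of what in-situ learning means on a crossbar (my own toy example; the experimental demonstrations in the talk are far more involved), one supervised delta-rule step can be written as an outer-product update applied directly to the conductance matrix:

```python
# One step of in-situ supervised learning on a memristive crossbar:
# read the analog matrix-vector product, compute the output error, and
# program the conductances with an outer-product (delta-rule) update.
import numpy as np

rng = np.random.default_rng(9)
G = rng.uniform(0.2, 0.8, size=(3, 5))     # normalized crossbar conductances
eta = 0.1                                  # learning rate

x = rng.uniform(0, 1, size=5)              # input voltages for one example
target = np.array([1.0, 0.0, 0.0])         # desired output

y = G @ x                                  # crossbar read: analog matrix-vector product
error = target - y
G += eta * np.outer(error, x)              # in-situ update: outer-product programming
G = np.clip(G, 0.0, 1.0)                   # conductances stay within device limits

print("loss before/after:", np.sum((target - y)**2), np.sum((target - G @ x)**2))
```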