Course Identification

Brain-inspired computer architectures and programming

Lecturers and Teaching Assistants

Dr. Elishai Ezra Tsur

Course Schedule and Location

First Semester
Tuesday, 14:15 - 17:00, Wolfson Auditorium

Field of Study, Course Type and Credit Points

Life Sciences: Lecture; Elective; 3.00 points
Life Sciences (Molecular and Cellular Neuroscience Track): Lecture; Elective; 3.00 points
Life Sciences (Brain Sciences: Systems, Computational and Cognitive Neuroscience Track): Lecture; Elective; 3.00 points
Life Sciences (Computational and Systems Biology Track): Lecture; Elective; 3.00 points
Mathematics and Computer Science: Lecture; Elective; 3.00 points


The course will be delivered by Dr. Elishai Ezra Tsur.


Knowledge of the fundamentals of machine learning, as well as the basics of computer architecture, is desirable (but not mandatory).



Language of Instruction


Attendance and participation

Expected and Recommended

Grade Type

Numerical (out of 100)

Grade Breakdown (in %)

All assignments are mandatory

Evaluation Type


Scheduled date 1

Wolfson Auditorium

Scheduled date 2


Estimated Weekly Independent Workload (in hours)



As a result of clock-speed stagnation and the foreseeable limits of transistor density, traditional computing technology based on the Von Neumann architecture is approaching fundamental limits. Neuromorphic engineering is therefore attracting increased attention, with the ultimate goal of realizing machines that exhibit some aspects of the human brain's cognitive intelligence. In that sense, brain research bears the promise of a new computing paradigm. One such framework is IBM's brain-inspired TrueNorth chip, powered by an unprecedented 1,000,000 neurons and 256,000,000 synapses. At 5.4 billion transistors it is the largest chip IBM has ever built, and it carries an on-chip network of 4,096 neuro-synaptic cores. It consumes only 70 mW during real-time operation, orders of magnitude less energy than traditional chips. As part of a complete cognitive hardware and software ecosystem, this technology opens new computing frontiers for distributed-sensor and supercomputing applications.

In this course we will study the fundamentals of neuromorphic computing, focusing on the digital neuron model for neuro-synaptic cores, cognitive programming paradigms, and algorithms and applications for networks of neuro-synaptic cores.

Unit 1. Introduction to cognitive computing: models, abstractions, and reductionism in brain sciences. We will focus on three perspectives of cognitive computing. The scientific perspective: conceptualizing and modelling the brain, the brain's computational building blocks, neurophysiology and neuroanatomy, emergent behavior, and neuro-simulations. The computer architect's perspective: CMOS circuits, quantum and power limitations of transistor density, emerging computing architectures, and paradigms of neuromorphic hardware. The algorithmic perspective: feature extraction vs. feature learning; descriptive, mechanistic, and interpretive models of receptive fields; and functional architectures.
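A classic descriptive model of a receptive field mentioned in this unit is the center-surround organization, often modeled as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. A minimal pure-Python sketch (the widths and weights below are arbitrary illustrative choices, not values from the course):

```python
import math

def dog(x, sigma_c=1.0, sigma_s=3.0, w_s=0.5):
    """Difference-of-Gaussians receptive field: a narrow excitatory
    center minus a broad, weaker inhibitory surround, evaluated at
    distance x from the receptive-field center.
    (sigma_c, sigma_s, w_s are illustrative parameters.)"""
    center = math.exp(-x**2 / (2 * sigma_c**2))
    surround = w_s * math.exp(-x**2 / (2 * sigma_s**2))
    return center - surround

# Response is excitatory at the center, inhibitory in the annulus
# around it, and fades to zero far away.
print(dog(0.0) > 0)    # center: net excitation
print(dog(2.0) < 0)    # surround annulus: net inhibition
```

This kind of descriptive model says what the response profile looks like without committing to a mechanism, which is exactly the distinction between descriptive, mechanistic, and interpretive models drawn in the unit.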

Unit 2. From the neuron doctrine to neural networks: the Neural Engineering Framework (NEF): representation, decoding, transformation, and dynamics; the Nengo software environment for the simulation of large-scale neural systems. Research focus: Spaun, a large-scale model of the functioning brain.
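The NEF's first two principles (representation and decoding) can be illustrated with a deliberately tiny two-neuron example: a scalar is encoded into firing rates through tuning curves, and recovered by a weighted sum of those rates. The decoders below are hand-derived for this toy pair; in practice Nengo solves for decoders over many neurons by regularized least squares.

```python
def rates(x):
    """Rectified-linear 'tuning curves' for two neurons with
    opposite preferred directions (encoders +1 and -1)."""
    gain, bias = 1.0, 1.0
    a_pos = max(0.0, gain * x + bias)   # fires more for positive x
    a_neg = max(0.0, -gain * x + bias)  # fires more for negative x
    return a_pos, a_neg

def decode(a_pos, a_neg):
    """Linear readout: for this particular pair, decoders
    d = (+0.5, -0.5) recover x exactly on [-1, 1]."""
    return 0.5 * a_pos - 0.5 * a_neg

# Encoding followed by decoding reconstructs the represented value.
for x in (-1.0, -0.25, 0.0, 0.5, 1.0):
    assert abs(decode(*rates(x)) - x) < 1e-12
```

The same encode/decode machinery, scaled to large heterogeneous populations and paired with the transformation and dynamics principles, is what Nengo uses to build models like Spaun.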

Unit 3. Neuromorphic VLSI. Circuits: introduction to VLSI circuits and the silicon neuron; temporal integration: the pulse current-source synapse, the reset-and-discharge synapse, the linear charge-and-discharge synapse, and the diff-pair integrator; spike generation and the axon-hillock circuit; the DPI neuron circuit; spike-frequency adaptation; balancing a signal's energy, efficiency, and precision. Neuronal architectures: fully dedicated, shared-axon, shared-synapse, and shared-dendrite models; the address-event representation and the address-event bus. Technology focus: the Neurogrid (Stanford University): architecture, NEF integration, Spaun integration; controlling robots with spiking silicon neurons. Research focus: retina on a chip.
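The integrate-spike-reset cycle that circuits like the axon hillock implement in silicon can be sketched in software as a discrete-time leaky integrate-and-fire neuron. A minimal sketch, with illustrative parameter values (not taken from any specific chip):

```python
def lif_spikes(current, steps=1000, dt=1e-3, tau=0.02, v_th=1.0):
    """Discrete-time leaky integrate-and-fire neuron: the membrane
    voltage leaks toward zero with time constant tau, integrates the
    input current, and emits a spike (resetting to 0) whenever it
    crosses threshold. Returns the spike count over the run."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (current - v) / tau  # forward-Euler membrane update
        if v >= v_th:
            spikes += 1
            v = 0.0                    # reset after each spike
    return spikes

# A stronger input current drives a higher firing rate, and a
# sub-threshold current produces no spikes at all.
print(lif_spikes(2.0) > lif_spikes(1.5) > lif_spikes(0.5) == 0)
```

The silicon versions trade this explicit arithmetic for the physics of capacitors and transistors, which is where the unit's energy/efficiency/precision balance comes in.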

Unit 4. Hybrid neuromorphic architectures: liberating programming from the Von Neumann architecture, discrete neuronal modeling, event-driven computing, SOI fabrication, the neural programming model, the Compass simulator, multi-core programming paradigms, the Corelet programming language, and neural codes (binary, rate, population, and time-to-spike coding); from VLSI to FPGA. Technology focus: the TrueNorth (IBM), the SpiNNaker (University of Manchester), and the BlueHive (Cambridge University).
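Two of the neural codes listed above can be contrasted with a toy encoder: rate coding maps stimulus intensity to a spike count within a time window, while time-to-first-spike coding maps it to latency (stronger stimulus, earlier spike). The spike trains below are hypothetical binary vectors over discrete time bins, purely for illustration:

```python
def rate_code(intensity, window=10):
    """Rate code: spike count over the window grows with intensity
    (intensity in [0, 1], one bin per time step)."""
    n_spikes = round(intensity * window)
    return [1 if i < n_spikes else 0 for i in range(window)]

def time_to_spike_code(intensity, window=10):
    """Time-to-first-spike code: a single spike whose latency is
    inversely related to intensity (stronger -> earlier)."""
    latency = round((1.0 - intensity) * (window - 1))
    return [1 if i == latency else 0 for i in range(window)]

strong, weak = 0.9, 0.2
print(sum(rate_code(strong)), sum(rate_code(weak)))  # more spikes when strong
print(time_to_spike_code(strong).index(1),
      time_to_spike_code(weak).index(1))             # earlier spike when strong
```

Time-to-spike codes convey the same information with far fewer spikes, which is one reason event-driven architectures like TrueNorth care about the choice of code.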

Unit 5. Tensor Processing Unit (TPU), cloud TPU, and TensorFlow on AWS.

Learning Outcomes

Upon successful completion of this course students should be able to:

  1. Describe different approaches to neuromorphic engineering
  2. Develop algorithms based on spiking neurons
  3. Explain the roles of computation, representation, and dynamics in neuromorphic computing

Reading List

Eliasmith, Chris, and Charles H. Anderson. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press, 2004.

Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. Oxford University Press.

Bekolay, Trevor, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C. Stewart, Daniel Rasmussen, Xuan Choo, Aaron Voelker, and Chris Eliasmith. "Nengo: a Python tool for building large-scale functional brain models." Frontiers in Neuroinformatics 7 (2014): 48.

Merolla, Paul A., John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson et al. "A million spiking-neuron integrated circuit with a scalable communication network and interface." Science 345, no. 6197 (2014): 668-673.

Merolla, Paul, John Arthur, Filipp Akopyan, Nabil Imam, Rajit Manohar, and Dharmendra S. Modha. "A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm." In Custom Integrated Circuits Conference (CICC), 2011 IEEE, pp. 1-4. IEEE, 2011.