The 2010 blockbuster Robot was all about Chitti, a robot that could learn and feel emotions. Even as the film played to packed theatres across the country, scientists in a south Delhi IBM lab were working on creating a computer chip that would behave pretty much like Chitti. It’s the stuff of fiction, and, as in an Asimov world, there are those who paint a doomsday scenario with machines going rogue and threatening human existence.

But researchers at six IBM labs in collaboration with those at four universities—Cornell, Columbia, the University of Wisconsin, and the University of California, Merced—are trying to create machines that can see and identify people, move down crowded sidewalks without running into humans and objects, and resolve problems based on experiences and the surrounding environment. They are trying to create a chip that mimics the way the human brain works.

“Today’s computers make fast calculations. They are not designed to make decisions nor are they learning systems. They are half a brain. We are trying to add the other half to execute tasks our brains do effortlessly, such as recognising danger or remembering faces, by integrating ambiguous information from sight, hearing, touch, taste, and smell,” says Dharmendra S. Modha. He heads IBM’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, which is working to use the design and functioning of the brain to add more intelligence to the computing world.

Five years ago, computing evolution ran into roadblocks. The unprecedented surge in data generation and the demand for faster processing pushed chipmakers to pack ever more capacity into processors. But cramming more transistors into smaller chips raises power density: small, fast chips draw more power than their larger cousins to perform the same task, and inefficiencies creep in as they heat up. There are also physical limits to how far any material can be shrunk. All this led to stagnation and fatigue in development.

This graphic shows the distribution of neurons across different sections of a macaque monkey brain; macaque brains are commonly used as precursors to research on human brains.

Modha saw possibilities in the brain. He combined the supercomputing capability of IBM’s Blue Gene with neurosciences and nanotechnology to simulate the brains of a mouse and cat. In 2009, Defense Advanced Research Projects Agency (DARPA), the Pentagon’s research agency, decided to fund the five-stage project till 2018-19, allocating a total of $41 million (Rs 216.1 crore) to date. “The idea was to change the fundamental thinking around computing architecture,” says Modha.

DARPA wants a technology that is self-organising, self-adapting, and able to learn rather than one that is limited to responses based on conventional programming commands. And, of course, one that consumes minimal power. This separates SyNAPSE from artificial intelligence, where a set of algorithms models the problem to find a solution rather than “thinking” for itself. However, Modha is quick to add that the two technologies are complementary and will have a symbiotic relationship.

Raghavendra Singh leads the group that is mapping the brain for SyNAPSE at IBM’s research lab in New Delhi. His work forms the blueprint on which the chips will be modelled. In 2010, Singh successfully mapped a macaque monkey brain and is now working on the human brain. “The brain’s highways and functional segregations decide the speed of data flow and are directly linked to efficiency,” he says.

The average human brain weighs around 1.5 kg. It has 100 billion neurons (cells that process information) linked by axons through around 100 trillion synapses (the junctions where information is stored). The chip that Modha and his team are trying to build will, like the brain, ingest and process information from its multiple sensory nodes, and then act in a coordinated, context-dependent manner. What this also means is that the chip will have no set programs; it will run on dynamic and self-adapting algorithms, mimicking the brain’s event-driven, distributed and parallel processing.
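The event-driven behaviour described above is often illustrated with a “leaky integrate-and-fire” neuron, a standard abstraction in neuromorphic computing: the neuron accumulates charge from incoming spikes, leaks some of it away each time step, and fires only when a threshold is crossed. The sketch below is a minimal illustration of that idea, not IBM’s actual SyNAPSE design; all parameter names and values are hypothetical.

```python
def simulate_lif(input_spikes, weight=0.3, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron (illustrative model).

    Each step, the membrane potential leaks, then integrates the
    weighted input spike. When it crosses the threshold, the neuron
    emits an output spike and resets -- it computes only when events
    arrive, rather than executing a fixed program.
    """
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike
        if potential >= threshold:
            output.append(1)   # fire
            potential = 0.0    # reset after firing
        else:
            output.append(0)
    return output

# A sustained burst of input eventually drives the neuron to fire once.
print(simulate_lif([1, 1, 1, 1, 1, 0, 0, 1]))  # → [0, 0, 0, 1, 0, 0, 0, 0]
```

Because nothing happens between spikes, such event-driven designs can sit nearly idle most of the time, which is where much of the claimed power saving comes from.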

Will all this science translate into anything practical? Consider this: The brain-like chip can be implanted in a glove that a grocer uses when stocking shelves. The chip monitors the look, smell, texture, and temperature of products to identify rotting or contaminated produce. In a different context, the chip can be used to accurately predict natural disasters based on historical information and current geographical data. It might even help overcome physical ailments such as blindness. Making sense of such inputs is a Herculean task for today’s computers, but natural for a brain-inspired system.

Dharmendra S. Modha heads the SyNAPSE project.

Modha’s team already has two working prototypes. These “neurosynaptic cores” contain integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons). The chips, built on a 45-nanometre process, contain 256 neurons each. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The team has successfully demonstrated simple applications, including navigation, machine vision, associative memory, and classification. The chips operate in a massively parallel way with integrated memory, using a fraction of the power needed by a traditional processor. When one neuron spikes, the total active energy used is about 4.5×10⁻¹¹ joules—a minuscule amount of energy. (One joule is the energy required to produce a single watt of power for one second.)
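The core described above is essentially a crossbar: axons (rows) deliver incoming spikes across a matrix of synapses held in on-chip memory to neurons (columns), which fire when their summed input crosses a threshold. The toy sketch below illustrates one tick of such a crossbar; the dimensions, names, and weights are illustrative assumptions, not IBM’s actual hardware layout.

```python
def core_step(axon_spikes, synapse_weights, thresholds):
    """One tick of a toy neurosynaptic core.

    axon_spikes: list of 0/1 events, one per axon (row).
    synapse_weights: one row of per-neuron weights per axon,
        standing in for the core's integrated synapse memory.
    thresholds: one firing threshold per neuron (column).
    Each neuron sums the weights of the active axons wired to it
    and fires if the sum reaches its threshold.
    """
    n_neurons = len(thresholds)
    fired = []
    for j in range(n_neurons):
        total = sum(row[j]
                    for spike, row in zip(axon_spikes, synapse_weights)
                    if spike)
        fired.append(1 if total >= thresholds[j] else 0)
    return fired

# 3 axons x 2 neurons here; a real core is on the order of 1,024 x 256
# (1,024 x 256 = 262,144 synapses, matching the figure in the text).
weights = [[2, 0],
           [1, 1],
           [0, 3]]
print(core_step([1, 1, 0], weights, thresholds=[3, 2]))  # → [1, 0]
```

Because memory (synapses), computation (neurons) and communication (axons) sit together in the crossbar, there is no separate memory bus to shuttle data across, which is one reason such designs draw far less power than a conventional processor.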

At best, Modha says the current chips are close to a worm’s brain; the Holy Grail is a chip with a million neurons and 10 billion synapses in one square centimetre, consuming one-tenth of a watt.

Modha’s team is not the only one working on such a project. In November 2011, MIT showcased a chip that uses about 400 transistors to mimic the ion-based communication across a single synapse between two neurons. It relies on analogue circuitry to replicate the chemical, ion-driven signalling at the synapse, an approach distinct from IBM’s digital neuron simulation. However, it’s difficult to predict the commercial viability of either project. Will these efforts lead to a real-life destructive Skynet from the Terminator series, or the creation of robots with emotions such as Chitti from Robot?
