Making AI systems biologically plausible

A new study, attempting to bridge neuroscience and machine learning, offers insights into the potential role of astrocytes in the human brain.

In a development that could help bridge biology and artificial intelligence (AI), both to better understand the brain and to improve AI systems, researchers at the Massachusetts Institute of Technology (MIT), the MIT-IBM Watson AI Lab, and Harvard Medical School are working to make artificial intelligence models biologically plausible.

The new study, attempting to bridge neuroscience and machine learning, offers insights into the potential role of astrocytes in the human brain, according to MIT News, citing the researchers’ paper “Building transformers from neurons and astrocytes”, published in open-access format by the Proceedings of the National Academy of Sciences (PNAS).

The central idea involves implementing a powerful artificial intelligence model called a transformer in the brain using networks of neurons and other brain cells called astrocytes. A transformer is a neural network architecture that can achieve unprecedented performance, including generating text from prompts with near-human accuracy. Introduced in 2017, it forms the basis of AI systems like ChatGPT.

In the study, the researchers propose a hypothesis that could explain how a transformer might be built from biological elements in the brain, suggesting that a biological network of neurons and astrocytes could perform the same core computation as a transformer. The hypothesis opens avenues for future neuroscience research into how the human brain works, while also helping machine-learning researchers explain why transformers are so successful across a diverse set of complex tasks.

MIT News quotes Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and senior author of the research paper: “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works. There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience.”

Transformers operate differently from earlier neural network models. According to the researchers, a recurrent neural network compares each word in a sentence with the previous words to determine what the next word should be; a transformer instead compares all the words in the sentence at once to generate a prediction, a process called self-attention.
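The all-at-once comparison the article describes can be sketched in a few lines of numpy. This is an illustrative simplification, not the researchers' model: real transformers apply learned query, key and value projections before comparing words, which are omitted here.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention sketch.

    Each row of X is one word's embedding vector. The matrix product
    X @ X.T compares every word with every other word simultaneously,
    which is the "all the words at once" behaviour described above.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # all pairwise comparisons
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax: each row sums to 1
    return weights @ X                              # each word becomes a weighted
                                                    # mix of every word in the sentence

# Four "words", each represented by an 8-dimensional vector
X = np.random.default_rng(0).normal(size=(4, 8))
out = self_attention(X)
print(out.shape)  # (4, 8): one updated vector per word
```

A recurrent network, by contrast, would process the four rows one at a time, carrying a hidden state forward; here every row influences every output row in a single step.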

According to Krotov, for self-attention to work, the transformer must keep all the words ready in some form of memory, which is not biologically feasible given the way neurons communicate. Neurons communicate in pairs across synapses, but realising self-attention requires a third neuron or brain cell to get involved, turning the exchange into a three-way communication.

This is where the researchers turn to the brain cells called astrocytes. Drawing on an earlier and different type of machine-learning model called Dense Associative Memory, the researchers found that self-attention could in principle be realised in the brain if at least three neurons take part in each interaction.
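The role of such higher-order interactions can be illustrated with a small Dense Associative Memory sketch. In this model (Krotov and Hopfield's formulation), an interaction function F(z) = z**n with n greater than 2 couples more than two neurons at a time, which is the kind of three-way coupling the article refers to. The code below is a toy retrieval demo under those assumptions, not the network from the paper.

```python
import numpy as np

def dam_update(patterns, sigma, n=3):
    """One asynchronous update sweep of a toy Dense Associative Memory.

    Energy of a state sigma is sum over stored patterns of F(pattern . sigma)
    with F(z) = z**n; n=3 introduces interactions among triples of neurons.
    Each neuron flips to whichever sign gives the higher energy.
    """
    sigma = sigma.copy()
    for i in range(sigma.size):
        plus, minus = sigma.copy(), sigma.copy()
        plus[i], minus[i] = 1, -1
        e_plus = np.sum((patterns @ plus) ** n)
        e_minus = np.sum((patterns @ minus) ** n)
        sigma[i] = 1 if e_plus >= e_minus else -1
    return sigma

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(5, 32))  # 5 stored binary patterns
probe = patterns[0].copy()
probe[:6] *= -1                               # corrupt 6 of the 32 bits
recovered = dam_update(patterns, probe)
print((recovered != patterns[0]).sum())       # remaining corrupted bits
```

With the higher-order interaction (n=3), the corrupted pattern is pulled back toward the stored one; with only pairwise terms (n=2, a classical Hopfield network), the same memory capacity would not be available.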

The researchers' hypothesis proposes that astrocytes play the role of this third cell, and they are confident that transformer-like functions can be realised in the brain. There is a reason for considering astrocytes: when two neurons communicate, they send chemicals called neurotransmitters across the synapse that connects them. In this process, an astrocyte often joins the connection by wrapping one of its tentacle-like arms around the synapse, creating a tripartite (three-part) synapse. A single astrocyte may form millions of tripartite synapses, and non-neuronal cells like astrocytes are abundant in the brain, playing essential roles in physiological processes.

Mathematical analysis showed the researchers that their hypothetical biophysical neuron-astrocyte network matches a transformer. According to MIT News, the researchers also conducted numerical simulations, feeding images and paragraphs of text to transformer models and comparing the responses to those of their simulated neuron-astrocyte network. Both responded to the prompts in similar ways, supporting the theoretical model. The researchers now plan to move from theory to practice.


The New Indian Express
www.newindianexpress.com