Giving Neural Nets an Innate Brain-Like Structure Could Bolster Deep Learning

How many times have you heard the following idea? Deep learning,
the machine learning technique that has taken the AI world by
storm, is loosely inspired by the human brain.

I myself have repeated the statement so many times it’s easy
to forget that the emphasis is on the word “loosely.” Even
among academics, the draw of mapping deep learning to brain
computations has led many to ask whether techniques that work for
AI—such as Bayesian inference—are also present in human cognition.

But here’s the thing: for all their similarities to the human
brain, artificial deep neural nets are highly reductive models of
the seemingly chaotic electro-chemical transmissions that populate
every synapse of our own heads. With the big data era in
neuroscience upon us, in which we can tease out the delicate wiring
and diverse neuronal types (and non-neuron brain cells) that
contribute to cognition, current deep learning models seem terribly oversimplified.

Even as deep learning breakthroughs have given AIs victories over humans in game play and revolutionized machine vision, translation, and other perception-based tasks, the technique's weaknesses have become increasingly apparent.

They’re brittle, in that they can’t generalize far from the
examples that they’re trained on. Asking AlphaGo to play Dota 2
will result in utter algorithmic embarrassment. Deep learning
algorithms are also greedy, often requiring millions of training examples, whereas humans, especially kids, can pick up a new concept or motor skill from just one example.

These limitations, along with deep learning’s current dominance in AI, have even prompted experts to ask if we’re close to hitting the limits of this one-trick pony.

Not so, argues Dr. Shimon Ullman at the Weizmann Institute of
Science in Rehovot, Israel. In a perspectives paper published in
Science last week, Ullman argued that neuroscience still has a lot
to offer deep learning—and combining the AI darling with
brain-like innate structures could lead us towards machines that
learn as quickly, flexibly, and intuitively as humans.

“Additional aspects of brain circuitry could supply cues for
guiding network models toward broader aspects of cognition and
general AI,” Ullman said.

Neurons and Learning

At a high, intuitive level, deep learning and neural learning sound remarkably similar. But Ullman argues
that we’re just scratching the surface of how neuroscience can
bolster deep learning.

“From the standpoint of using neuroscience to guide AI, this [deep learning’s] success is surprising, given the highly reduced form of the network models compared with cortical circuitry,” he said.

In general, almost everything we know about neurons, such as
their structure, types, and interconnectivity, has been left out of
deep learning models. Over the past few years, work from the Allen Institute for Brain Science and other researchers has relentlessly documented the
wide variety of neurons in the brain—each with its own shape,
size, pattern of activation, and connectivity. The rise of
automated nanoscale imaging methods is allowing neuroscientists to
explore the brain’s diverse neuronal population at a level of
detail never achievable before. And right off the bat—from giant
neurons that wrap around the entire brain to excitatory neurons
that seem to only exist in humans—it’s clear that there’s so
much we still don’t know about the inhabitants of our brain.

Each single neuron—whether it excites or inhibits downstream partners—receives multiple inputs through its processes. Its shape and physiology, in turn, control how it transmits that information onward. What’s more, scientists are finding that the brain’s other cellular inhabitants, the often-overlooked glial cells and immune microglia, actively shape neuronal transmission and information processing.

“None of this heterogeneity and other complexities are
included in typical deep-net models, which use instead a limited
set of highly simplified homogeneous artificial neurons,” Ullman
said.

Circuits and Learning

When it comes to circuit connectivity, biological neural
networks also put their artificial counterparts to shame.

As Ullman pointed out, deep network models currently capture
early processing stages in perception—vision, hearing and so
on—rather than later, more cognitive steps.

This is likely due to the way the artificial neuron layers and their connections are structured, which in turn limits efforts to guide deep learning models toward more cognitively demanding problems, such as generalized or one-shot learning.

In the cortex, the outermost layer of the brain, biological
networks include extremely rich connections. Neurons are loosely partitioned into layers, and sprout both local and long-range connections to other neurons in the same layer. During early
brain development, they also establish top-down connections from
high to low levels in the hierarchy.

In contrast, the connections between artificial neurons in deep
learning are much more simplistic, with few offering “top-down”
guidance and even fewer with pre-programmed “canonical
circuits” that can be further refined with learning.
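
To make that contrast concrete, here is a minimal, hypothetical sketch in PyTorch (my own illustration, not a model from Ullman’s paper): a standard stack passes information strictly bottom-up, while a variant adds a crude top-down pathway that feeds a higher layer’s response back to modulate a lower layer before a second pass.

```python
import torch
import torch.nn as nn

class BottomUpNet(nn.Module):
    """Standard feedforward stack: information flows strictly bottom-up."""
    def __init__(self, dim=64):
        super().__init__()
        self.low = nn.Linear(dim, dim)
        self.high = nn.Linear(dim, dim)

    def forward(self, x):
        return self.high(torch.relu(self.low(x)))

class TopDownNet(nn.Module):
    """Adds a crude top-down pass: the higher layer's first response
    is fed back to modulate the lower layer before a second pass."""
    def __init__(self, dim=64):
        super().__init__()
        self.low = nn.Linear(dim, dim)
        self.high = nn.Linear(dim, dim)
        self.feedback = nn.Linear(dim, dim)  # hypothetical top-down pathway

    def forward(self, x):
        low1 = torch.relu(self.low(x))
        high1 = torch.relu(self.high(low1))
        # top-down signal re-weights the lower layer's activity
        low2 = torch.relu(self.low(x) + self.feedback(high1))
        return self.high(low2)
```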

What’s more, diverse circuits in the brain talk to one another, which allows the brain to focus on salient factors in the environment. Scrap or inhibit attention mechanisms in the brain—in the case of ADHD, for example—and learning efficacy is greatly reduced. In addition, neurons from diverse groups often “vote” on a specific information trace. Whether that trace is further processed and passed on into consciousness depends on the outcome of that vote.
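
Attention does have a rough analogue in deep nets. As a purely illustrative sketch (again my own, not Ullman’s proposal), a soft attention layer scores each input element for salience, normalizes the scores, and passes on a saliency-weighted summary, so downstream layers see the “salient” parts amplified.

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Illustrative soft attention: score each element of a sequence,
    normalize the scores, and return a saliency-weighted summary."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # learned salience score per element

    def forward(self, x):                              # x: (batch, seq_len, dim)
        weights = torch.softmax(self.score(x), dim=1)  # (batch, seq_len, 1)
        return (weights * x).sum(dim=1)                # (batch, dim) summary

# Usage: attend over a batch of 8 sequences of 10 feature vectors.
attn = SoftAttention(dim=64)
summary = attn(torch.randn(8, 10, 64))  # -> shape (8, 64)
```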

“It is currently unclear which aspects of the biological
circuitry are computationally essential and could also be useful
for network-based AI systems, but the differences in structure are
prominent,” concluded Ullman.

Innate Structure May Be Key

There’s a major problem with using the brain as inspiration:
oftentimes, it’s hard to tease out what’s necessary for efficient, low-power information processing and what’s merely redundancy left over from evolution.

And although some argue that reconstituting entire brains based
on their mapped connections may lead to general AI, that may not be
the best way forward. After all, AI isn’t shackled by biological
constraints—why should we impose the side effects of evolution on
our algorithms?

Nevertheless, Ullman argues that neuroscience is still the place
to look for AI guidance.

“A major open question is the degree to which current
approaches will be able to produce ‘real’ and human-like
understanding, or whether additional, perhaps radically different,
directions will be needed to deal with broad aspects of cognition,
and artificial general intelligence (AGI),” he said.

One potential direction is to look at innate cognitive
structures present at birth in the human brain. A main way that
deep learning and brains differ is in the “relative roles of
innate cognitive structures and general learning mechanisms”—in
a sense, nature versus nurture.

Currently, deep learning models lean heavily on “nurture,” depending on extended training to achieve their goals, while humans often build upon specific preexisting networks already encoded—by evolution—in the circuitry before learning begins.

Infants, for example, naturally recognize complex structures
such as human hands and learn to follow them as they perform tasks.
In fact, neuroscientists now know that “the human cognitive
system is equipped with basic innate structures that facilitate the
acquisition of meaningful concepts and cognitive skills,” which
in turn contribute to our superior cognitive learning and
understanding skills compared to deep learning networks, said
Ullman.

There may be a way to give deep learning algorithms “proto concepts”: structures that guide learning toward the progressive acquisition of complex concepts and allow deep nets to organize those concepts with little explicit training. The key is to
unveil which biological circuits underlie innate learning
abilities, such as attention, and implement them in deep nets. For
example, brain mapping methods may lead to mechanisms that can be
modeled in deep learning algorithms, or scientists can start from
scratch to discover connective structure that helps the AI flexibly
understand its environment.
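
One way to picture the idea in code, as a toy sketch under my own assumptions rather than a recipe from the paper: wire a small, fixed “innate” module (here, hard-coded edge-detecting filters standing in for a pre-wired circuit) in front of an ordinary trainable network, so that learning refines what the innate structure already provides.

```python
import torch
import torch.nn as nn

class InnatePlusLearned(nn.Module):
    """Toy model: a fixed, hand-wired 'innate' filter bank feeds a
    trainable classifier, mimicking pre-wired structure refined by learning."""
    def __init__(self, num_classes=10):
        super().__init__()
        # "Innate" stage: two hard-coded edge detectors, never trained.
        innate = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
        kernels = torch.tensor(
            [[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],    # vertical edges
             [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]]])   # horizontal edges
        innate.weight.data = kernels.unsqueeze(1)
        innate.weight.requires_grad = False  # evolution, not experience, sets these
        self.innate = innate
        # "Learned" stage: ordinary trainable layers.
        self.learned = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))

    def forward(self, x):                  # x: (batch, 1, H, W)
        return self.learned(self.innate(x))

# Only the learned stage's parameters reach the optimizer; the innate wiring stays fixed.
model = InnatePlusLearned()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```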

That’s the next big challenge, said Ullman. But the potential rewards, for both AI and neuroscience, are too grand to pass up.

“In general, the computational problem of ‘learning innate
structures’ is different from current learning procedures, and it
is poorly understood. Combining the empirical and computational
approaches to the problem is likely to benefit in the long run both
neuroscience and AGI, and could eventually be a component of a theory of intelligent processing that will be applicable to
both,” he said.

Image Credit: enzozo / Shutterstock.com
