Researchers from Rice University and the Baylor College of Medicine have created a new type of neural network algorithm that allows machines to learn independently using visual input. The work is inspired by the developing human brain, so this discovery may prove to be a milestone in how we understand the ‘brains’ of both machine and man.
To learn new things in life, we often consult a nurturing mentor to show us the ropes. However, this is not how we are introduced to the world. Our brains begin learning long before we can speak or understand the teachings of others. As babies, our brains take in visual input and begin a more independent learning process, one that inspired two scientists, Ankit Patel and Tan Nguyen, to develop a new type of deep learning algorithm that lets AI learn in the same way.
A Way to See the World
This deep learning algorithm is known as a “convolutional neural network,” and it uses layers of artificial neurons arranged in specific patterns to identify visual content. Each layer examines an image for certain complex patterns before passing its output on to the next layer, in what is called a nonlinear process.
“It’s essentially a very simple visual cortex,” says Patel. “You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image.”
Neural networks resemble human brains in that they are relatively untrained when newly formed. As each layer of the convolutional network is exposed to visual stimuli, it becomes specialized over time, eventually learning to detect abstract features such as eyeballs or vertical grating patterns.
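The layered, nonlinear processing described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the researchers’ actual system: each “layer” slides a small filter over the image to look for a pattern, then applies a nonlinearity (here, ReLU) before handing its output to the next layer. The filters and image here are toy examples chosen for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image, producing a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """The nonlinearity applied between layers."""
    return np.maximum(x, 0)

# Two stacked layers: each processes the previous layer's output.
image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0]])        # crude vertical-edge detector
layer1 = relu(conv2d(image, edge_kernel))     # shape (8, 7)
layer2 = relu(conv2d(layer1, np.ones((2, 2)) / 4))  # smoothed, shape (7, 6)
```

In a real convolutional network, the kernels are not hand-picked like this; they are learned from data, which is how each layer “becomes specialized over time.”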
Gathering an abstract understanding of three-dimensional objects is absolutely critical for technology such as self-driving cars, which require this type of understanding to be able to safely navigate. Human brains already have this ability, but the process by which we utilize it is still a mystery. Artificial neural networks may help us understand just how we interpret visual input.
Making the Deep Learning Algorithm More Human
According to Patel, the theory of artificial neural networks could help neuroscientists better understand how the human brain works. There are some similarities in how the visual cortex and convolutional nets represent the world, and how the brain manages to learn unsupervised is something he and his colleagues are trying to figure out.
Patel noted that it is still unclear what kind of algorithm the neural circuits in our visual cortex use. The team wants to understand what that algorithm is and how it might relate to their theory of deep learning.
The human brain is still far better at unsupervised learning than neural networks are, but if the team can discover how it accomplishes that feat, they may be able to greatly improve the visual understanding of machine intelligence.
As this research illuminates certain functions of visual learning, it may be put to use to enhance education for man and machine alike.