As a parent, watching an infant as he or she begins this journey of life can be an amazing experience. If each day we take time to really watch them watching us, trying to mimic the movements we make as we talk to them, trying to communicate with us without yet being able to form words, it’s an exercise in sheer wonder and sometimes, for our babies, in utter frustration. After a time, you learn which cry means hungry, or sad, or angry, and as the child grows, they experiment with how they can make you react to their needs. It’s a great dance.
You may be wondering what this has to do with computers, so hang in there; I am about to explain. I was reading a post in Daily Wireless about where computers are headed in the future, and it was fascinating. Much of the post was about something called Deep Learning, the process of teaching machines to think and process information on their own rather than through explicit, step-by-step programming. And it made me think of how wonderful, and powerful, the human brain is. It has taken years and years, and an enormous amount of sheer computing power, to teach a computer deep learning, and then we watch a child learn: how relatively quickly they pick up the ability to think, and not only that, but how to physically navigate the world. When to use their legs, how fast they should go, when to stop. Never thinking about how complex each of these processes is on its own.
Here is a small excerpt from the post I read; I wonder if you will have the same sense of wonder and awe that I had:
“In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.
Finally, however, in the last decade, Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
Like cats. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.
What stunned some AI experts, though, was the magnitude of improvement in image recognition. The system correctly categorized objects and themes in the YouTube images 16 percent of the time. That might not sound impressive, but it was 70 percent better than previous methods. And, Dean notes, there were 22,000 categories to choose from; correctly slotting objects into some of them required, for example, distinguishing between two similar varieties of skate fish. That would have been challenging even for most humans. When the system was asked to sort the images into 1,000 more general categories, the accuracy rate jumped above 50 percent.”
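If you are curious what “teaching individual layers of neurons” might look like in practice, here is a tiny sketch of my own, not anything from the article, using Python and the free scikit-learn library. Each layer is trained on its own to find patterns in whatever it is given, and its output then becomes the input for the next layer. The dataset (small images of handwritten digits) and all the parameter values are just placeholders I chose to show the idea.

# A minimal sketch of layer-by-layer feature learning, in the spirit of the
# excerpt above. The dataset and all settings are illustrative assumptions.

from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM

# Small images of handwritten digits, scaled into the 0-1 range the RBM expects.
X = load_digits().data / 16.0

# First layer: trained directly on the pixels, so it picks up low-level patterns.
layer1 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
features1 = layer1.fit_transform(X)

# Second layer: trained only on the first layer's output, so it learns
# combinations of those simpler patterns rather than raw pixels.
layer2 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
features2 = layer2.fit_transform(features1)

print("raw pixels:", X.shape,
      "-> layer 1 features:", features1.shape,
      "-> layer 2 features:", features2.shape)

The key point, and the part that still amazes me, is that nobody tells either layer what to look for; each one simply trains itself on what the layer below it produces.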
Hinton developed a better way to teach individual layers of neurons... how complex that sounds. And yet our babies teach themselves so much over a period of just months. Sometimes, as I age, the wonders of life get a little lost in the everyday struggles, but then I read an article like this and I say, “How wonderful.”
Social Cindy