For some time now, Google’s uber-secretive and appropriately titled “X Laboratory” research center (responsible for such cutting-edge innovations as Google Glass) has been quietly creating one of the largest artificial intelligences in existence. After linking together over 16,000 computer processors to create a neural network with over 1 billion connections, researchers at the X Lab exposed their new creation to 10 million digital images found in YouTube videos (sounds like my kids).
How did Google’s artificial brain do? It performed far better than any previous effort, roughly doubling its accuracy in recognizing objects in a list of 20,000 items. Perhaps most significantly, Google’s AI brain taught itself to recognize a cat without any guidance from the research team.
While this may seem like a silly exercise, it actually represents a profound application of “big data” that may have lasting and unforeseen repercussions.
The experiment is but one example of the many possibilities unleashed by such large-scale software simulations, known as “deep learning” models. These simulations tap into the power of massive computing data centers to build programs that can mimic higher-level brain functions such as vision and perception, speech recognition, and language translation. In fact, just last year Microsoft scientists presented research showing how such systems could be utilized to understand human speech.¹
The key point is that Google’s researchers didn’t give their machine any help finding the features of a cat. “We never told it during the training, ‘This is a cat,’” noted Dr. Dean, one of the project researchers, “it basically invented the concept of a cat.”¹
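To make “learning without labels” concrete, here is a minimal sketch in Python: a toy k-means clusterer, nothing like Google’s actual deep learning system, with every name and number invented for illustration. Like the X Lab network, it is never told what the groups are; it discovers them from the raw data alone.

```python
# Toy unsupervised learning: k-means clustering on unlabeled 2-D points.
# Purely illustrative; real deep learning models are vastly larger and
# learn far richer features than cluster centers.
import random

def kmeans(points, k, iters=20, seed=0):
    """Group 2-D points into k clusters with no labels provided."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, g in enumerate(groups):
            if g:
                centers[i] = (sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g))
    return centers

# Two unlabeled blobs: one near (0, 0), one near (10, 10).
data = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),
        (10.1, 9.9), (9.8, 10.2), (10.0, 10.1)]
centers = sorted(kmeans(data, 2))
print(centers)  # one learned center near each blob, no labels used
```

The point of the sketch is the same one Dr. Dean makes: nobody tells the algorithm which points belong together; structure emerges from the data itself.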
To do this, the scientists were able to mimic what naturally takes place in the brain’s visual cortex. “A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng, another member of the Google team. “It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.¹
The experiment reflected an additional profound conclusion: even though (at present) our biological brains may be more advanced, Google’s research shows that existing machine learning algorithms vastly improve as they are given access to large pools of data.¹
Data like that culled by Google’s revamped software foundation, named “Colossus.” Rolled out two years ago, Colossus currently underpins not only Google’s search engine but virtually all of its web services, from Gmail, Google Docs, and YouTube to the Google Cloud Storage service the company offers to third-party developers.²
Google’s previous software foundation, the GFS system, was built for batch operations (work done offline, before the results are applied to a live website); Colossus is built for real-time processing. Whereas GFS updated the search index every few hours, the new search infrastructure built atop Colossus (aptly named “Caffeine”) can update the index almost instantly.²
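A rough way to picture the batch-versus-real-time distinction is the difference between rebuilding a search index wholesale and absorbing each new document the moment it arrives. The toy Python sketch below is purely hypothetical and resembles nothing in GFS, Colossus, or Caffeine internally; all class and variable names are mine.

```python
# Toy contrast: batch index rebuild vs. incremental (real-time) updates.
from collections import defaultdict

class BatchIndex:
    """Rebuilds the entire index from scratch: fine offline, slow live."""
    def __init__(self):
        self.index = {}
    def rebuild(self, docs):  # docs: {doc_id: text}
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for word in text.lower().split():
                index[word].add(doc_id)
        self.index = dict(index)

class IncrementalIndex:
    """Applies each new document immediately; no full rebuild needed."""
    def __init__(self):
        self.index = defaultdict(set)
    def add(self, doc_id, text):
        for word in text.lower().split():
            self.index[word].add(doc_id)
    def search(self, word):
        return self.index.get(word.lower(), set())

batch = BatchIndex()
batch.rebuild({"d1": "cats sit on mats", "d2": "dogs chase cats"})

idx = IncrementalIndex()
idx.add("d1", "cats sit on mats")
idx.add("d2", "dogs chase cats")  # searchable the instant it is added
print(idx.search("cats"))  # both d1 and d2
```

The batch version must reprocess every document to reflect one change; the incremental version makes each change visible as soon as it lands, which is the shift the Caffeine rollout represented at vastly greater scale.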
Remember back in May of 2012 when Google announced its new “Knowledge Graph” semantic search function? Touted by the search giant as “the next generation of search,” Google’s Knowledge Graph algorithm now collects tons of data about people, places and things, and then forms its own context based on the relationships that exist between the data.
For example, much like our own learning process, it forms context-based connections to discern the difference between an apple you eat and a company you invest in.
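A hypothetical sketch of that kind of context-based disambiguation: the entities, relations, and function names below are invented for this example, and the real Knowledge Graph is vastly richer, but the principle of scoring senses by their relationships to surrounding words is the same.

```python
# Toy word-sense disambiguation via related-entity overlap.
# Each "sense" carries a hand-invented set of related terms.
SENSES = {
    "apple (fruit)":   {"eat", "fruit", "orchard", "pie", "recipe"},
    "apple (company)": {"invest", "stock", "iphone", "company"},
}

def disambiguate(query_words):
    """Pick the sense whose related terms best overlap the query."""
    words = set(w.lower() for w in query_words)
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

print(disambiguate(["should", "i", "invest", "in", "apple", "stock"]))
# -> 'apple (company)'
print(disambiguate(["apple", "pie", "recipe"]))
# -> 'apple (fruit)'
```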
In the short term, the Knowledge Graph makes Google’s search engine more relevant for users. When underpinned by Colossus and applied to the research at the X Laboratory, its long-term implications may be much more profound.
When news of the Knowledge Graph first broke earlier in the year, I wrote a blog post expressing my concern. One of the creators of the Graph, Google’s Amit Singhal, was very bullish about the future of his AI search engine, even reflecting on the possibility of it being the first step in the creation of a benign intelligence, much like the Star Trek computer.
I was less sanguine, asking whether such a machine might end up acting more like the infamous HAL 9000 in 2001: A Space Odyssey.
Google’s cat experiment gives me even greater cause for skepticism.
When I first heard about the Knowledge Graph, I assumed that scientists were still years away from creating an artificial neural network that could teach itself, especially given existing limitations in processing power, coupled with the bewildering complexity of the human brain. As with many developments in technology, these obstacles were overcome much faster than expected.
Granted, we are still a long way off from creating an artificial neural network to rival our own. But the foundation stone has now been laid; from here, it’s only a matter of time.
With regard to the creation of a science fiction-esque AI, I truly believe imagination has turned into inevitability.
A FUTURE UNCERTAIN
Now that the cat’s out of the bag (sorry for that), how will our quest for AI end?
Here’s what I think.
With time, aided by the power of big data and integrated networks, researchers will create a neural net that first rivals, then quickly exceeds, the intelligence of the average human brain. And as they get closer to that goal, they will become ever more reliant on increasingly intelligent computers to help them reach it.
This may take a few years, decades, or longer, depending on the limitations of processing capacity and speed, and the pace of innovation in such fields as quantum and bio-computing.
In a parallel trend, our daily lives will become more dependent on all forms of computers, and on the software programs that run them. Fewer humans, as a percentage of the general population, will understand how these complex machines work. We will have smart appliances, smart cars, smart cities, smart grids, smart weapons and smart (AI) armies (to an extent, we already have these things).
As we become ever more reliant on these highly networked and highly complex computer systems to control macro-functions such as our power plants, cities, nuclear defense installations, and financial markets, the prospect of “turning them off” or “unplugging them” will become untenable.
Further on, at least some of these computers will reach the point of existential cognition. In other words, they will know they exist. Furthermore, they will be able to create iterations of themselves (cloning, etc.).
Thus we will have a world in which machines are capable of replicating themselves, and are increasingly all-knowing (omniscient) and all-powerful (omnipotent), at least when compared to humans. For the sci-fi buffs among you, this sounds a lot like the Cylons, the AI race created by humans in the popular TV series Battlestar Galactica.
In short, for all intents and purposes, we will have created gods. But will they be all-good (omnibenevolent), or even partially so? What if they are merely decent-minded, or, worse still, purely logic-driven, and eventually come to recognize humanity as flawed, weak, worthless, and outdated?
Will they pull the plug on us?
I told this theory of mine to a friend recently, suggesting that I really don’t see how we can escape such a dismal future. He just laughed.
“Computers are dumb machines,” he said with contempt, “just a bunch of ones and zeros. They’ll always be dependent on humans…period.”
For the sake of my children, grandchildren, and the future of humanity, I sure hope he’s right.
¹ NY Times, “How Many Computers to Identify a Cat? 16,000”
² Wired, “Google Remakes Online Empire with Colossus”