Since the dawn of the personal computer, engineers have been striving to enhance their machines’ ability to interact with the world around them.
The first task was to replicate the sights and sounds around us. What started as cartoonish 8-color images and clicking noises has been refined over the years to the point where we can now type in any address in the United States and get an interactive, 360-degree view from that location using Google Maps. 3-D printers allow us to conjure tangible objects out of plastic in minutes with the mere touch of a button. 3-D TVs can trick my brain into thinking a sword is being hurled at me. It’s all pretty incredible.
But teaching a computer to analyze and understand the real world — a.k.a. Artificial Intelligence — has proven much more difficult. A solid 10 years after I first tried dictating a social studies paper to my computer, I still groan when I hear an automated Customer Service message ask me in Robot-speak to please state my problem. I know “she” is going to screw up.
My goal in writing this isn’t to demean the people who wrote that software, but rather to point out the enormous complexity of the task. Our understanding of how light and sound interact with our senses is fairly well developed, so it’s not too difficult to teach computers to replicate that process. But when it comes to understanding how our brain accomplishes fairly menial tasks, we’re pretty much in the dark.
The failure of Artificial Intelligence to deliver satisfactory results in this area has kept it a niche R&D field with large up-front investments and deeply uncertain outcomes. Viable AI ventures are few and far-between in a sea of Social Media and Daily Deals websites. We haven’t yet hit the tipping point where AI becomes good enough to create mass demand, and that mass demand creates massive profits and investment.
In my opinion, however, the tipping point is approaching quickly.
Take Facebook, for instance. I have literally hundreds of pictures of my face tagged in my account. Any human being could look at a handful of these and pick me out of a lineup. But Facebook, with all its engineering talent and computing power, can’t seem to figure it out on its own. In comes Face.com and its facial recognition software, and the rumors of a $100 million acquisition by Facebook. For a company with only $5 million in funding, this will likely turn some heads in both the VC and startup communities.
Apple, too, seems to be headed down this path. After acquiring Siri, whose technology was developed by a non-profit on a research grant, it’s likely they’ll be pouring more investment dollars into Natural Language Processing and Semantic Analysis before Siri’s next iteration. The same technology can be used in a B2B context to improve the contact elimination rate of customer service programs, and alleviate my frustration when I hear a robot ask me what I’m calling about.
If you ask me, Artificial Intelligence is about to become a monster business. In the next decade, your software’s ability to interact with the wide world around it, rather than simply produce a carefully structured and sterilized representation of that world, is going to be what makes or breaks your business model.
You’d better get ready.