Depending on who you ask, AI is either man’s greatest invention since the discovery of fire, as Google’s CEO suggested at the Google I/O 2017 keynote, or a technology that might one day make man superfluous. What’s inarguable is that major companies have embraced AI as if it were one of the most important discoveries ever made. In the US, Amazon, Apple, Microsoft, Facebook, IBM, SAS, and Adobe have all infused AI and machine learning throughout their operations, while in China the big four – Baidu, Alibaba, Tencent, and Xiaomi – are coordinating with the government, each working on its own largely siloed AI initiatives.
In her article Understanding Three Types of Artificial Intelligence, Anjali UJ explains, “The term AI was coined by John McCarthy, an American computer scientist in 1956.” She describes three types of AI:
- Narrow Artificial Intelligence: AI that has been trained for a narrow task.
- Artificial General Intelligence: AI with generalized cognitive abilities that can understand and reason about its environment the way humans do.
- Artificial Super Intelligence: AI that surpasses human intelligence while still being able to mimic human thought.
AI is not a new technology; in reality, it’s decades old. In his MIT Technology Review article Is AI Riding a One-Trick Pony?, James Somers states, “Just about every AI advance you’ve heard of depends on a breakthrough that’s three decades old.” Recent advances in chip technology, along with broader improvements in hardware, software, and electronics, have turned AI’s enormous potential into reality.
Neural Nets
AI is founded on Artificial Neural Networks (ANNs), or simply “neural nets”, which are non-linear statistical data modelling tools used when the true nature of the relationship between input and output is unknown. In his article Machine Learning Applications for Data Center Optimization, Jim Gao describes neural nets as “a class of machine learning algorithms that mimic cognitive behavior via interactions between artificial neurons.” Neural nets search for patterns and interactions between features to automatically generate a best-fit model.
They do not require the user to predefine a model’s feature interactions, as the sketch below illustrates. Speech recognition, image processing, chatbots, recommendation systems, and autonomous software agents are common applications. There are three types of training in neural networks: supervised, which is the most common, as well as unsupervised training and reinforcement learning.
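To make the idea concrete, here is a minimal sketch of a neural net in plain NumPy (the article names no framework, so the library choice and every identifier below are my own). Two small layers learn XOR, a pattern whose feature interaction the user never has to specify by hand:

```python
# A minimal two-layer neural net, assuming plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer: 4 "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                            # gradient-descent training loop
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the prediction error through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # converges toward [0, 1, 1, 0]
```

After training, the network reproduces the XOR truth table even though nobody told it that the two inputs interact; it discovered that structure itself. AI can be broken down into the following areas: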
Machine Learning
A branch of computer science, machine learning explores the construction and application of algorithms that learn from data. These algorithms build models from example inputs and use those models to make predictions or decisions, rather than following strictly predefined instructions.
In supervised learning, the computer is given example inputs along with the desired outputs, and the goal is to learn a general rule that maps inputs to outputs. In unsupervised learning, no labeled data is provided, and the algorithm must find the input’s structure on its own. In reinforcement learning, the computer uses trial and error to solve a problem: like Pavlov’s dog, it is rewarded for good actions, and the goal of the program is to maximize reward.
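The contrast is easy to see in code. The sketch below uses scikit-learn (my choice of library; the article prescribes none, and the dataset is synthetic) to fit a supervised classifier with labels and an unsupervised clusterer without them:

```python
# Supervised vs. unsupervised learning on the same synthetic data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: example inputs X arrive with desired outputs y.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: only X is given; KMeans must find the structure itself.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("clusters found:", km.labels_[:10])
```

The classifier learns a rule mapping inputs to known outputs, while KMeans is never shown a label and still recovers the three groups hidden in the data.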
Deep Learning
A subset of machine learning, deep learning utilizes multi-layered neural nets to perform classification tasks directly from image, text, and/or sound data. In some cases, deep learning models are already exceeding human-level performance. Google Meet’s ability to transcribe a human voice during a live conference call is an example of deep learning’s impressive capabilities.
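As a sketch of what “multi-layered” means in practice, here is a small dense network in Keras (an assumed framework choice; the article names none) that classifies the handwritten-digit images bundled with TensorFlow:

```python
# A minimal multi-layered ("deep") classifier on MNIST digit images.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw pixels in
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # digit probabilities out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))
```

Each Dense layer learns a progressively more abstract representation of the raw pixels, which is the “deep” in deep learning.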
ML and deep learning are useful for personalized marketing, customer recommendations, spam filtering, fraud detection, network security, optical character recognition (OCR), computer vision, voice recognition, predictive asset maintenance, sentiment analysis, language translation, and online search, among other applications.
7 Patterns of AI
In her Forbes article The Seven Patterns of AI, Kathleen Walch lays out the theory that, whatever the application, all AI systems share seven commonalities: “hyperpersonalization, autonomous systems, predictive analytics and decision support, conversational/human interactions, patterns and anomalies, recognition systems, and goal-driven systems.” Walch adds that while each pattern may require its own programming and data, the patterns can be combined in a single application, and each follows its own fairly standard set of rules.
The ‘Hyperpersonalization Pattern’ can be boiled down to the slogan ‘treat each customer as an individual’. The ‘Autonomous Systems’ pattern aims to reduce the need for manual labor. The predictive analytics pattern encompasses “some future value for data, predicting behavior, predicting failure, assisted problem resolution, identifying and selecting best fit, identifying matches in data, optimization activities, giving advice, and intelligent navigation,” says Walch. The ‘Conversational Pattern’ includes chatbots, which allow humans to communicate with machines via voice, text, or image.
The ‘Patterns and Anomalies’ pattern uses machine learning to discern patterns in data and attempts to discover higher-order connections between data points, explains Walch. The recognition pattern helps identify and classify objects within image, video, audio, text, or other highly unstructured data, notes Walch. The ‘Goal-Driven Systems Pattern’ uses the power of reinforcement learning to help computers beat humans at some of the most complex games imaginable, including Go and Dota 2, a complicated multiplayer online battle arena video game.
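Underneath such goal-driven systems sits a simple trial-and-error loop. The sketch below is tabular Q-learning on a made-up one-dimensional corridor (every name and number here is my own illustration); game-playing agents like the ones that mastered Go use far richer variants of the same reward-maximizing idea:

```python
# Tabular Q-learning on a toy corridor: walk right to reach the reward.
import numpy as np

N_STATES, GOAL = 6, 5              # corridor cells 0..5; the reward sits at cell 5
Q = np.zeros((N_STATES, 2))        # action 0 = step left, action 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(200):               # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: explore at random sometimes, otherwise exploit Q
        explore = rng.random() < epsilon or not Q[s].any()
        a = int(rng.integers(2)) if explore else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s_next == GOAL else 0.0       # the Pavlovian reward signal
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))            # learned policy: step right in every non-goal cell
```

No one programs the route; the agent is simply rewarded for reaching the goal, and the policy that maximizes that reward emerges from repeated play.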
Conclusion
A few years ago, AI hype reached such a fever pitch that companies only had to add ‘AI’, ‘ML’, or ‘Deep Learning’ to their pitch decks and funding flooded through the door. Today, investors are a little wiser to the fact that not all that glitters is AI gold, and many companies that pitched themselves as AI experts really didn’t know the difference between a neural net and a k-means algorithm. Even so, businesses continue to invest in AI-powered solutions such as AIOps to reduce IT operations costs.
Jumping head-first into AI is a recipe for disaster. Only “1 in 3 AI projects are successful and it takes more than 6 months to go from concept to production, with a significant portion of them never making it to production—creating an AI dilemma for organizations,” says Databricks. AI is not only old; it is also a difficult technology to implement. Anyone delving into AI needs a strong understanding of the technology: what it is, where it came from, and what limitations might hold it back. Although AI is an exceptional technology, the waters are deep, and it is far from the panacea that many software companies claim it is. The field has already endured not one but two AI winters. CEOs looking to make a substantial investment in AI should remember the old saying that ‘a fool and his money are easily parted’; that fool could be an AI fool, too.