Machine learning gives computers the ability to learn from data, as opposed to being explicitly programmed. In other words, instead of just following hand-written rules, the software works out its own rules from examples.

The ideas have been around for 60 years or so, but it’s safe to say that machine learning is presently a hot topic, and is having a big impact on mobile-first businesses. Nowhere was this more apparent than at this year’s Apple Worldwide Developers Conference (WWDC 2017).

One of the main themes throughout WWDC 2017 was the further development of on-device machine learning to anticipate customer needs and, just as importantly, to surface the relevant information or interaction that meets those needs.

To support this, Apple introduced its machine-learning framework, Core ML, which helps developers in two ways:

First, developers will have direct access to the GPU to run their own machine-learning models on the device, making their apps more intelligent.
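In practice, that can amount to only a few lines of Swift. The sketch below assumes the developer has bundled an image-classification model in the app (the `mlModel` parameter would come from an Xcode-generated model class) and drives it through the Vision framework; the function and parameter names are illustrative.

```swift
import CoreML
import Vision

/// Minimal sketch: run a developer's own Core ML classification model on a photo,
/// entirely on the device. `mlModel` would come from an Xcode-generated class,
/// e.g. `MobileNet().model` — the model name is purely illustrative.
func classify(_ image: CGImage, with mlModel: MLModel) {
    // Wrap the Core ML model so the Vision framework can drive it.
    guard let visionModel = try? VNCoreMLModel(for: mlModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision hands back classification observations sorted by confidence.
        if let best = request.results?.first as? VNClassificationObservation {
            print("Looks like: \(best.identifier) (confidence \(best.confidence))")
        }
    }

    // The request runs locally; Core ML uses the GPU where it can.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```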

Second, Apple will make pre-built machine-learning models available. These include real-time image recognition, text prediction, sentiment analysis, face detection, handwriting detection, emotion detection, and entity recognition.
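The pre-built capabilities demand even less, since there is no model to bundle at all. Here is a similarly minimal sketch of on-device face detection using the Vision framework, which ships alongside Core ML:

```swift
import Vision

/// Minimal sketch of a built-in capability: face detection, no custom model required.
func detectFaces(in image: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        // Each observation carries a normalized bounding box for one detected face.
        for face in faces {
            print("Found a face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```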

This is all terrific, but the larger point is that machine learning is now just an API call away, available to every developer. It will make user experiences more anticipatory and apps more intelligent.

As an example, see how Amazon uses image recognition to simplify purchasing: a shopper photographs a product, likely while standing in a physical store, and buys it through the Amazon app instead. Pinterest, Google and, more recently, eBay have introduced similar visual search tools to drive purchases.

Another illustration is the new Siri-powered watch face, which uses machine learning to customize its content in real time throughout the day, including reminders, traffic information, upcoming meetings, news, smart home controls, and more. This is a perfect example of surfacing the information and interactions that customers need and want, right when they need them.