There are a lot of new things to love about iOS 11, Apple’s latest version of its mobile operating system. There are features that make life easier for users, and plenty of new tools and frameworks that developers can leverage—some of which are directly dependent on new hardware features available in the latest iPhone X.

One of these new tools is the ARKit framework, which is bringing augmented reality (AR) to iOS devices. Here’s a look at what ARKit can do and what it means for the future of AR in the mobile space.

What is Augmented Reality and what’s it being used for?

“Augmented reality or AR is the technology that adds some digital objects to the present experience. It doesn’t create a virtual world but overlays the real one with computer graphics. Usually, such digital overlays can include 3D models and objects, textual data, videos.” – The Difference Between VR and AR App Development

AR is transforming how we consume all types of digital media, from interactive advertising to gaming and browsing. AR apps use cameras and either location- or marker-based technology to create the experience. Imagine an app that lets you scan a QR code to watch a video about a product, or an app that lets you try on sunglasses virtually using your front-facing camera—that’s AR at work.

The good news? AR is relatively easy to put to work in your apps.

An Intro to ARKit

ARKit (along with Xcode 9 and the iOS 11 SDK) makes it possible for developers to create AR experiences for iPhone and iPad apps that are more realistic than ever, blending virtual content with you and the actual world around you.
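To get a feel for how little code a basic AR experience takes, here is a minimal sketch of a world-tracking session rendered with SceneKit. ARSCNView, ARWorldTrackingConfiguration, and the session API are real ARKit types; the view controller and its setup are illustrative.

```swift
import UIKit
import ARKit

class ARViewController: UIViewController {
    let sceneView = ARSCNView()  // SceneKit-backed AR view

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation
        // in the real world, anchoring virtual content to real space.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()  // Stop tracking while off-screen.
    }
}
```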

A few features to know:

Face tracking. Ever used Snapchat or Instagram filters that give you real-time dog ears, cool sunglasses, or turn your face into an animated strawberry? That’s thanks to technology that enables the camera to detect the “topology” of your face as a face mesh. With ARKit, developers can create “face-based AR experiences,” enabling both live selfie effects and the creation of 3D characters based on your face and facial expressions. Developers can get started with this documentation.

The TrueDepth Camera. The above is possible thanks to the front-facing TrueDepth camera available on iPhone X. Unlike a traditional camera, the TrueDepth camera can “detect the position, topology, and expression of the user’s face.”
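For developers who want to try this, a face-tracking session is only a few lines on top of the world-tracking setup above. ARFaceTrackingConfiguration, ARFaceAnchor, and the blendShapes dictionary are real ARKit APIs; the delegate class and the specific blend shape it reads are illustrative.

```swift
import ARKit

class FaceTrackingDelegate: NSObject, ARSCNViewDelegate {

    func start(in sceneView: ARSCNView) {
        // Face tracking only runs on devices with a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else {
            print("Face tracking requires a TrueDepth camera (e.g. iPhone X).")
            return
        }
        sceneView.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // Each tracked face arrives as an ARFaceAnchor carrying a mesh of the
    // face plus named expression coefficients ("blend shapes").
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        // Blend shapes map expressions (smiles, blinks, jaw movement, ...)
        // to values between 0 and 1.
        if let smile = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue {
            print("Left smile intensity: \(smile)")
        }
    }
}
```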

Getting lighting and angles right. ARKit has two other tricks up its sleeve to make AR experiences even more lifelike: “scene understanding,” which uses the camera to locate horizontal planes in a room or environment, and “light estimation,” which uses camera sensors to make sure virtual objects have the right lighting to fit the scene.
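Both features are switches on the session configuration, and the light estimate arrives with every frame. A sketch, reusing the sceneView from the first example (planeDetection, isLightEstimationEnabled, and lightEstimate are the real ARKit properties):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal     // scene understanding: find flat surfaces
configuration.isLightEstimationEnabled = true  // light estimation (on by default)
sceneView.session.run(configuration)

// Each ARFrame carries an estimate of the scene's lighting, which you can
// apply to virtual objects so they match their surroundings.
if let lightEstimate = sceneView.session.currentFrame?.lightEstimate {
    print("Ambient intensity: \(lightEstimate.ambientIntensity) lumens")
}
```

Detected planes are delivered to your session as ARPlaneAnchor objects, which you can use to place virtual content on tables, floors, and other flat surfaces.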

Visual Inertial Odometry (VIO). VIO fuses camera sensor data with data from the Core Motion framework, which reports what the device’s accelerometer, gyroscope, and pedometer are doing: the speed, steps, motion, orientation, and rotation of the device. Together, these let ARKit track how the device moves through the real world with a high degree of accuracy.
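ARKit handles this sensor fusion for you, but Core Motion is a public framework, and reading it directly shows the kind of raw data VIO consumes. A sketch using the real CMMotionManager API (the update rate and print statements are arbitrary choices):

```swift
import CoreMotion

let motionManager = CMMotionManager()

if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0  // 60 Hz
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        // Fused accelerometer and gyroscope readings:
        print(motion.rotationRate)      // rotation speed around each axis (rad/s)
        print(motion.gravity)           // device orientation relative to gravity
        print(motion.userAcceleration)  // acceleration with gravity removed
    }
}
```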

Powerful processing to match. When it comes to the hardware required to run this technology, ARKit requires an Apple A9, A10, or A11 processor, and it comes with optimizations for the Metal graphics API and the SceneKit framework, as well as support for third-party tools like Unity.
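In practice, that hardware requirement surfaces as a one-line runtime check: isSupported returns false on devices older than the A9, so your app can fall back gracefully instead of failing. The fallback function below is hypothetical.

```swift
import ARKit

if ARWorldTrackingConfiguration.isSupported {
    sceneView.session.run(ARWorldTrackingConfiguration())
} else {
    showUnsupportedDeviceMessage()  // hypothetical fallback for pre-A9 devices
}
```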

Get the full run-down of ARKit features and the documentation to get started here.