The amount of visual data in the world—and on the web—grows exponentially, thanks in part to the popularity of video, millions of networked IoT sensors, and a number of cameras that may soon outnumber people on the planet. That much data is a complex problem in its own right, and visual data in particular is very difficult for computers to understand.

To help computers not just see but also extract high-level information from what they see, scientists have spent the past 50 years working to recreate human vision in machines. The result is computer vision, the science behind advancements like self-driving cars and Facebook’s facial and image recognition technologies.

The Challenge of Helping Computers to See

“Just like to hear is not the same as to listen, to take pictures is not the same as to see.”—Fei-Fei Li, computer scientist and director of Stanford Vision Lab

Human vision is complicated enough, largely because how humans understand what we see depends on our experiences and memories. We’ve been training our brains since the day we were born, which puts computers at a disadvantage. Unless every image a computer processes is annotated by hand—tagging an image of an apple as “apple,” “fruit,” “red,” “food,” and so on, which would take countless human hours—computers must rely on algorithms to understand what they’re seeing.

And that’s where the genius of computer vision comes in. With support from artificial intelligence, neural networks, deep learning, parallel computing, and machine learning, it’s helping to bridge the gap between computers seeing and computers comprehending what they see.

Previously we covered image recognition and compared a few image recognition APIs. Here, we’ll take a step back and briefly look at the broader field of computer vision.

Human Hardware and Software: How People See and Understand What They’re Seeing

It’s easy to take for granted the way our eyes and brains work in tandem to instantaneously help us do something like duck when an object is coming at us at a high rate of speed. It’s not just our eyes at work here; there’s a lot going on to make that split-second response possible, and what we’re seeing is only part of it. A mix of hardware (our eyes) and software (our brains) makes it all work.

We understand an apple is an apple, regardless of shadows, light, colors, or size. These comprehensions happen subconsciously, thanks to interactions we’ve had with the world over time.

So how can you recreate this in a computer? Let’s first look at how a human does this, then see what components a computer would need to do the same.

HUMAN VISION

Let’s say you see an apple—it could be a piece of fruit, a drawing, or the logo on the back of a laptop. Here’s how a human processes that, step by step.

  1. Our eyes (with their retinas, photoreceptors, and millions of neurons feeding data to our optical nerves) are the lenses that gather information about objects and images, including light, colors, shadows, depth, and movement. Our eyes are the hardware, but they require software to understand what we’re seeing. So here’s the first step: Our eyes gather light bouncing off an apple.
  2. Next, that light is transformed into information for the brain. Neurons in the retina process that raw visual data before it makes its way to the brain, working fast to turn light, edges, and motion into usable information for the visual cortex.
  3. The visual cortex is the part of the brain that processes what we’re seeing—and it’s so complex and staggeringly fast, scientists understand only some of what it can do. That it’s still largely a mystery makes it difficult to recreate in computers, but algorithms and convolutional neural networks are getting us closer. At this point, the apple is understood to be an apple, whether it’s green, red, or a drawing of an apple.
  4. The visual part of our brain relies on the rest of the brain for context around what we’re seeing. Our brain, including our memory and other powers of deduction we learn from the day we’re born, provides this context. In the apple example, if we noticed the apple looked moldy or bruised, that would allow us to infer it was a rotten apple and, subsequently, not fit to eat.

COMPUTER VISION

Now let’s look at how those steps translate to computers.

1. Cameras, lenses, and sensors gather raw visual input from images and objects (in many cases, with more precision and sensitivity than the human eye!). But without the software components, they’re still just sophisticated camera equipment.

2. When we see an apple, we instinctively know what it is, but a computer sees data about that apple—numbers and RGB values that represent different colors and intensities. Carnegie Mellon University’s Field Robotics Center notes, “It takes robot vision programs about 100 computer instructions to derive single edge or motion detections from comparable video images. A hundred million instructions are needed to do a million detections, and 1,000 MIPS to repeat them 10 times per second to match the retina.” This presents one of the first challenges for computer vision: How can we equip computers to mimic human vision without it taking an impractical amount of time and resources?
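To make that concrete, here is a minimal sketch of what “data about an apple” looks like to a computer, using NumPy and a tiny hypothetical 2×2-pixel image. The pixel values are made up for illustration; a real photo would have millions of them.

```python
import numpy as np

# A hypothetical 2x2-pixel "image": to a computer it is just an array of
# RGB triples, each channel an intensity from 0 (dark) to 255 (bright).
image = np.array([
    [[255, 0, 0], [200, 30, 30]],    # two reddish pixels
    [[180, 20, 20], [150, 40, 40]],  # two darker reddish pixels
], dtype=np.uint8)

print(image.shape)  # (2, 2, 3): height, width, RGB channels
print(image[0, 0])  # the raw numbers behind a single pixel
```

Nothing in this array says “apple”; it is the algorithms downstream that must turn these numbers into meaning.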

Numerous algorithms have been designed around kernels—small grids of numeric weights that are slid across an image to detect features such as edges, corners, and textures. Stacked together, these feature detectors can mimic the behavior of the visual cortex, but they need many layers to do it effectively.
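The kernel idea can be sketched in a few lines of NumPy. The example below uses the classic Sobel kernel, which responds to horizontal intensity changes, and slides it over a toy grayscale image (values are made up) that is dark on the left and bright on the right.

```python
import numpy as np

# Sobel kernel: a small grid of weights that responds to horizontal
# intensity changes (i.e., vertical edges) in a grayscale image.
sobel_x = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
])

def convolve2d(image, kernel):
    """Slide the kernel over the image, summing element-wise products."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark on the left, bright on the right, so there is a
# vertical edge between the second and third columns.
img = np.array([
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
])
edges = convolve2d(img, sobel_x)
print(edges)  # strong responses wherever the intensity jumps
```

Every output value here is large precisely because the 3×3 window always straddles the intensity jump; on a flat region the responses would be zero.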

3. That’s where giving a computer more context is helpful, but the amount of data required for computers to recognize objects the way human memory can is immense, and the computing power needed to brute-force it would be impractical. Neural networks mimic the biological neural networks in our brains and help stand in for all those years of human learning: trained on large sets of examples, computers can learn the associations we pick up over a lifetime, without hand-coding millions of instructions.

A convolutional neural network provides an even smarter way to process an image’s values, using banks of artificial neurons and learned kernels that detect interesting features. Layers of learned kernels with increasing degrees of complexity can process an image in parallel—one layer for edges, one for shapes, one for different facial features, and one for surrounding objects, for example—then feed the results to a final layer that puts it all together: an image of “a female smiling on a beach.” This layered approach is deep learning in action.
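The layered idea can be sketched as a bare-bones forward pass in NumPy. Everything here is a stand-in: the kernels are random rather than learned, the "image" is random noise, and the final score is just a mean—a real network would learn all of these weights from labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(image, kernel):
    """Minimal 2-D convolution: slide the kernel, sum the products."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)  # keep only positive feature responses

image = rng.random((16, 16))                              # stand-in for a photo
layer1 = relu(conv(image, rng.standard_normal((3, 3))))   # e.g., edge detectors
layer2 = relu(conv(layer1, rng.standard_normal((3, 3))))  # e.g., shape detectors
score = layer2.mean()  # crude stand-in for the final layer's output

print(layer1.shape, layer2.shape)  # (14, 14) (12, 12)
```

Each layer consumes the previous layer’s feature maps, which is why deeper layers can represent progressively more abstract features.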

Likewise, recurrent neural networks can process sequences of frames in video, and machine learning and artificial intelligence help them get smarter along the way.
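The recurrent part can be sketched in a few lines. This is a toy with made-up sizes and random, untrained weights: each video frame is assumed to be already reduced to a small feature vector, and a hidden state carries information from earlier frames forward, so frame t is interpreted in the context of frames 0 through t-1.

```python
import numpy as np

rng = np.random.default_rng(1)

frame_features = 8  # assume each frame is already reduced to 8 numbers
hidden_size = 4

# Random, untrained weights; a real network would learn these.
W_in = rng.standard_normal((hidden_size, frame_features)) * 0.1
W_rec = rng.standard_normal((hidden_size, hidden_size)) * 0.1

hidden = np.zeros(hidden_size)
frames = rng.random((5, frame_features))  # five frames of toy "video"

for frame in frames:
    # Classic recurrent update: the new state mixes the current frame
    # with the memory of everything seen so far.
    hidden = np.tanh(W_in @ frame + W_rec @ hidden)

print(hidden.shape)  # (4,)
```

After the loop, `hidden` summarizes the whole clip, which is what lets a recurrent model reason about motion and events rather than isolated frames.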

What Can Computer Vision Do?

In an article published on TechCrunch in 2016, author Devin Coldewey says that “computer vision even in its nascent stage is still incredibly useful. It’s in our cameras, recognizing faces and smiles. It’s in self-driving cars, reading traffic signs and watching for pedestrians. It’s in factory robots, monitoring for problems and navigating around human coworkers.”

So, two years later, how has computer vision progressed?

Computer vision is responsible for biometric data, such as a visual scan of your face that grants you access to your smartphone. Recent advancements in parallel computing and neural networks have made image recognition more feasible and more accurate. It’s one of the most compelling applications for computer vision:

“Image recognition, and computer vision more broadly, is integral to a number of emerging technologies, from high-profile advances like driverless cars and facial recognition software to more prosaic but no less important developments, like building smart factories that can spot defects and irregularities on the assembly line, or developing software to allow insurance companies to process and categorize photographs of claims automatically.”—Tyler Keenan, “How Image Recognition Works”

As computer vision gets smarter, computers will be more accurate and better able to sift through the millions of images and hours of video flooding the web. Convolutional neural networks will allow computer vision to take on more complex challenges with fewer errors.

Whether it’s simple barcode scanners or video content analysis with recurrent neural networks, computer vision isn’t just here to stay—it’s only just beginning.