How we’re teaching computers to understand pictures


When a very young child looks at a picture, she can identify simple elements: “cat,” “book,” “chair.” Now, computers are getting smart enough to do that too. What’s next? In a thrilling talk, computer vision expert Fei-Fei Li describes the state of the art — including the database of 15 million photos her team built to “teach” a computer to understand pictures — and the key insights yet to come.

As Director of Stanford’s Artificial Intelligence Lab and Vision Lab, Fei-Fei Li is working to solve AI’s trickiest problems — including image recognition, learning and language processing.

Why you should listen

Using algorithms built on machine learning methods such as neural network models, the Stanford Artificial Intelligence Lab led by Fei-Fei Li has created software capable of recognizing scenes in still photographs — and accurately describing them using natural language.

Li’s work with neural networks and computer vision (with Stanford’s Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations.

What others say

“Computer software only recently became smart enough to recognize objects in photographs. Now, Stanford researchers using machine learning have created a system that takes the next step, writing a simple story of what’s happening in any digital image.” — Stanford News, November 18, 2014

Story Source:

The above story is based on materials provided by TED.
