
Facebook’s three AI labs are working on ways to understand exactly what you share so the company can serve that content to people with matching interests.

Last week in a public Q&A on his Facebook Page, CEO Mark Zuckerberg gave a closer look at why Facebook is investing in AI and detailed his philosophy on happiness, exercise and the future of the company.

At the end of 2013, Facebook poached NYU’s top “deep learning” professor Yann LeCun to head up a new AI lab in New York. It’s since opened another in Menlo Park at Facebook’s HQ, and just this month added a Paris AI lab.

Still, the company’s always been a bit vague about how AI will improve its product, beyond saying it will boost the relevance of the News Feed. It revealed some back-end recognition tech at F8, but was cagey about exactly how it would be used.

Today we got more info. When asked to “tell us more about the AI initiatives that Facebook are involved in,” Zuckerberg replied:

“Most of our AI research is focused on understanding the meaning of what people share.

For example, if you take a photo that has a friend in it, then we should make sure that friend sees it. If you take a photo of a dog or write a post about politics, we should understand that so we can show that post and help you connect to people who like dogs and politics.

In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc.

For vision, we’re building systems that can recognize everything that’s in an image or a video. This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them.

For listening and language, we’re focusing on translating speech to text, text between any languages, and also being able to answer any natural language question you ask.”

You can imagine how AI would let Facebook scan a photo you upload, recognize the people, places, and things in it, then show it to people with similar interests, while also showing you more posts about this stuff in the future.
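To make that pipeline concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative: the function names (`recognize_labels`, `match_audience`), the label set, and the user records are all assumptions, since Facebook has not published how its systems actually work.

```python
# Hypothetical sketch: recognize topic labels in an uploaded photo,
# then match the post to users whose stated interests overlap.
# Names and data here are illustrative, not Facebook's actual API.

def recognize_labels(photo):
    """Stand-in for an image-recognition model that returns topic labels.

    A real system would run a deep neural network over the image;
    here we just return fixed labels for demonstration.
    """
    return {"dog", "park"}

def match_audience(labels, users):
    """Return users whose declared interests overlap the photo's labels."""
    return [u for u in users if u["interests"] & labels]

users = [
    {"name": "alice", "interests": {"dog", "hiking"}},
    {"name": "bob", "interests": {"politics"}},
]

labels = recognize_labels(photo=None)
audience = match_audience(labels, users)
print([u["name"] for u in audience])  # alice overlaps on "dog"
```

The same matching step could run in reverse to pick which posts to rank higher in your own feed.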

Zuck’s comments on AI for listening also reveal more about the strategy of Facebook’s secretive Language Technology Group. Facebook has been quietly staffing up the department, and early this year acquired Wit.ai, a Y Combinator startup that builds voice interface APIs for apps. Facebook has also begun testing a feature where you can record a voice snippet into Messenger and have it transcribed into text, so the recipient can read it rather than listen.

Transcription and translation technology could fuel Facebook’s mission to connect the whole world. Removing language barriers could allow people from cultures that have historically had tense relations to become friends. Messenger seeks to be “cross-platform,” but you still have to consume messages in the format in which they were created. Speech AI could one day let you type into Messenger, then let a friend listen to your message as audio while they’re driving or have their hands full. They could speak back, and let you read a transcription while you’re in a noisy or silent room.
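The format-conversion idea above can be sketched in a few lines. This is purely illustrative: `transcribe` and `synthesize` are hypothetical stand-ins for real speech-recognition and text-to-speech systems, and the routing logic is an assumption about how such a feature might behave.

```python
# Illustrative sketch: deliver a message as text or audio depending on
# the recipient's situation. transcribe() and synthesize() are
# hypothetical placeholders for real speech AI systems.

def transcribe(audio):
    """Placeholder for speech-to-text."""
    return f"[text of {audio}]"

def synthesize(text):
    """Placeholder for text-to-speech."""
    return f"[audio of {text!r}]"

def deliver(message, kind, recipient_prefers):
    """Convert a message to the recipient's preferred format."""
    if kind == "audio" and recipient_prefers == "text":
        return transcribe(message)
    if kind == "text" and recipient_prefers == "audio":
        return synthesize(message)
    return message  # already in the preferred format

# A driver with their hands full hears a typed message as audio;
# someone in a quiet room reads a voice note as text.
print(deliver("see you at 6", "text", "audio"))
print(deliver("voice-note.ogg", "audio", "text"))
```

The point of the sketch is that the message itself becomes format-agnostic: each side produces and consumes it however is convenient at the moment.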

And of course, Facebook could potentially use the same artificial intelligence algorithms to mine meaning and interests out of your speech or messages, as well as your News Feed posts.

Image credit: Jurgen Appelo | Flickr
Via TechCrunch