
Google is becoming increasingly ingrained in the fabric of our daily health-and-wellbeing habits, from answering health-related questions in its search results to offering a fitness data platform for developers.

But behind the scenes, the Internet giant is also working to expedite the discovery of drugs that could prove vital to finding cures for many human ills.

Working with Stanford University’s Pande Lab, Google Research has published a paper titled “Massively Multitask Networks for Drug Discovery” [PDF], which examines how data from many different sources can be used to better determine which chemical compounds will serve as “effective drug treatments for a variety of diseases.”

While the paper itself doesn’t reveal any major medical breakthroughs, it does point to how deep learning can be used to crunch huge data sets and accelerate drug discovery. Deep learning is a machine learning technique that involves training artificial neural networks on large volumes of data and then using the trained networks to make predictions about new inputs. You might want to check out our guide to five emerging deep learning startups to watch in 2015.
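To make that train-then-predict loop concrete, here’s a minimal sketch in PyTorch. It is an illustration, not code from the paper: the random tensors stand in for real molecular features, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Stand-in data: 1,000 "compounds", each described by a 128-dimensional
# feature vector, with a binary label (active / inactive). A real pipeline
# would use molecular fingerprints here; random tensors are placeholders.
X = torch.randn(1000, 128)
y = torch.randint(0, 2, (1000, 1)).float()

# A small feed-forward neural network: two hidden layers, one output.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

loss_fn = nn.BCEWithLogitsLoss()  # binary classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: repeatedly adjust the network's weights to fit the known data.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# "New inputs": score a compound the network has never seen before.
new_compound = torch.randn(1, 128)
probability_active = torch.sigmoid(model(new_compound))
print(probability_active.item())
```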

“One encouraging conclusion from this work is that our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases,” explained the multi-authored Google Research blog post. “To our knowledge, this is the first time the effect of adding additional data has been quantified in this domain, and our results suggest that even more data could improve performance even further.”

Google said it worked at a scale “18x larger than previous work,” and tapped a total of 37.8 million data points across 200+ individual biological processes.
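The “massively multitask” idea at the heart of the paper is that one shared network body can feed many task-specific outputs, one per biological assay, so that every experiment helps shape a common learned representation. The sketch below shows the general shape of such a model; the layer sizes, the label-masking scheme, and the data are illustrative assumptions, not Google’s actual architecture.

```python
import torch
import torch.nn as nn

NUM_TASKS = 200      # one output head per biological assay
NUM_FEATURES = 128   # placeholder molecular descriptor size

class MultitaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared layers: every task's gradients update these, which is
        # where cross-experiment transfer comes from.
        self.shared = nn.Sequential(
            nn.Linear(NUM_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # One small output head per task (assay).
        self.heads = nn.ModuleList(
            [nn.Linear(256, 1) for _ in range(NUM_TASKS)]
        )

    def forward(self, x):
        h = self.shared(x)
        # Returns a (batch, NUM_TASKS) matrix of activity logits.
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultitaskNet()
loss_fn = nn.BCEWithLogitsLoss(reduction="none")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: most compounds are only measured in a few assays,
# so a mask marks which (compound, task) labels actually exist.
X = torch.randn(64, NUM_FEATURES)
y = torch.randint(0, 2, (64, NUM_TASKS)).float()
mask = (torch.rand(64, NUM_TASKS) < 0.1).float()

optimizer.zero_grad()
per_label_loss = loss_fn(model(X), y)
loss = (per_label_loss * mask).sum() / mask.sum()  # ignore missing labels
loss.backward()
optimizer.step()
```

Because the shared layers are updated by every assay, compounds measured in one experiment can improve predictions in another, which is the transfer effect the blog post says the team quantified.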

“Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data,” Google said. “In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future.”

This feeds into a bigger trend we’ve seen of late, with many of the big tech companies investing resources in deep learning. Last year, Twitter, Google, and Yahoo all acquired deep learning startups, while Facebook and Baidu made significant hires in this realm. Netflix and Spotify carried out work in this area too.

At VentureBeat’s HealthBeat conference last October, we looked at how the future of health care could lean heavily on robotics, analytics, and artificial intelligence (AI). Treatment discovery feeds into that same diagnostic picture, and it too is increasingly turning to AI, big data, and deep learning, as this latest research from Google and Stanford shows.

Automating and improving predictive techniques should not only speed up the drug discovery process but also cut its costs. From the Google report:

Discovering new treatments for human diseases is an immensely complicated challenge. Prospective drugs must attack the source of an illness, but must do so while satisfying restrictive metabolic and toxicity constraints. Traditionally, drug discovery is an extended process that takes years to move from start to finish, with high rates of failure along the way.

In short, testing millions of compounds takes a long time, so anything that increases the chances of hitting on a successful compound can only be a good thing. This is where machine learning at scale may help.
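Mechanically, the “at scale” part is straightforward once a model is trained: featurize a large compound library, score it in batches, and shortlist the top-ranked candidates for physical testing. A hedged sketch, with a stand-in model and random features in place of real chemistry:

```python
import torch
import torch.nn as nn

# Hypothetical trained scorer; in practice this would be a trained
# network like the multitask model above, with one head per target.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
model.eval()

# Stand-in library of 100,000 compound feature vectors
# (a real screen would cover millions of structures).
library = torch.randn(100_000, 128)

scores = []
with torch.no_grad():
    for batch in library.split(4096):   # score in manageable chunks
        scores.append(torch.sigmoid(model(batch)).squeeze(1))
scores = torch.cat(scores)

# Shortlist the 1,000 highest-scoring compounds for lab testing.
top_scores, top_indices = scores.topk(1000)
print(top_indices[:10])
```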


Image credit: RDECOM | Flickr
Via VentureBeat