Neural networks are making up their own rules to define how the world should look
Researchers demonstrate how deep learning could eventually replace traditional anesthetic practices.
Academics from the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have demonstrated how neural networks can be trained to administer anesthetic during surgery.
Over the past decade, artificial intelligence (AI), machine learning (ML), and deep learning algorithms have been developed and applied across a range of sectors, including the medical field.
Extremely energy-efficient artificial intelligence is now closer to reality after a study by UCL researchers found a way to improve the accuracy of a brain-inspired computing system.
The system, which uses memristors to create artificial neural networks, is at least 1,000 times more energy efficient than conventional transistor-based AI hardware, but has until now been more prone to error.
Existing AI is extremely energy-intensive—training one AI model can generate 284 tons of carbon dioxide, equivalent to the lifetime emissions of five cars. Replacing the transistors that make up all digital devices with memristors, a novel electronic device first built in 2008, could reduce this to a fraction of a ton of carbon dioxide—equivalent to emissions generated in an afternoon’s drive.
Because memristors are so much more energy-efficient than existing computing systems, they could pack huge amounts of computing power into hand-held devices, removing the need for those devices to be connected to the Internet.
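The principle that makes memristor chips so frugal is easiest to see in code. Below is a minimal simulation, assuming the standard crossbar scheme in which each signed weight is stored as a pair of positive conductances and Ohm's and Kirchhoff's laws perform the multiply-and-add in analog; the conductance range, noise level, and every function name are invented for the illustration and describe neither the UCL nor the Michigan hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
G_MIN, G_MAX = 1e-6, 1e-4   # illustrative conductance range, in siemens

def program_crossbar(weights):
    """Store a signed weight matrix as two arrays of positive conductances.

    Memristors only conduct, so signed weights are commonly encoded as the
    difference between a "positive" and a "negative" device per weight.
    """
    scale = np.abs(weights).max()
    w = weights / scale                                   # normalize to [-1, 1]
    g_pos = G_MIN + (G_MAX - G_MIN) * np.clip(w, 0, None)
    g_neg = G_MIN + (G_MAX - G_MIN) * np.clip(-w, 0, None)
    return g_pos, g_neg, scale

def crossbar_matvec(g_pos, g_neg, v, noise=0.05):
    """Analog multiply-accumulate: output currents I = (G+ - G-) @ V.

    Ohm's law does the multiplies and Kirchhoff's current law the sums, so
    the whole matrix-vector product costs a single analog read. The
    multiplicative jitter models device-to-device variation, the error
    source that has made memristor networks less accurate than digital ones.
    """
    jitter = lambda g: g * (1 + noise * rng.standard_normal(g.shape))
    return (jitter(g_pos) - jitter(g_neg)) @ v

W = rng.standard_normal((4, 8))   # weights of one small network layer
v = rng.standard_normal(8)        # input voltages

g_pos, g_neg, scale = program_crossbar(W)
digital = W @ v
analog = crossbar_matvec(g_pos, g_neg, v) * scale / (G_MAX - G_MIN)

print("digital:", np.round(digital, 2))
print("analog: ", np.round(analog, 2))
```

The two printouts agree only approximately; shrinking that noise term is, in effect, what accuracy-improvement work on these systems is after.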
First programmable memristor computer aims to bring AI processing down from the cloud
The memristor array chip plugs into the custom computer chip, forming the first programmable memristor computer. The team demonstrated that it could run three standard types of machine learning algorithms.
The first programmable memristor computer—not just a memristor array operated through an external computer—has been developed at the University of Michigan.
It could lead to the processing of artificial intelligence directly on small, energy-constrained devices such as smartphones and sensors. A smartphone AI processor would mean that voice commands would no longer have to be sent to the cloud for interpretation, speeding up response time.
Framework improves ‘continual learning’ for artificial intelligence
Researchers have developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned from previous tasks. The researchers have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.
“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”
“Deep neural network AI systems are designed for learning narrow tasks,” says Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”
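The failure modes Li lists are easy to reproduce in miniature. The sketch below is a toy in plain NumPy, not the paper’s framework: one linear classifier is trained on two tasks in sequence, its accuracy on the first task collapses to chance, and a simple rehearsal buffer (a standard baseline, not the authors’ method) buys back part of the old task at some cost to the new one. The task definitions and function names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(axis, n=2000):
    """Toy binary task: the label is the sign of one input feature."""
    X = rng.standard_normal((n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def train(w, X, y, epochs=10, lr=0.1):
    """Plain logistic-regression SGD; returns the updated weight vector."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-np.clip(X[i] @ w, -30, 30)))
            w = w - lr * (p - y[i]) * X[i]
    return w

def acc(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

task_a, task_b = make_task(axis=0), make_task(axis=1)

# Naive sequential training: learn task A, then task B, with no safeguards.
w = train(np.zeros(2), *task_a)
print("after A, acc on A:", acc(w, *task_a))   # ~1.0
w = train(w, *task_b)
print("after B, acc on A:", acc(w, *task_a))   # ~0.5: catastrophic forgetting

# Rehearsal baseline: replay a buffer of task-A data while learning task B.
# One linear unit cannot solve both tasks at once, so accuracy is traded
# between them, the kind of compromise continual-learning frameworks
# like the one described above try to avoid.
w = train(np.zeros(2), *task_a)
X_mix = np.vstack([task_b[0], task_a[0][:500]])
y_mix = np.concatenate([task_b[1], task_a[1][:500]])
w = train(w, X_mix, y_mix)
print("rehearsal, acc on A and B:", acc(w, *task_a), acc(w, *task_b))
```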
Generative adversarial networks are not just good for causing mischief. They can also show us how AI algorithms “think.”
GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces.
As good as GANs are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized they are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This is something the broader research community has long sought, and it has become more important with our increasing reliance on algorithms.
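The adversarial recipe itself fits in a few dozen lines. The toy below, written against PyTorch with a stand-in “dataset” of samples from a one-dimensional Gaussian (playing the role of the dog photos), pits a small generator against a small discriminator exactly as described: the discriminator learns to tell real samples from fakes, and the generator learns to fool it. The architectures and hyperparameters are illustrative and are not from the MIT-IBM work.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise in, candidate sample out. Discriminator: sample in,
# real-vs-fake logit out. Both are deliberately tiny.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):
    """The 'dog photos' of this demo: a Gaussian with mean 4, std 0.5."""
    return 4.0 + 0.5 * torch.randn(n, 1)

for step in range(3000):
    # Discriminator step: score real samples as 1, the generator's fakes as 0.
    fake = G(torch.rand(128, 8)).detach()
    d_loss = bce(D(real_batch()), torch.ones(128, 1)) + \
             bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce fakes the discriminator scores as real.
    fake = G(torch.rand(128, 8))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.rand(1000, 8))
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f} "
      f"(target: 4.00, 0.50)")
```

After a few thousand rounds of this contest the generated statistics should land near the target’s, the one-dimensional analogue of “completely new dogs.”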
Nvidia’s new AI represents a major leap forward in graphics generation based on neural networks.
Crafting an interactive virtual world of the kind found in many modern video games is a labor-intensive process that can require years of work, hundreds of people, and millions of dollars. Soon, some of that work may be done by machines.
Computer hardware company Nvidia, which specializes in graphics cards, announced on Monday that it developed a new AI model that can take video of the real world and use it to generate a realistic and interactive virtual world. According to Nvidia, its new AI could be used to drastically lower the cost of generating virtual environments, which will be particularly useful in the video game and film industries.
Search engine now returns answers instead of just links.
Training the network
Today, if you ask the Google search engine on your desktop a question like “How big is the Milky Way,” you’ll no longer just get a list of links where you could find the answer — you’ll get the answer: “100,000 light years.”
While this question-and-answer tech may seem simple, it’s actually a complex development rooted in Google’s powerful deep neural networks. These networks are a form of artificial intelligence that aims to mimic how human brains work, relating bits of information to one another in order to comprehend data and predict patterns.
The human brain is nature’s most powerful processor, so it’s not surprising that developing computers that mimic it has been a long-term goal. Neural networks, the artificial intelligence systems that learn in a very human-like way, are the closest models we have, and now Stanford scientists have developed an organic artificial synapse, inching us closer to making computers more efficient learners.
Project Loon aims to bring internet access to the two-thirds of the world that still lacks it.
Last month, Microsoft CEO Satya Nadella thumbed his nose at Google for its various “moonshot” projects, currently housed at Google’s semi-secret X Labs. When Nadella was asked if Microsoft could learn a thing or two from X Labs, he said that there’s always something to learn “from people who market themselves well.”