Despite deep learning’s enormous contributions to the field of artificial intelligence, it has a serious weakness: it requires huge amounts of data. This is one thing that both the pioneers and the critics of deep learning agree on. In fact, deep learning didn’t emerge as the leading AI technique until a few years ago, in part because of the limited availability of useful data and the shortage of computing power to process that data.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.
3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.
Time lag: Because of the time investment put into each chip design, chips are typically designed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they’ve been limited in their ability to optimize across multiple goals, including the chip’s power draw, computational performance, and area.
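The wire-length proxy mentioned above is commonly computed as half-perimeter wirelength (HPWL): for each net (a group of connected component pins), take the half-perimeter of the bounding box around its pins. Here is a minimal sketch of that metric; the component names, coordinates, and nets are invented for illustration, not taken from any real chip design.

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: a standard proxy for wiring cost.

    placement: dict mapping component name -> (x, y) position
    nets: list of nets, each a list of component names that are wired together
    """
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        # Half the perimeter of the net's bounding box.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Invented toy placement of three components and two nets.
placement = {"cpu": (0, 0), "cache": (3, 4), "io": (6, 1)}
nets = [["cpu", "cache"], ["cache", "io"]]
print(hpwl(placement, nets))  # (3+4) + (3+3) = 13.0
```

An optimizer (whether a human engineer or a reinforcement-learning agent) would search over placements to drive this number down while respecting area and overlap constraints.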
While smell-o-vision may be a long way from being ready for your PC, Intel is partnering with Cornell University to bring it closer to reality. Intel’s Loihi neuromorphic research chip, a powerful electronic nose with a wide range of applications, can recognize dangerous chemicals in the air.
“In the future, portable electronic nose systems with neuromorphic chips could be used by doctors to diagnose diseases, by airport security to detect weapons and explosives, by police and border control to more easily find and seize narcotics, and even to create more effective at-home smoke and carbon monoxide detectors,” Intel said in a press statement.
With machine learning, Loihi can recognize hazardous chemicals “in the presence of significant noise and occlusion,” Intel said, suggesting the chip can be used in the real world, where smells — such as perfumes, food, and other odors — are often found in the same area as a harmful chemical. Loihi learned to identify each hazardous odor from just a single sample, and learning a new smell didn’t disrupt previously learned scents.
Three humans and a robot form a team and start playing a game together. No, this isn’t the beginning of a joke; it’s the premise of a fascinating new study just released by Yale University.
Researchers were interested to see how the robot’s actions and statements would influence the three humans’ interactions among one another. They discovered that when the robot wasn’t afraid to admit it had made a mistake, this outward showing of vulnerability led to more open communication between the people involved as well.
Google is creating AI-powered robots that navigate without human intervention—a prerequisite to being useful in the real world.
Within 10 minutes of its birth, a baby fawn is able to stand. Within seven hours, it is able to walk. Between those two milestones, it engages in a highly adorable, highly frenetic flailing of limbs to figure it all out.
That’s the idea behind AI-powered robotics. While autonomous robots, like self-driving cars, are already a familiar concept, autonomously learning robots are still just an aspiration. Existing reinforcement-learning algorithms that allow robots to learn movements through trial and error still rely heavily on human intervention. Every time the robot falls down or walks out of its training environment, it needs someone to pick it up and set it back to the right position.
Now a new study from researchers at Google has made an important advancement toward robots that can learn to navigate without this help. Within a few hours, relying purely on tweaks to current state-of-the-art algorithms, they successfully got a four-legged robot to learn to walk forward and backward, and turn left and right, completely on its own.
A device like the one in the study, and an electron-microscope image showing the device’s neuron-like arrangement of nanowires.
UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.
Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain—learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports, describes a network in a state of continuous flux.
“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”
The Intelligent Tow Tank (ITT) works continuously, day and night, without any interruption or supervision.
The ITT conducts experiments and changes experimental values to seek out new and useful results, conducting 100,000 experiments a year.
Scientists are always warning us that our jobs are under threat from artificial intelligence. Self-driving technology will replace van drivers. Humanoid robots could replace builders, shelf stackers, even waitresses.
Even sex workers are under threat from automation.
But the latest, and perhaps most surprising, job that’s under threat from AI is…scientists.
Today, Amazon announced a new approach that it says will put machine-learning technology within reach of more developers and line-of-business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.
While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.
By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine-learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.
“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.
Project Debater argued both for and against the benefits of artificial intelligence
An artificial intelligence has debated the dangers of AI – narrowly convincing audience members that the technology will do more good than harm.
Project Debater, a robot developed by IBM, spoke on both sides of the argument, with two human teammates for each side helping it out. Talking in a female American voice to a crowd at the University of Cambridge Union on Thursday evening, the AI gave each side’s opening statements, using arguments drawn from more than 1100 human submissions made ahead of time.
The new project is focused on building robots capable of useful, everyday tasks, like sorting recycling.
Alphabet’s X group, the R&D lab formerly known as Google X, introduced the Everyday Robot Project on Thursday.
The project grows out of Alphabet’s string of robotics acquisitions several years ago, an effort that had since been put on hold.
Alphabet’s X group said it will focus on AI-enabled robots that can learn tasks on their own, rather than being programmed to do specific things.
Alphabet, the parent company of Google, is getting back into robotics after a first attempt several years ago fizzled. But this time the company wants to create robots with minds of their own.
Reinforcement learning (RL) is a widely used machine-learning technique that entails training AI agents or robots using a system of reward and punishment. So far, researchers in the field of robotics have primarily applied RL techniques in tasks that are completed over relatively short periods of time, such as moving forward or grasping objects.
A team of researchers at Google and Berkeley AI Research has recently developed a new approach that combines RL with learning by imitation, a process called relay policy learning. This approach, introduced in a paper pre-published on arXiv and presented at the Conference on Robot Learning (CoRL) 2019 in Osaka, can be used to train artificial agents to tackle multi-stage and long-horizon tasks, such as object-manipulation tasks that span longer periods of time.
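The reward-and-punishment loop that defines RL can be sketched with a toy tabular Q-learning agent. This is purely illustrative: the tiny corridor environment, reward values, and hyperparameters below are invented, and bear no relation to the robotic tasks or the relay-policy-learning method in the paper.

```python
import random

# Toy environment: states 0..4 in a corridor. The agent moves left or right,
# earns +1 (reward) for reaching state 4, and -0.01 per step (punishment).
N_STATES, ACTIONS = 5, (-1, +1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy should move right (action index 1)
# in every non-terminal state.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

Relay policy learning layers imitation on top of a loop like this: human demonstrations seed the policies, so the agent doesn't have to discover long multi-stage behaviors from reward alone.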
Machine learning reveals that news coverage of people in creative industries such as design and art is shaped by gender. Can it guide us toward parity?
How long would it take you to review half a million articles? Not just to read them, but to tally particular keywords, such as “he,” “she,” and the words that immediately follow them? Well, let’s just say you’d have to quit your day job.
Undeterred, the Creative Industries Policy and Evidence Centre, which provides independent research and policy recommendations for the U.K.’s creative industry, in partnership with the innovation foundation Nesta, made it their day job. They had some help: AI.
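The tallying task described above is straightforward to sketch in code. The snippet below is a minimal illustration of counting “he”/“she” and the words that follow them; the tokenization and the sample sentences are my own assumptions, not the Centre’s actual methodology.

```python
import re
from collections import Counter

def tally_pronouns(articles):
    """Count 'he'/'she' occurrences, and the words that immediately
    follow each pronoun, across a collection of article texts."""
    pronoun_counts = Counter()
    following_words = {"he": Counter(), "she": Counter()}
    for text in articles:
        # Naive tokenization: lowercase words and apostrophes only.
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, token in enumerate(tokens):
            if token in ("he", "she"):
                pronoun_counts[token] += 1
                if i + 1 < len(tokens):
                    following_words[token][tokens[i + 1]] += 1
    return pronoun_counts, following_words

# Invented sample text standing in for the half-million real articles.
counts, following = tally_pronouns([
    "She designs furniture. He paints murals, and he exhibits widely.",
])
print(counts["he"], counts["she"])  # 2 1
print(following["she"].most_common(1))
```

Scaled to half a million articles, a script like this finishes in minutes; comparing the verbs and descriptors that follow “he” versus “she” is what surfaces the gendered patterns in coverage.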