Endel highlights how AI could change the way music is both created and experienced.
Artificial intelligence and machine learning are steadily infiltrating industry after industry, weaving their way into our daily lives. Deep learning models are helping medical professionals identify cancer, weak AI is helping construct better buildings, and machine learning is driving the world of robotics.
Another day, another deepfake: but this time it can sing.
Finally, technology that can make Rasputin sing like Beyoncé
New research from Imperial College London and Samsung’s AI research center in the UK shows how a single photo and audio file can be used to generate a singing or talking video portrait. Like previous deepfake programs we’ve seen, the researchers use machine learning to generate their output. And although the fakes are far from 100 percent realistic, the results are impressive considering how little data is needed.
Cassie Kozyrkov, “chief decision scientist” at Google, speaking at AI Summit (London) 2019
Google’s chief decision scientist: Humans can fix AI’s shortcomings
Cassie Kozyrkov has served in various technical roles at Google over the past five years, but she now holds the somewhat curious position of “chief decision scientist.” Decision science sits at the intersection of data and behavioral science and involves statistics, machine learning, psychology, economics, and more.
In effect, this means Kozyrkov helps Google push a positive AI agenda — or, at the very least, convince people that AI isn’t as bad as the headlines claim.
An example of a manipulated photo, the defects spotted by the algorithm, and the original image. Credit: Adobe
Though it’s just a research project for the moment.
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
It’s the latest sign the company is committing more resources to this problem. Last year its engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects.
A new type of artificial intelligence can generate a “living portrait” from just one image.
The enigmatic, painted smile of the “Mona Lisa” is known around the world, but that famous face recently displayed a startling new range of expressions, courtesy of artificial intelligence (AI).
In a video shared to YouTube on May 21, three clips show disconcerting examples of the Mona Lisa moving her lips and turning her head. The clips were created by a convolutional neural network, a type of AI that analyzes and processes images somewhat as a human brain does.
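The “brain-like” description refers to convolution: the network slides small learned filters over an image and responds strongly wherever a filter’s pattern appears. The sketch below is illustrative only (not the researchers’ model, and with a hand-written filter rather than a learned one); it applies a single edge-detecting filter to a tiny synthetic image, which is the basic operation a convolutional neural network repeats thousands of times:

```python
import numpy as np

def conv2d(image, kernel):
    """One convolution pass: slide the kernel over the image and record,
    at each position, how strongly the local patch matches the kernel."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where brightness changes left to right.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Tiny "image": dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

feature_map = conv2d(image, edge_kernel)
```

In a real network the kernels are learned from data and stacked in many layers, so later layers respond to increasingly complex patterns such as eyes, lips, and head pose.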
Robotics provides important opportunities for advancing artificial intelligence, because teaching machines to learn on their own in the physical world will help us develop more capable and flexible AI systems in other scenarios as well. Working with a variety of robots — including walking hexapods, articulated arms, and robotic hands fitted with tactile sensors — Facebook AI researchers are exploring new techniques to push the boundaries of what artificial intelligence can accomplish.
Doing this work means addressing the complexity inherent in using sophisticated physical mechanisms and conducting experiments in the real world, where the data is noisier, conditions are more variable and uncertain, and experiments have additional time constraints (because they cannot be accelerated when learning in a simulation). These are not simple issues to address, but they offer useful test cases for AI.
Digital technologies drive business disruption. Today, artificial intelligence (AI) is at the forefront of financial industry disruption, allowing these firms to look differently at operations, staffing, processes, and the way work is done in a human-machine partnership. In PwC’s 2019 AI survey of US executives, financial services executives said they expect their AI efforts to result in increased revenue and profits (50%), better customer experiences (48%), and innovative new products (42%).
AI encompasses an array of technologies, from fully automated or autonomous intelligence to assisted or augmented intelligence. Financial firms are already deploying some relatively simple AI tools, such as intelligent process automation (IPA), which handles non-routine tasks and processes that require judgment and problem-solving to free employees to work on more valuable jobs. Banks have been using AI to redesign their fraud detection and anti-money laundering efforts for a while, and investment firms are starting to use AI to execute trades, manage portfolios, and provide personalized service to their clients. Insurance organizations, in turn, have been turning to AI—and especially machine learning (ML)—to enhance products, pricing, and underwriting; strengthen the claims process; predict and prevent fraud; and improve customer service and billing.
Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can: take a few quick glimpses around and infer its whole environment, a skill needed for search-and-rescue robots that could one day make dangerous missions more effective. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published their results today in the journal Science Robotics.
Most AI agents—computer systems that could endow robots or other machines with intelligence—are trained for very specific tasks—such as to recognize an object or estimate its volume—in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
Framework improves ‘continual learning’ for artificial intelligence
Researchers have developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned about previous tasks. The researchers have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.
“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”
“Deep neural network AI systems are designed for learning narrow tasks,” says Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”
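Catastrophic forgetting can be shown with a toy numeric sketch (illustrative only, not the NC State framework): a one-parameter model fit to task A, then naively retrained on task B, lands at task B’s optimum and loses task A entirely. Adding a crude penalty that pulls the weight back toward its old value, a simple stand-in for the anchoring ideas used by continual-learning methods, leaves the model at a compromise between the two tasks instead:

```python
def sgd(w, data, lr=0.1, steps=100, anchor=None, strength=0.0):
    """Minimize squared error on (x, y) pairs with gradient descent.
    If 'anchor' is given, also penalize moving w away from it
    (a crude stand-in for continual-learning penalties)."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            if anchor is not None:
                grad += 2 * strength * (w - anchor)
            w -= lr * grad
    return w

task_a = [(1.0, 2.0)]   # task A optimum: w = 2
task_b = [(1.0, -1.0)]  # task B optimum: w = -1

w_a = sgd(0.0, task_a)                                # learns task A
w_forgot = sgd(w_a, task_b)                           # naive retraining: task A is lost
w_kept = sgd(w_a, task_b, anchor=w_a, strength=1.0)   # penalty keeps a compromise
```

Here `w_forgot` ends at task B’s optimum with no trace of task A, while `w_kept` settles between the two optima; real continual-learning methods apply far more sophisticated versions of this idea across millions of parameters.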
San Francisco bans city use of facial recognition technology tools
Pedestrians walk along Post Street in San Francisco. The city became the first in the United States to ban facial recognition technology by police and city agencies. (Justin Sullivan / Getty Images)
Concerned that some new surveillance technologies may be too intrusive, San Francisco became the first U.S. city to ban the use of facial recognition tools by its police and other municipal departments.
The Board of Supervisors approved the Stop Secret Surveillance ordinance Tuesday, culminating a reexamination of city policy that began with the false arrest of Denise Green in 2014. Green’s Lexus was misidentified as a stolen vehicle by an automated license-plate reader. She was pulled over by police, forced out of the car and onto her knees at gunpoint by six officers. The city spent $500,000 to settle lawsuits linked to her detention.
The model can find breast cancer earlier and eliminate racial disparities in screening.
MIT researchers have invented a new AI-driven way of looking at mammograms that can help detect breast cancer in women up to five years in advance. A deep learning model created by a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital can predict — based on just a mammogram — whether a woman will develop breast cancer in the future. And unlike older methods, it works just as well on black patients as it does on white patients.