Google’s chief decision scientist: Humans can fix AI’s shortcomings
Cassie Kozyrkov, “chief decision scientist” at Google, speaking at AI Summit (London) 2019
Cassie Kozyrkov has served in various technical roles at Google over the past five years, but she now holds the somewhat curious position of “chief decision scientist.” Decision science sits at the intersection of data and behavioral science and involves statistics, machine learning, psychology, economics, and more.
In effect, this means Kozyrkov helps Google push a positive AI agenda — or, at the very least, convince people that AI isn’t as bad as the headlines claim.
An example of a manipulated photo, the defects spotted by the algorithm, and the original image. Credit: Adobe
Though it’s just a research project for the moment.
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
It’s the latest sign the company is committing more resources to this problem. Last year its engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects.
WASHINGTON (AP) — “Deepfake” videos pose a clear and growing threat to America’s national security, lawmakers and experts say. The question is what to do about it, and that’s not easily answered.
A House Intelligence Committee hearing Thursday served up a public warning about the deceptive powers of artificial intelligence software and offered a sobering assessment of how fast the technology is outpacing efforts to stop it.
With a crudely altered video of House Speaker Nancy Pelosi, D-Calif., fresh on everyone’s minds, lawmakers heard from experts how difficult it will be to combat these fakes and prevent them from being used to interfere in the 2020 election.
Boston Dynamics’ faintly terrifying quadruped dog robot—SpotMini—was first announced in 2016 and is expected to go on sale later this year. The robot has a whole lot of whizzy sensors and cameras, spindly mechanical legs, a creepy grabber arm that opens doors, and mind-bogglingly impressive robotics technology. (It’s expected to carry a five-digit price tag—a fitting sum to bring the uncanny valley direct to your home.)
But it’s never been very clear what, exactly, the point of Spot is—especially as a consumer product.
A new type of artificial intelligence can generate a “living portrait” from just one image.
The enigmatic, painted smile of the “Mona Lisa” is known around the world, but that famous face recently displayed a startling new range of expressions, courtesy of artificial intelligence (AI).
In videos shared to YouTube on May 21, three clips show disconcerting examples of the Mona Lisa moving her lips and turning her head. The animations were created by a convolutional neural network, a type of AI that analyzes and processes images much as a human brain does.
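The “convolutional” in that name refers to sliding small filters across an image to pick out local patterns such as edges. A minimal NumPy sketch of one such filter pass — purely illustrative, and unrelated to the research model behind the Mona Lisa clips:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation, as
    deep-learning libraries implement convolution)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a sharp left/right split.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                 # bright right half
kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right brightness jumps

response = conv2d(image, kernel)
print(response)                    # nonzero only along the edge column
```

A real convolutional network stacks many such learned filters and nonlinearities; this toy shows only the core sliding-filter operation.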
Robotics provides important opportunities for advancing artificial intelligence, because teaching machines to learn on their own in the physical world will help us develop more capable and flexible AI systems in other scenarios as well. Working with a variety of robots — including walking hexapods, articulated arms, and robotic hands fitted with tactile sensors — Facebook AI researchers are exploring new techniques to push the boundaries of what artificial intelligence can accomplish.
Doing this work means addressing the complexity inherent in using sophisticated physical mechanisms and conducting experiments in the real world, where the data is noisier, conditions are more variable and uncertain, and experiments have additional time constraints (because they cannot be accelerated when learning in a simulation). These are not simple issues to address, but they offer useful test cases for AI.
Digital technologies drive business disruption. Today, artificial intelligence (AI) is at the forefront of disruption in the financial industry, allowing firms to rethink operations, staffing, processes, and the way work is done in a human-machine partnership. In PwC’s 2019 AI survey of US executives, financial services executives said they expect their AI efforts to result in increased revenue and profits (50%), better customer experiences (48%), and innovative new products (42%).
AI encompasses an array of technologies, from fully automated or autonomous intelligence to assisted or augmented intelligence. Financial firms are already deploying some relatively simple AI tools, such as intelligent process automation (IPA), which handles non-routine tasks and processes that require judgment and problem-solving to free employees to work on more valuable jobs. Banks have been using AI to redesign their fraud detection and anti-money laundering efforts for a while, and investment firms are starting to use AI to execute trades, manage portfolios, and provide personalized service to their clients. Insurance organizations, in turn, have been turning to AI—and especially machine learning (ML)—to enhance products, pricing, and underwriting; strengthen the claims process; predict and prevent fraud; and improve customer service and billing.
Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for developing search-and-rescue robots that could one day make dangerous missions more effective. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley) published their results today in the journal Science Robotics.
Most AI agents (computer systems that could endow robots or other machines with intelligence) are trained for very specific tasks, such as recognizing an object or estimating its volume, in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
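The published agent learns its glimpse policy end-to-end, but the core idea — a few well-chosen observations letting you infer the unseen rest of a scene — can be sketched with a simple linear-Gaussian stand-in. Everything below (the toy “panoramas”, the two-glimpse choice) is an illustrative assumption, not the paper’s method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "panoramas": 8 view directions whose brightness varies smoothly,
# so a few glimpses carry information about the unseen directions.
n_views, n_scenes = 8, 500
phase = rng.uniform(0, 2 * np.pi, size=(n_scenes, 1))
angles = np.linspace(0, 2 * np.pi, n_views, endpoint=False)
scenes = np.sin(angles + phase) + 0.1 * rng.normal(size=(n_scenes, n_views))

mean = scenes.mean(axis=0)
cov = np.cov(scenes, rowvar=False)

def reconstruct(obs_idx, obs_vals):
    """Infer every view from a few glimpses via the Gaussian conditional mean."""
    hid_idx = [i for i in range(n_views) if i not in obs_idx]
    gain = cov[np.ix_(hid_idx, obs_idx)] @ np.linalg.inv(cov[np.ix_(obs_idx, obs_idx)])
    est = np.array(mean)                       # start from the average scene
    est[obs_idx] = obs_vals                    # keep what was actually seen
    est[hid_idx] = mean[hid_idx] + gain @ (obs_vals - mean[obs_idx])
    return est

scene = np.sin(angles + 0.7)     # an unseen test scene
glimpses = [0, 2]                # the agent looks in just two directions
est = reconstruct(glimpses, scene[glimpses])

err_glimpse = np.mean((est - scene) ** 2)    # error after two glimpses
err_blind = np.mean((mean - scene) ** 2)     # error with no glimpses at all
print(err_glimpse, err_blind)
```

Two glimpses cut the reconstruction error far below the no-observation baseline because the scenes share structure — the same reason a learned agent can get away with looking around only briefly.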
Framework improves ‘continual learning’ for artificial intelligence
Researchers have developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned from previous tasks. The researchers have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.
“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”
“Deep neural network AI systems are designed for learning narrow tasks,” says Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”
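The catastrophic forgetting Li describes is easy to reproduce even without a deep network. The sketch below is a toy linear model in plain NumPy — not the NC State framework — trained on task A and then naively fine-tuned on task B, after which task A performance collapses toward chance:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, feature):
    """Binary task: the label is the sign of one chosen input feature."""
    X = rng.normal(size=(n, 2))
    y = np.sign(X[:, feature])
    return X, y

def train(w, X, y, lr=0.1, steps=500):
    """Plain gradient descent on mean-squared error for a linear model."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

X_a, y_a = make_task(500, feature=0)   # task A: label depends on feature 0
X_b, y_b = make_task(500, feature=1)   # task B: label depends on feature 1

w = np.zeros(2)
w = train(w, X_a, y_a)
acc_a_before = accuracy(w, X_a, y_a)   # high: the model has learned task A

w = train(w, X_b, y_b)                 # naive sequential fine-tuning on B only
acc_b = accuracy(w, X_b, y_b)          # high: task B is learned...
acc_a_after = accuracy(w, X_a, y_a)    # ...but task A is largely forgotten
print(acc_a_before, acc_b, acc_a_after)
```

Because nothing protects the weights that mattered for task A, training on task B simply overwrites them — the failure mode that continual-learning frameworks like the one described here aim to avoid.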
The crypto-friendly island of Malta wants to give civil liberties to bots and other forms of artificial intelligence. Some experts say this is a profoundly bad idea.
Six years ago, when Amazon started talking about using delivery drones, many people thought it must be joking. Far from it: drones are now very much a reality. In April, Google offshoot Wing Aviation won certification from the US Federal Aviation Administration to begin commercial drone deliveries. If you live in Blacksburg, Virginia, drones could be landing on your porch by the end of the year.
In a similar vein, it’s tempting to scoff at Malta’s plans, announced in November, to give citizenship to bots. Voting rights, healthcare, civil liberties—everything is on the table. In fact, states such as the UAE and Saudi Arabia have already granted robots citizenship rights.
If you’ve been hanging out with techie friends at a conference lately, you’ve probably heard the term “Web 3.0.” And if you haven’t yet, you probably will soon. But if it’s one of those questions you’re a little ashamed to ask, don’t be. Not many people know what Web 3.0 is, so it’s understandable if you’re confused.
On top of that, a really succinct description and a tight narrative have yet to emerge, leaving the term open to interpretation. Experts are also still arguing over what belongs to Web 3.0 and what lies further in the future.