A new type of artificial intelligence can generate a “living portrait” from just one image.
The enigmatic, painted smile of the “Mona Lisa” is known around the world, but that famous face recently displayed a startling new range of expressions, courtesy of artificial intelligence (AI).
In a video shared to YouTube on May 21, three clips show disconcerting examples of the Mona Lisa moving her lips and turning her head. The animated portrait was created by a convolutional neural network, a type of AI that processes visual information much as a human brain does.
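For readers unfamiliar with the term, a convolutional neural network is essentially a stack of learned image filters followed by a classifier. The sketch below, written in PyTorch purely for illustration, shows that basic structure; it is not the model used in the Mona Lisa work, whose architecture is far more elaborate.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal convolutional network: learned filters, then a classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned 3x3 filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)     # extract spatial features
        x = torch.flatten(x, 1)  # flatten for the linear layer
        return self.classifier(x)

# A single 64x64 RGB image produces one vector of class scores.
scores = TinyCNN()(torch.randn(1, 3, 64, 64))
```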
Robotics provides important opportunities for advancing artificial intelligence, because teaching machines to learn on their own in the physical world will help us develop more capable and flexible AI systems in other scenarios as well. Working with a variety of robots — including walking hexapods, articulated arms, and robotic hands fitted with tactile sensors — Facebook AI researchers are exploring new techniques to push the boundaries of what artificial intelligence can accomplish.
Doing this work means addressing the complexity inherent in using sophisticated physical mechanisms and conducting experiments in the real world, where the data is noisier, conditions are more variable and uncertain, and experiments take more time (because, unlike learning in simulation, they cannot be accelerated). These are not simple issues to address, but they offer useful test cases for AI.
Digital technologies drive business disruption. Today, artificial intelligence (AI) is at the forefront of financial industry disruption, allowing these firms to look differently at operations, staffing, processes, and the way work is done in a human-machine partnership. In PwC’s 2019 AI survey of US executives, financial services executives said they expect their AI efforts to result in increased revenue and profits (50%), better customer experiences (48%), and innovative new products (42%).
AI encompasses an array of technologies, from fully automated or autonomous intelligence to assisted or augmented intelligence. Financial firms are already deploying some relatively simple AI tools, such as intelligent process automation (IPA), which handles non-routine tasks and processes that require judgment and problem-solving to free employees to work on more valuable jobs. Banks have been using AI to redesign their fraud detection and anti-money laundering efforts for a while, and investment firms are starting to use AI to execute trades, manage portfolios, and provide personalized service to their clients. Insurance organizations, in turn, have been turning to AI—and especially machine learning (ML)—to enhance products, pricing, and underwriting; strengthen the claims process; predict and prevent fraud; and improve customer service and billing.
Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment. The skill is necessary for developing search-and-rescue robots that could one day make dangerous missions more effective. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan, and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published their results today in the journal Science Robotics.
Most AI agents—computer systems that could endow robots or other machines with intelligence—are trained for very specific tasks—such as to recognize an object or estimate its volume—in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
Framework improves ‘continual learning’ for artificial intelligence
Researchers have developed a new framework for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while “forgetting” less of what they have learned about previous tasks. The researchers have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.
“People are capable of continual learning; we learn new tasks all the time, without forgetting what we already know,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work. “To date, AI systems using deep neural networks have not been very good at this.”
“Deep neural network AI systems are designed for learning narrow tasks,” says Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. “As a result, one of several things can happen when learning new tasks. Systems can forget old tasks when learning new ones, which is called catastrophic forgetting. Systems can forget some of the things they knew about old tasks, while not learning to do new ones as well. Or systems can fix old tasks in place while adding new tasks – which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong-learning or learning-to-learn, is trying to address the issue.”
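To make catastrophic forgetting concrete, here is a minimal, hypothetical sketch, with a toy model and synthetic tasks rather than the NC State framework: a network is trained on task A, then on task B, and its accuracy on task A is measured again. The drop after the second round of training is the forgetting that continual-learning methods aim to reduce.

```python
import torch
import torch.nn as nn

def make_task(seed: int, n: int = 512):
    """Toy binary classification task; each seed defines a different decision rule."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, 20, generator=g)
    w = torch.randn(20, generator=g)
    y = (x @ w > 0).long()
    return x, y

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, steps: int = 200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
task_a, task_b = make_task(0), make_task(1)

train(model, *task_a)
acc_before = accuracy(model, *task_a)  # accuracy on task A right after learning it
train(model, *task_b)                  # naive sequential training on task B
acc_after = accuracy(model, *task_a)   # task A accuracy typically drops here
print(f"Task A accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```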
San Francisco bans city use of facial recognition technology tools
Concerned that some new surveillance technologies may be too intrusive, San Francisco became the first U.S. city to ban the use of facial recognition tools by its police and other municipal departments.
The Board of Supervisors approved the Stop Secret Surveillance ordinance Tuesday, culminating a reexamination of city policy that began with the false arrest of Denise Green in 2014. Green’s Lexus was misidentified as a stolen vehicle by an automated license-plate reader. She was pulled over by police, forced out of the car and onto her knees at gunpoint by six officers. The city spent $500,000 to settle lawsuits linked to her detention.
The model can find breast cancer earlier and eliminate racial disparities in screening.
MIT researchers have invented a new AI-driven way of looking at mammograms that can help detect breast cancer in women up to five years in advance. A deep learning model created by a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital can predict — based on just a mammogram — whether a woman will develop breast cancer in the future. And unlike older methods, it works just as well on black patients as it does on white patients.
Every aspect of life can be guided by artificial intelligence algorithms—from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.
Big tech companies like Google and Facebook use AI to obtain insights on their gargantuan trove of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy used by advertisers to narrowly target specific sets of users.
In parallel, many people now trust platforms and algorithms more than their own governments and civic society. An October 2018 study suggested that people demonstrate “algorithm appreciation,” to the extent that they would rely on advice more when they think it is from an algorithm than from a human.
The question of how to ensure that technological innovation in machine learning and artificial intelligence leads to ethically desirable—or, more minimally, ethically defensible—impacts on society has generated much public debate in recent years. Most of these discussions have been accompanied by a strong sense of urgency: as more and more studies about algorithmic bias have shown, the risk that emerging technologies will not only reflect, but also exacerbate structural injustice in society is significant.
So which ethical principles ought to govern machine learning systems in order to prevent morally and politically objectionable outcomes? In other words: what is AI Ethics? And indeed, as a recent New York Times article asks, is ethical AI even possible?
The enabling technology for insurers to use AI is the ‘ecosystem’ of sensors known as the internet of things.
It’s a new day not very far in the future. You wake up; your wristwatch has recorded how long you’ve slept, and monitored your heartbeat and breathing. You drive to work; car sensors track your speed and braking. You pick up some breakfast on your way, paying electronically; the transaction and the calorie content of your meal are recorded.
Then you have a car accident. You phone your insurance company. Your call is answered immediately. The voice on the other end knows your name and amiably chats to you about your pet cat and how your favourite football team did on the weekend.
You’re talking to a chat-bot. The reason it “knows” so much about you is because the insurance company is using artificial intelligence to scrape information about you from social media. It knows a lot more besides, because you’ve agreed to let it monitor your personal devices in exchange for cheaper insurance premiums.
Sound artist Yuri Suzuki used AI to complete Raymond Scott’s Electronium vision.
If you’ve seen Looney Tunes or The Simpsons, you’ve probably heard Raymond Scott’s music — which was adapted for those and other cartoons. But there’s a good chance you haven’t heard of Scott himself. A musician and inventor, Scott was ahead of his time. As early as the 1950s, he began working on the Electronium, a kind of music synthesizer that he hoped would perform and compose music simultaneously. While Scott invested $1 million and more than a decade in the Electronium, he died before it was complete. Now, Fast Company reports, Pentagram partner and sound artist Yuri Suzuki has picked up where Scott left off.
Suzuki worked in partnership with the design studio Counterpoint and used Google’s Magenta AI to generate music the way Scott envisioned. Like the Electronium, Suzuki’s version has three panels. First, a player taps a melody, or even a few notes, on the center panel. Then, the AI uses that to compose music, which is shown on the right. And finally, the player can use the panel on the left to manipulate the music by adding effects or beats. It’s the kind of human-computer collaboration Scott dreamed of but didn’t have the digital technology to complete.
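The workflow Suzuki describes maps naturally onto Magenta’s note-sequence data model. The snippet below is a hedged sketch of only the first step, turning the player’s taps into a seed melody using the note_seq library; the installation’s actual code is not public, so the generation step is indicated in a comment rather than guessed at.

```python
import note_seq  # Magenta's standalone note-sequence library

# Step 1: the player's taps on the centre panel become a short seed melody.
seed = note_seq.NoteSequence()
seed.tempos.add(qpm=120)
for i, pitch in enumerate([60, 62, 64, 67]):  # C, D, E, G: one tapped note per half second
    seed.notes.add(pitch=pitch, velocity=80,
                   start_time=i * 0.5, end_time=(i + 1) * 0.5)
seed.total_time = 2.0

# Step 2 (not shown): a Magenta sequence model such as MelodyRNN would be asked
# to continue this seed, producing the composition shown on the right panel,
# which the left panel then lets the player reshape with effects and beats.
```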
How machine learning and artificially generated images might replace photography as we know it.
When people hear the words ‘AI’, ‘machine learning’ or ‘bot’, most tend to picture a walking, talking android straight out of a sci-fi movie and immediately assume such technology belongs to the distant future.