EmoNet, a neural network model, was able to accurately classify images into 11 emotion categories.
The EmoNet research study demonstrates how AI can measure emotional significance.
Artificial intelligence might one day be able to communicate our emotions better than we do. EmoNet, a neural network model developed by researchers at the University of Colorado and Duke University, was able to accurately classify images into 11 different emotion categories.
A neural network is a computer model that learns to map input signals to an output of interest by learning a series of filters, according to Philip Kragel, one of the researchers on the study. For example, a network trained to detect bananas would learn features unique to them, such as shape and color.
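Kragel's description can be made concrete with a toy sketch. The following is a hypothetical illustration, not the EmoNet model itself: a tiny two-layer network whose weights act as the learned "filters," trained on made-up feature data (a "yellowness" and an "elongation" score) to recognize banana-like inputs.

```python
# Toy illustration of a neural network learning "filters" -- NOT EmoNet.
# Inputs are two invented features: [yellowness, elongation], each in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: bananas are both yellow and elongated.
X = np.array([
    [0.9, 0.8],  # banana
    [0.8, 0.9],  # banana
    [0.2, 0.3],  # apple
    [0.1, 0.9],  # cucumber (elongated but not yellow)
    [0.9, 0.1],  # lemon (yellow but not elongated)
    [0.3, 0.2],  # plum
])
y = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units; these weights play the role of learned filters.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()   # predicted probability of "banana"
    # Backpropagate the cross-entropy gradient through both layers.
    grad_out = (p - y)[:, None] / len(y)
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

def predict(x):
    """Probability that feature vector x is a banana."""
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2).ravel()[0]

print(predict(np.array([0.85, 0.85])))  # yellow and elongated: high
print(predict(np.array([0.15, 0.20])))  # neither: low
```

EmoNet works on raw pixels at far larger scale, but the principle is the same: the network adjusts its internal weights until they pick out the features that distinguish one category from another.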
Early tests suggest artificial intelligence can improve patient care in hospitals’ intensive care units while helping curb “alarm fatigue.” Woody Harrington / for NBC News
Early tests show artificial “assistants” can help doctors and nurses spot potentially deadly problems in time to take life-saving action.
From interpreting CT scans to diagnosing eye disease, artificial intelligence is taking on medical tasks once reserved for only highly trained medical specialists — and in many cases outperforming its human counterparts.
Now AI is starting to show up in intensive care units, where hospitals treat their sickest patients. Doctors who have used the new systems say AI may be better at responding to the vast trove of medical data collected from ICU patients — and may help save patients who are teetering between life and death.
IBM recently developed three artificial intelligence tools that could help medical researchers fight cancer.
Now, the company has decided to make all three tools open-source, meaning scientists will be able to use them in their research whenever they please, according to ZDNet. The tools are designed to streamline the cancer drug development process and help scientists stay on top of newly published research — so, if they prove useful, it could mean more cancer treatments coming through the pipeline more rapidly than before.
First programmable memristor computer aims to bring AI processing down from the cloud
The memristor array chip plugs into the custom computer chip, forming the first programmable memristor computer. The team demonstrated that it could run three standard types of machine learning algorithms. Credit: Robert Coelius, Michigan Engineering
The first programmable memristor computer—not just a memristor array operated through an external computer—has been developed at the University of Michigan.
It could lead to the processing of artificial intelligence directly on small, energy-constrained devices such as smartphones and sensors. A smartphone AI processor would mean that voice commands would no longer have to be sent to the cloud for interpretation, speeding up response time.
By identifying patterns in successful rehousing, a research team in L.A. is working to make the housing system more efficient
In Hollywood, nestled between a strip mall and a recording studio where bands like the Rolling Stones have recorded, the residents of a small homeless encampment greet passersby with a friendly “Hi, hello, how are you doing?”
Some people respond in kind; others seem nervous and terse. But according to one of the most outgoing people here, Cedric — who didn’t want to give his last name — they simply hope that if their neighbors see them as friendly and nonthreatening, they won’t call the cops and have their tents removed. L.A. police and the Bureau of Sanitation have become increasingly strict about the “cleanup” of homeless encampments, even though most residents here have nowhere to move to.
Los Angeles has the second largest homeless population in the U.S. after New York, with an estimated 52,765 homeless individuals in 2018. The numbers are compiled by the Los Angeles Homeless Services Authority (LAHSA), a city agency that helps get people off the streets — and LAHSA says the number of people experiencing homelessness for the first time is increasing.
In an initiative started in January 2018, LAHSA is now sharing data from the Homeless Management Information System (HMIS) with researchers at the Center for Artificial Intelligence in Society (CAIS) at the University of Southern California. The researchers are using the data to build a system that can identify behaviors and outcomes, and allocate the type of housing with the greatest statistical chance of long-term success, while also reducing racial discrimination in the system. The project — Housing Allocation for Homeless Persons: Fairness, Transparency, and Efficiency in Algorithmic Design — brings together researchers from both the engineering and social work schools.
The spread of intelligent machines will worsen geographic inequality, unless we take proactive measures
Historically, the worst times for labor have been those characterized by both worker-replacing technological change and slow productivity growth. If A.I. technologies turn out to be as brilliant as some of us think, we can expect some workers to see their incomes vanish in the process — even as new jobs are created elsewhere in the economy. That is what has happened in recent years, and it is also what happened during the most tumultuous years of industrialization.
If current trends continue in the coming years, the divide between the automation winners and losers will become even wider. And there are good reasons to think that it will. Looking at the automatability of existing jobs, we have seen that most occupations that require a college degree remain hard to automate, while many unskilled jobs — like those of cashiers, food preparers, call center agents, and truck drivers — seem set to vanish, though how soon is highly uncertain. But there are also unskilled jobs that remain outside the realms of A.I. Many in-person service jobs that center on complex social interactions — like those of fitness trainers, hairstylists, concierges, and massage therapists — will remain safe from automation.
“It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants.”
Since we can’t travel billions of years back in time — not yet, anyway — one of the best ways to understand how our universe evolved is to create computer simulations of the process using what we do know about it.
Most of those simulations fall into one of two categories: slow and more accurate, or fast and less accurate. But now, an international team of researchers has built an AI that can quickly generate highly accurate, three-dimensional simulations of the universe — even when they tweak parameters the system wasn’t trained on.
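The slow-versus-fast trade-off the researchers are attacking is the classic "emulator" problem. Here is a deliberately simplified sketch of the idea, with a polynomial surrogate standing in for the team's neural network, and a cheap numerical integral standing in for an expensive cosmological simulation (all names and numbers here are invented for illustration):

```python
# Toy "emulator" sketch -- NOT the researchers' model.
# A cheap surrogate is fit to a slow numerical routine, then used
# to answer new queries at a tiny fraction of the cost.
import numpy as np

def slow_model(x):
    # Stand-in for an expensive simulation: a fine-grained
    # Riemann-sum integration of exp(-t^2 * x) over [0, 3].
    t = np.linspace(0.0, 3.0, 200_000)
    dt = t[1] - t[0]
    return np.sum(np.exp(-t**2 * x)) * dt

# Run the slow model at a handful of parameter values ("training data").
params = np.linspace(0.5, 2.0, 8)
outputs = np.array([slow_model(p) for p in params])

# Fit a cheap polynomial surrogate (the "emulator") to those samples.
coeffs = np.polyfit(params, outputs, deg=4)
emulator = np.poly1d(coeffs)

# The emulator evaluates almost instantly and closely tracks the
# slow model for parameters inside the sampled range.
x = 1.3
print(abs(emulator(x) - slow_model(x)))  # small discrepancy
```

The surprising result in the study is the part this sketch does not capture: their neural network stayed accurate even for parameter settings outside what it was trained on, which simple surrogates like this one generally cannot do.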
Endel highlights how AI could change the way music is both created and experienced.
Artificial intelligence and machine learning are slowly but surely infiltrating multiple industries, weaving their way into our daily lives. Medical professionals are using deep learning models to identify cancer, weak AI to construct better buildings, and machine learning to drive the world of robotics.
Another day, another deepfake: but this time they can sing.
Finally, technology that can make Rasputin sing like Beyoncé
New research from Imperial College London and Samsung’s AI research center in the UK shows how a single photo and audio file can be used to generate a singing or talking video portrait. Like previous deepfake programs we’ve seen, the researchers use machine learning to generate their output. And although the fakes are far from 100 percent realistic, the results are amazing considering how little data is needed.
Cassie Kozyrkov, “chief decision scientist” at Google, speaking at AI Summit (London) 2019
Google’s chief decision scientist: Humans can fix AI’s shortcomings
Cassie Kozyrkov has served in various technical roles at Google over the past five years, but she now holds the somewhat curious position of “chief decision scientist.” Decision science sits at the intersection of data and behavioral science and involves statistics, machine learning, psychology, economics, and more.
In effect, this means Kozyrkov helps Google push a positive AI agenda — or, at the very least, convince people that AI isn’t as bad as the headlines claim.
An example of a manipulated photo, the defects spotted by the algorithm, and the original image. Credit: Adobe
Though it’s just a research project for the moment.
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
It’s the latest sign the company is committing more resources to this problem. Last year its engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects.