Russian researchers from HSE University and Open University for the Humanities and Economics have demonstrated that artificial intelligence can infer people’s personality from ‘selfie’ photographs better than human raters do. Conscientiousness emerged as more easily recognizable than the other four traits, and personality predictions based on female faces proved more reliable than those for male faces. The technology could be used to find the ‘best matches’ in customer service, dating or online tutoring.
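Studies like this typically cast trait prediction as regression from some numeric face representation to continuous Big Five scores. A minimal sketch of that setup, with synthetic "embedding" vectors standing in for real face features (the data, dimensions, and learning rate are all invented for illustration and are not the authors' model):

```python
import random

random.seed(0)

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Synthetic 3-D "face embeddings" with a conscientiousness score that,
# by construction, depends linearly on them plus a little noise.
true_w = [0.7, -0.2, 0.4]
samples = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(3)]
    y = dot(true_w, x) + random.gauss(0, 0.1)
    samples.append((x, y))

# Least-mean-squares regression: the simplest possible trait predictor.
w = [0.0, 0.0, 0.0]
for _ in range(50):                       # epochs
    for x, y in samples:
        err = dot(w, x) - y
        w = [wi - 0.01 * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])  # converges close to true_w
```

Real pipelines replace the synthetic vectors with features from a face-recognition network, but the regression step at the end looks much like this.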
‘We want to make anything and everything on the platform shoppable’
Facebook is launching what it’s calling a “universal product recognition model” that uses artificial intelligence to identify consumer goods, from furniture to fast fashion to fast cars.
It’s the first step toward a future where the products in every image on its site can be identified and potentially shopped for. “We want to make anything and everything on the platform shoppable, whenever the experience feels right,” Manohar Paluri, head of Applied Computer Vision at Facebook, told The Verge. “It’s a grand vision.”
In a time of COVID-19 disruption, futurists can accelerate organizational recovery and build capacity. When partnered with purpose-built AI, augmented intelligence can also spur radical innovation.
Machine learning, task automation and robotics are already widely used in business. These and other AI technologies are about to multiply, and we look at how organizations can best take advantage of them.
COVID-19 disruption has left enterprises with no choice but to reassess digital transformation investments and roadmaps. While less important projects are delayed, transformation projects involving AI and automation are receiving renewed attention. In just the last 60 days, adoption of AI technologies across the enterprise has surged with an unprecedented sense of urgency.
One area where AI can make a tremendous impact — yet one we’re not really talking about — is modeling future scenarios based on the myriad new data stemming from pandemic disruption. Beyond automation, adding an AI Futurist as a virtual strategic advisor to the C-Suite can help executives navigate this Novel Economy as it takes shape over the next 36 months. In a time when no playbook, expertise, or best practices exist, perhaps this is AI’s moment to shine.
Machine-learning models trained on normal behavior are showing cracks — forcing humans to step in to set them straight.
In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.
When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in this simple graph.
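A shift like the one in Nozzle’s graph can be caught automatically by comparing recent search or purchase frequencies against a pre-pandemic baseline. A minimal sketch using total variation distance (the categories, frequencies, and alert threshold below are illustrative, not Nozzle’s method):

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions
    over the same category space (0 = identical, 1 = disjoint)."""
    cats = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0) - q.get(c, 0)) for c in cats)

# Toy search-share distributions, before and during the pandemic.
baseline = {"phone case": 0.4, "charger": 0.35, "lego": 0.25}
current  = {"face mask": 0.5, "sanitizer": 0.3, "phone case": 0.2}

drift = total_variation(baseline, current)
print(round(drift, 2))  # 0.8 for these toy distributions
if drift > 0.3:  # illustrative alert threshold
    print("distribution shift detected: retrain or fall back to humans")
```

Monitoring a statistic like this is one common way teams decide when a model trained on “normal” behavior can no longer be trusted.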
Researchers unveil electronics that mimic the human brain in efficient learning
A graphic depiction of protein nanowires (green), harvested from the microbe Geobacter (orange), which enable the electronic memristor device (silver) to function at biological voltages, emulating the neuronal components (blue junctions) in a brain. Credit: UMass Amherst/Yao lab
Only 10 years ago, scientists working on what they hoped would open a new frontier of neuromorphic computing could only dream of a device using miniature tools called memristors that would function like real brain synapses.
But now a team at the University of Massachusetts Amherst has discovered, while on their way to better understanding protein nanowires, how to use these biological, electricity conducting filaments to make a neuromorphic memristor, or “memory transistor,” device. It runs extremely efficiently on very low power, as brains do, to carry signals between neurons. Details are in Nature Communications.
THE “DIGITAL COURT” WOULD USE SMART CONTRACTS TO RESOLVE DISPUTES
Researchers from the University of Tokyo have developed a “digital court” mechanism that uses blockchain-powered smart contracts to settle legal disputes.
Despite the huge contributions of deep learning to the field of artificial intelligence, there’s something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn’t emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.
3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify performance, which can take up to 30 hours for a single floor plan.
Time lag: Because of the time investment put into each chip design, chips are traditionally designed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they’ve been limited in their ability to optimize across multiple goals, including the chip’s power draw, computational performance, and area.
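The wire-minimization proxy that engineers (and the new algorithm) optimize is commonly formalized as half-perimeter wirelength (HPWL): for each net, the half-perimeter of the bounding box around the components it connects. A toy sketch (the components, coordinates, and nets are invented):

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: for every net, sum the half-perimeter
    of the bounding box enclosing all components the net connects."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Toy placement: component name -> (x, y) grid position.
placement = {"cpu": (0, 0), "ram": (3, 4), "io": (1, 2)}
nets = [("cpu", "ram"), ("cpu", "io"), ("ram", "io")]

print(hpwl(placement, nets))  # 14 for this toy layout
```

A placement optimizer, whether a human engineer, simulated annealing, or a reinforcement-learning policy, can use a score like this as (part of) its objective.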
While smell-o-vision may be a long way from being ready for your PC, Intel is partnering with Cornell University to bring it closer to reality. Intel’s Loihi neuromorphic research chip, a powerful electronic nose with a wide range of applications, can recognize dangerous chemicals in the air.
“In the future, portable electronic nose systems with neuromorphic chips could be used by doctors to diagnose diseases, by airport security to detect weapons and explosives, by police and border control to more easily find and seize narcotics, and even to create more effective at home smoke and carbon monoxide detectors,” Intel said in a press statement.
With machine learning, Loihi can recognize hazardous chemicals “in the presence of significant noise and occlusion,” Intel said, suggesting the chip can be used in the real world where smells — such as perfumes, food, and other odors — are often found in the same area as a harmful chemical. Loihi learned to identify each hazardous odor from just a single sample, and learning a new smell didn’t disrupt previously learned scents.
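In software terms, that one-shot, non-disruptive learning resembles a prototype classifier: each odor is stored as a single reference vector, and adding a new odor never modifies the old ones. A simplified sketch (the sensor vectors are made up, and Loihi’s actual spiking implementation works very differently):

```python
import math

class OneShotNose:
    """Store one prototype vector per odor; classify a reading by its
    nearest prototype. Learning a new odor adds a prototype without
    touching existing ones, so old scents are never disrupted."""

    def __init__(self):
        self.prototypes = {}

    def learn(self, label, sample):
        self.prototypes[label] = sample  # single-sample learning

    def classify(self, reading):
        return min(self.prototypes,
                   key=lambda lbl: math.dist(self.prototypes[lbl], reading))

nose = OneShotNose()
nose.learn("ammonia", [0.9, 0.1, 0.2])
nose.learn("acetone", [0.1, 0.8, 0.3])
print(nose.classify([0.85, 0.15, 0.25]))  # noisy reading -> "ammonia"
```

Because each class lives in its own slot, this toy model sidesteps the “catastrophic forgetting” that retraining a shared network can cause.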
Three humans and a robot form a team and start playing a game together. No, this isn’t the beginning of a joke, it’s the premise of a fascinating new study just released by Yale University.
Researchers were interested to see how the robot’s actions and statements would influence the three humans’ interactions with one another. They discovered that when the robot wasn’t afraid to admit it had made a mistake, this display of vulnerability led to more open communication among the people involved as well.
Google is creating AI-powered robots that navigate without human intervention—a prerequisite to being useful in the real world.
Within 10 minutes of its birth, a baby fawn is able to stand. Within seven hours, it is able to walk. Between those two milestones, it engages in a highly adorable, highly frenetic flailing of limbs to figure it all out.
That’s the idea behind AI-powered robotics. While autonomous robots, like self-driving cars, are already a familiar concept, autonomously learning robots are still just an aspiration. Existing reinforcement-learning algorithms that allow robots to learn movements through trial and error still rely heavily on human intervention. Every time the robot falls down or walks out of its training environment, it needs someone to pick it up and set it back to the right position.
Now a new study from researchers at Google has made an important advancement toward robots that can learn to navigate without this help. Within a few hours, relying purely on tweaks to current state-of-the-art algorithms, they successfully got a four-legged robot to learn to walk forward and backward, and turn left and right, completely on its own.
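One way to picture removing the human from the loop is to fold the “reset” into the task itself: when the robot nears the edge of its training area, the controller schedules the skill that walks it back toward the center. A toy 1-D sketch of that scheduling idea (the environment and skill names are invented; Google’s system uses real learned locomotion policies):

```python
import random

BOUND = 5  # the robot must stay within [-BOUND, BOUND]

def pick_skill(position):
    """Schedule the skill that keeps training inside the workspace:
    practice freely near the center, walk back when near an edge."""
    if position > BOUND - 1:
        return "walk_backward"
    if position < -BOUND + 1:
        return "walk_forward"
    return random.choice(["walk_forward", "walk_backward"])

position, human_resets = 0, 0
for step in range(1000):
    skill = pick_skill(position)
    position += 1 if skill == "walk_forward" else -1
    if abs(position) > BOUND:   # leaving the area would need a human
        human_resets += 1
        position = 0

print(human_resets)  # the scheduler keeps this at zero
```

The interesting property is that the “reset” behavior is just another skill being practiced, so every step is training data and no step requires a person.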
A device like the one in the study (right), and an electron microscope image showing the device’s neuron-like arrangement of nanowires.
UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.
Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain—learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports, describes a network in a state of continuous flux.
“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”