Despite deep learning’s enormous contributions to the field of artificial intelligence, there’s something very wrong with it: it requires vast amounts of data. This is one thing that both the pioneers and the critics of deep learning agree on. In fact, deep learning didn’t emerge as the leading AI technique until a few years ago, because useful data was scarce and the computing power needed to process it was in short supply.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.
3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers manually design configurations that minimize the amount of wire used between components, a proxy for efficiency. They then use electronic design automation software to simulate and verify the design’s performance, which can take up to 30 hours for a single floor plan.
Time lag: Because of the time invested in each chip design, chips are traditionally designed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they’ve been limited in their ability to optimize across multiple goals, including the chip’s power draw, computational performance, and area.
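The wirelength proxy and competing objectives described above can be folded into a single cost function for an optimizer to minimize. The sketch below is purely illustrative, with an assumed half-perimeter wirelength (HPWL) proxy, made-up weights, and a toy netlist; it is not the actual system’s reward:

```python
# Illustrative sketch (not the actual RL placer): a multi-objective
# placement cost of the kind such an agent might minimize. The HPWL
# proxy, the weights, and the toy netlist are all assumptions.

def hpwl(net, positions):
    """Half-perimeter wirelength of one net: bounding-box width + height."""
    xs = [positions[c][0] for c in net]
    ys = [positions[c][1] for c in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def placement_cost(positions, netlist, power, area,
                   w_wire=1.0, w_power=0.1, w_area=0.05):
    """Weighted sum of wirelength, power draw, and area; the negative of
    this value could serve as the episode reward for an RL agent."""
    wire = sum(hpwl(net, positions) for net in netlist)
    return w_wire * wire + w_power * power + w_area * area

# Toy example: three components, two nets.
positions = {"cpu": (0, 0), "cache": (2, 1), "io": (5, 3)}
netlist = [("cpu", "cache"), ("cache", "io")]
print(placement_cost(positions, netlist, power=40.0, area=25.0))  # → 13.25
```

Balancing the weights is the hard part in practice: favoring wirelength too heavily can push components into dense clusters that worsen power and routability.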
Even finance is being affected by the onslaught of human vs. machine, with a recent Deloitte survey revealing some startling statistics.
The finance function is experiencing rapid change, and a recent Deloitte survey found that 73% of respondents plan to implement technology to replace humans in their workforce this year, up from 58% a year ago.
While the finance workforce will grow smaller, companies will need to retrain existing staff and bring in new skills that typically aren’t found in the finance department, according to a new Deloitte report.
Three humans and a robot form a team and start playing a game together. No, this isn’t the beginning of a joke; it’s the premise of a fascinating new study just released by Yale University.
Researchers wanted to see how the robot’s actions and statements would influence the three humans’ interactions with one another. They discovered that when the robot wasn’t afraid to admit it had made a mistake, this display of vulnerability led to more open communication among the people as well.
Recent data out of the World Economic Forum in Davos has shed new light on the role that AI and customer service are playing in shaping the future of work. Jobs of Tomorrow: Mapping Opportunity in the New Economy provides much-needed insights into emerging global employment opportunities and the skill sets needed to maximize those opportunities. Interestingly, the report, supported by data from LinkedIn, found that demand for both “digital” and “human” factors is fueling growth in the jobs of tomorrow, raising important considerations for a breadth of industries worldwide.
The report predicts that over the next three years, 37% of job openings in emerging professions will be in the care economy; 17% in sales, marketing, and content; 16% in data and AI; 12% in engineering and cloud computing; and 8% in people and culture. The roles with the fastest projected growth include specialists in both AI and customer success, underscoring the need for technology, yes, but technology that incorporates the human touch.
Google is creating AI-powered robots that navigate without human intervention—a prerequisite to being useful in the real world.
Within 10 minutes of its birth, a baby fawn is able to stand. Within seven hours, it is able to walk. Between those two milestones, it engages in a highly adorable, highly frenetic flailing of limbs to figure it all out.
That’s the idea behind AI-powered robotics. While autonomous robots, like self-driving cars, are already a familiar concept, autonomously learning robots are still just an aspiration. Existing reinforcement-learning algorithms that allow robots to learn movements through trial and error still rely heavily on human intervention. Every time the robot falls down or walks out of its training environment, it needs someone to pick it up and set it back to the right position.
Now a new study from researchers at Google has made an important advancement toward robots that can learn to navigate without this help. Within a few hours, relying purely on tweaks to current state-of-the-art algorithms, they successfully got a four-legged robot to learn to walk forward and backward, and turn left and right, completely on its own.
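One way around the constant need for human resets, roughly in the spirit of such reset-free training, is to let the choice of training task double as the reset mechanism: when the robot drifts toward the edge of its workspace, the scheduler picks the locomotion task that moves it back toward the center. This is a minimal sketch under invented assumptions; the task set, workspace size, and displacement model are all stand-ins, not the study’s actual setup:

```python
# Illustrative sketch of the reset-free idea (assumptions, not the paper's
# code): near a workspace boundary, train the task that returns the robot
# toward the center, so no human has to reposition it between trials.

import random

TASKS = {"forward": +1.0, "backward": -1.0}  # direction each task pushes x

def pick_task(x, boundary=5.0):
    """Choose the next training task; near a boundary, pick the one that
    steers the robot back toward the center instead of requiring a reset."""
    if x > boundary:
        return "backward"
    if x < -boundary:
        return "forward"
    return random.choice(list(TASKS))

def train(steps=200, seed=0):
    """Simulate many episodes; x is the robot's position along one axis."""
    random.seed(seed)
    x = 0.0
    for _ in range(steps):
        task = pick_task(x)
        x += TASKS[task] * random.uniform(0.5, 1.5)  # crude stand-in for a rollout
    return x

final_x = train()
print(abs(final_x) < 7.0)  # → True: the scheduler keeps the robot in bounds
```

The same trick generalizes: since walking backward undoes walking forward, every task in the curriculum has a counterpart that can serve as its reset.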
Less publicly than the Pentagon’s Joint Artificial Intelligence Center, the intelligence community has been developing its own set of principles for the ethical use of artificial intelligence.
The Pentagon made headlines last month when it adopted its five principles for using artificial intelligence, marking the end of a months-long effort over what guidelines the department should follow as it develops new AI tools and AI-enabled technologies.
Less well known is that the intelligence community is developing its own principles governing the use of AI.
“The intelligence community has been doing its own work in this space as well. We’ve been doing it for quite a bit of time,” Ben Huebner, chief of the Office of the Director of National Intelligence’s Civil Liberties, Privacy, and Transparency Office, said at an Intelligence and National Security Alliance event March 4.
In warehouses, call centers, and other sectors, intelligent machines are managing humans, and they’re making work more stressful, grueling, and dangerous
On conference stages and at campaign rallies, tech executives and politicians warn of a looming automation crisis — one where workers are gradually, then all at once, replaced by intelligent machines. But their warnings mask the fact that an automation crisis has already arrived. The robots are here, they’re working in management, and they’re grinding workers into the ground.
The robots are watching over hotel housekeepers, telling them which room to clean and tracking how quickly they do it. They’re managing software developers, monitoring their clicks and scrolls and docking their pay if they work too slowly. They’re listening to call center workers, telling them what to say, how to say it, and keeping them constantly, maximally busy. While we’ve been watching the horizon for the self-driving trucks, perpetually five years away, the robots arrived in the form of the supervisor, the foreman, the middle manager.
These automated systems can detect inefficiencies that a human manager never would — a moment’s downtime between calls, a habit of lingering at the coffee machine after finishing a task, a new route that, if all goes perfectly, could get a few more packages delivered in a day. But for workers, what look like inefficiencies to an algorithm were their last reserves of respite and autonomy, and as these little breaks and minor freedoms get optimized out, their jobs are becoming more intense, stressful, and dangerous. Over the last several months, I’ve spoken with more than 20 workers in six countries. For many of them, their greatest fear isn’t that robots might come for their jobs: it’s that robots have already become their boss.
In few sectors are the perils of automated management more apparent than at Amazon. Almost every aspect of management at the company’s warehouses is directed by software, from when people work to how fast they work to when they get fired for falling behind. Every worker has a “rate,” a certain number of items they have to process per hour, and if they fail to meet it, they can be automatically fired.
As Sunday morning, January 1, 2040, dawns, Coloradans will wake up to a breakfast of lab-cultured sausage, mung bean–based eggs, and tiger-nut-flour banana bread—all prepared by robots who talk like Alexa’s much smarter granddaughter. There is no kale in sight, and almond milk was banned long ago for being an environmental threat.
The first month of the year is still filled with new diets, new calendars, new dire warnings, and the traditional predictions from culinary prognosticators.
I’ve been the guy predicting the next big food thing in newspapers and magazines since the early 1980s. See how official I just sounded?
Admittedly, I’m a food data geek who soaks up stats from the market research firm NPD Group, Whole Foods, food industry insight source Technomic, Forbes, the National Restaurant Association, and similar sources. Tell me what you’ll eat, and I’ll tell you who you’ll be.
Looking forward 20 years in nutrition, there are dining, grocery shopping, and farming trends that I think will be going strong.
[Image: Escherichia coli bacteria, coloured green, in a scanning electron micrograph.]
A pioneering machine-learning approach has identified powerful new types of antibiotic from a pool of more than 100 million molecules — among them one that works against a wide range of bacteria, including the pathogen that causes tuberculosis and strains considered untreatable.
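At heart, that kind of virtual screening is a rank-and-filter step: a trained model assigns each molecule in the pool a predicted-activity score, and only the top candidates go on to lab testing. The sketch below is purely illustrative; the scoring function, toy fingerprints, and molecule names are stand-ins, not the study’s model or data:

```python
# Hypothetical sketch of the screening step. The "model" here is a toy
# stand-in; a real pipeline would use a trained neural network over
# learned molecular representations.

def predict_activity(fingerprint):
    """Stand-in for a trained model: the fraction of 'on' bits in a
    toy binary molecular fingerprint."""
    return sum(fingerprint) / len(fingerprint)

def screen(pool, threshold=0.5, top_k=2):
    """Rank molecules by predicted activity and keep the best candidates."""
    scored = [(name, predict_activity(fp)) for name, fp in pool.items()]
    hits = [(n, s) for n, s in scored if s >= threshold]
    return sorted(hits, key=lambda t: t[1], reverse=True)[:top_k]

pool = {
    "mol_a": [1, 1, 0, 1],  # invented molecule names and fingerprints
    "mol_b": [0, 0, 1, 0],
    "mol_c": [1, 1, 1, 1],
}
print(screen(pool))  # → [('mol_c', 1.0), ('mol_a', 0.75)]
```

The appeal of the approach is throughput: a model that scores a molecule in microseconds can triage a pool of 100 million candidates that no wet lab could ever test exhaustively.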
Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence. The executive and founder tweeted on Monday evening that “all org[anizations] developing advanced AI should be regulated, including Tesla.”
Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. At first, OpenAI was formed as a nonprofit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly interested few (i.e., for-profit technology companies).