What can AI learn from human intelligence?

At HAI’s fall conference, scholars discussed novel ways AI can learn from human intelligence – and vice versa.

Can we teach robots to generalize their learning? How can algorithms become more commonsensical? Can a child’s learning style influence AI?

Stanford Institute for Human-Centered Artificial Intelligence’s fall conference considered those and other questions to understand how to mutually improve and better understand artificial and human intelligence. The event featured the theme of “triangulating intelligence” among the fields of AI, neuroscience, and psychology to develop research and applications for large-scale impact.

HAI faculty associate directors Christopher Manning, a Stanford professor of machine learning, linguistics, and computer science, and Surya Ganguli, a Stanford associate professor of neurobiology, served as hosts and panel moderators for the conference, which was co-sponsored by Stanford’s Wu Tsai Neurosciences Institute, Department of Psychology, and Symbolic Systems program.

Speakers described cutting-edge approaches—some established, some new—for creating a two-way flow of insights between research on human and machine intelligence, with an eye toward powerful applications. Here are some of their key takeaways.

Continue reading… “What can AI learn from human intelligence?”

If you train robots like dogs, they learn faster

Instead of needing a month, the robot mastered new “tricks” in just days with reinforcement learning.

Treats-for-tricks works for training dogs — and apparently AI robots, too.

That’s the takeaway from a new study out of Johns Hopkins, where researchers have developed a new training system that allowed a robot to quickly learn how to do multi-step tasks in the real world — by mimicking the way canines learn new tricks.
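
The treats-for-tricks idea maps naturally onto reinforcement learning, where each correct move earns a reward signal. Below is a minimal, hypothetical Python sketch of that pattern using tabular Q-learning; the toy task, action sequence, and constants are invented for illustration and are not the Johns Hopkins system.

    import numpy as np

    # Tabular Q-learning on a toy five-step "trick": at each step the robot must
    # pick the right action to earn a reward (its "treat").
    n_steps, n_actions = 5, 4
    correct = [2, 0, 3, 1, 2]            # hypothetical action sequence that completes the trick
    Q = np.zeros((n_steps, n_actions))   # value estimate for each (step, action) pair
    rng = np.random.default_rng(0)

    for episode in range(2000):
        for s in range(n_steps):
            # epsilon-greedy: mostly exploit current estimates, occasionally explore
            a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
            reward = 1.0 if a == correct[s] else 0.0          # "treat" for the right move
            next_best = Q[s + 1].max() if s + 1 < n_steps else 0.0
            Q[s, a] += 0.1 * (reward + 0.9 * next_best - Q[s, a])

    print([int(Q[s].argmax()) for s in range(n_steps)])       # learned action sequence

In this toy setup the reward-driven updates quickly concentrate value on the correct action at each step, loosely mirroring how dense, treat-like feedback can cut training time from weeks to days.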

Continue reading… “If you train robots like dogs, they learn faster”

Harnessing deep neural networks to predict future self-harm based on clinical notes

According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the U.S., with over 1.4 million suicide attempts recorded in 2018. Although effective treatments are available for those at risk, clinicians do not have a reliable way of predicting which patients are likely to make a suicide attempt.

Researchers at the Medical University of South Carolina and University of South Florida report in JMIR Medical Informatics that they have taken important steps toward addressing the problem by creating an artificial intelligence algorithm that can automatically identify patients at high risk of intentional self-harm, based on the information in the clinical notes in the electronic health record.

The study was led by Jihad Obeid, M.D., co-director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Ph.D., formerly at MUSC and currently an assistant professor in the Department of Psychiatry and Behavioral Neurosciences at the University of South Florida.
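
The study itself used deep neural networks trained on de-identified clinical notes; as a much simpler stand-in, the hypothetical Python sketch below shows the general shape of such a pipeline with a bag-of-words classifier. All notes and labels here are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy notes; real work would use de-identified EHR text and a deep model.
    notes = [
        "patient reports persistent hopelessness and a prior overdose attempt",
        "routine follow-up, wound healing well, no acute complaints",
        "expresses thoughts of self-harm, safety plan discussed with psychiatry",
        "annual physical, labs within normal limits",
    ]
    labels = [1, 0, 1, 0]   # 1 = documented self-harm risk, 0 = none

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(notes, labels)

    new_note = ["patient denies suicidal ideation, mood improved on current medication"]
    print(clf.predict_proba(new_note))   # [P(no risk), P(risk)] for the new note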

Continue reading… “Harnessing deep neural networks to predict future self-harm based on clinical notes”

The next generation of Artificial Intelligence

Yann LeCun, one of the godfathers of AI

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field—and society—in the years ahead. Study up now.

Continue reading… “The next generation of Artificial Intelligence”

Neural network trained to control anesthetic doses, keep patients under during surgery

To define how the world should look, neural networks are making up their own rules

Researchers demonstrate how deep learning could eventually replace traditional anesthetic practices.

Academics from the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have demonstrated how neural networks can be trained to administer anesthetic during surgery.

Over the past decade, machine learning (ML), artificial intelligence (AI), and deep learning algorithms have been developed and applied to a range of sectors and applications, including in the medical field.
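
The excerpt does not describe the MIT and Massachusetts General Hospital training approach, but one common way to train a dosing policy is reinforcement learning against a simulated patient model. The Python sketch below is a hypothetical illustration only, using a toy one-compartment pharmacokinetic model and the cross-entropy method; none of the constants are clinically meaningful.

    import numpy as np

    # Toy one-compartment pharmacokinetic model; constants are invented, not clinical values.
    def run_episode(weights, steps=200, dt=0.1, target=0.6):
        """The policy maps (measured effect, target, bias) -> infusion rate."""
        effect, total_error = 0.0, 0.0
        for _ in range(steps):
            features = np.array([effect, target, 1.0])
            dose = max(0.0, float(weights @ features))   # non-negative infusion rate
            effect += dt * (dose - 0.3 * effect)         # drug uptake minus clearance
            total_error += abs(effect - target)
        return -total_error                               # reward: hold the target depth

    # Cross-entropy method: sample candidate policies, keep the best,
    # and refit the sampling distribution around them.
    rng = np.random.default_rng(0)
    mean, std = np.zeros(3), np.ones(3)
    for _ in range(30):
        candidates = rng.normal(mean, std, size=(64, 3))
        rewards = np.array([run_episode(w) for w in candidates])
        elite = candidates[np.argsort(rewards)[-8:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

    print("trained policy weights:", mean)
    print("final episode reward:", run_episode(mean))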

Continue reading… “Neural network trained to control anesthetic doses, keep patients under during surgery”

Self-driving cars will hit the Indianapolis Motor Speedway in a landmark A.I. race

Take a look at the ‘Road of the Future’

Next year, a squad of souped-up Dallara race cars will reach speeds of up to 200 miles per hour as they zoom around the legendary Indianapolis Motor Speedway to discover whether a computer could be the next Mario Andretti.

The planned Indy Autonomous Challenge—taking place in October 2021 in Indianapolis—is intended for 31 university computer science and engineering teams to push the limits of current self-driving car technology. There will be no human racers sitting inside the cramped cockpits of the Dallara IL-15 race cars. Instead, onboard computer systems will take their place, outfitted with deep-learning software enabling the vehicles to drive themselves.

In order to win, a team’s autonomous car must be able to complete 20 laps—which equates to a little less than 50 miles in distance—and cross the finish line first in 25 minutes or less. At stake is a $1 million prize, with second- and third-place winners receiving a $250,000 and $50,000 award, respectively.
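
Those numbers pin down the required pace. Assuming the 2.5-mile IMS oval (the article puts the 20 laps at a little under 50 miles), a quick back-of-the-envelope check in Python:

    laps = 20
    lap_length_miles = 2.5            # assumed length of the IMS oval
    time_limit_hours = 25 / 60        # 25 minutes

    distance_miles = laps * lap_length_miles
    min_average_mph = distance_miles / time_limit_hours
    print(distance_miles, min_average_mph)   # about 50 miles and a 120 mph minimum average

In other words, a winning car has to sustain roughly a 120 mph average, including any slowdowns, just to stay inside the 25-minute window.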

Continue reading… “Self-driving cars will hit the Indianapolis Motor Speedway in a landmark A.I. race”

Machine learning takes on synthetic biology: algorithms can bioengineer cells for you

Berkeley Lab scientists Tijana Radivojevic (left) and Hector Garcia Martin working on mechanistic and statistical modeling, data visualizations, and metabolic maps at the Agile BioFoundry last year.

If you’ve eaten vegan burgers that taste like meat or used synthetic collagen in your beauty routine—both products that are “grown” in the lab—then you’ve benefited from synthetic biology. It’s a field rife with potential, as it allows scientists to design biological systems to specification, such as engineering a microbe to produce a cancer-fighting agent. Yet conventional methods of bioengineering are slow and laborious, with trial and error being the main approach.

Now scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new tool that adapts machine learning algorithms to the needs of synthetic biology to guide development systematically. The innovation means scientists will not have to spend years developing a meticulous understanding of each part of a cell and what it does in order to manipulate it; instead, with a limited set of training data, the algorithms are able to predict how changes in a cell’s DNA or biochemistry will affect its behavior, then make recommendations for the next engineering cycle along with probabilistic predictions for attaining the desired goal.
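
The Berkeley Lab tool is not specified in this excerpt, but the loop it describes (learn from limited data, predict the effect of design changes, and recommend the next engineering cycle with probabilistic estimates) can be illustrated with a Gaussian-process model and an upper-confidence-bound recommendation rule. All features, titers, and designs below are made up for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Hypothetical training data: relative promoter strengths for three pathway
    # genes -> measured product titer (all numbers invented).
    X_train = np.array([[0.1, 0.8, 0.5],
                        [0.9, 0.2, 0.4],
                        [0.5, 0.5, 0.9],
                        [0.3, 0.7, 0.2],
                        [0.8, 0.9, 0.6]])
    y_train = np.array([1.2, 0.7, 2.1, 0.9, 1.8])     # titer in g/L

    gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

    # Score random candidate designs and recommend the ones with the best
    # upper-confidence-bound (predicted titer plus uncertainty), mirroring the
    # "probabilistic predictions for the next engineering cycle" idea.
    candidates = np.random.default_rng(1).uniform(0.0, 1.0, size=(1000, 3))
    pred_mean, pred_std = gp.predict(candidates, return_std=True)
    best = candidates[np.argsort(pred_mean + pred_std)[-5:]]
    print("designs recommended for the next build-test cycle:\n", best)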

Continue reading… “Machine learning takes on synthetic biology: algorithms can bioengineer cells for you”

Self-learning robot autonomously moves molecules, setting stage for molecular 3D printing

If you know even just a little bit about science, you probably already know that molecules are often referred to as “the building blocks of life.” Made of groups of atoms bonded together, molecules make up all kinds of materials, yet they behave very differently from the macroscopic objects we handle every day. Picture a LEGO model built from many tiny bricks: it is easy for us to move those bricks around, but if you think of molecules as the bricks, moving them is far harder, because each one essentially requires its own separate set of instructions.

Continue reading… “Self-learning robot autonomously moves molecules, setting stage for molecular 3D printing”

A robot wrote this entire article. Are you scared yet, human?

We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace

‘We are not plotting to take over the human populace.’

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

Continue reading… “A robot wrote this entire article. Are you scared yet, human?”

The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?”

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historic data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true “artificial intelligence,” we need machines that can “think” on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a “gut feeling” when something doesn’t add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.
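
One concrete way to give a system that kind of “gut feeling” is unsupervised anomaly detection: train only on what normal looks like, then flag inputs that do not fit. The short Python sketch below is a hypothetical example; the transaction features and numbers are invented.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical features for "normal" events: [amount, hour of day, merchant risk].
    rng = np.random.default_rng(0)
    normal_events = np.column_stack([rng.normal(50, 15, 500),
                                     rng.normal(13, 3, 500),
                                     rng.normal(0.2, 0.05, 500)])
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

    # A pattern the model has never seen: huge amount, 3 a.m., risky merchant.
    unseen = np.array([[4800.0, 3.0, 0.9]])
    print(detector.predict(unseen))          # -1 means "this doesn't add up"
    print(detector.score_samples(unseen))    # lower score = stronger anomaly signal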

Continue reading… “The fourth generation of AI is here, and it’s called ‘Artificial Intuition’”

A.I. can tell if you’re a good surgeon just by scanning your brain

Could a brain scan be the best way to tell a top-notch surgeon? Well, kind of. Researchers at Rensselaer Polytechnic Institute and the University at Buffalo have developed Brain-NET, a deep learning A.I. tool that can accurately predict a surgeon’s certification scores based on their neuroimaging data.

This certification score, part of the Fundamentals of Laparoscopic Surgery (FLS) program, is currently calculated manually using a formula that is extremely time- and labor-intensive. The idea behind it is to give an objective assessment of surgical skills, thereby demonstrating effective training.

“The Fundamental of Laparoscopic Surgery program has been adopted nationally for surgical residents, fellows and practicing physicians to learn and practice laparoscopic skills to have the opportunity to definitely measure and document those skills,” Xavier Intes, a professor of biomedical engineering at Rensselaer, told Digital Trends. “One key aspect of such [a] program is a scoring metric that is computed based on the time of the surgical task execution, as well as error estimation.”
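
Brain-NET’s actual architecture is not described in this excerpt. As a rough, hypothetical stand-in, the Python sketch below regresses a continuous skill score from flattened neuroimaging features with a small feed-forward network; all data here is synthetic and the network layout is an assumption, not the published model.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in data: 200 "subjects" x 64 flattened neuroimaging features,
    # and a continuous skill score loosely in a 0-100 range (not real FLS data).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = 50 + 10 * X[:, :4].sum(axis=1) + rng.normal(0, 2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(128, 32), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print("held-out R^2:", net.score(X_test, y_test))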

Continue reading… “A.I. can tell if you’re a good surgeon just by scanning your brain”

Helm.ai pioneers breakthrough “Deep Teaching” of neural networks

Helm.ai today announced a breakthrough in unsupervised learning technology. This new methodology, called Deep Teaching, enables Helm.ai to train neural networks without human annotation or simulation for the purpose of advancing AI systems. Deep Teaching offers far-reaching implications for the future of computer vision and autonomous driving, as well as industries including aviation, robotics, manufacturing and even retail.

Artificial intelligence, or AI, is commonly understood as the science of simulating human intelligence with machines. Supervised learning refers to the process of training neural networks to perform certain tasks using labeled examples, typically provided by a human annotator or a synthetic simulator, while unsupervised learning is the process of enabling AI systems to learn from unlabelled information and produce solutions without the assistance of pre-established input and output patterns.
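
That distinction can be made concrete with a toy example: the same data handled once with labels (supervised) and once without (unsupervised). The Python sketch below is only an illustration of the two paradigms, not Helm.ai’s Deep Teaching method; the data is invented.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Two made-up clusters of 2-D points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(4, 1, size=(100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    # Supervised: every training example comes with a human-provided label.
    supervised = LogisticRegression().fit(X, y)

    # Unsupervised: no labels at all; the algorithm must discover the structure itself.
    unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    print(supervised.predict([[4.2, 3.8]]))   # uses the labels it was given
    print(unsupervised.labels_[:10])          # groups it inferred on its own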

Continue reading… “Helm.ai pioneers breakthrough ‘Deep Teaching’ of neural networks”