Quantum machine learning (QML) is poised to make the leap in 2023

Classical machine learning (ML) algorithms have proven to be powerful tools for many tasks, including image and speech recognition, natural language processing (NLP), and predictive modeling. However, classical algorithms are limited by the constraints of classical computers and may struggle to handle large, complex data sets or to achieve a high degree of accuracy and precision.

Enter quantum machine learning (QML). 

QML combines the power of quantum computation with the predictive power of ML to overcome the limitations of classical algorithms and offer performance improvements. In their article “On the role of entanglement in speeding up quantum computing,” Richard Jozsa and Neil Linden, of the University of Bristol in the UK, write that “QML algorithms promise to provide exponential speedups over their classical counterparts for some of the most important tasks, such as data classification, feature selection, and cluster analysis. In particular, the use of quantum algorithms for supervised and unsupervised learning has the potential to revolutionize machine learning and artificial intelligence.”

Continue reading… “Quantum machine learning (QML) is poised to make the leap in 2023”

A robotic exoskeleton adapts to wearers to help them walk faster

The boot-like device uses machine learning to provide support for an individual with mobility problems.

By Rhiannon Williams

An exoskeleton that uses machine learning to adapt to its wearers’ gait could help make it easier for people with limited mobility to walk.

The exoskeleton, which resembles a motorized boot, is lightweight and allows the wearer to move relatively freely, both increasing their walking speed and reducing the amount of energy they use while moving.

Developed by researchers from Stanford University, it consists of cheap wearable sensors, a motor, and a small Raspberry Pi computer, powered by a rechargeable battery pack worn around the waist. The sensors are embedded into the boot to measure force and motion unobtrusively. 
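The article doesn't detail the Stanford team's optimization method, but the general idea of tuning assistance settings against a measured-effort signal can be sketched roughly as follows. Everything here is an illustrative stand-in: the effort function, the two normalized parameters, and the assumed optimum are invented for the sketch, not taken from the study.

```python
def simulated_effort(peak_torque, timing):
    # Hypothetical stand-in for the wearer's measured effort; the real
    # system would estimate this from the boot's force and motion sensors.
    # Assumed optimum: peak_torque=0.6, timing=0.5 (both normalized 0..1).
    return (peak_torque - 0.6) ** 2 + (timing - 0.5) ** 2

def tune_assistance(grid=21):
    """Exhaustively try assistance settings and keep the lowest-effort pair."""
    best, best_cost = None, float("inf")
    for i in range(grid):
        for j in range(grid):
            params = (i / (grid - 1), j / (grid - 1))
            cost = simulated_effort(*params)
            if cost < best_cost:
                best, best_cost = params, cost
    return best, best_cost
```

A real system would adapt online from walking data rather than sweep a grid, but the loop captures the shape of the problem: propose settings, measure effort, keep what helps.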

Continue reading… “A robotic exoskeleton adapts to wearers to help them walk faster”

An AI that can design new proteins could help unlock new cures and materials 

The machine-learning tool could help researchers discover entirely new proteins not yet known to science.

By Melissa Heikkilä


A new AI tool could help researchers discover previously unknown proteins and design entirely new ones. When harnessed, it could help unlock the development of more efficient vaccines, speed up the search for cancer cures, or lead to completely new materials.

Alphabet-owned AI lab DeepMind took the world by surprise in 2020 when it announced AlphaFold, an AI tool that used deep learning to solve one of the “grand challenges” of biology: accurately predicting the shapes of proteins. Proteins are fundamental to life, and understanding their shape is vital to working with them. Earlier this summer DeepMind announced that AlphaFold could now predict the shapes of all proteins known to science. 

The new tool, ProteinMPNN, described by a group of researchers from the University of Washington in two papers published in Science today (available here and here), offers a powerful complement to that technology. 

The papers are the latest example of how deep learning is revolutionizing protein design by giving scientists new research tools. Traditionally, researchers engineer proteins by tweaking those that occur in nature, but ProteinMPNN opens an entirely new universe of possible proteins for researchers to design from scratch. 

Continue reading… “An AI that can design new proteins could help unlock new cures and materials”

A machine learning system that is capable of virtually removing buildings from a live view

Fig.1. Overview of the proposed method. An image of the current landscape is acquired by the mobile terminal and sent to the server PC. The server detects the target building and generates a mask. The area to be complemented is set from the mask image, and the input image is automatically altered based on the features around the target area. The output image based on the digital completion is sent to the mobile terminal as a future landscape after demolition to be displayed on the DR display. Credit: Takuya Kikuchi et al.

Scientists at Osaka University have created a machine learning system that is capable of virtually removing buildings from a live view. By using generative adversarial network (GAN) algorithms running on a remote server, the team was able to stream the altered scene to a mobile device in real time. This work can help accelerate the process of urban renewal based on community agreement.
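The GAN-based completion itself is far beyond a short sketch, but the mask-and-fill step from the figure caption (detect the building, mask it, fill the region from its surroundings) can be illustrated with a naive diffusion inpainter. This is a pure-Python toy, not the authors' code; all names are illustrative.

```python
def inpaint(image, mask, iterations=50):
    """Replace masked pixels by repeatedly averaging their neighbours.

    image: 2D list of grayscale values; mask: same-shaped 2D list of bools
    marking the region (e.g. the detected building) to remove.
    """
    h, w = len(image), len(image[0])
    # Zero out the masked region so no building content leaks through.
    out = [[0.0 if mask[y][x] else float(image[y][x]) for x in range(w)]
           for y in range(h)]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [out[ny][nx]
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        out = nxt
    return out
```

Diffusion only smears surrounding colors into the hole; the appeal of a GAN is that it can hallucinate plausible texture and structure there instead.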

Continue reading… “A machine learning system that is capable of virtually removing buildings from a live view”

GraphCore releases new 3D chip that speeds AI by 40%

The Bow processor has a higher frequency of 1.85 GHz versus 1.35 GHz of its previous version, which came out in 2020. 


UK-based AI computer company GraphCore has announced a new combination chip called Bow, which is the world’s first Wafer-on-Wafer (WoW) processor. GraphCore claims that the processor will speed up processes like deep learning by 40 per cent and use 16 per cent less energy than previous generation processors. GraphCore has partnered closely with TSMC to make the Bow IPU. 

This is the latest version of an IPU, or Intelligence Processing Unit, from GraphCore; the firm had previously released two versions. Bow runs at 1.85 GHz, up from the 1.35 GHz of its predecessor, which came out in 2020. GraphCore has stated that its superscale Bow Pod 1024 offers up to 350 petaFLOPS of AI compute. Users who are already on GraphCore systems can run the same software on the new Bow IPU without any modifications. 

Continue reading… “GraphCore releases new 3D chip that speeds AI by 40%”

Researchers Use Machine Learning To Repair Genetic Damage

By Tanushree Shenwai

DNA damage occurs constantly in cells, whether from external sources or as a result of internal cellular metabolic reactions and physiological activities. Accurate repair of such DNA damage is critical to avoid the mutations and chromosomal rearrangements linked to diseases including cancer, immunodeficiencies, neurodegeneration, and premature aging. 

A team of researchers at Massachusetts General Hospital and the National Cancer Research Centre have identified a way to repair genetic damage and prevent DNA alterations using machine learning techniques. 

The researchers state that understanding how DNA lesions originate and are repaired can teach us more about how cancer develops and how to fight it. They therefore hope that their discovery will help create better cancer treatments while also protecting our healthy cells.

Continue reading… “Researchers Use Machine Learning To Repair Genetic Damage”

Machine Learning Bot Can Replace Your Gardener, It Plants and Weeds on Its Own

By Cristina Mircea

Gardening is a rewarding activity indeed, both for your mind and for your body. Unfortunately, though, most of us can’t find enough time to dedicate to it, as our hectic lifestyles get in the way. That’s where this smart garden robot comes in: it makes sure you don’t have to sow yourself, but just reap the benefits.

Sybil is a small but very capable device with machine learning capabilities. It can autonomously plant, weed, and map your entire garden.

Continue reading… “Machine Learning Bot Can Replace Your Gardener, It Plants and Weeds on Its Own”

Scientists Are Building a ‘Digital Twin’ of Earth

by Matthew Hart

The European Space Agency (ESA) is working on a “digital twin” of Earth in the hopes of better understanding our planet’s past, present, and future. The project, first announced in September of last year, will deploy AI, as well as quantum computing, to build Earth’s digital doppelgänger in virtual space. And the scientists hope this Digital Twin Earth will help them forecast extreme, climate change-induced weather events.

Popular Mechanics reported on the digital Earth, which ESA scientists discussed during the agency’s 2020 Φ-week event. The scientists say their digital model will help humanity to “monitor the health of the planet,” as well as simulate the effects of human behavior on the environment.

The scientists are going to evolve the digital twin over the next decade, constantly feeding real-world data into the model; data that will come from the EU’s Copernicus program, which captures atmospheric data, such as air quality changes. They’ll then use neural networks (computer algorithms) to identify patterns in Earth’s weather systems, and hopefully begin making accurate predictions.

“Machine learning and artificial intelligence could improve the realism and efficiency of the Digital Twin Earth—especially for extreme weather events and numerical forecast models,” European Center for Medium-Range Weather Forecasts (ECMWF) Director General, Florence Rabier, said at the event. Rabier and her colleagues also noted that the satellites collecting the data for the models are deploying AI programs.

Continue reading… “Scientists Are Building a ‘Digital Twin’ of Earth”

Going Beyond Machine Learning To Machine Reasoning

By Ron Schmelzer

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn’t follow the usual technology adoption lifecycle. Popularized by Geoffrey Moore in his book “Crossing the Chasm”, technology adoption usually follows a well-defined path. Technology is developed and finds early interest by innovators, and then early adopters, and if the technology can make the leap across the “chasm”, it gets adopted by the early majority market and then it’s off to the races with demand by the late majority and finally technology laggards. If the technology can’t cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn’t fit the technology adoption lifecycle pattern.

But AI isn’t a discrete technology. Rather it’s a series of technologies, concepts, and approaches all aligning towards the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence works, and their concepts of how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren’t investing in “AI”, but rather they’re investing in the output of AI research and technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, then new technology implementations are spawned and the cycle of investment renews.

Continue reading… “Going Beyond Machine Learning To Machine Reasoning”

What can AI learn from Human intelligence?


At HAI’s fall conference, scholars discussed novel ways AI can learn from human intelligence – and vice versa.

Can we teach robots to generalize their learning? How can algorithms become more commonsensical? Can a child’s learning style influence AI?

Stanford Institute for Human-Centered Artificial Intelligence’s fall conference considered those and other questions to understand how to mutually improve and better understand artificial and human intelligence. The event featured the theme of “triangulating intelligence” among the fields of AI, neuroscience, and psychology to develop research and applications for large-scale impact.

HAI faculty associate directors Christopher Manning, a Stanford professor of machine learning, linguistics, and computer science, and Surya Ganguli, a Stanford associate professor of neurobiology, served as hosts and panel moderators for the conference, which was co-sponsored by Stanford’s Wu-Tsai Neurosciences Institute, Department of Psychology, and Symbolic Systems program.

Speakers described cutting-edge approaches—some established, some new—to create a two-way flow of insights between research on human and machine-based intelligence, for powerful applications. Here are some of their key takeaways.

Continue reading… “What can AI learn from Human intelligence?”


If you train robots like dogs, they learn faster


Instead of needing a month, it mastered new “tricks” in just days with reinforcement learning.

Treats-for-tricks works for training dogs — and apparently AI robots, too.

That’s the takeaway from a new study out of Johns Hopkins, where researchers have developed a new training system that allowed a robot to quickly learn how to do multi-step tasks in the real world — by mimicking the way canines learn new tricks.
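The study's actual reward design isn't reproduced here, but the treats-for-progress idea can be illustrated with a tiny tabular Q-learner on a one-dimensional task, where small shaped rewards mark each step of progress toward the goal. This is a simplified toy, not the Johns Hopkins system; all parameters are invented.

```python
import random

def train(n_states=6, episodes=300, shaped=True, seed=1):
    """Tabular Q-learning on a 1-D chain: start at state 0, goal at the end.

    With shaped=True the agent gets a small "treat" (reward 0.1) for each
    step of forward progress, mirroring reward-per-trick dog training.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):
            if rng.random() < 0.2:  # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
            # Big reward at the goal; a small "treat" for intermediate progress.
            r = 1.0 if s2 == n_states - 1 else (0.1 if shaped and s2 > s else 0.0)
            q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return q
```

With only the sparse end-of-task reward, the agent must stumble onto the goal before any learning signal exists; the intermediate treats give it feedback on every step, which is the intuition behind faster training.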

Continue reading… “If you train robots like dogs, they learn faster”


Harnessing deep neural networks to predict future self-harm based on clinical notes


According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the U.S., with over 1.4 million suicide attempts recorded in 2018. Although effective treatments are available for those at risk, clinicians do not have a reliable way of predicting which patients are likely to make a suicide attempt.

Researchers at the Medical University of South Carolina and University of South Florida report in JMIR Medical Informatics that they have taken important steps toward addressing the problem by creating an artificial intelligence algorithm that can automatically identify patients at high risk of intentional self-harm, based on the information in the clinical notes in the electronic health record.
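The published model is a deep neural network trained on real EHR notes; purely to illustrate the underlying idea of scoring free-text notes for risk, here is a minimal bag-of-words Naive Bayes classifier. The toy "notes" below are invented, and this approach is far simpler than the one in the study.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns class priors and
    per-class word counts for a bag-of-words Naive Bayes model."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return labels, counts

def predict(labels, counts, text):
    """Return the most probable label under Laplace-smoothed Naive Bayes."""
    vocab = set().union(*counts.values())
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label in labels:
        lp = math.log(labels[label] / total)  # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

A deep model learns richer representations of phrasing and context than raw word counts, which is one reason the researchers turned to neural networks for this task.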

The study was led by Jihad Obeid, M.D., co-director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Ph.D., formerly at MUSC and currently an assistant professor in the Department of Psychiatry and Behavioral Neurosciences at the University of South Florida.

Continue reading… “Harnessing deep neural networks to predict future self-harm based on clinical notes”