A machine learning system that is capable of virtually removing buildings from a live view

Fig. 1. Overview of the proposed method. An image of the current landscape is acquired by the mobile terminal and sent to the server PC. The server detects the target building and generates a mask. The area to be complemented is set from the mask image, and the input image is automatically altered based on the features around the target area. The output image based on the digital completion is sent to the mobile terminal as a future landscape after demolition to be displayed on the DR display. Credit: Takuya Kikuchi et al.

Scientists at Osaka University have created a machine learning system that is capable of virtually removing buildings from a live view. By using generative adversarial network (GAN) algorithms running on a remote server, the team was able to stream the digitally altered scenery back to a mobile device in real time. This work can help accelerate the process of urban renewal based on community agreement.
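The server-side step amounts to masking the target building and synthesizing replacement pixels from the surrounding context. The sketch below is a deliberately naive stand-in for that completion step: it fills the masked region with the average of the unmasked pixels, whereas the actual system uses a trained GAN generator. The function name and array shapes are illustrative, not from the paper.

```python
import numpy as np

def remove_building(image, mask):
    """Naive stand-in for GAN-based completion: pixels under the mask
    are replaced by the mean color of the unmasked surroundings.
    A trained generator would synthesize plausible background instead."""
    out = image.astype(float).copy()
    fill = out[~mask].mean(axis=0)        # average color of the surroundings
    out[mask] = fill
    return out.astype(image.dtype)

# Toy 4x4 RGB frame with a bright 2x2 "building" in the middle.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
frame[1:3, 1:3] = 255                     # the building to be removed
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = remove_building(frame, mask)     # building pixels now match the background
```

In the real pipeline this completion runs on the server, and only the finished frame is streamed back to the phone.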

Continue reading… “A machine learning system that is capable of virtually removing buildings from a live view”

GraphCore releases new 3D chip that speeds AI by 40%

The Bow processor has a higher frequency of 1.85 GHz versus 1.35 GHz of its previous version, which came out in 2020. 


UK-based AI computer company GraphCore has announced a new chip called Bow, the world’s first Wafer-on-Wafer (WoW) processor. GraphCore claims that the processor speeds up workloads such as deep learning by 40 per cent while using 16 per cent less energy than previous-generation processors. GraphCore partnered closely with TSMC to make the Bow IPU.

This is the latest version of GraphCore’s IPU, or Intelligence Processing Unit; the firm had previously released two generations of the chip. GraphCore has stated that its superscale Bow Pod 1024 offers up to 350 petaFLOPS of AI compute. Users who are already on GraphCore systems can run the new Bow IPU with the same software, without any modifications.
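As a quick back-of-the-envelope check (ours, not GraphCore’s), the clock bump alone accounts for most of the claimed 40 per cent speedup, with the remainder presumably coming from the Wafer-on-Wafer packaging and power delivery:

```python
# Clock frequencies quoted in the article, in GHz.
old_clock, new_clock = 1.35, 1.85
clock_gain = new_clock / old_clock - 1
print(f"clock-frequency gain: {clock_gain:.1%}")   # roughly 37%
```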

Continue reading… “GraphCore releases new 3D chip that speeds AI by 40%”

Researchers Use Machine Learning To Repair Genetic Damage

By Tanushree Shenwai

DNA damage occurs constantly in cells, whether from external sources or as a result of internal cellular metabolic reactions and physiological activities. Accurate repair of such damage is critical to avoid the mutations and chromosomal rearrangements linked to diseases including cancer, immunodeficiencies, neurodegeneration, and premature aging.

A team of researchers at Massachusetts General Hospital and the National Cancer Research Centre has identified a way to repair genetic damage and prevent DNA alterations using machine learning techniques.

The researchers state that it is possible to learn more about how cancer develops and how to fight it if we understand how DNA lesions originate and are repaired. They therefore hope that their discovery will help create better cancer treatments while also protecting our healthy cells.

Continue reading… “Researchers Use Machine Learning To Repair Genetic Damage”

Machine Learning Bot Can Replace Your Gardener, It Plants and Weeds on Its Own

By Cristina Mircea

Gardening is a rewarding activity for both mind and body. Unfortunately, most of us can’t find enough time to dedicate to it, as our hectic lifestyles get in the way. That’s where this smart garden robot comes in: it makes sure you don’t have to sow yourself, but can simply reap the benefits.

Sybil is a small but very capable device with machine learning capabilities. It can autonomously plant, weed, and map your entire garden.

Continue reading… “Machine Learning Bot Can Replace Your Gardener, It Plants and Weeds on Its Own”

Scientists Are Building a ‘Digital Twin’ of Earth

by Matthew Hart

The European Space Agency (ESA) is working on a “digital twin” of Earth in the hopes of better understanding our planet’s past, present, and future. The project, first announced in September of last year, will deploy AI, as well as quantum computing, to build Earth’s digital doppelgänger in virtual space. And the scientists hope this Digital Twin Earth will help them forecast extreme, climate change-induced weather events.

Popular Mechanics reported on the digital Earth, which ESA scientists discussed during the agency’s 2020 Φ-week event. The scientists say their digital model will help humanity to “monitor the health of the planet,” as well as simulate the effects of human behavior on the environment.

The scientists will evolve the digital twin over the next decade, constantly feeding it real-world data from the EU’s Copernicus program, which captures atmospheric data such as air quality changes. They will then use neural networks (computer algorithms) to identify patterns in Earth’s weather systems and, they hope, begin making accurate predictions.
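As a toy illustration of that last step (ours, not ESA’s), even a single logistic unit trained by gradient descent, the simplest possible neural network, can learn to separate synthetic “extreme” readings from normal ones. All data and numbers below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Copernicus-style air-quality readings:
# each sample is (mean level, day-to-day variability); "extreme"
# episodes have elevated values of both.
normal  = rng.normal([50, 5],  [5, 1], size=(200, 2))
extreme = rng.normal([90, 15], [5, 1], size=(200, 2))
X = np.vstack([normal, extreme])
y = np.array([0] * 200 + [1] * 200)

# One logistic unit trained by gradient descent: the simplest
# possible neural-network pattern detector.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))     # predicted probability of "extreme"
    w -= 0.5 * (Xs.T @ (p - y)) / len(y)    # gradient step on the weights
    b -= 0.5 * (p - y).mean()               # gradient step on the bias

accuracy = ((1 / (1 + np.exp(-(Xs @ w + b))) > 0.5) == y).mean()
```

The real Digital Twin Earth models are vastly larger, but the principle of fitting a network to labeled environmental data is the same.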

“Machine learning and artificial intelligence could improve the realism and efficiency of the Digital Twin Earth—especially for extreme weather events and numerical forecast models,” European Centre for Medium-Range Weather Forecasts (ECMWF) Director-General Florence Rabier said at the event. Rabier and her colleagues also noted that the satellites collecting the data for the models are deploying AI programs.

Continue reading… “Scientists Are Building a ‘Digital Twin’ of Earth”

Going Beyond Machine Learning To Machine Reasoning

By Ron Schmelzer

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI is more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors because it doesn’t follow the usual technology adoption lifecycle. As Geoffrey Moore popularized in his book “Crossing the Chasm”, technology adoption usually follows a well-defined path: a technology is developed and finds early interest from innovators, then early adopters, and if it can make the leap across the “chasm”, it gets adopted by the early majority and then it’s off to the races with demand from the late majority and, finally, the laggards. If a technology can’t cross the chasm, it ends up in the dustbin of history. AI, however, doesn’t fit this pattern.

But AI isn’t a discrete technology. Rather, it’s a series of technologies, concepts, and approaches all aligned toward the quest for the intelligent machine. This quest inspires academics and researchers to come up with theories of how the brain and intelligence work, and with concepts for how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren’t investing in “AI”; rather, they’re investing in the output of AI research and in technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, new technology implementations are spawned and the cycle of investment renews.

Continue reading… “Going Beyond Machine Learning To Machine Reasoning”

What can AI learn from Human intelligence?


At HAI’s fall conference, scholars discussed novel ways AI can learn from human intelligence – and vice versa.

Can we teach robots to generalize their learning? How can algorithms become more commonsensical? Can a child’s learning style influence AI?

Stanford Institute for Human-Centered Artificial Intelligence’s fall conference considered those and other questions to understand how to mutually improve and better understand artificial and human intelligence. The event featured the theme of “triangulating intelligence” among the fields of AI, neuroscience, and psychology to develop research and applications for large-scale impact.

HAI faculty associate directors Christopher Manning, a Stanford professor of machine learning, linguistics, and computer science, and Surya Ganguli, a Stanford associate professor of neurobiology, served as hosts and panel moderators for the conference, which was co-sponsored by Stanford’s Wu Tsai Neurosciences Institute, Department of Psychology, and Symbolic Systems program.

Speakers described cutting-edge approaches—some established, some new—to create a two-way flow of insights between research on human and machine-based intelligence, for powerful applications. Here are some of their key takeaways.

Continue reading… “What can AI learn from Human intelligence?”


If you train robots like dogs, they learn faster


Instead of needing a month, a robot mastered new “tricks” in just days with reinforcement learning.

Treats-for-tricks works for training dogs — and apparently AI robots, too.

That’s the takeaway from a new study out of Johns Hopkins, where researchers have developed a new training system that allowed a robot to quickly learn how to do multi-step tasks in the real world — by mimicking the way canines learn new tricks.
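“Treats-for-tricks” is exactly the reward signal at the heart of reinforcement learning: reward only the completed behavior and let the agent discover the intermediate steps. The toy Q-learning agent below (our sketch, not the Johns Hopkins system) earns its single “treat” only at the end of a five-state corridor, yet learns to walk straight toward it:

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s < N_STATES - 1:
        # Epsilon-greedy action choice: mostly exploit, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(s + 1, N_STATES - 1) if a == "right" else max(s - 1, 0)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0        # the "treat"
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy policy after training: "right" everywhere.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

The study’s robots face far richer state and action spaces, but the treat-only-on-success reward structure is the shared idea.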

Continue reading… “If you train robots like dogs, they learn faster”


Harnessing deep neural networks to predict future self-harm based on clinical notes


According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the U.S., with over 1.4 million suicide attempts recorded in 2018. Although effective treatments are available for those at risk, clinicians do not have a reliable way of predicting which patients are likely to make a suicide attempt.

Researchers at the Medical University of South Carolina and University of South Florida report in JMIR Medical Informatics that they have taken important steps toward addressing the problem by creating an artificial intelligence algorithm that can automatically identify patients at high risk of intentional self-harm, based on the information in the clinical notes in the electronic health record.
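As a rough illustration of the underlying idea, free text from a note can be turned into a numeric risk signal. The study itself trained deep neural networks on real EHR notes; the keyword score, term list, example notes, and threshold below are all invented for this sketch:

```python
# Hypothetical terms for illustration only; a real screening model
# learns its own features from thousands of labeled notes.
RISK_TERMS = {"self-harm", "overdose", "suicidal", "ideation"}

def risk_score(note: str) -> float:
    """Fraction of the illustrative risk terms present in the note."""
    words = set(note.lower().replace(",", " ").replace(".", " ").split())
    return len(words & RISK_TERMS) / len(RISK_TERMS)

notes = [
    "Patient reports suicidal ideation and prior overdose.",
    "Routine follow-up, blood pressure well controlled.",
]
flags = [risk_score(n) >= 0.25 for n in notes]   # flag high-risk notes
```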

The study was led by Jihad Obeid, M.D., co-director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Ph.D., formerly at MUSC and currently an assistant professor in the Department of Psychiatry and Behavioral Neurosciences at the University of South Florida.

Continue reading… “Harnessing deep neural networks to predict future self-harm based on clinical notes”


The next generation of Artificial Intelligence


Yann LeCun, one of the godfathers of AI

The field of artificial intelligence moves fast. It has been only eight years since the modern era of deep learning began with the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field—and society—in the years ahead. Study up now.

Continue reading… “The next generation of Artificial Intelligence”


Neural network trained to control anesthetic doses, keep patients under during surgery


Researchers demonstrate how deep learning could eventually replace traditional anesthetic practices.

Academics from the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have demonstrated how neural networks can be trained to administer anesthetic during surgery.

Over the past decade, machine learning (ML), artificial intelligence (AI), and deep learning algorithms have been developed and applied to a range of sectors and applications, including in the medical field.

Continue reading… “Neural network trained to control anesthetic doses, keep patients under during surgery”


Self-driving cars will hit the Indianapolis Motor Speedway in a landmark A.I. race


Take a look at the ‘Road of the Future’

Next year, a squad of souped-up Dallara race cars will reach speeds of up to 200 miles per hour as they zoom around the legendary Indianapolis Motor Speedway to discover whether a computer could be the next Mario Andretti.

The planned Indy Autonomous Challenge—taking place in October 2021 in Indianapolis—is intended for 31 university computer science and engineering teams to push the limits of current self-driving car technology. There will be no human racers sitting inside the cramped cockpits of the Dallara IL-15 race cars. Instead, onboard computer systems will take their place, outfitted with deep-learning software enabling the vehicles to drive themselves.

In order to win, a team’s autonomous car must be able to complete 20 laps—which equates to a little less than 50 miles in distance—and cross the finish line first in 25 minutes or less. At stake is a $1 million prize, with second- and third-place winners receiving a $250,000 and $50,000 award, respectively.
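Taking the article’s figures at face value, the distance and time limit pin down the average speed a winning autonomous car must sustain; this arithmetic is ours:

```python
# Roughly 50 miles (20 laps) in at most 25 minutes, per the article.
distance_miles = 50
limit_hours = 25 / 60
min_avg_speed_mph = distance_miles / limit_hours   # 120 mph average to win
```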

Continue reading… “Self-driving cars will hit the Indianapolis Motor Speedway in a landmark A.I. race”