UAH collaboration creates self-learning AI platform to discover new drugs

A UAH team is applying self-learning artificial intelligence and big data analytics to discover new drugs.

Newswise — HUNTSVILLE, Ala. (May 4, 2022) – A cross-college collaboration at The University of Alabama in Huntsville (UAH) has developed a self-learning artificial intelligence (AI) platform that uses big data analytics to discover how new pharmaceutical drugs and various molecules work inside living cells.

The cutting-edge research at UAH, a part of the University of Alabama System, involves Dr. Jerome Baudry, a molecular biophysicist, the Mrs. Pei-Ling Chan Chair in the Department of Biological Sciences and director of the Baudry Lab; Dr. Vineetha Menon, an assistant professor in the Department of Computer Science and the director of the Big Data Analytics Lab; computer science doctoral student Shivangi Gupta, the lead author of a paper on the research; and engineering doctoral student Armin Ahmadi, who is conducting his doctoral research in the Baudry Lab.

Supported by UAH’s Office of Technology Commercialization, the scientists are developing their research into intellectual property for industrial applications in drug discovery.

“This is a strong, integrated collaboration and we all bring our own expertise, but the main novelty in this work is in machine learning and data mining, and the lead on the overall project is Dr. Menon, who is an internationally recognized expert in these areas,” says Dr. Baudry.

Continue reading… “UAH collaboration creates self-learning AI platform to discover new drugs”

Waste plastic broken down not in centuries but in days by an AI-engineered enzyme

Breaking up is hard to do.

That’s certainly true for common plastics like polyethylene terephthalate (PET).  A water bottle made of a thin film of PET (perhaps half a millimeter thick) takes about 450 years to degrade.  Along the way it will exist as microplastics, which are so pervasive they are even turning up in living people’s lung tissue, as we saw for the first time just a month ago.  And even those kinds of numbers are guesstimates because many studies don’t last long enough to see any appreciable degradation of PET at all.

A lot of efforts to produce biodegradable and bioresorbable plastics are making good progress, and that’s great for the future, but what about the mountains of plastic that already exist and that we keep on generating?  In the U.S., the landfill rate for discarded plastics is still about 75%!  We have a lot of work to do.

Good thing Hal Alper’s chemical engineering lab at the University of Texas is on the job.  In the April 27 issue of Nature, they report on an enzyme they developed called FAST-PETase.  Designed with the help of artificial intelligence, it degrades untreated postconsumer PET not in centuries, but in days.  And this can be done at temperatures of 50°C and below, where many types of bacteria can thrive.  See where this is going?

Continue reading… “Waste plastic broken down not in centuries but in days by an AI-engineered enzyme”

John Deere is slowly becoming one of the world’s most important AI companies

Nothing runs (autonomously) like a Deere

By Tristan Greene

John Deere has been in business for nearly 200 years. For those in the agriculture industry, the company that makes green tractors is as well-known as Santa Claus, McDonald’s, or John Wayne.

Heck, even city folk who’ve never seen a tractor that wasn’t on a television screen know John Deere. The company’s so popular even celebrities such as Ashton Kutcher and George Clooney have been known to rock a Deere hat.

What most outsiders don’t know is that John Deere’s not so much a farming vehicle manufacturer these days as it is an agricultural technology company. And, judging by how things are going in 2022, we’re predicting it’ll be a full-on artificial intelligence company within the next 15 years.

Continue reading… “John Deere is slowly becoming one of the world’s most important AI companies”

AI Speeds Precision Medicine for Parkinson’s Disease

Robotics combined with machine learning spots Parkinson’s disease signatures.

By Vanessa Lancaster


  • Over 10 million people worldwide live with Parkinson’s disease, including nearly a million Americans.
  • A new study uses AI deep learning to find cellular disease signatures, helping accelerate the discovery of novel therapeutics for Parkinson’s.
  • This AI deep learning platform is not limited to Parkinson’s disease; it can be repurposed to find the signatures of other diseases.

Artificial intelligence (AI), machine learning, and robotics are accelerating precision medicine for neurodegenerative diseases and brain disorders.

A new study published in Nature Communications reveals a high-throughput screening platform that uses AI deep learning to find cellular disease signatures, helping accelerate the discovery of novel therapeutics for Parkinson’s disease (PD).

Continue reading… “AI Speeds Precision Medicine for Parkinson’s Disease”


Deep neural networks, a type of artificial intelligence, began outperforming standard algorithms 10 years ago.

by Madhurjya Chowdhury

The majority of artificial intelligence (AI) is a game of numbers. Deep neural networks, a type of AI that learns to recognize patterns in data, began outperforming standard algorithms 10 years ago because we finally had enough data and processing power to take full advantage of them.

Today’s neural nets are even more data- and power-hungry. Training them means fine-tuning the values of the millions, if not billions, of parameters that define these networks and represent the strengths of the connections between artificial neurons. The goal is to find near-ideal values for them, a process called optimization, but getting the networks there is difficult.
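The optimization the article describes can be illustrated with a toy example. The sketch below (illustrative only, a single-parameter "network" rather than billions of parameters) uses gradient descent to nudge one weight toward the value that minimizes a squared error, which is the same basic idea applied at vastly larger scale when training deep neural networks.

```python
# Toy illustration of the "optimization" described above: gradient descent
# on a single parameter w, minimizing the squared error (w * x - y)^2.
# Real networks update millions or billions of such parameters at once.

def train(x, y, w=0.0, lr=0.1, steps=100):
    """Fit y ≈ w * x by repeatedly stepping against the gradient."""
    for _ in range(steps):
        error = w * x - y       # how far off the current guess is
        grad = 2 * error * x    # d/dw of (w*x - y)^2
        w -= lr * grad          # move w a little toward lower error
    return w

# Learn that w should be ~3 from the single example x=2, y=6.
w = train(x=2.0, y=6.0)
print(round(w, 4))  # converges toward 3.0
```

With each step the error shrinks by a constant factor, so after 100 steps the parameter has effectively converged; making the learning rate too large would instead cause the updates to overshoot and diverge.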


Researchers develop new AI form that can adapt to perform tasks in changeable environments

Robot TIAGo is ready to stack cubes.

by Sandra Tavakoli and Karin Wik,  Chalmers University of Technology

Can robots adapt their own working methods to solve complex tasks? Researchers at Chalmers University of Technology, Sweden, have developed a new form of AI, which, by observing human behavior, can adapt to perform its tasks in a changeable environment. The hope is that robots that can be flexible in this way will be able to work alongside humans to a much greater degree.

“Robots that work in human environments need to be adaptable to the fact that humans are unique, and that we might all solve the same task in a different way. An important area in robot development, therefore, is to teach robots how to work alongside humans in dynamic environments,” says Maximilian Diehl, Doctoral Student at the Department of Electrical Engineering at Chalmers University of Technology and main researcher behind the project.

Continue reading… “Researchers develop new AI form that can adapt to perform tasks in changeable environments”

How AR and AI are making Zurich a smarter and safer city

David Weber, Head of Smart City Zurich, gives an inside look at how Zurich uses tech like AR, AI and digital twins to improve the city’s infrastructure and safety.

By Liew Ming En

It can be difficult for Iron Man to differentiate friend from foe when speeding through the skies. But he can easily do so with the augmented reality (AR) technology built into his suit, which automatically targets enemies and avoids civilians.

AR in the real world is still a long way from science fiction films like Iron Man, but progress in recent years is revealing use cases across sectors. In Zurich, for example, AR is used to visualise how planned buildings will look before they are constructed.

David Weber, the Head of Smart City Zurich, shares how Zurich uses tech like AR, AI and digital twins to improve urban planning, increase citizen participation, and bolster citizen safety.

Continue reading… “How AR and AI are making Zurich a smarter and safer city”

New AI-driven algorithm can detect autism in brain ‘fingerprints’

By Adam Hadhazy,  Stanford University

Stanford researchers have developed an algorithm that may help discern if someone has autism by looking at brain scans. The novel algorithm, driven by recent advances in artificial intelligence (AI), also successfully predicts the severity of autism symptoms in individual patients. With further honing, the algorithm could lead to earlier diagnoses, more targeted therapies, and broadened understanding of autism’s origins in the brain.

The algorithm pores over data gathered through functional magnetic resonance imaging (fMRI) scans. These scans capture patterns of neural activity throughout the brain. By mapping this activity over time in the brain’s many regions, the algorithm generates neural activity “fingerprints.” Although unique for each individual just like real fingerprints, the brain fingerprints nevertheless share similar features, allowing them to be sorted and classified.
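One common way to build such neural-activity "fingerprints" (illustrative only; the names and details below are assumptions, not the Stanford study's actual pipeline) is to compute the pairwise correlation between each brain region's activity trace and flatten those correlations into a feature vector that a classifier can then sort:

```python
# Illustrative sketch: turn per-region fMRI-style time series into a
# "fingerprint" of pairwise correlations between regions. A classifier
# would then be trained on one such vector per scan.

from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def fingerprint(regions):
    """Flatten the upper triangle of the region-by-region correlation
    matrix into one feature vector per scan."""
    return [pearson(regions[i], regions[j])
            for i in range(len(regions))
            for j in range(i + 1, len(regions))]

# Three toy "regions" observed over five time points.
scan = [
    [1.0, 2.0, 3.0, 4.0, 5.0],   # region A
    [2.0, 4.0, 6.0, 8.0, 10.0],  # region B: perfectly tracks A
    [5.0, 4.0, 3.0, 2.0, 1.0],   # region C: anti-correlated with A
]
print([round(r, 2) for r in fingerprint(scan)])  # [1.0, -1.0, -1.0]
```

Like real fingerprints, each person's vector is unique, but vectors from people with similar connectivity patterns cluster together, which is what makes classification possible.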

As described in a new study published in Biological Psychiatry, the algorithm assessed brain scans from a sample of approximately 1,100 patients. With 82% accuracy, the algorithm picked out the patients whom human clinicians had diagnosed with autism.

“Although autism is one of the most common neurodevelopmental disorders, there is so much about it that we still don’t understand,” says lead author Kaustubh Supekar, a Stanford clinical assistant professor of psychiatry and behavioral sciences and Stanford HAI affiliate faculty. “In this study, we’ve shown that our AI-driven brain ‘fingerprinting’ model could potentially be a powerful new tool in advancing diagnosis and treatment.”

Continue reading… “New AI-driven algorithm can detect autism in brain ‘fingerprints’”

First autonomous X-ray-analyzing AI is cleared in the EU

The AI imaging tool reads chest X-rays without the involvement of a radiologist

By Nicole Wetsman  

An artificial intelligence tool that reads chest X-rays without oversight from a radiologist got regulatory clearance in the European Union last week — a first for a fully autonomous medical imaging AI, the company, called Oxipit, said in a statement. It’s a big milestone for AI and likely to be contentious, as radiologists have spent the last few years pushing back on efforts to fully automate parts of their job. 

The tool, called ChestLink, scans chest X-rays and automatically sends patient reports on those that it sees as totally healthy, with no abnormalities. Any images that the tool flags as having a potential problem are sent to a radiologist for review. Most X-rays in primary care don’t have any problems, so automating the process for those scans could cut down on radiologists’ workloads, Oxipit said in informational materials.
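The triage logic described above can be sketched in a few lines. This is a hypothetical illustration, not Oxipit’s actual software: the function name, the confidence score, and the threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the triage flow the article describes: scans the
# model is highly confident are normal get an automatic report; everything
# else is routed to a radiologist. Threshold and names are illustrative.

def route_scan(p_no_abnormality, threshold=0.99):
    """Decide where a chest X-ray goes, given the model's estimated
    probability that the scan shows no abnormality."""
    if p_no_abnormality >= threshold:
        return "auto-report: no abnormalities"
    return "radiologist review"

print(route_scan(0.997))  # auto-report: no abnormalities
print(route_scan(0.62))   # radiologist review
```

The design choice that makes such a system defensible is the asymmetry: only the high-confidence "healthy" cases bypass the human, while anything ambiguous still lands on a radiologist’s worklist.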

Continue reading… “First autonomous X-ray-analyzing AI is cleared in the EU”

Microsoft and HPE test AI on International Space Station

HPE’s Spaceborne Computer-2, deployed on the ISS, analyzes images captured by crew members using the Glove Analyzer model to search for damage in real time.

By Dipayan Mitra

Technology giants Microsoft and Hewlett Packard Enterprise (HPE) have partnered with NASA to test artificial intelligence (AI) technology on the International Space Station to perform multiple tasks. 

According to the companies, the tasks they plan to perform using AI include checking astronauts’ gloves for wear and tear. 

Currently, once the images are received on Earth, NASA analysts evaluate the photographs of the gloves for any damage that could constitute a concern, then report back to the astronauts on the International Space Station. 

However, this is a lengthy process, and as astronauts travel farther from Earth, communication weakens, which can delay the process further. 

To solve this challenge, Microsoft and HPE engineers are working with NASA scientists on a system that uses artificial intelligence and HPE’s Spaceborne Computer-2 to scan and analyze glove images directly on the International Space Station, potentially giving astronauts on board greater autonomy, with limited support needed from Earth. 

Continue reading… “Microsoft and HPE test AI on International Space Station”

DARPA to build life-saving AI models that think like medics

Project is a M*A*S*H-up of machine learning and battlefield decision-making

Via Brandon Vigliarolo

A new DARPA initiative aims to ultimately give AI systems the same complex, rapid decision-making capabilities as military medical staff and trauma surgeons who are in the field of battle.

The In the Moment (ITM) program, which is right now soliciting research proposals, aims to develop the foundations of expert machine-learning models that can make difficult judgment calls – where there is no right answer – that humans can trust. This study could lead to the deployment of algorithms that can help medics and other personnel make tough decisions in moments of life and death.

“DoD missions involve making many decisions rapidly in challenging circumstances and algorithmic decision-making systems could address and lighten this load on operators … ITM seeks to develop techniques that enable building, evaluating, and fielding trusted algorithmic decision-makers for mission-critical DoD operations where there is no right answer and, consequently, ground truth does not exist,” DARPA said. 

At the heart of this problem is that these sorts of AI systems need to be trained even when there is no ground truth or consensus among experts. Generals may disagree over how exactly a confrontation between two opposing units should unfold. Doctors may have differing opinions on how to treat someone. Teaching machine-learning software how to figure out the best course of action from these divergent stances is non-obvious, and that is what ITM appears set up to tackle.
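One simple recipe for the no-ground-truth setting described above (purely illustrative; ITM’s actual methods are not specified in the article) is to train a model against the distribution of expert judgments rather than a single "correct" label, so disagreement among experts becomes part of the training signal:

```python
# Illustrative sketch: when experts disagree and there is no ground truth,
# one approach is to turn their conflicting decisions into a "soft label"
# (a probability distribution) that a model is trained to match.

from collections import Counter

def soft_label(expert_choices):
    """Convert a list of expert decisions into a probability
    distribution over the options they chose."""
    counts = Counter(expert_choices)
    total = len(expert_choices)
    return {choice: n / total for choice, n in counts.items()}

# Four medics triage the same casualty and disagree.
votes = ["evacuate", "evacuate", "treat on site", "evacuate"]
print(soft_label(votes))  # {'evacuate': 0.75, 'treat on site': 0.25}
```

A model trained on such targets learns not just the majority call but how contested it was, which is one way to build the kind of calibrated, trustworthy decision-maker the program describes.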

Continue reading… “DARPA to build life-saving AI models that think like medics”

Google helps develop AI-driven lab machine to diagnose Parkinson’s

Robo-worker manipulates test tubes and pipettes, images skin cells to classify disease

By Katyanna Quach 

A robotic system armed with AI-powered cameras can grow and image skin cells from test tubes to diagnose Parkinson’s disease with minimal human help, according to researchers from Google and the New York Stem Cell Foundation.

Parkinson’s disease is estimated to affect 2 to 3 percent of the population over the age of 65. Nerve cells located deep within the basal ganglia region of the brain slowly die over time, impacting motion. Patients find it difficult to control their movements; their limbs may shake or feel stiff. Scientists aren’t sure what causes the disease, and it is currently incurable.

“Traditional drug discovery isn’t working very well, particularly for complex diseases like Parkinson’s,” NYSCF’s CEO Susan Solomon explained in a statement. “The robotic technology NYSCF has built allows us to generate vast amounts of data from large populations of patients, and discover new signatures of disease as an entirely new basis for discovering drugs that actually work.”

Continue reading… “Google helps develop AI-driven lab machine to diagnose Parkinson’s”