AI invents millions of materials that don’t yet exist

Artistic image of a graphene bolometer controlled by electric field

By Anthony Cuthbertson

‘Transformative tool’ is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries.

Scientists have developed an artificial intelligence algorithm capable of predicting the structure and properties of more than 31 million materials that do not yet exist.

The AI tool, named M3GNet, could lead to the discovery of new materials with exceptional properties, according to the team from the University of California San Diego who created it.

M3GNet was able to populate a vast database of yet-to-be-synthesized materials instantaneously, which the engineers are already using in their hunt for more energy-dense electrodes for lithium-ion batteries used in everything from smartphones to electric cars.

The matterverse.ai database and the M3GNet algorithm could potentially expand the exploration space for materials by orders of magnitude.

UC San Diego nanoengineering professor Shyue Ping Ong described M3GNet as “an AlphaFold for materials”, referring to the breakthrough AI algorithm built by Google’s DeepMind that can predict protein structures.

“Similar to proteins, we need to know the structure of a material to predict its properties,” said Professor Ong.

“We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”
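
For readers who want to experiment, the team has open-sourced M3GNet in its matgl package. The sketch below shows how a pretrained property model might be queried; the model name and the predict_structure call follow that package's documented usage and should be treated as assumptions, and the CsCl-type crystal is purely illustrative.

```python
import matgl
from pymatgen.core import Lattice, Structure

# Load a pretrained M3GNet formation-energy model (model name per the matgl
# docs; treat it as an assumption).
model = matgl.load_model("M3GNet-MP-2018.6.1-Eform")

# An illustrative CsCl-type crystal to evaluate.
structure = Structure(
    Lattice.cubic(4.14), ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]]
)

# Predicted formation energy (eV/atom) for the hypothetical structure.
eform = model.predict_structure(structure)
print(f"Predicted formation energy: {float(eform):.3f} eV/atom")
```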

Continue reading… “AI invents millions of materials that don’t yet exist”

Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos

By Ekrem Çetinkaya

Transformers have played a crucial role in natural language processing tasks over the last decade. Their success is attributed mainly to their ability to extract and exploit temporal information.

When a certain method works well in one domain, it is natural to expect studies that try to bring it to other domains. This was the case with transformers as well, and the domain was computer vision. Introducing transformers to vision tasks was a huge success, spawning numerous similar studies afterward.

The vision transformer (ViT) was proposed in 2020 and outperformed its convolutional neural network (CNN) counterparts on image classification tasks. Its main benefits appear at large scale, since transformers lack the built-in inductive biases of CNNs and therefore require more data or stronger regularisation.

ViT inspired many researchers to dive deeper into the rabbit hole of transformers to see how far they could go on different tasks. Most of them focused on image-related tasks, and they obtained really promising results. However, the application of ViTs to the video domain remained, more or less, an open problem.

When you think about it, transformers, and attention-based architectures more broadly, look like the perfect structure to be used with videos. They are the intuitive choice for modeling the dependencies in natural language and extracting contextual relationships between words. A video has similar properties, so why not use a transformer to process videos? This is the question the authors of ViViT asked, and they came up with an answer.

Most state-of-the-art video solutions use 3D convolutional networks, but their complexity makes it challenging to achieve adequate performance on commodity devices. Some studies have therefore added the self-attention mechanism of transformers to 3D CNNs to better capture long-term dependencies within the video.
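
The core ViViT idea is to treat a video the way ViT treats an image: cut it into small spatio-temporal patches ("tubelets"), embed each one as a token, and run a transformer encoder over the token sequence. The PyTorch sketch below illustrates that pipeline under stated assumptions; the dimensions, the single unfactorised encoder, and the omission of positional embeddings are simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TinyVideoTransformer(nn.Module):
    def __init__(self, n_classes=10, dim=128):
        super().__init__()
        # 3D conv acts as a tubelet embedding: patches of 2 frames x 16 x 16 px.
        self.embed = nn.Conv3d(3, dim, kernel_size=(2, 16, 16), stride=(2, 16, 16))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, video):                        # video: (batch, 3, T, H, W)
        tokens = self.embed(video)                   # (batch, dim, T', H', W')
        tokens = tokens.flatten(2).transpose(1, 2)   # (batch, n_tokens, dim)
        # Positional embeddings omitted for brevity.
        return self.head(self.encoder(tokens).mean(dim=1))

model = TinyVideoTransformer()
clip = torch.randn(2, 3, 8, 64, 64)  # two fake 8-frame clips
logits = model(clip)                 # (2, 10) class scores
```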

Continue reading… “Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos”

Real-Life ‘Invisibility Cloak’ Stops AI Cameras From Recognizing People

Scientists have developed a real-life “invisibility cloak” that tricks artificial intelligence (AI) cameras and stops them from recognizing people. 

By Pesala Bandara

Researchers at the University of Maryland have created a sweater that “breaks” AI human-recognition systems, making the wearer effectively “invisible” to AI cameras.

“This stylish sweater is a great way to stay warm this winter,” writes the researchers on UMD’s Department of Computer Science website. “It features a waterproof microfleece lining, a modern cut, and anti-AI patterns — which will help hide from object detectors.” 

The researchers note that, in their demonstration, they were able to fool the YOLOv2 detector using a pattern trained on the COCO dataset with a carefully constructed optimization target.
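
The underlying technique is adversarial-pattern optimization: gradient descent is run on the pattern's pixels, rather than on the model's weights, to maximize the detector's error. As a hedged illustration, the PyTorch toy below optimizes a patch against a pretrained image classifier instead of YOLOv2; the image, label, patch size, and placement are all stand-ins, not the researchers' actual setup.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Frozen pretrained classifier standing in for the object detector.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)   # stand-in for a photo of a person
label = torch.tensor([0])            # stand-in for the true class index
patch = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(100):
    attacked = image.clone()
    attacked[:, :, 80:144, 80:144] = patch.clamp(0, 1)  # paste patch into image
    loss = -F.cross_entropy(model(attacked), label)     # maximize true-class loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```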

Continue reading… “Real-Life ‘Invisibility Cloak’ Stops AI Cameras From Recognizing People”

Breakthrough AI can track real-time cell changes, revealing a key mystery in biology

The study shows how deep learning can be used to analyze cell images.

By Brittney Grimes

Researchers have found a way to observe cell samples to study morphological changes — changes in form and structure — of cells. This is significant because cells are the basic unit of life, the building blocks of living organisms, and researchers need to be able to observe what could influence cell parameters such as size, shape, and density. 

Conventionally, scientists observed cell samples directly through microscopes, looking for morphological changes in the cell structures. Now, however, they can use artificial intelligence to make those observations. By combining computer science with deep learning, a subset of artificial intelligence, researchers can automate the detection of these changes. 

The study was published in the journal Intelligent Computing.

Continue reading… “Breakthrough AI can track real-time cell changes, revealing a key mystery in biology”

MIT Researchers Discover a New, Faster AI Using ‘Liquid’ Neural Networks

The “liquid” neural network allows AI algorithms to adapt to new input data.

By Jace Dela Cruz

Artificial neural networks are a method artificial intelligence uses to simulate how the human brain functions. A neural network “learns” from the datasets it is fed and produces a forecast based on the available data.

But now, MIT Computer Science and Artificial Intelligence Lab (MIT CSAIL) researchers have found a faster method to solve the differential equation employed in the algorithms for “liquid” neural networks, according to a report by Interesting Engineering. 
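
The “liquid” networks in question are built from liquid time-constant (LTC) neurons, whose hidden state follows an ordinary differential equation that ordinarily must be integrated step by step; the MIT result replaces that numerical solve with an approximate closed-form expression. As a rough sketch of what solvers normally have to do, the NumPy snippet below integrates the LTC dynamics with a plain Euler step; the weights, sizes, and input signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs = 8, 3
W = 0.5 * rng.normal(size=(n_neurons, n_neurons + n_inputs))  # illustrative weights
b = np.zeros(n_neurons)
tau = np.ones(n_neurons)   # neuron time constants
A = np.ones(n_neurons)     # bias/reversal parameters

def f(x, I):
    """Nonlinear gate f(x, I; theta) from the LTC formulation (sketch)."""
    return np.tanh(W @ np.concatenate([x, I]) + b)

def euler_step(x, I, dt=0.05):
    """One explicit-Euler step of dx/dt = -(1/tau + f) * x + f * A."""
    gate = f(x, I)
    dxdt = -(1.0 / tau + gate) * x + gate * A
    return x + dt * dxdt

x = np.zeros(n_neurons)
for t in range(100):
    I = np.array([np.sin(0.1 * t), 0.0, 1.0])  # toy input signal
    x = euler_step(x, I)
```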

Continue reading… “MIT Researchers Discover a New, Faster AI Using ‘Liquid’ Neural Networks”

Flexible AI computer chips promise wearable health monitors that protect privacy

A device like this could one day monitor and assess your health.

By Sihong Wang

My colleagues and I have developed a flexible, stretchable electronic device that runs machine-learning algorithms to continuously collect and analyze health data directly on the body. The skinlike sticker, developed in my lab at the University of Chicago’s Pritzker School of Molecular Engineering, includes a soft, stretchable computing chip that mimics the human brain.

To create this type of device, we turned to electrically conductive polymers that have been used to build semiconductors and transistors. These polymers are made to be stretchable, like a rubber band. Rather than working like a typical computer chip, though, the chip we’re working with, called a neuromorphic computing chip, functions more like a human brain. It’s able to both store and analyze data.

To test the usefulness of the new device, my colleagues and I used it to analyze electrocardiogram (ECG) data representing the electrical activity of the human heart. We trained the device to classify ECGs into five categories: healthy and four types of abnormal signals. Even when the device was repeatedly stretched by the movements of the wearer’s body, it could still accurately classify the heartbeats. 
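
As a hedged illustration of the classification task described, here is a minimal PyTorch sketch of a five-class ECG beat classifier. The architecture, the 187-sample beat window, and the random inputs are assumptions for the sketch; the actual device implements its network in stretchable neuromorphic hardware rather than in conventional software like this.

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Toy five-class classifier: healthy + four abnormal beat types."""

    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time to a fixed-size vector
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, n_samples)
        z = self.features(x).squeeze(-1)
        return self.head(z)

model = ECGClassifier()
beats = torch.randn(4, 1, 187)  # four fake single-beat windows (187 samples each)
logits = model(beats)           # (4, 5) class scores
```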

Most of the signals from the human body, such as the electrical activity in the heart recorded by ECG, are typically weak and subtle. Accurately recording these small signals requires direct contact between electronic devices and the human body. This can only be achieved by fabricating electronic devices to be as soft and stretchy as skin. We envision that wearable electronics will play a key role in tracking complex indicators of human health, including body temperature, cardiac activity, levels of oxygen, sugar, metabolites and immune molecules in the blood. 

Analyzing large amounts of continuously acquired health data is challenging, however. A single piece of data must be put into the broader perspective of a patient’s full health history, and that is a big task. Cutting-edge machine-learning algorithms that identify patterns in extremely complex data sets are the most promising route to being able to pick out the most important signals of disease. 

Continue reading… “Flexible AI computer chips promise wearable health monitors that protect privacy”

Techman Robot launches ‘all-in-one’ AI collaborative robot series

Techman Robot has launched its “TM AI Cobot” series, describing it as a “collaborative robot which combines a powerful and precise robot arm with native AI inferencing engine and smart vision system in a complete package”.

By Mai Tao

The company says the new machine is ready for deployment in factories and can accelerate the transition to Industry 4.0.

Techman says the TM AI Cobot works on the principle of being smart, simple and safe. By combining visual processing in the robot arm, the AI Cobot can perform fast and precise pick-and-place, AMR, palletizing, welding, semiconductor and product manufacturing, automated optical inspection (AOI) and food-service preparation, among many other applications that can be accelerated by AI-Vision. 

The company claims it is the only intelligent robotic arm series on the market provided with a comprehensive AI software suite. This includes TM AI+ Training Server, TM AI+ AOI Edge, TM Image Manager, and TM 3DVision, allowing companies to train and tailor the system to meet the precise needs of their application.

Shi-chi Ho, Techman Robot president, says: “Techman Robot has redefined the future of industry robotics with the introduction of its AI Cobot series that are equipped with a native AI engine, powerful and precise robotic arm and vision system that represents a perfect combination of ‘brain, hands and eyes’.

Continue reading… “Techman Robot launches ‘all-in-one’ AI collaborative robot series”

How Generative AI Is Changing Creative Work

By Thomas H. Davenport and Nitin Mittal

Summary. Generative AI models for businesses threaten to upend the world of content creation, with substantial impacts on marketing, software, design, entertainment, and interpersonal communications. These models are able to produce text and images.

Large language and image AI models, sometimes called generative AI or foundation models, have created a new set of opportunities for businesses and professionals that perform content creation. Some of these opportunities include: 

  1. Automated content generation: Large language and image AI models can be used to automatically generate content, such as articles, blog posts, or social media posts. This can be a valuable time-saving tool for businesses and professionals who create content on a regular basis. 
  2. Improved content quality: AI-generated content can be of higher quality than content created by humans, due to the fact that AI models are able to learn from a large amount of data and identify patterns that humans may not be able to see. This can result in more accurate and informative content. 
  3. Increased content variety: AI models can generate a variety of content types, including text, images, and video. This can help businesses and professionals to create more diverse and interesting content that appeals to a wider range of people. 
  4. Personalized content: AI models can generate personalized content based on the preferences of individual users. This can help businesses and professionals to create content that is more likely to be of interest to their target audience, and therefore more likely to be read or shared.

How adept is this technology at mimicking human efforts at creative work? Well, for example, the italicized text above was written by GPT-3, a “large language model” (LLM) created by OpenAI, in response to the first sentence, which we wrote. GPT-3’s text reflects the strengths and weaknesses of most AI-generated content. First, it is sensitive to the prompts fed into it; we tried several alternative prompts before settling on that sentence. Second, the system writes reasonably well; there are no grammatical mistakes, and the word choice is appropriate. Third, it would benefit from editing; we would not normally begin an article like this one with a numbered list, for example. Finally, it came up with ideas that we didn’t think of. The last point about personalized content, for example, is not one we would have considered.
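
For readers who want to repeat the exercise, the sketch below shows how such a completion could be requested through the openai Python package as it worked for GPT-3 at the time (pre-1.0 library); the model name, sampling parameters, and API key are illustrative choices, not the authors' exact setup.

```python
# Requesting a GPT-3 completion via OpenAI's legacy completion endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The prompt is the article's human-written first sentence.
prompt = (
    "Large language and image AI models, sometimes called generative AI or "
    "foundation models, have created a new set of opportunities for businesses "
    "and professionals that perform content creation."
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model; illustrative choice
    prompt=prompt,
    max_tokens=400,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```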

Continue reading… “How Generative AI Is Changing Creative Work”

Researchers hope AI can alleviate interstate traffic jams

NASHVILLE, Tenn. (AP) — Researchers at Vanderbilt University and other schools around the country are conducting an experiment in Nashville next week to try to decrease the number of stop-and-go traffic jams on a local interstate. 

The new experiment will deploy up to 100 cars equipped with adaptive cruise control technology along a 4-mile stretch of Interstate 24 during morning rush hour, according to a news release from Vanderbilt. That stretch is outfitted with hundreds of ultra-high definition cameras that will give researchers a digital model of how every vehicle behaves. 

Continue reading… “Researchers hope AI can alleviate interstate traffic jams”

New Chip Expands the Possibilities for AI

An energy-efficient chip called NeuRRAM fixes an old design flaw to run large-scale AI algorithms on smaller devices, reaching the same accuracy as wasteful digital computers.

By Allison Whitten

Artificial intelligence algorithms cannot keep growing at their current pace. Algorithms like deep neural networks — which are loosely inspired by the brain, with multiple layers of artificial neurons linked to each other via numerical values called weights — get bigger every year. But these days, hardware improvements are no longer keeping pace with the enormous amount of memory and processing capacity required to run these massive algorithms. Soon, the size of AI algorithms may hit a wall.

And even if we could keep scaling up hardware to meet the demands of AI, there’s another problem: running these algorithms on traditional computers wastes an enormous amount of energy. The high carbon emissions generated by running large AI algorithms are already harmful to the environment, and they will only get worse as the algorithms grow ever more gigantic.

One solution, called neuromorphic computing, takes inspiration from biological brains to create energy-efficient designs. Unfortunately, while these chips can outpace digital computers in conserving energy, they’ve lacked the computational power needed to run a sizable deep neural network. That’s made them easy for AI researchers to overlook.

That finally changed in August, when Weier Wan, H.-S. Philip Wong, Gert Cauwenberghs and their colleagues revealed a new neuromorphic chip called NeuRRAM that includes 3 million memory cells and thousands of neurons built into its hardware to run algorithms. It uses a relatively new type of memory called resistive RAM, or RRAM. Unlike previous RRAM chips, NeuRRAM is programmed to operate in an analog fashion to save more energy and space. While digital memory is binary — storing either a 1 or a 0 — analog memory cells in the NeuRRAM chip can each store multiple values along a fully continuous range. That allows the chip to store more information from massive AI algorithms in the same amount of chip space.
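
The payoff of that analog design is that a weight matrix stored as conductances can multiply an input vector essentially for free: input voltages drive currents through the memory cells, and each output line sums those currents by Kirchhoff’s current law. The NumPy sketch below simulates that matrix-vector product; the noise scales are illustrative stand-ins for imperfect programming and readout, not measured NeuRRAM characteristics.

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(64, 128))  # a layer's weight matrix
# Programming the weights into cells adds write noise to the conductances.
g = weights + rng.normal(scale=0.02, size=weights.shape)

v = rng.normal(size=128)              # input activations applied as voltages

# Ohm's law + Kirchhoff's current law: each output line sums cell currents.
i_out = g @ v
i_out += rng.normal(scale=0.01, size=i_out.shape)  # read/ADC noise
y = np.tanh(i_out)                    # digitize and apply the activation

# Ideal digital reference for comparison.
y_ref = np.tanh(weights @ v)
print("mean absolute deviation from digital:", float(np.mean(np.abs(y - y_ref))))
```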

As a result, the new chip can perform as well as digital computers on complex AI tasks like image and speech recognition, and the authors claim it is up to 1,000 times more energy efficient. This opens up the possibility of tiny chips running increasingly complicated algorithms within small devices previously unsuitable for AI, such as smartwatches and phones.

Researchers not involved in the work have been deeply impressed by the results. “This paper is pretty unique,” said Zhongrui Wang, a longtime RRAM researcher at the University of Hong Kong. “It makes contributions at different levels — at the device level, at the circuit architecture level, and at the algorithm level.”

Continue reading… “New Chip Expands the Possibilities for AI”

How AI Is Revolutionizing The Ways We Can Detect Mental Illness

By Robin Farmanfarmaian

Predictive AI applications are relatively new to mental and behavioral health, but they are already showing a lot of promise. In a recent publication on detecting suicide risk by analyzing text messages, UW Medicine researchers found that algorithms performed as well as trained evaluators. This is great news for predictive AI and its ability to save lives at risk of suicide through real-time data analysis, when and where the individual is located. It is important because some healthcare providers may be concerned that, when they communicate with a patient by text message, they might miss something they are trained to pick up from voice inflection, facial expression, and other auditory or physical signals. Algorithms like this can help enhance a provider’s ability to assess a patient when communicating by text, an increasingly popular way for people to access mental health care.

Beyond text messaging, many companies are already working on analyzing a person’s speech through vocal biomarkers. The term refers to using someone’s voice and speech as vital signs. Digitizing the human voice and metricizing its various features means software programs can find patterns and detect small changes humans might not recognize. Vocal biomarker measurement and analysis for anxiety, stress, sleepiness and depression are some of the early applications.
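
As a hedged sketch of what “metricizing” a voice can look like, the snippet below extracts a few candidate features with the librosa audio library; the feature choices and the file name are illustrative, and this is not any particular vendor’s pipeline.

```python
import librosa
import numpy as np

# Load a speech recording ("speech.wav" is a placeholder file).
y, sr = librosa.load("speech.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral envelope
f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)       # pitch track
rms = librosa.feature.rms(y=y)                      # loudness over time

# Summarize the tracks into a fixed-length feature vector.
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [np.nanmean(f0), np.nanstd(f0)],
    [rms.mean(), rms.std()],
])
# `features` would feed a downstream model trained to score, say,
# stress or depression severity.
```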

A great example of AI voice technology that can be used directly by healthcare providers now to detect mental illness is Ellipsis Health. By harnessing the power of the human voice as a biomarker for mental health, Ellipsis Health can be used as a clinical decision support tool during clinic visits. Its technology augments the care team by helping to assess the severity of stress, anxiety, and depression.

Continue reading… “How AI Is Revolutionizing The Ways We Can Detect Mental Illness”

Meta’s newest AI determines proper protein folds 60 times faster | Engadget

Life on Earth would not exist as we know it if not for the protein molecules that enable critical processes, from photosynthesis and enzymatic degradation to sight and the immune system. And like most facets of the natural world, humanity has only just begun to discover the multitudes of protein types that actually exist. But rather than scour the most inhospitable parts of the planet in search of novel microorganisms that might harbor a new flavor of organic molecule, Meta researchers have developed a first-of-its-kind metagenomic database, the ESM Metagenomic Atlas, that could accelerate existing protein-folding AI by up to 60 times.

Metagenomics is just coincidentally named. It is a relatively new, but very real, scientific discipline that studies “the structure and function of entire nucleotide sequences isolated and analyzed from all the organisms (typically microbes) in a bulk sample.” Often used to identify the bacterial communities living on our skin or in the soil, these techniques are similar in function to gas chromatography, wherein you’re trying to identify what’s present in a given sample system.

Similar databases have been launched by the NCBI, the European Bioinformatics Institute, and the Joint Genome Institute, and they have already cataloged billions of newly uncovered protein shapes. What Meta brings to the table is “a new protein-folding approach that harnesses large language models to create the first comprehensive view of the structures of proteins in a metagenomics database at the scale of hundreds of millions of proteins,” according to a release from the company. The problem is that, while advances in genomics have revealed the sequences of slews of novel proteins, just knowing a sequence doesn’t actually tell us how it folds into a functioning molecule, and figuring that out experimentally takes anywhere from a few months to a few years. Per molecule. Ain’t nobody got time for that.
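
The folding model behind the Atlas, ESMFold, has been open-sourced in Meta’s fair-esm package. The sketch below shows single-sequence structure prediction assuming the API from that repository; the toy sequence is a placeholder, and the pretrained weights are a large download.

```python
import torch
import esm  # pip install "fair-esm[esmfold]"

# Load the pretrained ESMFold model (downloads weights on first use).
model = esm.pretrained.esmfold_v1().eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder protein sequence
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)     # predicted structure as PDB text

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```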

Continue reading… “Meta’s newest AI determines proper protein folds 60 times faster | Engadget”