New Chip Expands the Possibilities for AI

An energy-efficient chip called NeuRRAM fixes an old design flaw to run large-scale AI algorithms on smaller devices, reaching the same accuracy as wasteful digital computers.

By Allison Whitten

Artificial intelligence algorithms cannot keep growing at their current pace. Algorithms like deep neural networks — which are loosely inspired by the brain, with multiple layers of artificial neurons linked to each other via numerical values called weights — get bigger every year. But these days, hardware improvements are no longer keeping pace with the enormous amount of memory and processing capacity required to run these massive algorithms. Soon, the size of AI algorithms may hit a wall.

And even if we could keep scaling up hardware to meet the demands of AI, there’s another problem: running these algorithms on traditional computers wastes an enormous amount of energy. The high carbon emissions generated from running large AI algorithms are already harmful to the environment, and the problem will only get worse as the algorithms grow ever more gigantic.

One solution, called neuromorphic computing, takes inspiration from biological brains to create energy-efficient designs. Unfortunately, while these chips can outpace digital computers in conserving energy, they’ve lacked the computational power needed to run a sizable deep neural network. That’s made them easy for AI researchers to overlook.

That finally changed in August, when Weier Wan, H.-S. Philip Wong, Gert Cauwenberghs and their colleagues revealed a new neuromorphic chip called NeuRRAM that includes 3 million memory cells and thousands of neurons built into its hardware to run algorithms. It uses a relatively new type of memory called resistive RAM, or RRAM. Unlike previous RRAM chips, NeuRRAM is programmed to operate in an analog fashion to save more energy and space. While digital memory is binary — storing either a 1 or a 0 — analog memory cells in the NeuRRAM chip can each store multiple values along a fully continuous range. That allows the chip to store more information from massive AI algorithms in the same amount of chip space.
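The analog advantage the article describes can be sketched numerically. The toy below (plain NumPy, with illustrative bit widths and layer sizes that are assumptions, not NeuRRAM’s actual specifications) compares how well 1-bit digital cells and 64-level analog cells reproduce a layer’s multiply-accumulate result:

```python
import numpy as np

# Toy comparison of binary (digital) vs. multi-level (analog) weight storage.
# Sizes and level counts here are illustrative, not NeuRRAM's real design.
rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(4, 8))   # a small layer's weight matrix
inputs = rng.uniform(0, 1, size=8)          # input activations

# Digital memory: one bit per cell, so a weight keeps only its sign.
digital_weights = np.sign(weights)

# Analog memory: each cell holds one of many conductance levels, so a single
# cell approximates the true weight far more closely in the same space.
levels = 64
analog_weights = np.round(weights * (levels / 2)) / (levels / 2)

exact = weights @ inputs
print("1-bit error:    ", np.abs(digital_weights @ inputs - exact).max())
print("64-level error: ", np.abs(analog_weights @ inputs - exact).max())
```

The multi-level cells cut the matrix-vector error dramatically, which is the sense in which analog memory packs “more information … in the same amount of chip space.”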

As a result, the new chip can perform as well as digital computers on complex AI tasks like image and speech recognition, and the authors claim it is up to 1,000 times more energy efficient. That opens up the possibility of tiny chips running increasingly complicated algorithms in small devices previously unsuitable for AI, like smartwatches and phones.

Researchers not involved in the work have been deeply impressed by the results. “This paper is pretty unique,” said Zhongrui Wang, a longtime RRAM researcher at the University of Hong Kong. “It makes contributions at different levels — at the device level, at the circuit architecture level, and at the algorithm level.”

Continue reading… “New Chip Expands the Possibilities for AI”

How AI Is Revolutionizing The Ways We Can Detect Mental Illness

By Robin Farmanfarmaian

Predictive AI applications are relatively new to mental and behavioral health, but they are already showing a lot of promise. In a recent publication on detecting suicide risk by analyzing text messages, UW Medicine researchers found that algorithms performed as well as trained evaluators. This is great news for predictive AI and its potential to save lives through real-time data analysis, wherever the at-risk individual happens to be. It matters because some healthcare providers worry that when they communicate with a patient by text message, they might miss cues they are trained to pick up from voice inflection, facial expression, and other auditory or physical signals. Algorithms like this can enhance a provider’s ability to assess a patient over text, an increasingly popular way for people to access mental health care.

Beyond text messaging, many companies are already working on analyzing a person’s speech through vocal biomarkers: the use of someone’s voice and speech as vital signs. Digitizing the human voice and quantifying the various features of voice and speech means software programs can find patterns and detect small changes humans might not recognize. Vocal biomarker measurement and analysis for anxiety, stress, sleepiness and depression are among the early applications.
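As a rough illustration of what quantifying a voice signal can mean, here is a minimal sketch computing two classic audio features. Real vocal-biomarker systems use far richer feature sets; the function and feature names here are hypothetical:

```python
import numpy as np

def voice_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Compute two simple per-clip voice metrics (illustrative only)."""
    # RMS energy: the overall loudness of the clip.
    rms = float(np.sqrt(np.mean(signal ** 2)))
    # Zero-crossing rate: sign changes per second divided by two, which for
    # a pure tone roughly estimates its fundamental frequency.
    crossings = np.count_nonzero(np.diff(np.sign(signal)))
    zcr = crossings * sample_rate / (2 * len(signal))
    return {"rms_energy": rms, "zero_crossing_rate_hz": zcr}

# A synthetic 200 Hz tone stands in for a one-second voice recording.
sr = 16_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
print(voice_features(tone, sr))
```

Tracking how such metrics drift over time, rather than their absolute values, is the kind of pattern a human listener might miss but software can flag.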

A great example of AI voice technology that can be used directly by healthcare providers now to detect mental illness is Ellipsis Health. By harnessing the power of the human voice as a biomarker for mental health, Ellipsis Health can be used as a clinical decision support tool during clinic visits. Its technology augments the care team by helping to assess the severity of stress, anxiety, and depression.

Continue reading… “How AI Is Revolutionizing The Ways We Can Detect Mental Illness”

Meta’s newest AI determines proper protein folds 60 times faster | Engadget

Life on Earth would not exist as we know it if not for the protein molecules that enable critical processes from photosynthesis and enzymatic degradation to sight and our immune system. And like most facets of the natural world, humanity has only just begun to discover the multitudes of protein types that actually exist. But rather than scour the most inhospitable parts of the planet in search of novel microorganisms that might harbor a new flavor of organic molecule, Meta researchers have developed a first-of-its-kind metagenomic database, the ESM Metagenomic Atlas, that could accelerate existing protein-folding AI performance by 60x.

Metagenomics is just coincidentally named. It is a relatively new, but very real, scientific discipline that studies “the structure and function of entire nucleotide sequences isolated and analyzed from all the organisms (typically microbes) in a bulk sample.” Often used to identify the bacterial communities living on our skin or in the soil, these techniques are similar in function to gas chromatography, wherein you’re trying to identify what’s present in a given sample system.

Similar databases have been launched by the NCBI, the European Bioinformatics Institute, and the Joint Genome Institute, and have already cataloged billions of newly uncovered protein shapes. What Meta is bringing to the table is “a new protein-folding approach that harnesses large language models to create the first comprehensive view of the structures of proteins in a metagenomics database at the scale of hundreds of millions of proteins,” according to a release from the company. The problem is that, while advances in genomics have revealed the sequences of slews of novel proteins, just knowing those sequences doesn’t actually tell us how they fold into a functioning molecule, and figuring that out experimentally takes anywhere from a few months to a few years. Per molecule. Ain’t nobody got time for that.

Continue reading… “Meta’s newest AI determines proper protein folds 60 times faster | Engadget”

CHILLING AI DEVELOPMENT MEANS THAT ROBOTS CAN NOW TALK TO ANIMALS – AND WE MIGHT BE ABLE TO NEXT

Robots can now talk to animals

By Callie Patteson

HUMANS are one step closer to talking to animals as new technologies are allowing artificial intelligence-enabled robots to speak with different species. 

Karen Bakker, a professor at the University of British Columbia, recently revealed this technology is being used to communicate with honeybees, dolphins and elephants and offered up a warning regarding the development. 

“Now, this raises a very serious ethical question, because the ability to speak to other species sounds intriguing and fascinating, but it could be used either to create a deeper sense of kinship, or a sense of dominion and manipulative ability to domesticate wild species that we’ve never as humans been able to previously control,” Bakker said in an interview published by Vox. 

She pointed to the use of artificial intelligence to communicate with honeybees in Germany. 

“A research team in Germany encoded honeybee signals into a robot that they sent into a hive,” Bakker said. 

“That robot is able to use the honeybees’ waggle dance communication to tell the honeybees to stop moving, and it’s able to tell those honeybees where to fly to for a specific nectar source.”

Continue reading… “CHILLING AI DEVELOPMENT MEANS THAT ROBOTS CAN NOW TALK TO ANIMALS – AND WE MIGHT BE ABLE TO NEXT”

AI and biobanks could open the way to longevity treatments

Kristen Fortney. Image/BioAge

BY JONATHAN SMITH

The development of longevity treatments is hampered by a lack of biomarkers and validated drug targets. BioAge’s co-founder and CEO, Kristen Fortney, explains how the firm is enlisting machine learning (ML), artificial intelligence (AI) and biobanks to fill in the gaps.

The quest for human longevity treatments is attracting big cash in the biotech industry. One of the most impressive investments entering the field was the $3 billion launch of the U.S. anti-aging company Altos Labs in January 2022.

However, efforts to extend our healthy lifespan are dogged by a lack of clear biomarkers that correlate with the aging process. Researchers have observed a set of hallmarks of aging, such as the breakdown of cells, the depletion of stem cells in tissues and unstable DNA, but the search for reliable drug targets to slow the aging process remains difficult. 

In 2020, for example, the U.S. company Unity Biotechnology hit a major setback when its lead drug targeting aged, or “senescent,” cells failed to treat the age-related condition osteoarthritis in a phase 2 trial.

To overcome the challenges of developing longevity treatments, BioAge was launched in the U.S. in 2015. The firm raised a $90 million Series C round in late 2020 to finance the development of small molecule drug candidates for age-related conditions including anemia, muscle atrophy and COVID-19. BioAge also paired up with Age Lab AS in August 2022 to tap into the latter firm’s biobank — containing tissue samples collected from healthy humans over many years.

In an interview, BioAge’s co-founder and CEO, Kristen Fortney, told us how ML and AI in addition to biobank sampling can fuel the search for drug targets in longevity treatment.

Continue reading… “AI and biobanks could open the way to longevity treatments”

Developing AI Technology Enabling Robots to Communicate With Some Animal Species

By Joshua Stan

The civilization humans inhabit is filled with sounds that can’t be heard. Bats chitter and talk in ultrasound; elephants growl infrasonic secrets to one another; and coastal ecosystems are aquatic clubs teeming with the cracks, hisses and clicks of marine life. For millennia, mankind had no idea these sounds existed. As technology advances, however, so does our ability to listen.

Drones, digital recorders, and artificial intelligence are already allowing humans to listen to the sounds of nature in new ways, altering the field of scientific inquiry and offering the intriguing potential that computers may eventually allow us to communicate with animals. Humans are one step closer to communicating with animals, thanks to new technologies that enable artificial intelligence-powered robots to converse with various species.

Karen Bakker, a University of British Columbia professor, recently disclosed that this innovation is being employed to speak with honeybees, dolphins, and elephants, and she issued a caution about the development.

Continue reading… “Developing AI Technology Enabling Robots to Communicate With Some Animal Species”

Would YOU try a ‘human washing machine’? Japanese scientists are developing a futuristic AI bath that washes you with tiny bubbles while blasting out soothing music and videos

By FIONA JACKSON

  • A ‘human washing machine’ is being developed that can ‘wash the mind’ 
  • High-speed water containing microbubbles is used to clean the user’s body
  • At the same time, their heart rate is measured to gauge their level of relaxation
  • An artificial intelligence uses this data to choose the best video for them

If the bubbles, rose petals and scented candles aren’t enough to soothe you after a long day, your dream bath may be just around the corner.

Scientists in Japan are developing a ‘human washing machine’ that cleans your body while playing a relaxing video chosen for you by artificial intelligence (AI).

The ultrasonic bath will blast users with high-speed water containing extremely fine air bubbles which remove dirt from the pores.

Continue reading… “Would YOU try a ‘human washing machine’? Japanese scientists are developing a futuristic AI bath that washes you with tiny bubbles while blasting out soothing music and videos”

Synthetic data for AI fills gaps in edge cases

Movie magic: computer generated images of automotive scenarios provide valuable synthetic data for AI.

By James Tyrrell

Self-driving car developers safely explore extreme scenarios during autonomous vehicle training thanks to the rise of synthetic data for AI. 

Deep learning has pushed the capabilities of artificial intelligence to new levels, but there are still some kinks to straighten out, particularly in safety-critical applications such as self-driving cars. If an artificial intelligence (AI) recommendation engine gets its predictions wrong and puts a strange advert in your browser window, you might raise an eyebrow, but no long-term damage is done. Things are very different, of course, when algorithms get behind the wheel and encounter something they’ve never seen before. Rare events, or edge cases, present a tricky problem for developers of autonomous vehicles. Fortunately, synthetic data for AI, based on lifelike simulations of real-world events, could help to fill in the gaps.
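The core idea of filling edge-case gaps with simulation can be sketched in a few lines: in a simulator you control the scenario parameters, so rare real-world events can be sampled as often as you like. All parameter names and rates below are illustrative assumptions, not taken from any real autonomous-driving pipeline:

```python
import random

random.seed(42)

def sample_scenario(oversample_rare: bool) -> dict:
    """Generate one hypothetical driving scene description."""
    # Heavy fog might appear in ~1% of real driving logs; in simulation
    # we can simply dial the rate up so the model sees it constantly.
    fog_rate = 0.30 if oversample_rare else 0.01
    return {
        "weather": "heavy_fog" if random.random() < fog_rate else "clear",
        "pedestrian_crossing": random.random() < (0.25 if oversample_rare else 0.02),
    }

real_like = [sample_scenario(False) for _ in range(10_000)]
synthetic = [sample_scenario(True) for _ in range(10_000)]

count_fog = lambda scenes: sum(s["weather"] == "heavy_fog" for s in scenes)
print("fog scenes, real-like distribution:", count_fog(real_like))
print("fog scenes, synthetic oversampling:", count_fog(synthetic))
```

The synthetic set contains roughly thirty times as many fog scenes, which is exactly the leverage a training pipeline wants when the dangerous cases are the rare ones.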

Continue reading… “Synthetic data for AI fills gaps in edge cases”

A new AI model can accurately predict human response to novel drug compounds

The journey between identifying a potential therapeutic compound and Food and Drug Administration approval of a new drug can take well over a decade and cost upward of a billion dollars. A research team at the CUNY Graduate Center has created an artificial intelligence model that could significantly improve the accuracy and reduce the time and cost of the drug development process.

Described in a newly published paper in Nature Machine Intelligence, the new model, called CODE-AE, can screen novel drug compounds to accurately predict efficacy in humans. In tests, it was also able to theoretically identify personalized drugs for over 9,000 patients that could better treat their conditions. Researchers expect the technique to significantly accelerate drug discovery and precision medicine.

Accurate and robust prediction of patient-specific responses to a new chemical compound is critical to discover safe and effective therapeutics and select an existing drug for a specific patient. However, it is unethical and infeasible to do early efficacy testing of a drug in humans directly. Cell or tissue models are often used as a surrogate of the human body to evaluate the therapeutic effect of a drug molecule. Unfortunately, the drug effect in a disease model often does not correlate with the drug efficacy and toxicity in human patients. This knowledge gap is a major factor in the high costs and low productivity rates of drug discovery.

Continue reading… “A new AI model can accurately predict human response to novel drug compounds”

Google Announces Text-to-Video AI Generator that Creates HD Video

 By MATT GROWCOOT

Hot on the heels of Meta’s text-to-video generator, Google has announced its own artificially intelligent (AI) video generator. 

Google’s Imagen Video is still in its development phase, but the company says it will be capable of producing 1280×768 videos at 24 frames per second from a written prompt. 

According to Google’s research paper, Imagen Video will have stylistic abilities, such as generating videos based on the work of famous artists like Vincent van Gogh. It will also generate 3D rotating objects while preserving their structure and rendering text in various animation styles. 

Google hopes that its AI-video model can “significantly decrease the difficulty of high-quality content generation.” Imagen Video builds on Google’s Imagen, a text-to-image program similar to OpenAI’s DALL-E. 

Continue reading… “Google Announces Text-to-Video AI Generator that Creates HD Video”

Google AI Allows You to ‘Fly’ Into a Landscape Photograph

 By MATT GROWCOOT

Google has created a program where the viewer can “fly into” a still photo using artificially intelligent (AI) 3D models. 

In a new paper entitled InfiniteNature-Zero, the researchers take a landscape photo and then use AI to “fly” into it like a bird, with clever software generating a fake landscape thanks to machine learning. 

Facing this daunting task, the researchers had to fill in information that a still photo doesn’t provide, such as areas hidden in the frame. A spot concealed behind trees, for example, needs to be generated. This is done by “inpainting”: the AI simulates what it thinks would be there, drawing on machine learning over huge datasets. 

Similarly, to get the flying effect, the AI has to generate what lies outside the photograph’s borders. This is called “outpainting,” and it works much like the content-aware tool in Photoshop, where the AI generates a wider image based on the original photo, aided by its deep learning from massive datasets. 

As anyone who has ever zoomed into a photo knows, image quality falls apart as it breaks down into blurry pixels. To stop this from happening, Google uses super-resolution, a process in which AI synthesizes a noisy, pixelated image into a crisp one. 
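The hole-filling idea behind inpainting can be illustrated with a deliberately simple, non-learned stand-in: repeatedly averaging each unknown pixel with its neighbors (a diffusion fill). Google’s models learn far more than this from data; the sketch below only conveys the concept of reconstructing hidden regions from their surroundings:

```python
import numpy as np

def diffuse_inpaint(image: np.ndarray, mask: np.ndarray, steps: int = 200) -> np.ndarray:
    """Fill pixels where mask is True by repeatedly averaging known neighbors."""
    out = image.copy()
    out[mask] = out[~mask].mean()          # crude initial guess for the hole
    for _ in range(steps):
        # Average of the four axis-aligned neighbors (edges clamped).
        padded = np.pad(out, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
        out[mask] = neighbors[mask]        # only unknown pixels are updated
    return out

# A smooth 32x32 gradient with a square hole punched out of the middle.
img = np.linspace(0, 1, 32).reshape(1, -1).repeat(32, axis=0)
hole = np.zeros_like(img, dtype=bool)
hole[12:20, 12:20] = True
restored = diffuse_inpaint(img, hole)
print("max fill error:", float(np.abs(restored[hole] - img[hole]).max()))
```

On smooth content the diffusion fill is nearly exact; the reason learned models exist is that real scenes (trees, texture, geometry) are anything but smooth.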

Continue reading… “Google AI Allows You to ‘Fly’ Into a Landscape Photograph”

Tesla Showed “Cybertruck On Mars” AI-Generated Images During AI Day

Tesla used artificial intelligence on its supercomputer to create images of made-up Cybertrucks. 

By: Andrei Nedelea

Tesla says it used artificial intelligence running on its Dojo supercomputer to create six images of Tesla trucks on the surface of Mars. Shown during the recent AI Day presentation, the images are interesting in their own right, and they show both the power of such tools and their limitations.

What makes the Dojo supercomputer special is that it doesn’t use third-party components, with all its internals being designed in-house by Tesla. And whereas previously Tesla used Nvidia graphics processors, now even those have been replaced with Tesla’s own chips. The manufacturer developed Dojo especially to reduce the latency that its neural network developers encounter when making updates.

The images shown during AI Day were also processed by a Dojo supercomputer, using an internal software tool that looks similar to others which are publicly available. But you don’t need your own Dojo to get similar, or even better results, as we found out using a text-to-image AI generation platform called Midjourney.

Continue reading… “Tesla Showed “Cybertruck On Mars” AI-Generated Images During AI Day”