RoboTire: The Next Frontier in Automotive Maintenance

Autonomous cars may be grabbing headlines, but robotic mechanics are not far behind. Meet RoboTire, a cutting-edge system with the potential to revolutionize tire changing. It can replace all four tires on a vehicle two to three times faster than a human mechanic, a significant leap in efficiency. While the process currently requires human assistance, RoboTire’s ultimate goal is full autonomy.

When a vehicle pulls onto the platform, RoboTire’s AI-driven machine vision system goes into action. It identifies the wheels, locates the lugs, and guides the robot arms to unscrew the lug nuts or bolts and remove the wheels and tires. The system then transfers them to a Hunter tire-changing machine, passing along the tire size and type for a smooth swap. Throughout the process, a human technician facilitates the handoff between the two systems, loads the new tire into the changer, and monitors the operation.
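The lug-handling step can be sketched in code. The snippet below is purely illustrative (RoboTire’s actual software is proprietary, and every name here is hypothetical): given lug-nut coordinates such as a vision system might report, it estimates the hub center and plans a criss-cross removal order, the pattern mechanics use so a wheel comes off evenly.

```python
import math

def plan_lug_sequence(lug_xy):
    """Given lug-nut (x, y) coordinates, return the estimated hub center
    and a criss-cross ("star") removal order over the lugs."""
    n = len(lug_xy)
    cx = sum(p[0] for p in lug_xy) / n      # hub center as the centroid
    cy = sum(p[1] for p in lug_xy) / n
    # Order the lugs by angle around the hub center.
    ring = sorted(range(n),
                  key=lambda i: math.atan2(lug_xy[i][1] - cy, lug_xy[i][0] - cx))
    if n % 2:
        # Odd count: stepping by two around the ring visits every lug in a star.
        order = [ring[(2 * k) % n] for k in range(n)]
    else:
        # Even count: alternate between opposite pairs across the hub.
        order = [ring[j] for k in range(n // 2) for j in (k, k + n // 2)]
    return (cx, cy), order
```

Calling it with four lugs at compass points returns the hub at the origin and an order that always jumps across the wheel rather than to a neighboring lug.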

Continue reading… “RoboTire: The Next Frontier in Automotive Maintenance”

Unleash Your Creativity with This AI-Powered Video Transformation App

The use of AI-generated images has become increasingly accessible, with tools like DALL-E allowing users to easily create visuals from a basic prompt. While AI video generation is still in its early stages, a new app from AI startup Runway offers a glimpse into the technology’s potential.

The recently launched Runway ML iOS app enables users to transform existing videos from their photo gallery or captured on the app into entirely new videos using prompts, images, or presets. The app’s video-to-video technology, Gen-1, has been available for desktop use since February, and the app version streamlines the process, making it more accessible to users.

Although the results from Runway’s technology can sometimes appear warped or disfigured, it is important to keep in mind that unexpected outputs are common with generative AI, especially in the early stages of AI video generation.

The Runway ML iOS app is free to download from the Apple App Store and already ranks 19th in the Photo & Video category. In-app purchases unlock subscription plans with varying credit limits and features. The Free plan includes 125 credits and grants users three video projects with 720p video exports. The Standard plan costs $143.99 annually and provides 625 credits, unlimited video projects, watermark removal, 1080p exports, and additional features. The Pro plan, which costs $334.99, provides 2,250 credits per month, unlimited video projects, ProRes exports, 500GB of asset storage, and premium perks. The app also teases Runway’s Gen-2 text-to-video technology, which is “coming soon.”

By The ImpactLab

Cerebras Reveals Andromeda, a 13.5 Million Core AI Supercomputer

The world’s largest chip scales to new heights. 

By Paul Alcorn

Cerebras, the company that builds the world’s largest chip, the Wafer Scale Engine 2 (WSE-2), unveiled its Andromeda supercomputer today. Andromeda combines 16 of the wafer-sized WSE-2 chips into one cluster with 13.5 million AI-optimized cores, which the company says delivers up to 1 Exaflop of AI computing horsepower, or 120 Petaflops at 16-bit half precision.

The chips are housed in sixteen CS-2 systems. Each chip delivers up to 12.1 TB/s (96.8 Terabits per second) of internal bandwidth to the AI cores, but data is fed to the CS-2 processors via 100 GbE networking spread across 124 server nodes in 16 racks. In total, those servers are powered by 284 third-gen EPYC Milan processors with 64 cores apiece, totaling 18,176 cores.
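These figures are easy to sanity-check with a little arithmetic. The per-chip core count below is the 850,000 Cerebras quotes for a single WSE-2; the cluster total comes out slightly above the quoted 13.5 million, presumably a matter of rounding or per-wafer yield.

```python
cs2_systems = 16
cores_per_wse2 = 850_000                    # Cerebras's figure for one WSE-2
epyc_cpus, cores_per_epyc = 284, 64

wse2_cores = cs2_systems * cores_per_wse2   # 13,600,000, in line with the
                                            # quoted "13.5 million" cores
epyc_cores = epyc_cpus * cores_per_epyc     # 18,176 host cores, as reported

# 12.1 terabytes/s of internal bandwidth is 12.1 * 8 terabits/s
internal_tbit_s = 12.1 * 8                  # 96.8 Terabits per second
```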

The entire system consumes 500 kW, drastically less power than somewhat-comparable GPU-accelerated supercomputers. However, scaling a workload across such massively parallel machines has long been one of the primary obstacles: at some point scaling tends to break down, and adding more hardware yields rapidly diminishing returns.

Continue reading… “Cerebras Reveals Andromeda, a 13.5 Million Core AI Supercomputer”

Engineers Build Reconfigurable Artificial Intelligence Chip

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. (Image: Figure courtesy of the researchers and edited by MIT News)

The researchers plan to apply the design to edge computing devices.

MIT engineers are taking a modular approach with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip. The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs rely on conventional wiring to relay signals between layers; such intricate connections are difficult if not impossible to sever and rewire, so those stackable designs are not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” said MIT Postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

Continue reading… “Engineers Build Reconfigurable Artificial Intelligence Chip”

Artificial intelligence model finds potential drug molecules a thousand times faster

EquiBind (cyan) predicts the ligand that could fit into a protein pocket (green). The true conformation is in pink.

by Alex Ouyang, Massachusetts Institute of Technology

The known universe is teeming with molecules, but what fraction of them have drug-like traits that could be used to develop life-saving treatments? Millions? Billions? Trillions? The answer: novemdecillion, or 10^60. This gargantuan number prolongs the drug development process for fast-spreading diseases like COVID-19 because it is far beyond what existing drug design models can compute. To put it into perspective, the Milky Way has about 100 billion, or 10^11, stars.
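To make the scale concrete, a one-line comparison (the star count here is the commonly cited rough figure of 10^11 for the Milky Way):

```python
drug_like_molecules = 10 ** 60   # ~1 novemdecillion candidate molecules
milky_way_stars = 10 ** 11       # rough star count for the Milky Way

# Roughly 10^49 candidate molecules for every star in the galaxy.
molecules_per_star = drug_like_molecules // milky_way_stars
```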

In a paper that will be presented at the International Conference on Machine Learning (ICML), MIT researchers developed a geometric deep-learning model called EquiBind that is 1,200 times faster than one of the fastest existing computational molecular docking models, QuickVina2-W, in successfully binding drug-like molecules to proteins. EquiBind is based on its predecessor, EquiDock, which specializes in binding two proteins using a technique developed by the late Octavian-Eugen Ganea, a recent MIT Computer Science and Artificial Intelligence Laboratory and Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) postdoc, who also co-authored the EquiBind paper.

Continue reading… “Artificial intelligence model finds potential drug molecules a thousand times faster”

Researchers develop artificial intelligence that can detect sarcasm in social media

by University of Central Florida

Computer science researchers at the University of Central Florida have developed a sarcasm detector.

Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.

That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion (positive, negative or neutral) associated with a piece of text. Where conventional automated analysis handles the literal content of a message, sentiment analysis aims to correctly identify the emotional communication behind it. A UCF team developed a technique that accurately detects sarcasm in social media text.
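A toy lexicon-based scorer shows both what sentiment analysis does and why sarcasm defeats naive versions of it. This sketch is purely illustrative and is not the UCF approach, which is a far more sophisticated learned model; the word lists are made up for the example.

```python
POSITIVE = {"great", "love", "excellent", "happy", "wonderful"}
NEGATIVE = {"awful", "hate", "delayed", "broken", "terrible"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by counting lexicon hits."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A sarcastic post like “Oh great, another delayed flight. I just love waiting.” scores positive here because its literal words are positive, which is exactly the failure mode a dedicated sarcasm detector has to catch.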

The team’s findings were recently published in the journal Entropy.

Continue reading… “Researchers develop artificial intelligence that can detect sarcasm in social media”

A scientist created emotion recognition AI for animals

“Emotion recognition” might be too strong a term. More like pain recognition.

BY Tristan Greene

A researcher at Wageningen University & Research recently published a pre-print article detailing a system by which facial recognition AI could be used to identify and measure the emotional state of farm animals. If you’re imagining a machine that tells you if your pigs are joyous or your cows are grumpy… you’re spot on.

Up front: There’s little evidence that so-called ‘emotion recognition’ systems actually work. At best, in the sense that humans and other creatures can often accurately recognize (as in: guess) other people’s emotions, an AI can be trained on a human-labeled data set to recognize emotion with similar accuracy to humans.

However, there is no ground truth when it comes to human emotion. Everyone experiences and interprets emotions differently, and how we express emotion on our faces can vary wildly with culture and individual biology.

In short: The same ‘science’ driving systems that claim to tell whether someone is gay, or likely to be aggressive, through facial recognition is behind emotion recognition for both people and farm animals.

Continue reading… “A scientist created emotion recognition AI for animals”

Alphabet’s X moonshot division wants to bring AI to the electric grid

By Chris Davies 

Google parent Alphabet has been working on “a moonshot” for the electric grid, with a secret project in its X R&D division aiming to figure out how to make power use more stable, and more green, than it is today. The research, revealed at the White House Leaders Summit on Climate, has been underway for the past three years. 

The team at X – which began as Google X, and then was spun out into a separate division when Google created Alphabet as its overarching parent – isn’t planning to put up power lines and install solar panels and wind turbines itself. Instead, it’s looking at whether a more holistic understanding of the grid would help in the transition to environmentally stable sources. 

“Right now our work is more questions than answers,” Astro Teller, Captain of Moonshots at X, says, “but the central hypothesis we’ve been exploring is whether creating a single virtualized view of the grid – which doesn’t exist today – could make the grid easier to visualize, plan, build and operate with all kinds of clean energy.”

Teller’s use of “moonshot” is a reference to the original NASA plan to put astronauts on the Moon, a project generally acknowledged as ambitious and ground-breaking, with no immediate path to profit. While Teller leads the division, Alphabet brought in Audrey Zibelman – former CEO of Australian energy operator AEMO and an expert in decarbonization of the electrical system – to lead this particular moonshot.

Continue reading… “Alphabet’s X moonshot division wants to bring AI to the electric grid”

Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’

By Edd Gent 

Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks (which, despite their name, operate very differently from the brain) have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Continue reading… “Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’”

Cerebras launches new AI supercomputing processor with 2.6 trillion transistors

By Dean Takahashi

Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It’s built for supercomputing tasks, and it’s the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer.

Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware.

But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one.

Continue reading… “Cerebras launches new AI supercomputing processor with 2.6 trillion transistors”

Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked

Findings debunk dozens of prominent published papers claiming to read minds with EEG.

By PURDUE UNIVERSITY 

Is it possible to read a person’s mind by analyzing the electric signals from the brain? The answer may be much more complex than most people think.

Purdue University researchers – working at the intersection of artificial intelligence and neuroscience – say a prominent dataset used to try to answer this question is confounded, and that many eye-popping findings based on it, some of which received high-profile recognition, are therefore false.

The Purdue team performed extensive tests on the dataset over more than a year. The dataset recorded the brain activity of study participants as they viewed a series of images, each participant wearing a cap with dozens of electrodes.

The Purdue team’s work is published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The team received funding from the National Science Foundation.

Purdue University researchers are doing work at the intersection of artificial intelligence and neuroscience. In this photo, a research participant is wearing an EEG cap with electrodes.

Continue reading… “Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked”

Artificial intelligence has advanced so much, it wrote this article

“Alter 3: Offloaded Agency,” part of the exhibition “AI: More than Human.”

By Jurica Dujmovic

Natural language processing rivals humans’ skills.

I did not write this article. 

In fact, it wasn’t written by any person. Every sentence you see after this introduction is the work of OpenAI’s GPT-3, a powerful language-prediction model capable of composing sequences of coherent text. The only thing I did was provide it with topics to write about. I did not even fix its grammar or spelling.

According to OpenAI, more than 300 applications are using GPT-3, which is part of a field called natural language processing, and together they generate an average of 4.5 billion words per day. Some say the quality of GPT-3’s text is as good as that written by humans.

What follows is GPT-3’s response to topics in general investing.

Continue reading… “Artificial intelligence has advanced so much, it wrote this article”