NVIDIA Unveils New Advances in Robotics and AI at SIGGRAPH

At SIGGRAPH in Denver, NVIDIA Corporation introduced groundbreaking research and innovations in simulation, generative artificial intelligence, and robotics. The company announced a comprehensive suite of services, models, and computing platforms designed to empower robotics and AI developers to “develop, train, and build the next generation of humanoid robotics.”

“The next wave of AI is robotics, and one of the most exciting developments is humanoid robots,” stated Jensen Huang, founder and CEO of NVIDIA. “We’re advancing the entire NVIDIA robotics stack, opening access for worldwide humanoid robotics developers and companies to use the platforms, acceleration libraries, and AI models best suited for their needs.”

Continue reading… “NVIDIA Unveils New Advances in Robotics and AI at SIGGRAPH”

Mimicking Mantis Vision: A Breakthrough in Artificial Eyes for Autonomous Systems

Self-driving cars are prone to collisions in part because their visual systems can’t always process static or slow-moving objects in 3D space. This issue is reminiscent of the monocular vision found in many insects, which excels at motion-tracking but lacks depth perception. The praying mantis, however, stands out with its exceptional vision, thanks to its binocular depth perception.
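The principle behind binocular depth perception can be sketched with the standard pinhole stereo model: the same object appears shifted (the disparity) between two eyes or cameras, and distance follows from triangulation. This is an illustrative sketch only, not the UVA researchers' design; the focal length, baseline, and disparity values are invented.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate distance from stereo disparity: Z = f * B / d (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# An object shifted 20 px between the two views of a camera pair with a
# 700 px focal length and a 5 cm baseline sits roughly 1.75 m away:
print(depth_from_disparity(700.0, 0.05, 20.0))
```

The key property is that nearby objects, even static ones, produce large disparities, which is why two offset viewpoints recover depth that a single moving camera can miss.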

Inspired by the praying mantis, researchers at the University of Virginia School of Engineering and Applied Science have developed artificial compound eyes that address significant limitations in current visual data collection and processing systems. These limitations include accuracy issues, data processing lag times, and the need for substantial computational power.

Continue reading… “Mimicking Mantis Vision: A Breakthrough in Artificial Eyes for Autonomous Systems”

NOAA Launches Final GOES-R Satellite, Enhancing Environmental Monitoring with AI

On June 25, the National Oceanic and Atmospheric Administration (NOAA) launched GOES-U, the fourth and final satellite of the Geostationary Operational Environmental Satellites (GOES)-R program. This launch continues and extends the program’s mission to serve as the Western Hemisphere’s most advanced system for observing weather and monitoring the environment, as endorsed by the World Meteorological Organization.

Since the first GOES-R satellite launch in November 2016, these satellites have equipped NOAA with sophisticated imagery and atmospheric measurements, real-time lightning activity mapping, space weather observations, and other critical data collected by an array of sensors and imagers.

Continue reading… “NOAA Launches Final GOES-R Satellite, Enhancing Environmental Monitoring with AI”

AI Breakthrough: Predicting Rogue Waves Up to Five Minutes in Advance

Rogue waves, those unexpectedly massive ridges of water that can ambush ships and beachgoers, are now more predictable thanks to a new artificial intelligence model. Mechanical engineers Thomas Breunung and Balakumar Balachandran from the University of Maryland in College Park report their findings in the July 18 issue of Scientific Reports.

These waves, which crest more than twice as high as surrounding swells, can form where converging waves amplify a single ridge or where ocean currents compress swells into powerful billows. Despite recognizing certain patterns preceding these surges, researchers had not yet developed an effective forecasting tool (SN: 6/8/15). Such a tool could be lifesaving, given that from 2011 to 2018, rogue waves were responsible for 386 deaths and the sinking of 24 ships.
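The rogue-wave definition used above (a crest more than twice as high as the surrounding swells) maps onto the standard oceanographic criterion: height exceeding twice the significant wave height, the mean of the highest third of recorded waves. The sketch below shows that labeling rule, not the authors' forecasting model, and the buoy readings are made up.

```python
def significant_wave_height(heights):
    """H_s: mean height of the highest third of recorded waves."""
    top_third = sorted(heights, reverse=True)[: max(1, len(heights) // 3)]
    return sum(top_third) / len(top_third)

def is_rogue(wave_height, heights):
    """Standard criterion: a rogue wave exceeds twice the significant wave height."""
    return wave_height > 2 * significant_wave_height(heights)

swells = [1.2, 1.5, 1.1, 1.8, 1.4, 1.6]  # meters, hypothetical buoy record
print(is_rogue(4.0, swells))  # a 4 m wave against ~1.7 m H_s -> True
```

A forecasting model like the one reported would be trained to predict this label minutes ahead of time from the preceding surface measurements, rather than merely detecting the wave once it arrives.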

Continue reading… “AI Breakthrough: Predicting Rogue Waves Up to Five Minutes in Advance”

Advancing Autonomous Navigation: NC State Researchers Enhance 3D Mapping with AI

Researchers at North Carolina State University have developed an innovative technique that allows artificial intelligence (AI) programs to more accurately map three-dimensional (3D) spaces using two-dimensional (2D) images captured by multiple cameras. This advancement promises to significantly improve the navigation capabilities of autonomous vehicles while operating efficiently with limited computational resources.

“Most autonomous vehicles use powerful AI programs called vision transformers to take 2D images from multiple cameras and create a representation of the 3D space around the vehicle,” explained Tianfu Wu, Ph.D., Associate Professor of Electrical and Computer Engineering at North Carolina State University. “However, while each of these AI programs takes a different approach, there is still substantial room for improvement.”

Continue reading… “Advancing Autonomous Navigation: NC State Researchers Enhance 3D Mapping with AI”

Neuromorphic Computing: The Future of AI Hardware

While much of the tech world remains fixated on the latest large language models (LLMs) powered by Nvidia GPUs, a quieter revolution is brewing in AI hardware. As the limitations and energy demands of traditional deep learning architectures become increasingly apparent, a new paradigm called neuromorphic computing is emerging – one that promises to slash the computational and power requirements of AI by orders of magnitude. To delve into this promising technology, VentureBeat spoke with Sumeet Kumar, CEO and founder of Innatera, a leading startup in the neuromorphic chip space.

“Neuromorphic processors are designed to mimic the way biological brains process information,” Kumar explained. “Rather than performing sequential operations on data stored in memory, neuromorphic chips use networks of artificial neurons that communicate through spikes, much like real neurons.” This brain-inspired architecture gives neuromorphic systems distinct advantages, particularly for edge computing applications in consumer devices and industrial IoT.
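The spike-based processing Kumar describes can be illustrated with a leaky integrate-and-fire neuron, the textbook building block of spiking networks: the membrane potential decays, accumulates input, and emits a discrete spike only when it crosses a threshold. This is a toy sketch with invented parameters, not Innatera's architecture.

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: returns a 0/1 spike train."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current   # membrane potential leaks, then integrates input
        if v >= threshold:
            spikes.append(1)     # fire: the neuron's only output event
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Sparse input yields sparse output -- work is done only when spikes occur:
print(lif_run([0.6, 0.6, 0.0, 0.0, 1.2]))  # [0, 1, 0, 0, 1]
```

This event-driven sparsity is the source of the energy advantage: between spikes there is nothing to compute, unlike a dense matrix multiply that runs on every input regardless of content.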

Continue reading… “Neuromorphic Computing: The Future of AI Hardware”

Revolutionizing Software Engineering: The Impact of Large Language Models

The rapid advancement of large language models (LLMs) is transforming various fields, including software engineering. In just a few years, LLMs have evolved from advanced code autocomplete tools to AI agents capable of designing software, implementing and correcting entire modules, and enhancing software engineers’ productivity.

While some of the excitement around AI-powered software engineering agents is overblown, there is undeniable value for developers who harness these new AI tools to accomplish more in less time. There are three main ways that LLMs are changing the coding experience:

Continue reading… “Revolutionizing Software Engineering: The Impact of Large Language Models”

Revolutionizing Agriculture: The Transformative Power of Edge AI

The transformative power of artificial intelligence (AI) is poised to make a significant impact on one of the world’s oldest and most critical sectors: agriculture. A new study suggests that “edge AI” could revolutionize farming practices, boost productivity, and achieve sustainability goals across the global food chain.

Edge AI involves running AI algorithms directly on local devices “at the edge” of a network rather than in a centralized data center. This technology has the potential to enhance farming practices by integrating sensors and AI into smart farm vehicles and machines, facilitating precise irrigation and agrochemical application. According to the study, this precision can reduce the use of water, fertilizers, and agrochemicals, advancing sustainability strategies on farms.
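The defining feature of the edge approach is that the decision loop closes on the device itself, with no round trip to a data center. A hypothetical sketch, with invented thresholds and sensor values, of the kind of local irrigation decision such a machine might make:

```python
def irrigation_needed(soil_moisture_pct, rain_forecast_mm, dry_threshold=30.0):
    """Decide on-device whether to irrigate: soil is dry and no rain is expected.
    All values and thresholds here are illustrative, not from the study."""
    return soil_moisture_pct < dry_threshold and rain_forecast_mm < 1.0

# Readings from a hypothetical on-vehicle sensor:
print(irrigation_needed(22.5, 0.0))   # dry soil, no rain -> irrigate
print(irrigation_needed(22.5, 5.0))   # rain coming -> hold off, save water
```

A production system would replace this threshold rule with a trained model, but the architectural point is the same: the sensor reading never has to leave the field for a decision to be made.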

Continue reading… “Revolutionizing Agriculture: The Transformative Power of Edge AI”

Ray Kurzweil: The Singularity Is Nearer and AI’s Future

Ray Kurzweil, a renowned American computer scientist and techno-optimist, is a long-time authority on artificial intelligence (AI). His 2005 bestselling book, The Singularity Is Near, captivated audiences with its sci-fi-like predictions that computers would achieve human-level intelligence by 2029 and that humans would merge with computers to become superhuman by around 2045—a phenomenon he termed “the Singularity.” Now, nearly 20 years later, Kurzweil, 76, has released a sequel, The Singularity Is Nearer, and some of his predictions no longer seem so far-fetched. Kurzweil, currently a principal researcher and AI visionary at Google, shared his insights with the Observer in his personal capacity as an author, inventor, and futurist.

Continue reading… “Ray Kurzweil: The Singularity Is Nearer and AI’s Future”

Advanced AUVs: Revolutionizing Underwater Conservation with AI

Covering nearly 80% of the planet, the underwater environment is critical to maintaining ecological balance and supporting human well-being. Effective conservation relies on a thorough understanding of underwater species distribution and ecosystem dynamics, but gathering that understanding can be time-consuming and costly.

A team of U.S. National Science Foundation-funded researchers at the Minnesota Interactive Robotics and Vision Laboratory is working to overcome these challenges. They are developing advanced autonomous underwater vehicles (AUVs) powered by artificial intelligence to collect vast amounts of data, provide detailed insights into species distribution, and create comprehensive habitat maps to understand environmental drivers.

Continue reading… “Advanced AUVs: Revolutionizing Underwater Conservation with AI”

Revolutionizing Balance Assessment: AI and Wearable Sensors Lead the Way

Traditionally, physicians have relied on subjective observations and specialized equipment to gauge balance in individuals with conditions such as Parkinson’s disease, neurological injuries, and age-related decline. These methods, especially the subjective ones, can lack precision, be difficult to administer remotely, and often prove inconsistent. Addressing these limitations, researchers from Florida Atlantic University have developed a novel approach using wearable sensors and advanced machine learning algorithms that could redefine balance assessment practices.

The researchers utilized wearable Inertial Measurement Unit (IMU) sensors placed on five body locations: ankle, lumbar, sternum, wrist, and arm. Data collection followed the Modified Clinical Test of Sensory Interaction on Balance (m-CTSIB) protocol, testing four sensory conditions: eyes open and closed on stable and foam surfaces. Each test lasted roughly 11 seconds, simulating continuous balance scenarios.
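One common way to turn an IMU trace like this into a balance metric is postural sway, for example the RMS of zero-meaned acceleration over the trial: more wobble means a larger RMS. This is an illustrative sketch only, not FAU's pipeline, and the sample values below are made up.

```python
import math

def sway_rms(accel_samples):
    """Root-mean-square of zero-meaned acceleration: larger RMS = more sway."""
    mean = sum(accel_samples) / len(accel_samples)
    return math.sqrt(sum((a - mean) ** 2 for a in accel_samples) / len(accel_samples))

# Hypothetical lumbar-sensor readings (m/s^2) from two m-CTSIB conditions:
eyes_open   = [0.02, -0.01, 0.03, -0.02, 0.01]   # eyes open, stable surface
eyes_closed = [0.08, -0.06, 0.09, -0.07, 0.05]   # eyes closed, foam surface
print(sway_rms(eyes_closed) > sway_rms(eyes_open))  # True: harder condition, more sway
```

A machine learning model would consume many such features across the five sensor locations and four sensory conditions, rather than a single RMS value, but the mapping from raw motion to a sway score is the core idea.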

Continue reading… “Revolutionizing Balance Assessment: AI and Wearable Sensors Lead the Way”

New AI Model Enhances Understanding of Human Decision-Making

Human beings often behave irrationally—or as an artificially intelligent robot might say, “sub-optimally.” Data, the emotionless yet affable android from Star Trek: The Next Generation, frequently struggled to understand humans’ flawed decision-making processes. If he had been programmed with a new model developed by researchers at MIT and the University of Washington, he might have had an easier time.

In a paper published last month, Athul Paul Jacob, a Ph.D. student in AI at MIT, Dr. Jacob Andreas, his academic advisor, and Abhishek Gupta, an assistant professor in computer science and engineering at the University of Washington, described a novel approach to modeling an agent’s behavior. They employed their method to predict human goals and actions.

Continue reading… “New AI Model Enhances Understanding of Human Decision-Making”