Tempur backs $20M investment for an AI-powered robot bed disrupting the future of sleep


The promise of evolutionary learning has long excited AI researchers, but few applications are as meaningful as solving the complex problem of sleep. Now a Silicon Valley startup called Bryte claims to be building the world’s most advanced AI-connected, robotics-powered bed, and the US-based company has just secured a $20 million strategic investment round.

The funding was led by Tempur Sealy International, a company synonymous with the mattress industry. With the latest investment, the two companies intend to collaborate on future products, services, and technology. ARCHina Capital and other existing Bryte investors also participated in the round.

“Our mission is to empower lives through restorative sleep, which starts by reaching as many people as possible, with the most technically advanced products and first-rate services at a complete range of price points. There is simply no company in the world with a more complete and desirable portfolio of brands than Tempur Sealy, and we couldn’t be more excited about their investment,” said Luke Kelly, CEO, Bryte.

“It has long been clear to us that meaningful innovation improves sleep outcomes for millions of people. With Bryte, we have invested in a company that is committed to innovation with an elegant, seamless integrated product that we believe fits our long-term brand strategy. We are excited to form a relationship with their talented team,” said Scott Thompson, Tempur Sealy Chairman and CEO.

Continue reading… “Tempur backs $20M investment for an AI-powered robot bed disrupting the future of sleep”

Google’s DeepMind AI Predicts 3D Structure of Nearly Every Protein Known to Science

This ribbon diagram shows the 3D protein structure of an antibody. Complex? It’s pretty simple for an AI.

By Monisha Ravisetti

At last, the decades-old protein folding problem may finally be put to rest.

It wasn’t until 1957 that scientists gained access to the molecular third dimension.

After 22 years of grueling experimentation, John Kendrew of Cambridge University finally uncovered the 3D structure of a protein. It was a twisted blueprint of myoglobin, the stringy chain of 154 amino acids that helps infuse our muscles with oxygen. As revolutionary as this discovery was, Kendrew didn’t quite open up the protein architecture floodgates. During the next decade, fewer than a dozen more would be identified. 

Fast-forward to today, 65 years since that Nobel Prize-winning breakthrough. 

On Thursday, Google’s sister company, DeepMind, announced it has successfully used artificial intelligence to predict the 3D structures of nearly every catalogued protein known to science. That’s over 200 million proteins found in plants, bacteria, animals, humans — almost anything you can imagine.

“Essentially, you can think of it as covering the entire protein universe,” Demis Hassabis, founder and CEO of DeepMind, told reporters this week.

It’s thanks to AlphaFold, DeepMind’s groundbreaking AI system, whose open database lets scientists worldwide draw on the predictions in their research at will, and for free. Since AlphaFold’s official launch in July of last year, when it had pinpointed only some 350,000 protein structures, the program has made a noticeable dent in the research landscape.
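For readers who want to work with the predictions directly: the AlphaFold database distributes each structure as a standard PDB (or mmCIF) file, and PDB ATOM records are fixed-width text. A minimal, illustrative Python sketch of pulling coordinates out of one such record follows; the sample line below is hand-written for the example, not real AlphaFold output.

```python
# PDB ATOM records are fixed-width: atom name sits in columns 13-16,
# residue name in 18-20, and the x/y/z coordinates in columns 31-54.
def parse_atom(line):
    """Extract atom name, residue name, and (x, y, z) from one PDB ATOM record."""
    return {
        "atom": line[12:16].strip(),
        "residue": line[17:20].strip(),
        "x": float(line[30:38]),
        "y": float(line[38:46]),
        "z": float(line[46:54]),
    }

# Hand-written sample record (illustrative only):
record = "ATOM      1  CA  MET A   1      11.104  13.207   2.100  1.00 92.50           C"
atom = parse_atom(record)  # {'atom': 'CA', 'residue': 'MET', 'x': 11.104, ...}
```

The same slicing approach generalizes to any fixed-column PDB field, though for real work a library such as Biopython is the usual choice.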

Continue reading… “Google’s DeepMind AI Predicts 3D Structure of Nearly Every Protein Known to Science”

Automatic recognition of jellyfish with artificial intelligence

Aequorea victoria.

The jellyfish sighting app MedusApp recently incorporated artificial intelligence (AI) to automatically recognize different species of jellyfish. Until now, the app required users to select the species from a built-in catalog; now users can upload photos and have the species identified automatically before the sighting is published in the app.

MedusApp, which is freely available in Spanish and English for both Android and iPhone, was developed by researchers from the University of Alicante (UA) and two computer scientists from the Polytechnic University of Valencia (UPV), in collaboration with the CIBER of Respiratory Diseases (CIBERES) and the Immunoallergy Laboratory of the Fundación Jiménez Díaz Health Research Institute (IIS-FJD). Since its launch in 2018, the platform has amassed more than 100,000 downloads and 6,000 jellyfish sightings. “Thanks to the collaboration of citizens and their sightings, we have been able to train the AI software with several thousand real photos to generate a mathematical model with a total of 25 species, that will ultimately help the app automatically recognize the most common jellyfish,” a novelty that UPV programmers Eduardo Blasco and Ramón Palacios highlighted.
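The recognition step the developers describe can be sketched abstractly: a trained model assigns a score to each candidate species, and the app reports the best match. Here is a toy Python illustration, with the species list shortened to three Mediterranean jellyfish and all scores invented; this is not MedusApp’s actual model.

```python
import math

# Convert raw model scores (logits) into probabilities that sum to 1.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

species = ["Pelagia noctiluca", "Aurelia aurita", "Cotylorhiza tuberculata"]
raw_scores = [2.1, 0.3, -1.0]            # invented logits for one uploaded photo
probs = softmax(raw_scores)
best = species[probs.index(max(probs))]  # the species the app would suggest
```

In the real app the score vector would come from a network trained on the thousands of citizen-submitted photos, with one entry per catalogued species.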

Continue reading… “Automatic recognition of jellyfish with artificial intelligence”

Ambitious Spanish start-up Trucksters uses AI to halve transit times

By Stuart Todd

Spanish start-up Trucksters has launched an express road freight service to the UK, using artificial intelligence (AI) to reduce transit time.

The operation will start with two routes, one covering the centre and north of Spain, and the other part of the Mediterranean area, carrying mostly foodstuffs.

Trips are completed in 28 to 34 hours, a reduction of almost 50% on standard transit times. The time saving is made possible by an AI-based relay system of drivers that allows the service to operate non-stop, Trucksters co-founder and head of growth Gabor Balogh told The Loadstar.
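The arithmetic behind the relay model is easy to sketch. A solo driver is capped at roughly nine hours of driving per day under EU rules, so a long trip accumulates mandatory rest time; a relay hands the load to fresh drivers at waypoints so the truck keeps moving. All numbers below are illustrative, not Trucksters’ actual figures.

```python
# Solo driver: each block of ~9 driving hours costs a full 24-hour day
# once mandatory rest is included (a deliberate simplification).
def solo_transit_hours(driving_hours, daily_cap=9.0):
    full_days = int(driving_hours // daily_cap)
    remainder = driving_hours - full_days * daily_cap
    return full_days * 24 + remainder

# Relay: the truck never parks; only short handoffs add time.
def relay_transit_hours(driving_hours, handoffs=3, handoff_hours=0.5):
    return driving_hours + handoffs * handoff_hours

drive = 28.0                          # hypothetical wheel time, Spain to UK
solo = solo_transit_hours(drive)      # rest periods dominate the trip
relay = relay_transit_hours(drive)    # close to pure driving time
```

Even with these rough assumptions, the relay figure lands near the pure driving time while the solo figure more than doubles it, which is the effect the service is built around.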

Madrid-based Trucksters already operates a relay service between Spain and the Benelux, Germany and Poland.

Continue reading… “Ambitious Spanish start-up Trucksters uses AI to halve transit times”

Drover AI is using computer vision to keep scooter riders off sidewalks

By Rebecca Bellan

Shared micromobility companies have been adopting startlingly advanced new tech to correct for the thing that cities hate most — sidewalk riding. Some companies, like Bird, Neuron and Superpedestrian, have relied on hyperaccurate GPS systems to determine if a rider is riding inappropriately. Others, like Lime, have started integrating camera-based computer vision systems that rely on AI and machine learning to accurately detect where a rider is.

The latter camp has largely leaned on the innovations of Drover AI, a Los Angeles–based startup that has tested and sold its attachable IoT module to the likes of Spin, Voi, Helbiz, Beam and Fenix to help operators improve scooter safety and, most importantly, win city permits.

Drover, which was founded in May 2020, closed out a $5.4 million Series A Wednesday. The startup will use the funds to continue building the next generation of PathPilot, Drover’s IoT module that contains a camera and a compute system that analyzes visual data and issues commands directly to the scooter. Depending on the city’s needs, the scooters will either make noises to alert a rider that they’re driving on the sidewalk or slow them down. The new version, called PathPilot Lite, will do much the same, except it will be more integrated, better and cheaper, says Drover’s co-founder and chief business officer Alex Nesic.
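The detect-and-respond loop described above can be sketched as a simple policy: the camera classifier labels the riding surface each frame, and the module maps that label to a command. The class names, confidence threshold, and command strings here are hypothetical, not Drover’s real API.

```python
# Map one frame's classification result to a scooter command.
# city_policy selects between the two interventions the article mentions:
# an audio alert or slowing the vehicle down.
def decide_action(surface, confidence, city_policy="slow"):
    if surface != "sidewalk" or confidence < 0.8:
        return "none"                 # street or bike-lane riding: no action
    return "audio_alert" if city_policy == "alert" else "throttle_down"

# Simulated stream of (surface, confidence) frames from the classifier:
frames = [("street", 0.95), ("sidewalk", 0.6), ("sidewalk", 0.9)]
actions = [decide_action(s, c) for s, c in frames]
```

Note the confidence gate: acting only on high-confidence detections is a common way to avoid punishing riders for classifier noise, though the real threshold logic is unknown.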

Drover has modules on over 5,000 vehicles with orders for over 15,000 more that the company needs to deliver by the end of the year, according to Nesic.

Continue reading… “Drover AI is using computer vision to keep scooter riders off sidewalks”


Technology has changed the lives of a lot of people, and not just by providing access to information or readily available communication tools. Many companies are using these advances to create devices and tools that help people living with disabilities lead more independent lives. Those with mobility difficulties from conditions such as multiple sclerosis, stroke, or cerebral palsy can now use a new piece of bionic clothing that is not just functional but also easy to use and a bit fashionable.


Microsoft launches Project AirSim to train AI drone systems

Project AirSim is a flight simulator for drones which can be used by companies to develop and train software controlling them.

By Sahil Pawar

Microsoft has launched a platform named Project AirSim to train the artificial intelligence systems of autonomous aircraft.

The platform makes test flights possible in places usually considered risky, such as near power lines. Millions of flights can also be simulated, letting companies see virtually how a vehicle flies in the rain or how strong winds might affect its battery life.
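The kind of sweep a simulator enables is easy to illustrate: run many virtual flights while varying one condition, such as wind, and compare the outcomes. The battery-drain model below is invented purely for illustration; it is not Project AirSim’s API.

```python
# Toy battery model: percent of charge consumed per flight, where a
# headwind raises power draw linearly. Rates are made-up placeholders.
def battery_used(minutes, wind_mps, base_rate=1.0, wind_penalty=0.05):
    return minutes * base_rate * (1 + wind_penalty * wind_mps)

# Sweep a 30-minute flight across several wind speeds (m/s):
flights = {wind: battery_used(30, wind) for wind in (0, 5, 10, 15)}
```

The value of running this in simulation is that the sweep costs seconds, whereas the same experiment with physical drones would take days and risk hardware near obstacles like power lines.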

In a statement announcing the launch, Microsoft said that the platform would show the power of the industrial metaverse, a virtual world where businesses will build, test, and hone solutions and bring them into the real world. The firm envisages using the technology to train the AI systems which fly autonomous air vehicles, from air taxis to delivery drones.

Continue reading… “Microsoft launches Project AirSim to train AI drone systems”

Smart Beehives Will Monitor their Colonies with AI

A single colony of bees can pollinate up to 300 million flowers daily, and that includes human-managed honeybees. That means that, unlike some livestock or agricultural practices, this is a human activity that benefits the environment and is key to the food system’s sustainability. Now, robotics, artificial intelligence, and big data will bolster the collaboration between humans and bees. That is the proposal of an Israeli startup.

Continue reading… “Smart Beehives Will Monitor their Colonies with AI”


Bees are vital for the planet: they are excellent pollinators and perhaps the most crucial link in maintaining biodiversity. They help ensure food security and diversify the kinds of plants and animals nurtured on the face of the earth. Perhaps that’s why beekeeping and pollination deserve promotion more than almost anything else to maintain the balance.

After the horrors of the Delta Air Lines shipping neglect that killed five million honeybees en route to nurseries in Alaska for pollination of apple orchards, it’s crucial to have ultra-mobile beekeeping methods to safeguard these insects. The 2035 Moving Beehive Mobility concept is something the beekeeping industry needs for good. As the name suggests, it is a high-tech beekeeping nest for responsible culturing, and we’ll need it well before 2035, given all the chaos on the planet.


Harvard-Developed AI Identifies the Shortest Path to Human Happiness

The researchers created a digital model of psychology aimed to improve mental health. The system offers superior personalization and identifies the shortest path toward a cluster of mental stability for any individual.

Deep Longevity, in collaboration with Harvard Medical School, presents a deep learning approach to mental health.

Deep Longevity has published a paper in Aging-US outlining a machine learning approach to human psychology in collaboration with Nancy Etcoff, Ph.D., Harvard Medical School, an authority on happiness and beauty.

The authors created two digital models of human psychology based on data from the Midlife in the United States study.

The first model is an ensemble of deep neural networks that uses information from a psychological survey to predict respondents’ chronological age and their psychological well-being 10 years out. This model depicts the trajectories of the human mind as it ages. It demonstrates that the capacity to form meaningful connections, as well as mental autonomy and environmental mastery, develops with age. It also suggests that the emphasis on personal progress declines steadily, while the sense of having a purpose in life fades only after ages 40-50. These results add to the growing body of knowledge on socioemotional selectivity and hedonic adaptation in the context of adult personality development.
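The phrase “ensemble of deep neural networks” simply means several independently trained models whose predictions are combined, typically by averaging. A toy Python sketch of that idea, with tiny linear stand-ins in place of real networks and all weights made up:

```python
# Each "model" here is a trivial linear function standing in for a
# trained network that maps a survey score to a predicted age.
def make_predictor(w, b):
    return lambda survey_score: w * survey_score + b

# Three hypothetical models with different learned parameters:
ensemble = [make_predictor(0.9, 5.0), make_predictor(1.1, 1.0), make_predictor(1.0, 3.0)]

# The ensemble's prediction is the mean of the member predictions.
def ensemble_predict(survey_score):
    preds = [model(survey_score) for model in ensemble]
    return sum(preds) / len(preds)

age_estimate = ensemble_predict(40.0)
```

Averaging smooths out the idiosyncratic errors of individual models, which is the usual motivation for ensembling in applied deep learning.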

Continue reading… “Harvard-Developed AI Identifies the Shortest Path to Human Happiness”

Cerebras sets record for ‘largest AI model’ on a single chip

Plus: Yandex releases 100-billion-parameter language model for free, and more

By Katyanna Quach

IN BRIEF US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, one powered by its Wafer Scale Engine 2, the world’s largest chip, roughly the size of a dinner plate.

“Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system,” the company claimed this week. “Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes.”

The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.
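A back-of-the-envelope calculation shows why memory capacity bounds model size: at fp16 precision, weights alone cost 2 bytes per parameter. This simplified sketch ignores optimizer state and activations, which add substantial further overhead in practice.

```python
# Gigabytes needed just to hold a model's weights at a given precision.
def weight_gigabytes(n_params, bytes_per_param=2):
    return n_params * bytes_per_param / 1e9

gpt_sizes = {"GPT-J (6B)": 6e9, "CS-2 claim (20B)": 20e9, "GPT-3 (175B)": 175e9}
footprints = {name: weight_gigabytes(n) for name, n in gpt_sizes.items()}
# 20e9 params * 2 bytes = 40 GB, which lines up with the CS-2's stated
# 40 GB of on-chip memory; 175B-class models are roughly 9x larger.
```

The same arithmetic explains the article’s closing point: models with hundreds of billions of parameters can’t fit on one device at these capacities, so distributed training remains unavoidable for the largest systems.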

Even though Cerebras has evidently managed to train the largest model on a single device, it will still struggle to win over big AI customers. The largest neural network systems contain hundreds of billions to trillions of parameters these days. In reality, many more CS-2 systems would be needed to train these models. 

Machine learning engineers will likely run into similar challenges to those they already face when distributing training over numerous machines containing GPUs or TPUs – so why switch over to a less familiar hardware system that does not have as much software support?

Continue reading… “Cerebras sets record for ‘largest AI model’ on a single chip”


By Lauren Forristal

Today, IKEA is launching a new AI-driven interactive design experience called IKEA Kreativ for IKEA.com and the IKEA app. With the new feature, U.S. customers can design and visualize their own living spaces with digitalized furniture on their smartphones instead of traveling to the brick-and-mortar store where they are likely to be distracted by the warehouse-shaped labyrinth of showrooms, blue shopping bags and Swedish meatballs.

Currently, the IKEA Kreativ feature is available on iOS devices and desktops; it will come to Android devices later this summer. The AI experience is expected to launch in additional countries in September, though there are no exact launch dates yet.

With IKEA Kreativ, the company continues taking steps toward digital transformation. According to IKEA, it is the home retail industry’s first fully featured mixed-reality design experience for lifelike and accurate interior design, bridging the gap between e-commerce and in-store customer journeys.

Virtual home design platforms aren’t new. In fact, the Swedish retailer was one of the first furniture companies to ride the AR wave in 2017 with its IKEA Place app. The app works with Apple’s ARKit to allow customers to scan a room and place an IKEA chair, bed, etc., in the space. There is also visual search tech that recommends similar furniture when a user scans an item that already exists in their home. Amazon, Wayfair, Target, The Home Depot, Overstock, Houzz and others have implemented AR apps to help clients design a room with purchasable products, too.