New Buzz Lightyear Toy Includes Conversational AI and Voice Recognition

By ERIC HAL SCHWARTZ

Robot toymaker Robosen has debuted a new Buzz Lightyear toy, based on the recent Disney and Pixar film, that uses conversational AI and voice recognition to interact with children. The robot incorporates natural language understanding to detect when it is addressed and respond in character, though the effect is closer to the Toy Story action figure that comes to life when no humans are around. The $650 robot is available for pre-order and will ship next spring, when its price will rise to $800.

New AI Algorithm Could Lead to an Epilepsy Cure

Epilepsy is a neurological condition in which brain nerve cell activity is disturbed, resulting in seizures.

The AI algorithm detects brain abnormalities that cause epileptic seizures.

International researchers working under the direction of University College London have created an artificial intelligence (AI) algorithm that can identify subtle brain abnormalities that cause epileptic seizures.

To create the algorithm, which reveals where abnormalities occur in cases of drug-resistant focal cortical dysplasia (FCD), a major cause of epilepsy, the Multicentre Epilepsy Lesion Detection project (MELD) analyzed more than 1,000 patient MRI scans from 22 international epilepsy centers.

FCDs are brain regions that have developed abnormally and often cause drug-resistant epilepsy. They are typically treated with surgery; however, finding the lesions on an MRI remains an ongoing challenge for physicians, since scans of FCDs can appear normal.

To develop the algorithm, the scientists used MRI scans to measure cortical features, such as how thick or folded the cortex (the brain’s surface) was, at about 300,000 locations across the brain. Expert radiologists then classified each example as either containing an FCD or being healthy, and these labeled examples served as the algorithm’s training data.
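
The excerpt does not describe the MELD pipeline in detail, but training a supervised classifier on per-location cortical features of this kind might look roughly like the sketch below; the feature values, labels, and model choice are all illustrative assumptions, not the project’s actual method.

```python
# Illustrative sketch only: fit a classifier on per-location cortical features
# (e.g., thickness, folding) with radiologist-provided labels. The feature set,
# shapes, and model are assumptions, not the MELD project's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_locations, n_features = 300_000, 4          # ~300,000 cortical locations per brain
X = np.random.rand(n_locations, n_features)   # stand-in for thickness, folding, etc.
y = np.random.randint(0, 2, n_locations)      # 1 = within an FCD lesion, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```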

According to the results, which were published in the journal Brain, the algorithm was successful in identifying the FCD in 67% of cases in the cohort (538 participants).

AI is getting better at generating porn. We might not be prepared for the consequences.

Tech ethicists and sex workers alike brace for impact

By Kyle Wiggers and Amanda Silberling

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms spit out of her shoulders.

AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.

But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.

AI porn is about as unsettling and imperfect as you’d expect (that redhead on the moon was likely not generated by someone with an extra-arm fetish). But as the tech continues to improve, it will raise challenging questions for AI ethicists and sex workers alike.

Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.

MIT’s new AI model can successfully detect Parkinson’s disease

Apart from detecting Parkinson’s, the new model showed promise in estimating the severity of the disease.

Written by Sethu Pradeep 

A new artificial intelligence model developed by researchers at MIT shows great promise in detecting Parkinson’s disease from breathing patterns.

MIT researchers have developed an early-research artificial intelligence model that has demonstrated success in detecting Parkinson’s disease from breathing patterns. The model relies on data collected by a device that detects breathing patterns in a contactless manner using radio waves.

Neurological disorders are among the leading sources of disability globally, and Parkinson’s disease is the fastest-growing neurological disease in the world. Parkinson’s is difficult to diagnose because diagnosis relies primarily on the appearance of symptoms such as tremors and slowness, which usually emerge several years after the onset of the disease.

The model also estimated the severity and progression of Parkinson’s, in accordance with the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS), which is the standard rating scale used clinically. The research findings have been published in the journal Nature Medicine.

The researchers trained the model using nocturnal breathing data (data collected while subjects were asleep) from several hospitals in the US and some public datasets. After training, they tested the model on a dataset that was not used in training and found that it diagnosed Parkinson’s disease with about 90 per cent accuracy when analyzing a single night of a patient’s sleep data, rising to 95 per cent accuracy when analyzing sleep data from 12 nights.
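
The excerpt does not say how the model combines data across nights, but pooling per-night predictions is one plausible way accuracy could improve with more nights of data. The sketch below is purely illustrative; `predict_night_probability` is an assumed stand-in for the trained model, not the MIT team’s published method.

```python
# Illustrative sketch: combine per-night model outputs into one diagnosis.
# `predict_night_probability` is an assumed placeholder model.
import numpy as np

def predict_night_probability(night_signal: np.ndarray) -> float:
    """Assumed model call: returns P(Parkinson's) for one night of breathing data."""
    return float(np.clip(night_signal.mean(), 0.0, 1.0))  # placeholder logic

def diagnose(nights: list[np.ndarray], threshold: float = 0.5) -> bool:
    # Averaging probabilities over many nights damps out any single noisy night,
    # which is one way accuracy could rise from roughly 90 to 95 per cent.
    probs = [predict_night_probability(night) for night in nights]
    return float(np.mean(probs)) >= threshold

twelve_nights = [np.random.rand(8 * 3600) for _ in range(12)]  # 8 hours sampled at 1 Hz
print("Parkinson's predicted:", diagnose(twelve_nights))
```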

The relationship between Parkinson’s and breathing has been known since 1817, as observed by James Parkinson in his research. There has also been previous research into how Parkinson’s patients develop sleep breathing disorders, weakness in the function of respiratory muscles, and degeneration in brainstem areas that control breathing.

The AI Vision System Set to Revolutionize Whole Body Scans

Radiologists have long had the capability to scan entire bodies. But identifying all the body’s many internal structures is much harder. Now an AI system can do it instead. 

Whole-body imaging is a technique that scans a person’s insides for the early warning signs of heart disease, cancer and other worrying conditions. There are various ways to make these scans, but the most common uses X-rays to create images of body slices. A computer then fits the images back together to create a 3D model of the whole body.

This can be used to plan certain types of surgery, but it is also offered as a kind of screening service to provide peace of mind to health-conscious individuals—at least that’s the promise.

The reality is that whole body CT scans are difficult to analyze, not least because it is hard to identify all the different organs from the mass of tissues that make up the human body. So physicians have turned to computer vision systems to do the job instead. 

The task is to identify organs, structures and bones, as well as their three-dimensional shapes, using the data from the scan. However, the current crop of algorithms does not work particularly well, say Jakob Wasserthal and colleagues at the University Hospital Basel in Switzerland.
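
This excerpt does not describe the Basel group’s system, but the task is commonly framed as voxel-wise segmentation: every voxel in the 3D CT volume is assigned an organ label, and each organ’s shape falls out as the set of voxels carrying that label. The sketch below is a schematic illustration of that framing, with an assumed placeholder in place of a real segmentation model.

```python
# Schematic sketch of voxel-wise CT segmentation; `segmentation_model` is an
# assumed placeholder, not the Basel group's actual system.
import numpy as np

ORGAN_LABELS = {0: "background", 1: "liver", 2: "left kidney", 3: "right kidney"}

def segmentation_model(ct_volume: np.ndarray) -> np.ndarray:
    """Assumed stand-in: returns one organ label per voxel."""
    return np.random.randint(0, len(ORGAN_LABELS), size=ct_volume.shape)

ct_volume = np.random.rand(128, 128, 128)   # toy 3D CT volume (z, y, x)
labels = segmentation_model(ct_volume)

# Each organ's 3D shape is simply the set of voxels carrying its label.
for label_id, name in ORGAN_LABELS.items():
    print(f"{name}: {int((labels == label_id).sum())} voxels")
```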

Google Just Stepped Up the Game for Text-to-Image AI

Google has announced DreamBooth, a new approach built on text-to-image diffusion models. The AI tool can generate a myriad of images of a user’s desired subject in different contexts, guided by a text prompt.

“Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook?”, reads the introduction of the paper.

The key idea is to let users create photorealistic renditions of a specific subject by binding that subject to the text-to-image diffusion model, which makes the tool effective at synthesizing the subject in different contexts.

Google’s DreamBooth takes a somewhat different approach from other recently released text-to-image tools such as DALL-E 2, Stable Diffusion, Imagen, and Midjourney, providing more control over the subject image and then guiding the diffusion model with text-based inputs.
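
A rough sketch of how such subject binding is commonly described: a handful of photos of the subject are paired with prompts containing a rare identifier token, and the diffusion model is fine-tuned on those pairs so the identifier can later be reused in new prompts. Everything in the snippet below (file names, the training function, the token) is an illustrative assumption, not Google’s released code.

```python
# Schematic sketch of DreamBooth-style subject binding; every name here is a
# placeholder assumption, not Google's implementation or a real library API.
import random

SUBJECT_TOKEN = "[V]"   # rare identifier that gets bound to the subject
subject_images = ["dog_photo_1.png", "dog_photo_2.png", "dog_photo_3.png"]

def diffusion_training_step(image_path: str, prompt: str) -> float:
    """Assumed stand-in for one denoising-loss update on an (image, prompt) pair."""
    return random.random()   # placeholder "loss"

# Fine-tune: pair each subject photo with a prompt containing the identifier.
for epoch in range(3):
    for path in subject_images:
        loss = diffusion_training_step(path, f"a photo of {SUBJECT_TOKEN} dog")
        print(f"epoch {epoch}, {path}: loss={loss:.3f}")

# After fine-tuning, new prompts reuse the identifier to place the subject in
# unseen contexts, e.g. "a photo of [V] dog in the most exclusive showroom in Paris".
```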

Super-fast EV charging might be possible with AI and machine learning

Battery-specific chargers are on the horizon.

By Can Emir

Researchers from Idaho National Laboratory are using machine learning and other advanced analysis to reduce electric vehicle charging times without damaging the battery, a press release revealed.

Despite the growing popularity of electric vehicles, many consumers hesitate to make the switch. One of the primary reasons is that it takes much longer to power up an electric car than to gas up a vehicle with an internal combustion engine. This hesitation reflects range anxiety, and the usual remedy, buying a long-range electric vehicle, can be pricey.
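
The release excerpted here does not explain the method, but one common way machine learning is applied to this problem is to learn, for a specific battery, how much damage a given charging current causes, and then pick the fastest current that stays under a damage budget. The sketch below is purely illustrative and assumes a made-up degradation model.

```python
# Purely illustrative: choose the fastest charging current whose predicted
# battery degradation stays below a limit. The "model" is a made-up assumption,
# not Idaho National Laboratory's analysis.
def predicted_degradation(current_amps: float) -> float:
    """Assumed learned model: degradation grows quickly at high currents."""
    return 0.001 * current_amps ** 1.5

DEGRADATION_LIMIT = 0.5
best = max(
    (c for c in range(10, 301, 10) if predicted_degradation(c) <= DEGRADATION_LIMIT),
    default=10,
)
print(f"fastest safe charging current: {best} A "
      f"(predicted degradation {predicted_degradation(best):.3f})")
```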

Is your doctor providing the right treatment? This healthcare AI tool can help

By Sean Michael Kerner

How does a medical professional stay aware of the right procedures and treatments for patient ailments in the modern world? While many rely on experience, there is another approach that could have life-saving consequences, and it relies heavily on the power of artificial intelligence (AI).

New York-based medical startup H1 released a new update to its HCP Universe platform today to inject a dose of healthcare AI into medical intelligence. The HCP Universe platform is currently used by medical affairs teams at life sciences companies, which make sure doctors are aware of and use the latest science and medicine. 

How AI is being used to improve 3D printing

By Adam Zewe

  • Scientists and engineers often rely on manual trial and error to find the optimal parameters for consistently and effectively 3D printing new materials.
  • But researchers have now streamlined the process by training a machine-learning model to monitor and adjust the 3D printing process, correcting errors in real time.
  • The system could help engineers easily incorporate novel materials into their prints and allow technicians to adjust the printing process if material or environmental conditions change unexpectedly.

Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex, costly conundrum.

Often, an expert operator must use manual trial-and-error — possibly making thousands of prints — to determine ideal parameters that consistently print a new material effectively. These parameters include printing speed and how much material the printer deposits.

MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and correct errors in how it handles the material in real time.
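
The excerpt does not specify the MIT system’s vision model or control parameters, so the following is only a rough sketch of what such a closed-loop corrector might look like, with `estimate_flow_error` standing in for the computer-vision component.

```python
# Illustrative closed-loop print correction; the vision model, gains, and
# parameters here are assumptions, not the MIT researchers' actual system.
import random

def estimate_flow_error(frame) -> float:
    """Assumed vision stand-in: positive means over-extrusion, negative under-extrusion."""
    return random.uniform(-0.2, 0.2)

feed_rate = 1.0      # relative material feed rate
print_speed = 40.0   # mm/s
GAIN = 0.5           # how aggressively to correct

for step in range(5):
    frame = None                        # would be a camera image of the nozzle
    error = estimate_flow_error(frame)
    feed_rate -= GAIN * error           # deposit less material when over-extruding
    print_speed *= 1.0 - 0.1 * error    # slow down slightly when flow runs high
    print(f"step {step}: error={error:+.3f} feed={feed_rate:.3f} speed={print_speed:.1f} mm/s")
```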

Hearing aids are getting smarter. Think AI, health tracking

Traditional hearing aid makers and the likes of Bose and Harman are pouring resources into augmented hearing and “hearable” devices that do more than improve sound.

By Shara Tibken and Roger Cheng

People with hearing problems have more options than ever before.

This is part of CNET’s “Tech Enabled” series about the role technology plays in helping the disability community.

When Shannon Conn puts her hearing aids in her ears in the morning, a few things happen.

The lights in her bathroom turn on.

The coffee maker starts brewing.  

When someone rings the doorbell, the chime streams straight into her ear.

Conn, a 43-year-old special education advocate from College Grove, Tennessee, wears the Oticon Opn, which features the ability to link up with other connected devices. She’s put that capability to good use, connecting her Opn to her smart home and setting up commands that allow her hearing aids to trigger her house’s morning routine.

“I can’t live without them,” Conn says. “I haven’t said that in a really long time about anything.”

Augmented reality could be the future of paper books, according to new research

Augmented reality might allow printed books to make a comeback against the e-book trend, according to researchers from the University of Surrey.

Surrey has introduced the third-generation (3G) version of its Next Generation Paper (NGP) project, which allows the reader to consume information on the printed page and on a screen side by side.

Dr. Radu Sporea, senior lecturer at the Advanced Technology Institute (ATI), comments:

“The way we consume literature has changed over time with so many more options than just paper books. Multiple electronic solutions currently exist, including e-readers and smart devices, but no hybrid solution which is sustainable on a commercial scale.

“Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”

Tempur backs $20M investment for an AI-powered robot bed disrupting the future of sleep

BY AKANSHA DIMRI

The promise of evolutionary learning has long excited AI researchers, but few applications are as meaningful as solving the complex problem of sleep. Now a Silicon Valley startup called Bryte claims to be creating the world’s most advanced AI-connected, robotics-powered bed, and the US-based tech company has secured a $20 million strategic investment round.

The funding was led by Tempur Sealy International, a company synonymous with the mattress industry. With the latest investment, the two companies intend to collaborate on future products, services, and technology. ARCHina Capital and other existing Bryte investors also participated in the funding round.

“Our mission is to empower lives through restorative sleep, which starts by reaching as many people as possible, with the most technically advanced products and first-rate services at a complete range of price points. There is simply no company in the world with a more complete and desirable portfolio of brands than Tempur Sealy, and we couldn’t be more excited about their investment,” said Luke Kelly, CEO, Bryte.

“It has long been clear to us that meaningful innovation improves sleep outcomes for millions of people. With Bryte, we have invested in a company that is committed to innovation with an elegant, seamless integrated product that we believe fits our long-term brand strategy. We are excited to form a relationship with their talented team,” said Scott Thompson, Tempur Sealy Chairman and CEO.
