Digital Twin Multi Network Models Could Aid Personalized Therapy, Biomarker, and Drug Discovery

An international team of researchers has developed advanced computer models, or “digital twins,” of diseases that can identify dynamic genome- and cellulome-wide, disease-associated changes in cells across time. The research, published in Genome Medicine, was carried out with the goal of improving diagnosis and treatment, and it underlines the complexity of disease and the necessity of using the right treatment at the right time. The scientists, headed by Mikael Benson, PhD, at Linköping University and Karolinska Institutet, reported on the development of one model to identify the most important disease protein in hay fever.

In their published paper, titled, “A dynamic single cell‑based framework for digital twins to prioritize disease genes and drug targets,” the investigators concluded, “We propose that our framework allows organization and prioritization of UR [upstream regulator] genes for biomarker and drug discovery. This may have far-reaching clinical implications, including identification of biomarkers for personalized treatment, new drug candidates, and time-dependent personalized prescriptions of drug combinations.”

Continue reading… “Digital Twin Multi Network Models Could Aid Personalized Therapy, Biomarker, and Drug Discovery”

MyHeritage’s deepfake tool animates ancient photos and it’s as weird as it sounds

By Mehreen Kasana

The genealogy service is using artificial intelligence-powered tools as part of a marketing campaign to drum up new subscribers.

Nostalgia sells and marketers know it. People like to fantasize about a past they think was better than it likely was — and wonder what it might have been like for their relatives who lived through it. To capitalize on this, a genealogy-tracking service called MyHeritage has launched an AI-powered tool called Deep Nostalgia, which animates old photos of users’ family members, whether deceased or otherwise.

Several users of the service have taken to Twitter to share animated images of their great-grandparents exhibiting various facial expressions. The style of each video is almost the same: the subject moves their eyes around and then tilts their head a little, as if trying to recall something in answer to a question, before returning their gaze to the viewer. But then, it’s early days for the service, and odds are it’ll get a lot more flexible eventually.

Continue reading… “MyHeritage’s deepfake tool animates ancient photos and it’s as weird as it sounds”

AI researchers use heartbeat detection to identify deepfake videos

Facebook and Twitter earlier this week took down social media accounts associated with the Internet Research Agency, the Russian troll farm that interfered in the U.S. presidential election four years ago and spread misinformation to as many as 126 million Facebook users. Today, Facebook rolled out measures aimed at curbing disinformation ahead of Election Day in November. Deepfakes can make epic memes or put Nicolas Cage in every movie, but they can also undermine elections. As threats of election interference mount, two teams of AI researchers have recently introduced novel approaches to identifying deepfakes by watching for evidence of heartbeats.

Existing deepfake detection models focus on traditional media forensics methods, like tracking unnatural eyelid movements or distortions at the edge of the face. The first study on detecting unique GAN fingerprints appeared in 2018. But photoplethysmography (PPG) infers a human heartbeat from visual cues, such as the slight changes in skin color caused by blood flow. Remote PPG applications are being explored in areas like health care, but PPG is also being used to identify deepfakes because generative models are not currently known to be able to mimic human blood movements.
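Neither team's pipeline is reproduced in this summary, but the core remote-PPG idea can be sketched simply: average a color channel over a face region in each frame, then look for a dominant frequency in the plausible heart-rate band. The function name, the green-channel choice, and the synthetic clip below are illustrative assumptions, not either group's actual method.

```python
import numpy as np

def estimate_heart_rate(frames, fps=30.0):
    """Estimate a pulse rate (bpm) from a stack of face-region video frames.

    frames: array of shape (T, H, W, 3), RGB. The green channel is the most
    sensitive to blood-volume changes, so we average it per frame, remove the
    DC offset, and pick the dominant frequency in the plausible heart-rate
    band (0.7-4 Hz, i.e. 42-240 bpm).
    """
    signal = frames[..., 1].mean(axis=(1, 2))        # mean green intensity per frame
    signal = signal - signal.mean()                   # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)            # restrict to heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                                # beats per minute

# Synthetic check: a face patch whose brightness pulses at 1.2 Hz (72 bpm).
t = np.arange(300) / 30.0                             # 10 s of 30 fps video
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = 128 + pulse[:, None, None, None] * np.ones((300, 8, 8, 3))
print(round(estimate_heart_rate(frames)))             # → 72
```

A deepfake detector built on this signal would then check whether a face exhibits a coherent, physiologically plausible pulse at all — something current generative models don't reproduce.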

Continue reading… “AI researchers use heartbeat detection to identify deepfake videos”

HTC is prototyping an AR headset that looks like sunglasses

The HTC Proton concept rendering

It’s still a work in progress

HTC just announced updates to the Vive Cosmos, its lineup of consumer-ready virtual reality headsets. But it’s also testing a more streamlined mixed reality device codenamed “Project Proton.” While the Proton is just a prototype, HTC shared concept images of its design, shedding some light on the company’s goals.

The Proton headset seems functionally similar to the upcoming Cosmos XR. Both are built for mixed or augmented reality experiences, but unlike Microsoft or Magic Leap’s mixed reality glasses, they use passthrough video instead of transparent waveguide lenses. (So basically, you’re looking at a VR-style screen, but it shows you live video overlaid with virtual elements.) But where the Cosmos XR looks like the Cosmos VR headset, the Proton looks more like ski goggles or — to put it generously — very large sunglasses.

Continue reading… “HTC is prototyping an AR headset that looks like sunglasses”

Researchers tout AI that can predict 25 video frames into the future

AI video prediction

AI and machine learning algorithms are becoming increasingly good at predicting next actions in videos. The very best can anticipate fairly accurately where a baseball might travel after it has been pitched, or the appearance of a road miles from a starting position. Building on this, a novel approach proposed by researchers at Google, the University of Michigan, and Adobe advances the state of the art with large-scale models that generate high-quality videos from only a few frames. All the more impressive, the approach does so without relying on techniques like optical flow (the pattern of apparent motion of objects, surfaces, and edges in a scene) or landmarks, as previous methods have.

“In this work, we investigate whether we can achieve high-quality video predictions … by just maximizing the capacity of a standard neural network,” wrote the researchers in a preprint paper describing their work. “To the best of our knowledge, this work is the first to perform a thorough investigation on the effect of capacity increases for video prediction.”

Continue reading… “Researchers tout AI that can predict 25 video frames into the future”

MIT scientists develop a way to recover details from blurry images

Side view of pedestrians rushing in Hong Kong

A group of MIT researchers has developed a way to recover lost details from images and create clear copies of motion-blurred parts in videos. Their creation, an algorithm called a “visual deprojection model,” is based on a convolutional neural network. They trained that network by feeding it pairs of low-quality images and their high-quality counterparts, so it could learn how high-quality sources degrade into blurry, barely visible footage.

When the model is used to process previously unseen low-quality images with blurred elements, it analyzes them to figure out what in the video could’ve produced the blur. It then synthesizes new images that combine data from both the clearer and blurry parts of a video. Say you have footage of your yard with something moving on screen — the technology can create a version of that video that clearly shows the movement’s sources.

During the team’s tests, the model was able to recreate 24 frames of a video showing a particular person’s gait, their size and the position of their legs. Before you get excited and think it could one day make CSI’s “zoom and enhance” a reality, know that the researchers are more focused on refining the technology for medical use. They believe it could be used to convert 2D images like X-rays into 3D images with more information like CT scans at no additional cost — 3D scans are a lot more expensive — making it especially valuable for developing nations.

“If we can convert X-rays to CT scans, that would be somewhat game-changing. You could just take an X-ray and push it through our algorithm and see all the lost information.”

Via Engadget.com

Online dating in a world of deepfakes

Facebook has teamed up with the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany–SUNY to build the Deepfake Detection Challenge (DFDC).

Deepfake detection is an arms race with no end in sight. In case you are wondering… no, this technology will not protect the 2020 election from deepfakes. No science is up to that task.

Facebook’s goal is to commission a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge. This “benchmark data” will be used to help developers build better tools to detect deepfakes. Everyone should applaud this effort! As I’ve written about recently, deepfakes will be used extensively by both good and bad people.

Facebook also announced it was bringing its dating service to the U.S. after testing it in roughly 20 countries since its launch last year. These two stories may not seem to have much connection at first glance. But when combined, they present a potential reality as sinister as it is deceitful. Imagine online dating in a world replete with deepfakes.

Continue reading… “Online dating in a world of deepfakes”

Facebook is challenging researchers to build a deep fakes detector

Why it makes sense to fight deepfakes with deepfakes.

Deepfakes are becoming so convincing that it’s hard to tell them from real videos. And that could soon spell disaster, eroding trust in what we see online.

That’s why Facebook is teaming up with a consortium of researchers from Microsoft and several prominent research universities for a “Deepfake Detection Challenge.”

The idea is to build a data set, with the help of human user input, that’ll help neural networks detect what is and isn’t a deepfake. The end result, if all goes well, will be a system that can reliably detect fake videos online.

Continue reading… “Facebook is challenging researchers to build a deep fakes detector”

Microsoft’s tech can make your hologram speak another language

This exec doesn’t speak Japanese — but it sure looks like she does.

You no longer need to speak another language to look like you’re fluent in it — to anyone, anywhere.

On Wednesday, Microsoft executive Julia White took the stage at the company’s Inspire partner conference to demonstrate how it’s now possible to not only create an incredibly life-like hologram of a person, but to then make the hologram speak another language in the person’s own voice.

This demo was possible thanks to a combination of two existing technologies — mixed reality and neural text-to-speech — and it foreshadows a future in which tech greatly diminishes existing barriers in human communication.

Continue reading… “Microsoft’s tech can make your hologram speak another language”

Chinese vertical dramas made for phone viewing show the future of mobile video

Mobile video is a big deal, but you don’t need me to tell you that. Big Tech has been moving fast into the mobile video space for a few years now, and recently a slew of mobile-specific content has arrived.

Instagram launched IGTV in 2018, and is pushing creators to explore what’s possible for mobile video. Netflix introduced vertical 30-second previews, and is now experimenting with mobile-first features like vibrating movies. Spotify is releasing vertical music videos. Snap is delivering plenty of premium mobile video content with its Snap Originals, and has more on the way.

But compared to traditional film, which has been around since 1895, mobile video is still a newborn baby. And for new parents, a good way to learn parenting is to look at what others are doing. On that note, mobile video producers should direct their attention to a format Chinese media companies have been experimenting with: the vertical drama (竖屏剧; shùpíngjù).

Continue reading… “Chinese vertical dramas made for phone viewing show the future of mobile video”

‘Deepfakes’ called new election threat, with no easy fix

WASHINGTON (AP) — “Deepfake” videos pose a clear and growing threat to America’s national security, lawmakers and experts say. The question is what to do about it, and that’s not easily answered.

A House Intelligence Committee hearing Thursday served up a public warning about the deceptive powers of artificial intelligence software and offered a sobering assessment of how fast the technology is outpacing efforts to stop it.

With a crudely altered video of House Speaker Nancy Pelosi, D-Calif., fresh on everyone’s minds, lawmakers heard from experts how difficult it will be to combat these fakes and prevent them from being used to interfere in the 2020 election.

Continue reading… “‘Deepfakes’ called new election threat, with no easy fix”

Forget 8K, the Insta360 Titan records 11K that can still play back on smartphones

Insta360, the company behind cameras like the Insta360 One X, is aiming to redefine cinematic 360 with 11K footage captured by larger Micro Four Thirds sensors. On Monday, January 7, Insta360 unveiled the Titan, a cinematic 360 camera that the company says is the first standalone 360 camera to shoot in 11K. The Titan also uses the largest sensors for a standalone 360, Insta360 says, with eight Micro Four Thirds sensors.

The Titan, designed as a high-end cinematic virtual reality camera, captures 11K at 30 fps in the 360 format or 10K at 30 fps in the 3D format necessary for VR. The camera can also drop the resolution for faster frame rates, including 8K at 60 fps and 5.3K at 120 fps. Insta360 says the Micro Four Thirds sensors are essential to capturing a cinematic quality, since many 360 cameras use smaller sensors like the ones inside smartphones and action cameras.

Continue reading… “Forget 8K, the Insta360 Titan records 11K that can still play back on smartphones”
