AI researchers use heartbeat detection to identify deepfake videos

Facebook and Twitter earlier this week took down social media accounts associated with the Internet Research Agency, the Russian troll farm that interfered in the U.S. presidential election four years ago and spread misinformation to as many as 126 million Facebook users. Today, Facebook rolled out measures aimed at curbing disinformation ahead of Election Day in November. Deepfakes can make epic memes or put Nicolas Cage in every movie, but they can also undermine elections. As threats of election interference mount, two teams of AI researchers have recently introduced novel approaches to identifying deepfakes by watching for evidence of heartbeats.

Existing deepfake detection models rely on traditional media forensics methods, like tracking unnatural eyelid movements or distortions at the edge of the face; the first study detecting the unique fingerprints GANs leave in their output appeared in 2018. Photoplethysmography (PPG) takes a different approach: it translates visual cues, such as the slight changes in skin color caused by blood flow, into a heartbeat signal. Remote PPG applications are already being explored in areas like health care, and PPG can also be used to identify deepfakes because generative models are not currently known to mimic these subtle traces of human blood flow.
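To make the mechanism concrete, here is a minimal sketch of remote PPG extraction, assuming OpenCV and SciPy; it illustrates the general technique, not the pipeline from either study. It averages the green channel over a detected face, band-pass filters that trace to plausible heart rates, and reads off the dominant frequency. The detector model and filter settings are illustrative choices.

```python
# Minimal remote-PPG sketch (illustrative, not either study's detector):
# estimate a heart rate from skin-color fluctuations in a face video.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate_bpm(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Blood flow slightly modulates skin color; the green channel
        # carries the strongest PPG signal, so track its mean per frame.
        trace.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()

    sig = np.asarray(trace) - np.mean(trace)
    # Keep only plausible heart rates: 0.7 to 4 Hz, i.e. 42 to 240 bpm.
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, sig)
    # The dominant frequency of the filtered trace is the pulse.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    power = np.abs(np.fft.rfft(filtered)) ** 2
    return 60.0 * freqs[np.argmax(power)]
```

A detector built on this idea checks whether such a signal exists and behaves consistently across regions of the face, something current generative models are not known to reproduce.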

Continue reading… “AI researchers use heartbeat detection to identify deepfake videos”

Heron Systems’ AI pilot just beat a human in a simulated dogfight

The final round of DARPA’s AlphaDogfight Trials is complete, and the winning AI pilot capped its victory over a field of virtual contenders by going on to defeat a human F-16 pilot. The winning agent, developed by Heron Systems, took the shootout after defeating a fellow AI from Lockheed Martin.

Continue reading… “Heron Systems’ AI pilot just beat a human in a simulated dogfight”

A.I. can tell if you’re a good surgeon just by scanning your brain

Could a brain scan be the best way to identify a top-notch surgeon? Well, kind of. Researchers at Rensselaer Polytechnic Institute and the University at Buffalo have developed Brain-NET, a deep learning A.I. tool that can accurately predict a surgeon’s certification scores based on their neuroimaging data.

This certification score, issued under the Fundamentals of Laparoscopic Surgery (FLS) program, is currently calculated manually using a formula that is extremely time- and labor-intensive. The idea behind it is to give an objective assessment of surgical skills, thereby demonstrating effective training.

“The Fundamental of Laparoscopic Surgery program has been adopted nationally for surgical residents, fellows and practicing physicians to learn and practice laparoscopic skills to have the opportunity to definitely measure and document those skills,” Xavier Intes, a professor of biomedical engineering at Rensselaer, told Digital Trends. “One key aspect of such [a] program is a scoring metric that is computed based on the time of the surgical task execution, as well as error estimation.”
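The exact FLS formula is not public, so the following is a purely hypothetical illustration of the shape Intes describes: a score that rewards fast task completion and deducts for estimated errors. Every name and constant here is an invented placeholder.

```python
# Purely hypothetical scoring sketch: the real FLS formula is not
# published. It only mirrors the stated inputs, completion time and
# an error estimate; every constant below is an invented placeholder.
def task_score(completion_time_s, estimated_errors,
               cutoff_time_s=300.0, error_penalty=10.0):
    # Time component: full credit for instant completion, no credit
    # at or beyond the cutoff time.
    time_score = max(0.0, 1.0 - completion_time_s / cutoff_time_s) * 100.0
    # Each estimated error deducts a fixed penalty from the score.
    return max(0.0, time_score - error_penalty * estimated_errors)
```

Brain-NET’s contribution is predicting a score of this kind directly from neuroimaging data, skipping the manual calculation.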

Continue reading… “A.I. can tell if you’re a good surgeon just by scanning your brain”

The 6 unholy AI systems thou shalt not develop

TL;DR: don’t pretend a Magic 8 Ball is a useful tool for grownups, and don’t build hate machines

Artificial intelligence may be the most powerful tool humans have. When applied properly to a problem suited for it, AI allows humans to do amazing things. We can diagnose cancer at a glance or give a voice to those who cannot speak by simply applying the right algorithm in the correct way.

But AI isn’t a panacea or cure-all. In fact, when improperly applied, it’s a dangerous snake oil that should be avoided at all costs. To that end, I present six types of AI that I believe ethical developers should avoid.

Continue reading… “The 6 unholy AI systems thou shalt not develop”

Study: Only 18% of data science students are learning about AI ethics

The neglect of AI ethics extends from universities to industry

A study by data science firm Anaconda found an absence of AI ethics initiatives in both academia and industry.

Amid a growing backlash over AI‘s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.

The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to make AI ethical. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Continue reading… “Study: Only 18% of data science students are learning about AI ethics”

Microsoft CTO Kevin Scott believes artificial intelligence will help reprogram the American dream

Microsoft Chief Technology Officer Kevin Scott’s rise to his current post is about as unlikely as you will find. He grew up in Gladys, Virginia, a town of a few hundred people. He loved his family and his hometown to such an extent that he did not aspire to leave. He caught the technology bug in the 1970s by chance, and that passion would provide a ticket to bigger places that he did not initially seek.

The issue was one of opportunity. In his formative years, jobs were disappearing in places like Gladys just as they were multiplying in tech hubs like Silicon Valley. He pursued a PhD in computer science at the University of Virginia but left in 2003, prior to completing his dissertation, to join Google, where he rose to Senior Engineering Director. He moved to LinkedIn in 2011 and eventually became its Senior Vice President of Engineering & Operations. From LinkedIn, he joined Microsoft three and a half years ago as CTO. He is deeply satisfied with the course of his career and its trajectory, but part of him laments that it took him so far from his roots and the hometown that he loves.

As he reflected further on this conundrum, he put his thoughts to paper and published the book, Reprogramming the American Dream in April, co-authored by Greg Shaw. As he noted in a conversation I recently had with him, “Silicon Valley is a perfectly wonderful place, but we should be able to create opportunity and prosperity everywhere, not just in these coastal urban innovation centers.”

Scott believes that machine learning and artificial intelligence will be key ingredients to aiding an entrepreneurial rise in smaller towns across the United States. These advances will place less of a burden on companies to hire employees in the small towns, as some technical development will be conducted by the bots. He also hopes that as some of these businesses blossom, more kids will be inspired to start their own businesses powered by technology, creating a virtuous cycle of sorts.

Continue reading… “Microsoft CTO Kevin Scott believes artificial intelligence will help reprogram the American dream”

Artificial intelligence makes blurry faces look more than 60 times sharper

This AI turns blurry pixelated photos into hyperrealistic portraits that look like real people. The system automatically increases any image’s resolution up to 64x, ‘imagining’ features such as pores and eyelashes that weren’t there in the first place.

Duke University researchers have developed an AI tool that can turn blurry, unrecognizable pictures of people’s faces into eerily convincing computer-generated portraits, in finer detail than ever before.

Previous methods can scale an image of a face up to eight times its original resolution. But the Duke team has come up with a way to take a handful of pixels and create realistic-looking faces with up to 64 times the resolution, ‘imagining’ features such as fine lines, eyelashes and stubble that weren’t there in the first place.

“Never have super-resolution images been created at this resolution before with this much detail,” said Duke computer scientist Cynthia Rudin, who led the team.

The system cannot be used to identify people, the researchers say: It won’t turn an out-of-focus, unrecognizable photo from a security camera into a crystal clear image of a real person. Rather, it is capable of generating new faces that don’t exist, but look plausibly real.
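A minimal sketch of the underlying idea, often called latent-space search, may help: rather than sharpening pixels directly, search a pretrained face generator’s latent space for a high-resolution image that downscales to the blurry input. `G`, the latent size, and the optimizer settings below are placeholder assumptions, not the team’s released code.

```python
# Sketch of super-resolution via latent-space search: find a latent
# code whose generated high-res face downscales to the low-res input.
# `G` stands in for any pretrained face generator (an assumption here).
import torch
import torch.nn.functional as F

def upsample_by_search(lowres, G, latent_dim=512, steps=500, lr=0.1):
    # lowres: (1, 3, h, w) tensor; G(z) returns, e.g., (1, 3, 1024, 1024).
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        highres = G(z)
        # Downscale the candidate and compare to the real input: any
        # high-res face that passes this check is a plausible answer.
        down = F.interpolate(highres, size=lowres.shape[-2:],
                             mode="bicubic", align_corners=False)
        loss = F.mse_loss(down, lowres)
        loss.backward()
        opt.step()
    return G(z).detach()
```

This framing also explains why the output cannot identify anyone: many different high-resolution faces downscale to the same few pixels, and the search simply returns one plausible candidate.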

While the researchers focused on faces as a proof of concept, the same technique could in theory take low-res shots of almost anything and create sharp, realistic-looking pictures, with applications ranging from medicine and microscopy to astronomy and satellite imagery, said co-author Sachit Menon ’20, who just graduated from Duke with a double-major in mathematics and computer science.

Continue reading… “Artificial intelligence makes blurry faces look more than 60 times sharper”

Microsoft sacks journalists to replace them with robots

Users of the homepages of the MSN website and Edge browser will now see news stories generated by AI

Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software.

Staff who maintain the news homepages on Microsoft’s MSN website and its Edge browser – used by millions of Britons every day – have been told that they will no longer be required because robots can now do their jobs.

Around 27 individuals employed by PA Media – formerly the Press Association – were told on Thursday that they would lose their jobs in a month’s time after Microsoft decided to stop employing humans to select, edit and curate news articles on its homepages.

Continue reading… “Microsoft sacks journalists to replace them with robots”

Self-supervised learning is the future of AI

Despite the huge contributions of deep learning to the field of artificial intelligence, there’s something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn’t emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
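As one concrete illustration of how a model can learn from data without human labels, here is a minimal sketch of a classic self-supervised pretext task, rotation prediction; the tiny network and random tensors are placeholders for a real architecture and an unlabeled dataset.

```python
# Minimal self-supervised pretext task (rotation prediction, one classic
# formulation): the labels are generated from the data itself, so no
# human annotation is required. Network and data are toy placeholders.
import torch
import torch.nn as nn

def rotation_batch(images):
    # images: (n, c, h, w) unlabeled batch. Rotate each image by a
    # random multiple of 90 degrees and keep that multiple as the label.
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, ks)])
    return rotated, ks

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 4))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

images = torch.randn(16, 3, 32, 32)   # stand-in for unlabeled images
rotated, targets = rotation_batch(images)
loss = nn.functional.cross_entropy(encoder(rotated), targets)
loss.backward()
opt.step()
```

An encoder pretrained this way can then be fine-tuned on a much smaller labeled set, which is how self-supervision attacks the data dependency described above.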

Continue reading… “Self-supervised learning is the future of AI”

Yale researchers say humans would like robots better if they were more vulnerable

Three humans and a robot form a team and start playing a game together. No, this isn’t the beginning of a joke, it’s the premise of a fascinating new study just released by Yale University.

Researchers were interested to see how the robot’s actions and statements would influence the three humans’ interactions among one another. They discovered that when the robot wasn’t afraid to admit it had made a mistake, this outward showing of vulnerability led to more open communication between the people involved as well.

Continue reading… “Yale researchers say humans would like robots better if they were more vulnerable”

The intelligence community is developing its own AI ethics

While less public than the Pentagon’s Joint Artificial Intelligence Center, the intelligence community has been developing its own set of principles for the ethical use of artificial intelligence.

The Pentagon made headlines last month when it adopted its five principles for using artificial intelligence, marking the end of a months-long effort over what guidelines the department should follow as it develops new AI tools and AI-enabled technologies.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

“The intelligence community has been doing its own work in this space as well. We’ve been doing it for quite a bit of time,” Ben Huebner, chief of the Civil Liberties, Privacy, and Transparency Office at the Office of the Director of National Intelligence, said at an Intelligence and National Security Alliance event March 4.

Continue reading… “The intelligence community is developing its own AI ethics”

Elon Musk says all advanced AI development should be regulated, including at Tesla

Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence. The executive and founder tweeted on Monday evening that “all org[anizations] developing advanced AI should be regulated, including Tesla.”

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. At first, OpenAI was formed as a non-profit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI with a focus on ensuring it was pursued in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly-interested few (i.e., for-profit technology companies).

Continue reading… “Elon Musk says all advanced AI development should be regulated, including at Tesla”
