Researchers develop artificial intelligence that can detect sarcasm in social media

by University of Central Florida

Computer science researchers at the University of Central Florida have developed a sarcasm detector.

Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.

That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion—positive, negative or neutral—associated with a piece of text. Where much of artificial intelligence concerns the logical analysis of data, sentiment analysis is about correctly identifying emotional communication. A UCF team has developed a technique that accurately detects sarcasm in social media text.
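
The paper itself isn’t reproduced here, but for a sense of how text classifiers of this general kind are built, here is a minimal sketch using scikit-learn. The tiny inline dataset and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, not the UCF team’s method.

```python
# Minimal sketch of a text classifier for sarcasm detection.
# The tiny dataset and model choice are illustrative only; the UCF
# team's published approach is more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = sarcastic, 0 = not sarcastic).
texts = [
    "Oh great, another Monday. Just what I needed.",
    "The delivery arrived on time and in perfect condition.",
    "Wow, waiting two hours on hold was SO much fun.",
    "Thanks for the quick response, problem solved.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Fantastic, my flight got cancelled again."]))
```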

The team’s findings were recently published in the journal Entropy.

Continue reading… “Researchers develop artificial intelligence that can detect sarcasm in social media”

A scientist created emotion recognition AI for animals

“Emotion recognition” might be too strong a term. More like pain recognition

BY Tristan Greene

A researcher at Wageningen University & Research recently published a pre-print article detailing a system by which facial recognition AI could be used to identify and measure the emotional state of farm animals. If you’re imagining a machine that tells you if your pigs are joyous or your cows are grumpy… you’re spot on.

Up front: There’s little evidence that so-called “emotion recognition” systems actually work. At best, in the sense that humans and other creatures can often accurately recognize (as in: guess) other people’s emotions, an AI can be trained on a human-labeled data set to recognize emotion with accuracy similar to that of humans.
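
To make concrete what “trained on a human-labeled data set” means in practice, here is a toy PyTorch sketch of fitting a small image classifier to annotated emotion categories. The random tensors, label names, and architecture are placeholders; none of this reflects the Wageningen system.

```python
# Toy sketch: fitting a classifier to human-labeled "emotion" categories.
# Random tensors stand in for face images; random integers stand in for
# human annotations. This shows the training loop only, not the
# Wageningen system.
import torch
from torch import nn

NUM_CLASSES = 4  # hypothetical labels: e.g. calm, stressed, pain, neutral

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 3, 64, 64)            # placeholder "face" images
labels = torch.randint(0, NUM_CLASSES, (32,))  # placeholder human labels

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```

The catch is visible in the sketch itself: the model can only ever be as good as the human labels it is trained to reproduce.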

However, there’s no ground truth when it comes to human emotion. Everyone experiences and interprets emotions differently, and how we express emotion on our faces can vary wildly with cultural and individual biological factors.

In short: The same “science” behind systems that claim to tell from a face whether someone is gay, or likely to be aggressive, drives emotion recognition for both people and farm animals.

Continue reading… “A scientist created emotion recognition AI for animals”

Alphabet’s X moonshot division wants to bring AI to the electric grid

By Chris Davies 

Google parent Alphabet has been working on “a moonshot” for the electric grid, with a secret project in its X R&D division aiming to figure out how to make power use more stable and greener than it is today. The research, revealed at the White House Leaders Summit on Climate, has been underway for the past three years.

The team at X – which began as Google X and was spun out into a separate division when Google created Alphabet as its overarching parent – isn’t planning to put up power lines and install solar panels and wind turbines itself. Instead, it’s looking at whether a more holistic understanding of the grid could help in the transition to environmentally sustainable sources.

“Right now our work is more questions than answers,” Astro Teller, Captain of Moonshots at X, says, “but the central hypothesis we’ve been exploring is whether creating a single virtualized view of the grid – which doesn’t exist today – could make the grid easier to visualize, plan, build and operate with all kinds of clean energy.”
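
X hasn’t said what such a virtualized view would look like in practice. Purely as a thought experiment, one ingredient would be normalizing heterogeneous utility data sources into a single queryable schema; the sketch below invents a schema and data for illustration.

```python
# Hypothetical sketch of a "single virtualized view" of grid assets:
# per-utility data sources normalized into one queryable schema.
# The schema, asset kinds, and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class GridAsset:
    asset_id: str
    kind: str                      # e.g. "solar", "wind", "substation"
    capacity_mw: float
    location: tuple[float, float]  # (latitude, longitude)

def merge_sources(*sources: list[GridAsset]) -> dict[str, GridAsset]:
    """Fold per-utility asset lists into one de-duplicated view."""
    view: dict[str, GridAsset] = {}
    for source in sources:
        for asset in source:
            view[asset.asset_id] = asset
    return view

utility_a = [GridAsset("sub-001", "substation", 120.0, (37.4, -122.1))]
utility_b = [GridAsset("pv-042", "solar", 5.5, (37.5, -122.0))]

grid_view = merge_sources(utility_a, utility_b)
renewable_mw = sum(a.capacity_mw for a in grid_view.values()
                   if a.kind in ("solar", "wind"))
print(f"{len(grid_view)} assets in view, {renewable_mw} MW renewable")
```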

Teller’s use of “moonshot” is a reference to the original NASA plan to put astronauts on the Moon, a project generally acknowledged as ambitious and ground-breaking, with no immediate path to profit. While Teller leads the division, Alphabet brought in Audrey Zibelman – former CEO of Australian energy operator AEMO and an expert in decarbonization of the electrical system – to lead this particular moonshot.

Continue reading… “Alphabet’s X moonshot division wants to bring AI to the electric grid”

Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’

By Edd Gent 

Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks—which, despite their name, operate very differently from the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Continue reading… “Advancing AI With a Supercomputer: A Blueprint for an Optoelectronic ‘Brain’”

Cerebras launches new AI supercomputing processor with 2.6 trillion transistors

By Dean Takahashi

Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It’s built for supercomputing tasks, and it’s the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer.

Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware.

But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected with the other cores in a sophisticated way. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one.

Continue reading… “Cerebras launches new AI supercomputing processor with 2.6 trillion transistors”

Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked

Findings debunk dozens of prominent published papers claiming to read minds with EEG.

By PURDUE UNIVERSITY 

Is it possible to read a person’s mind by analyzing the electric signals from the brain? The answer may be much more complex than most people think.

Purdue University researchers – working at the intersection of artificial intelligence and neuroscience – say a prominent dataset used to try to answer this question is confounded, and that many eye-popping findings based on it, including some that received high-profile recognition, are therefore false.

The Purdue team spent more than a year running extensive tests on the dataset, which recorded the brain activity of study participants as they viewed a series of images. Each participant wore a cap with dozens of electrodes during the viewing sessions.
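
The confound the team identified is the experiment’s block design: all images of a given class were shown consecutively, so a classifier can exploit slow drift in the EEG signal over time rather than anything stimulus-related. The toy demonstration below, on synthetic data rather than the team’s code or dataset, shows how that failure mode looks.

```python
# Toy demonstration of a block-design confound: class labels correlate
# with time, so a classifier "decodes" them from slow signal drift even
# though this synthetic "EEG" carries no stimulus-related signal at all.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_classes, block_len = 5, 40
n = n_classes * block_len

drift = 0.05 * np.arange(n)                     # slow drift over the session
X = drift[:, None] + rng.normal(size=(n, 8))
y = np.repeat(np.arange(n_classes), block_len)  # one block per class

# A shuffled split mixes each block across train and test: drift leaks,
# and accuracy lands far above the 20% chance level.
idx = rng.permutation(n)
train, test = idx[: n // 2], idx[n // 2:]
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("shuffled-split accuracy:", clf.score(X[test], y[test]))

# A second "session" with a fresh class order breaks the time-label
# correlation, and accuracy collapses back toward chance.
y2 = np.repeat(rng.permutation(n_classes), block_len)
X2 = drift[:, None] + rng.normal(size=(n, 8))
print("cross-session accuracy:", clf.score(X2, y2))
```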

The Purdue team’s work is published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The team received funding from the National Science Foundation.

In the accompanying photo, a research participant wears an EEG cap fitted with electrodes.

Continue reading… “Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked”

Artificial intelligence has advanced so much, it wrote this article

“Alter 3: Offloaded Agency,” part of the exhibition “AI: More than Human.”

By Jurica Dujmovic

Natural language processing rivals humans’ skills.

I did not write this article. 

In fact, it wasn’t written by any person. Every sentence you see after this introduction is the work of OpenAI’s GPT-3, a powerful language-prediction model capable of composing sequences of coherent text. The only thing I did was provide it with topics to write about. I did not even fix its grammar or spelling.

According to OpenAI, more than 300 applications are using GPT-3, which belongs to a field called natural language processing, and together they generate an average of 4.5 billion words per day. Some say the quality of GPT-3’s text is as good as that written by humans.
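
For the curious, generating text with GPT-3 amounts to sending a prompt to OpenAI’s API. The sketch below uses the Completion endpoint of the Python client as it existed in the GPT-3 era; the engine name, prompt, and sampling settings are illustrative choices.

```python
# Minimal sketch of prompting GPT-3 through OpenAI's Python client of
# the era (the Completion endpoint). Engine name, prompt, and sampling
# settings are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 base model of that period
    prompt="Write a short paragraph of advice on general investing:",
    max_tokens=150,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```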

What follows is GPT-3’s response to topics in general investing.

Continue reading… “Artificial intelligence has advanced so much, it wrote this article”

New AI Technique Can Generate 3D Holograms in Real-Time

Holographic display prototype used in the experiments

By Derya Ozdemir

Not only can this technique run on a smartphone, but it also needs less than 1 megabyte of memory.

Virtual reality has been around for decades, and every year headlines all over the internet declare it the next big thing. So far those predictions haven’t come true, and VR technologies remain far from widespread. There are many reasons for that, and the fact that VR can make users feel sick is certainly one of the culprits.

Better 3D visualization could help with that, and now MIT researchers have developed a new way to produce holograms: a deep-learning-based method so efficient that it slashes the computational power required, according to a press release from the university.

A hologram is an image that resembles a 2D window looking onto a 3D scene. Remade for the digital world, this 60-year-old technology can deliver an outstanding image of the 3D world around us.
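
MIT’s actual network isn’t reproduced here, but the general shape of the idea (a compact convolutional network mapping an RGB-D image directly to a per-pixel phase pattern) can be sketched. Every architectural choice below is a placeholder, not the published design.

```python
# Toy sketch of learned holography: a small convolutional network maps
# an RGB-D frame (4 channels) to a per-pixel phase pattern. Layer sizes
# are placeholders, not MIT's published architecture.
import math
import torch
from torch import nn

class ToyHologramNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 24, 3, padding=1), nn.ReLU(),
            nn.Conv2d(24, 24, 3, padding=1), nn.ReLU(),
            nn.Conv2d(24, 1, 3, padding=1),
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # Squash the output to a phase value in [-pi, pi] per pixel.
        return math.pi * torch.tanh(self.net(rgbd))

model = ToyHologramNet()
rgbd = torch.rand(1, 4, 192, 192)  # placeholder RGB-D frame
phase = model(rgbd)
n_params = sum(p.numel() for p in model.parameters())
print(phase.shape, f"{n_params:,} parameters")
```

A fully convolutional model this small occupies only kilobytes, which is one way a learned pipeline could fit the sub-megabyte memory budget the article mentions.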

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” explains Liang Shi, the study’s lead author and a Ph.D. student in MIT’s Department of Electrical Engineering and Computer Science. “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Continue reading… “New AI Technique Can Generate 3D Holograms in Real-Time”

Gartner: 75% of VCs will use AI to make investment decisions by 2025

By Kyle Wiggers

By 2025, more than 75% of venture capital and early-stage investor executive reviews will be informed by AI and data analytics. In other words, AI might determine whether a company makes it to a human evaluation at all, de-emphasizing the importance of pitch decks and financials. That’s according to a new whitepaper by Gartner, which predicts that in the next four years, the AI- and data-science-equipped investor will become commonplace.

Advances in analytics capabilities are shifting early-stage venture investing strategy away from “gut feel” and qualitative decision-making toward a “platform-based” quantitative process, according to Gartner senior research director Patrick Stakenas. Stakenas says data gathered from sources like LinkedIn, PitchBook, Crunchbase, and Owler, along with third-party data marketplaces, will be leveraged alongside diverse past and current investment data.

“This data is increasingly being used to build sophisticated models that can better determine the viability, strategy, and potential outcome of an investment in a short amount of time. Questions such as when to invest, where to invest, and how much to invest are becoming almost automated,” Stakenas said. “The personality traits and work patterns required for success will be quantified in the same manner that the product and its use in the market, market size, and financial details are currently measured. AI tools will be used to determine how likely a leadership team is to succeed based on employment history, field expertise, and previous business success.”
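
As a concrete illustration of the kind of quantitative screen Gartner describes, a platform might score incoming startups on a handful of scraped features and pass only the top scorers to a human reviewer. Everything in the sketch below (features, weights, companies) is invented.

```python
# Hypothetical sketch of an AI-assisted deal screen: score startups on
# a few scraped features and surface only the top candidates to a human
# reviewer. Features, weights, and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Startup:
    name: str
    founder_prior_exits: int   # e.g. from LinkedIn/Crunchbase profiles
    monthly_growth_pct: float  # e.g. from pitch or platform data
    market_size_usd_b: float   # e.g. from third-party research

def screen_score(s: Startup) -> float:
    """Toy linear score; a real platform would use a trained model."""
    return (0.5 * s.founder_prior_exits
            + 0.3 * s.monthly_growth_pct
            + 0.2 * s.market_size_usd_b)

pipeline = [
    Startup("Acme AI", 1, 12.0, 4.0),
    Startup("GridlyCo", 0, 3.5, 1.2),
    Startup("BioLoop", 2, 8.0, 9.5),
]
for s in sorted(pipeline, key=screen_score, reverse=True)[:2]:
    print(f"advance to human review: {s.name} ({screen_score(s):.1f})")
```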

Continue reading… “Gartner: 75% of VCs will use AI to make investment decisions by 2025”

New AI Can Detect Emotion With Radio Waves

There are national security and privacy implications to an experimental UK neural network that deciphers how people respond to emotional stimuli.

By PATRICK TUCKER 

Picture: military interrogators are talking to a local man they suspect of helping to emplace roadside bombs. The man denies it, even as they show him photos of his purported accomplices. But an antenna in the interrogation room is detecting the man’s heartbeat as he looks at the pictures. The data is fed to an AI, which concludes that his emotions do not match his words…

A UK research team is using radio waves to pick up subtle changes in heart rhythm and then, using an advanced AI called a neural network, understand what those signals mean — in other words, what the subject is feeling. It’s a breakthrough that one day might help, say, human-intelligence analysts in Afghanistan figure out who represents an insider threat.

The paper, from a team at Queen Mary University of London and published in the online journal PLOS ONE, demonstrates how to apply a neural network to decipher emotions from signals gathered with a transmitting radio antenna. A neural network functions in a manner somewhat similar to a human brain, with cells creating links to other cells in patterns that build up memory, in contrast to more conventional machine-learning methods that apply straightforward statistical techniques to data sets.
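
The team’s pipeline isn’t reproduced here, but the final stage of such a system plausibly reduces to a small classifier over heart-rhythm features. The sketch below feeds synthetic heart-rate-variability-style features to a small neural network; the features, labels, and architecture are assumptions, not the Queen Mary design.

```python
# Rough sketch of the classification stage only: synthetic features in
# the style of heart-rate-variability measures (mean inter-beat
# interval, its variability, a frequency-band power ratio) fed to a
# small neural network. Not the Queen Mary team's architecture or data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_trials = 200
X = rng.normal(size=(n_trials, 3))     # placeholder HRV-style features
y = rng.integers(0, 4, size=n_trials)  # placeholder: 4 emotional states

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:150], y[:150])
# On purely random features this hovers near 25% chance; real features
# would need to carry genuine physiological signal to do better.
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```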

Continue reading… “New AI Can Detect Emotion With Radio Waves”

Using AI to measure the demand for parking space

by Fraunhofer-Gesellschaft

The growth in the number of cars parked in urban areas has a major impact on public space. One key consequence is that parking availability has become less predictable, both downtown and in quieter residential areas, where people have to spend more and more time looking for a free space. One remedy is to create a residential parking zone. To justify this measure, however, a municipality must first commission reports and carry out a survey of parking availability, both of which take time and cost money.

To reduce this effort, the Fraunhofer Institute for Industrial Engineering IAO is now exploring the use of AI to analyze the demand for parking space. The pilot project is running in partnership with the City of Karlsruhe and is being conducted by the Research and Innovation Center for Cognitive Service Systems (KODIS), a branch of Fraunhofer IAO.
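
The excerpt doesn’t give implementation details, but one plausible shape for such a survey (counting parked cars in periodic camera passes with an off-the-shelf detector and comparing against zone capacity) might look like the following sketch; the function, figures, and threshold are all hypothetical.

```python
# Hypothetical sketch of estimating parking demand: count detected cars
# per survey pass and compare with zone capacity. `detect_cars` stands
# in for any off-the-shelf object detector; all data is invented.
from statistics import mean

ZONE_CAPACITY = 120  # marked spaces in the surveyed residential zone

def detect_cars(image_path: str) -> int:
    """Placeholder for an object detector returning a car count."""
    # A real system would run a trained detection model on the image.
    fake_counts = {"pass_morning.jpg": 98, "pass_noon.jpg": 74,
                   "pass_evening.jpg": 115}
    return fake_counts[image_path]

passes = ["pass_morning.jpg", "pass_noon.jpg", "pass_evening.jpg"]
occupancy = [detect_cars(p) / ZONE_CAPACITY for p in passes]

print(f"mean occupancy: {mean(occupancy):.0%}")
if max(occupancy) > 0.90:
    print("peak occupancy exceeds 90%: zone may justify residential permits")
```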

Continue reading… “Using AI to measure the demand for parking space”