Seoul-based Deep Brain AI launches paid service to speak to the dead 

The company’s image synthesis technology can create an AI human that is convincing even in non-verbal details such as lip-syncing, movements, and facial expressions.

By Sahil Pawar

Seoul-based DeepBrain AI launched a paid service called Re;memory for those who would like to speak again with their deceased loved ones, even if only virtually.

When a loved one dies, a family is often left with only digital memories such as photos and videos. However, these are one-way modes of communication: one can see or hear a loved one but cannot interact with them.

DeepBrain AI combines this trove of material with a pre-interview to put together a private virtual meeting in its studio, where one can have the conversations that were left unfinished with a loved one.

Continue reading… “Seoul-based Deep Brain AI launches paid service to speak to the dead”

Deep learning tool’s ‘computational microscope’ predicts protein interactions, potential paths to new antibiotics

Examples of protein complexes modeled by AF2Complex residing between the inner and outer membranes of E. coli.

by Audra Davidson

Though it underpins virtually every process that occurs in living organisms, the proper folding and transport of biological proteins is notoriously difficult and time-consuming to study experimentally.

In a new paper published in eLife, researchers in the School of Biological Sciences and the School of Computer Science have shown that AF2Complex may be able to lend a hand.

Building on the models of DeepMind’s AlphaFold 2, a machine learning tool able to predict the detailed three-dimensional structures of individual proteins, AF2Complex—short for AlphaFold 2 Complex—is a deep learning tool designed to predict the physical interactions of multiple proteins. With these predictions, AF2Complex is able to calculate which proteins are likely to interact with each other to form functional complexes in unprecedented detail.
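A rough sketch of that screening idea, under illustrative assumptions: enumerate candidate protein pairs, score each putative complex with the deep learning predictor, and keep pairs whose predicted interface confidence clears a threshold. The predict_interface_score callable below is hypothetical and merely stands in for running the actual structure predictor; it is not AF2Complex’s API.

from itertools import combinations
from typing import Callable, Dict, List, Tuple

def screen_for_complexes(
    sequences: Dict[str, str],
    predict_interface_score: Callable[[str, str], float],
    threshold: float = 0.5,
) -> List[Tuple[str, str, float]]:
    """Return candidate protein pairs ranked by predicted interface confidence."""
    hits = []
    for (name_a, seq_a), (name_b, seq_b) in combinations(sequences.items(), 2):
        score = predict_interface_score(seq_a, seq_b)  # the expensive model call
        if score >= threshold:
            hits.append((name_a, name_b, score))
    return sorted(hits, key=lambda h: h[2], reverse=True)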

“We essentially conduct computational experiments that try to figure out the atomic details of supercomplexes (large interacting groups of proteins) important to biological functions,” explained Jeffrey Skolnick, Regents’ Professor and Mary and Maisie Gibson Chair in the School of Biological Sciences, and one of the corresponding authors of the study. With AF2Complex, which was developed last year by the same research team, it’s “like using a computational microscope powered by deep learning and supercomputing.”

In their latest study, the researchers used this “computational microscope” to examine a complicated protein synthesis and transport pathway, hoping to clarify how proteins in the pathway interact to ultimately transport a newly synthesized protein from the interior to the outer membrane of the bacterium—and to identify players that experiments might have missed. Insights into this pathway may identify new targets for antibiotic and therapeutic design while providing a foundation for using AF2Complex to computationally expedite this type of biology research as a whole.

Continue reading… “Deep learning tool’s ‘computational microscope’ predicts protein interactions, potential paths to new antibiotics”

ISRAELI COMPANY USES AI TO FIND MISTAKES DURING BUILDING CONSTRUCTION


Big construction projects are notorious for delays and running over budget. An Israeli company says it has a high tech solution to get everything back on track. 

At a hospital construction project in England, project manager Bruce Preston says he is juggling millions of pieces to help the nearly $200 million project take shape. “We have 2,300 rooms and spaces that we need to keep track of to know exactly what’s going on in every one of those spaces.”

Tracking progress is usually done by hand. But on this job, a 360-degree camera attached to a hard hat captures every inch of the site, and artificial intelligence compares the images to the building’s blueprints. Preston points to a computer screen to show how it works, saying, “It’ll tell you green if it’s all done and orange where there’s work still to do.”
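A purely illustrative sketch of that green/orange status logic (not Buildots’ system): compare the building elements detected in each room against what the plan requires, and flag the room as green only when nothing is missing.

# Illustrative only: room-by-room completion status from planned vs detected elements.
planned = {
    "Room 101": {"plasterboard", "wiring", "sprinkler"},
    "Room 102": {"plasterboard", "wiring"},
}
detected = {
    "Room 101": {"plasterboard", "wiring", "sprinkler"},
    "Room 102": {"plasterboard"},
}

for room, required in planned.items():
    found = detected.get(room, set())
    status = "green" if required <= found else "orange"
    print(room, status, "missing:", sorted(required - found))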

Tech firm Buildots says its AI system catches mistakes before they become a costly problem. “How many times does the industry lose money because it finds out way down the line that we missed something?” asks Buildots co-founder Aviv Leibovici.

Construction is estimated to be a $10 trillion industry worldwide, and a report from McKinsey Global Institute, a management consulting company, says about $1.6 trillion is wasted every year because of productivity problems.

Continue reading… “ISRAELI COMPANY USES AI TO FIND MISTAKES DURING BUILDING CONSTRUCTION”

Quora launches Poe, a way to talk to AI chatbots like ChatGPT

By Kyle Wiggers

Signaling its interest in text-generating AI systems like ChatGPT, Quora this week launched a platform called Poe that lets people ask questions, get instant answers and have a back-and-forth dialogue with AI chatbots.

Short for “Platform for Open Exploration,” Poe — which is invite-only and currently available only on iOS — is “designed to be a place where people can easily interact with a number of different AI agents,” a Quora spokesperson told TechCrunch via text message.

“We have learned a lot about building consumer internet products over the last 12 years building and operating Quora. And we are specifically experienced in serving people who are looking for knowledge,” the spokesperson said. “We believe much of what we’ve learned can be applied to this new domain where people are interfacing with large language models.”

Poe, then, isn’t an attempt to build a ChatGPT-like AI model from scratch. ChatGPT — which has an aptitude for answering questions on topics ranging from poetry to coding — has been the subject of controversy for its ability to sometimes give answers that sound convincing but aren’t factually true. Earlier this month, Q&A coding site Stack Overflow temporarily banned users from sharing content generated by ChatGPT, saying the AI made it too easy for users to generate responses and flood the site with dubious answers.

Continue reading… “Quora launches Poe, a way to talk to AI chatbots like ChatGPT”

With the help of visual sonograms, Riffusion’s AI creates music from text

By Meghmala Chowdhury

Riffusion, an AI model that makes music from text prompts by constructing a visual representation of sound and converting it to audio for playback, was launched on Thursday. Developed by Seth Forsgren and Hayk Martiros as a side project, it stores audio in sonograms, which are two-dimensional images, and applies visual latent diffusion to sound processing in a novel manner using a fine-tuned version of the Stable Diffusion 1.5 image synthesis model. In a sonogram, the X-axis depicts time (the left-to-right order in which the frequencies are played), and the Y-axis represents the frequency of the sounds.

The color of each pixel in the image, meanwhile, shows the volume of the sound at that specific instant in time. Because a sonogram is a kind of image, it can be processed with Stable Diffusion. Forsgren and Martiros trained a custom Stable Diffusion model on examples of sonograms paired with descriptions of the sounds or musical genres they represented. With this training, Riffusion can produce fresh music on demand from text prompts that specify the genre of music or sound you like, such as “jazz,” “rock,” or even keystrokes on a keyboard. Riffusion creates the sonogram image, converts it to sound using Torchaudio, and then plays it back as audio.
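A minimal sketch of that last step, assuming a generated sonogram saved as a grayscale image in which brightness stands in for loudness; this is not Riffusion’s actual code, and the file name and spectrogram parameters are illustrative. Torchaudio’s Griffin-Lim transform estimates the phase information the image does not store and returns a playable waveform.

import numpy as np
import torch
import torchaudio
from PIL import Image

SAMPLE_RATE = 44100   # illustrative output sample rate
N_FFT = 2048          # image rows are mapped to N_FFT // 2 + 1 frequency bins
HOP_LENGTH = 512

def sonogram_to_audio(image_path: str) -> torch.Tensor:
    # Load the sonogram as grayscale; brighter pixels mean louder frequencies.
    img = Image.open(image_path).convert("L")
    # Resize so the row count matches the expected number of frequency bins.
    img = img.resize((img.width, N_FFT // 2 + 1))
    spec = torch.from_numpy(np.array(img, dtype=np.float32) / 255.0)
    # Griffin-Lim iteratively recovers a waveform from a magnitude-only spectrogram.
    griffin_lim = torchaudio.transforms.GriffinLim(n_fft=N_FFT, hop_length=HOP_LENGTH)
    return griffin_lim(spec)

# waveform = sonogram_to_audio("generated_sonogram.png")
# torchaudio.save("riff.wav", waveform.unsqueeze(0), SAMPLE_RATE)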

Continue reading… “With the help of visual sonograms, Riffusion’s AI creates music from text”

Hive Launches HiveMind to Supercharge Project Planning with AI

Hive, the productivity platform provider, announced the public release of HiveMind, which uses artificial intelligence (AI) to automatically create a project plan in a matter of seconds.

As artificial intelligence models are increasingly integrated into content and note-taking platforms, Hive is pioneering the use of these models’ capacity for continuous learning and logical decision-making based on in-depth data.

Modeled on six years of successful customer projects, HiveMind automatically sets out the steps to accomplish any goal, expediting project planning and execution. It can create project tasks based on simple suggestions, set next steps from received emails, and draft replies based on an inbound email’s content.

“Today, superior performance in the marketplace comes from the depth of data you possess, and the ability to apply it quickly,” said John Furneaux, Hive co-founder and CEO. “HiveMind places the wealth of collective wisdom and team experience at our customers’ fingertips. It can play a vital role in training staff better, acquiring new skills and improving decision making.”

In addition to increasing efficiencies in project planning, HiveMind can speed up market research by providing facts, statistics, competitive intelligence and new ideas for brainstorms without having to reference internet searches. Hive customers reported experiencing immediate benefits when using HiveMind.

Continue reading… “Hive Launches HiveMind to Supercharge Project Planning with AI”

‘World’s first robot lawyer’: DoNotPay wants to build an AI to help people fight traffic tickets

DoNotPay is also taking on parking tickets and corporations.

By Claire Goforth

A company is working towards making history and saving drivers money in the process. Whether it works is anyone’s guess, but DoNotPay claims it is building artificial intelligence designed to represent people in traffic court.

The company’s chief executive officer tweeted about their ambitious plan on Monday.

“We want to build a @donotpay bot that listens to the court hearing via your AirPods and whispers what to say with GPT-3 and LLMs,” Joshua Browder wrote. “We just want to experiment and will pay the ticket, even if you lose!” He asked anyone with an upcoming hearing on a speeding ticket to send him a direct message.

(Generative Pre-trained Transformer 3, or GPT-3, is an autoregressive language model that uses deep learning to emulate text written by humans in response to a prompt; LLM is short for large language model, a class of models that also use deep learning to process written language.)
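To make “autoregressive” concrete, here is a toy sketch of the generation loop such models use: each new token is predicted from everything generated so far. The tiny hand-written bigram table below is only an illustrative stand-in for a real model like GPT-3.

import random

# Toy "model": which words may follow a given word.
bigram_model = {
    "your": ["honor,", "client"],
    "honor,": ["I"],
    "I": ["respectfully", "request"],
    "respectfully": ["request"],
    "request": ["dismissal."],
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = bigram_model.get(tokens[-1])
        if not candidates:
            break
        # Each new token is conditioned on the text produced so far.
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("your"))  # e.g. "your honor, I respectfully request dismissal."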

Using artificial intelligence (AI) to whisper in people’s ears during a court hearing is a novel idea, but it could also run afoul of laws prohibiting practicing law without a license and other court rules. People wasted no time pointing these issues out.

“Sounds like practicing law without a license…?” wrote a Twitter user who describes themself as an attorney.

Continue reading… “‘World’s first robot lawyer’: DoNotPay wants to build an AI to help people fight traffic tickets”

Here’s how AI and AR could transform real estate marketing

Augmented reality and artificial intelligence are making their mark in tech, but they could also change the face of real estate marketing.

By April Bingham

I hate seeing bland houses as the default.

I don’t mean that I swan about pooh-poohing all over other people’s tastes…publicly…often. Genuinely, if you LIKE cream and gold ‘Live Laugh Love’ prints and framed jerseys, you should HAVE them. We can’t all live in homes that look like a Screamin’ Jay Hawkins meets Howl Jenkins fever dream.

Not least because it presents a severe tripping hazard….

My problem is more institutional. Best practices say you can’t sell a house even to ‘fun’ people without taking the time to strip all the personality out of it, and that’s disappointing. I promise I understand that certain colors in certain rooms straight up make more money. Just…at what cost? It’s definitely the bitter renter and serial anthropomorphizer in me, but it makes me a little sad seeing whole houses stripped down and painted up before anyone else will love them.

However, AI could change all that for the better!

I’m as surprised as you are, but it finally happened – I found a use for AI-generated images that I actually like. Go figure, it’s for customization-based marketing.

With image generators like Dall-E and MidJourney, Realtors who aren’t also picture-perfect digital artists can change the color and furniture and lighting of a room to suit their client’s desires in a context that doesn’t pass off artificially amalgamated work as their own creation OR come saddled with the reasonable expectation that a talented full-time designer should be paid for doing that work.

I love the idea of walking into a virtual pre-tour of homes tailored to inspire me specifically before I actually schlep myself around the physical locations. Imagine clients walking in, taking a quick look at their aesthetics, and hitting settings like ‘Art Goth’ or ‘Bro-core’ to make it even easier for them to fall in love with a location! 

Continue reading… “Here’s how AI and AR could transform real estate marketing”

AI learns to write computer code in ‘stunning’ advance

Snippets of code in white come from the AlphaCode artificial intelligence system, whereas the purple code snippets were written by humans trying to solve similar problems.

BY MATTHEW HUTSON

DeepMind’s AlphaCode outperforms many human programmers in tricky software challenges.

Software runs the world. It controls smartphones, nuclear weapons, and car engines. But there’s a global shortage of programmers. Wouldn’t it be nice if anyone could explain what they want a program to do, and a computer could translate that into lines of code?

A new artificial intelligence (AI) system called AlphaCode is bringing humanity one step closer to that vision, according to a new study. Researchers say the system—from the research lab DeepMind, a subsidiary of Alphabet (Google’s parent company)—might one day assist experienced coders, but probably cannot replace them.

“It’s very impressive, the performance they’re able to achieve on some pretty challenging problems,” says Armando Solar-Lezama, head of the computer-assisted programming group at the Massachusetts Institute of Technology.

AlphaCode goes beyond the previous standard-bearer in AI code writing: Codex, a system released in 2021 by the nonprofit research lab OpenAI. The lab had already developed GPT-3, a “large language model” that is adept at imitating and interpreting human text after being trained on billions of words from digital books, Wikipedia articles, and other pages of internet text. By fine-tuning GPT-3 on more than 100 gigabytes of code from GitHub, an online software repository, OpenAI came up with Codex. The software can write code when prompted with an everyday description of what it’s supposed to do—for instance, counting the vowels in a string of text. But it performs poorly when tasked with tricky problems.
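As an illustration of that kind of prompt, here is the sort of function such a system might produce for “count the vowels in a string” (written by hand here, not generated by Codex):

def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in the given text."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("AlphaCode"))  # 4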

Continue reading… “AI learns to write computer code in ‘stunning’ advance”

DeepMind debuts new AI system capable of playing ‘Stratego’

BY MARIA DEUTSCHER

Alphabet Inc.’s DeepMind unit has developed a new artificial intelligence system capable of playing “Stratego,” a board game considered more complex than chess and Go.

DeepMind detailed the AI system, which it dubs DeepNash, on Thursday. The Alphabet unit says that DeepNash achieved a win rate of more than 84% in matches against expert human players.

“Stratego” is a two-player board game that is similar to chess in certain respects. Players receive a collection of game pieces that, like chess pieces, are maneuvered around the board until one of the players wins. But there are a number of differences between the two games that make “Stratego” more complicated than chess.

In “Stratego,” each player has only limited information about the other player’s game pieces. A player might know that the other player has placed a game piece on a certain section of the board, but not which specific game piece was placed there. This dynamic makes playing the game difficult for AI systems.

Another source of complexity is that there are more possibilities to consider than in chess. The number of potential tactics that players can use in a board game is measured with a metric known as the game tree complexity number. Chess has a game tree complexity number of 10 to the power of 123, while in “Stratego,” that number increases to 10 to the power of 535.

According to DeepMind, traditional methods of teaching AI systems to play board games can’t be applied well to “Stratego” because of its complexity. To address that limitation, DeepMind’s researchers developed a new AI method dubbed R-NaD that draws on the mathematical field of game theory. That method forms the basis of the DeepNash system DeepMind detailed this week.

Continue reading… “DeepMind debuts new AI system capable of playing ‘Stratego’”

AI invents millions of materials that don’t yet exist

Artistic image of a graphene bolometer controlled by electric field

By Anthony Cuthbertson

‘Transformative tool’ is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries.

Scientists have developed an artificial intelligence algorithm capable of predicting the structure and properties of more than 31 million materials that do not yet exist.

The AI tool, named M3GNet, could lead to the discovery of new materials with exceptional properties, according to the team from the University of California San Diego who created it.

M3GNet was able to populate a vast database of yet-to-be-synthesized materials instantaneously, which the engineers are already using in their hunt for more energy-dense electrodes for lithium-ion batteries used in everything from smartphones to electric cars.

The matterverse.ai database and the M3GNet algorithm could potentially expand the exploration space for materials by orders of magnitude.

UC San Diego nanoengineering professor Shyue Ping Ong described M3GNet as “an AlphaFold for materials”, referring to the breakthrough AI algorithm built by Google’s DeepMind that can predict protein structures.

“Similar to proteins, we need to know the structure of a material to predict its properties,” said Professor Ong.

“We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”

Continue reading… “AI invents millions of materials that don’t yet exist”

Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos

By Ekrem Çetinkaya

Transformers have played a crucial role in natural language processing tasks over the last decade. Their success is attributed mainly to their ability to extract and exploit temporal information.

When a method works well in one domain, it is natural to expect studies that try to bring it to other domains. This was the case with transformers, and the new domain was computer vision. Introducing transformers to vision tasks was a huge success, prompting numerous follow-up studies.

The vision transformer (ViT) was proposed in 2020, outperforming its convolutional neural network (CNN) counterparts on image classification tasks. Its main benefits appear at large scale, since vision transformers otherwise require more data or stronger regularization.

ViT inspired many researchers to dive deeper into the rabbit hole of transformers to see how far they could go in different tasks. Most focused on image-related tasks and obtained really promising results. However, the application of ViTs to the video domain remained, more or less, an open problem.

When you think about it, transformers, and attention-based architectures more broadly, look like the perfect structure to use with videos. They are the intuitive choice for modeling the dependencies in natural language and extracting contextual relationships between words. A video has these properties too, so why not use a transformer to process videos? This is the question the authors of ViViT asked, and they came up with an answer.

Most state-of-the-art video solutions use 3D convolutional networks, but their complexity makes it challenging to achieve proper performance on commodity devices. Some studies have focused on adding transformers’ self-attention to 3D-CNNs to better capture long-term dependencies within a video.
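A minimal sketch of the transformer-on-video idea, under illustrative assumptions (this is not ViViT’s actual implementation): split a video clip into spatio-temporal “tubelet” patches, embed each patch as a token, and let a standard transformer encoder attend across space and time. All sizes below are made up for the example.

import torch
import torch.nn as nn

class TinyVideoTransformer(nn.Module):
    def __init__(self, in_channels=3, embed_dim=256, num_classes=400,
                 tubelet=(2, 16, 16), depth=4, heads=8):
        super().__init__()
        # A 3D convolution whose stride equals its kernel acts as the tubelet embedding.
        self.to_tokens = nn.Conv3d(in_channels, embed_dim,
                                   kernel_size=tubelet, stride=tubelet)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, video):  # video: (batch, channels, frames, height, width)
        tokens = self.to_tokens(video)              # (B, D, T', H', W')
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, num_tokens, D)
        # Positional embeddings, which a real model needs, are omitted for brevity.
        tokens = self.encoder(tokens)               # self-attention over space and time
        return self.head(tokens.mean(dim=1))        # pool tokens, then classify

# clip = torch.randn(1, 3, 16, 224, 224)   # one 16-frame RGB clip
# logits = TinyVideoTransformer()(clip)    # shape (1, 400)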

Continue reading… “Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos”