Here’s how AI and AR could transform real estate marketing

Augmented reality and artificial intelligence are making their mark in tech, but they could also change the face of real estate marketing.

By April Bingham

I hate seeing bland houses as the default.

I don’t mean that I swan about pooh-poohing all over other people’s tastes…publicly…often. Genuinely, if you LIKE cream and gold ‘Live Laugh Love’ prints and framed jerseys, you should HAVE them. We can’t all live in homes that look like a Screamin’ Jay Hawkins meets Howl Jenkins fever dream.

Not least because it presents a severe tripping hazard….

My problem is more institutional. Best practices say you can’t sell a house even to ‘fun’ people without taking the time to strip all the personality out of it, and that’s disappointing. I promise I understand that certain colors in certain rooms straight up make more money. Just…at what cost? It’s definitely the bitter renter and serial anthropomorphizer in me, but it makes me a little sad seeing whole houses stripped down and painted up before anyone else will love them.

However, AI could change all that for the better!

I’m as surprised as you are, but it finally happened – I found a use for AI-generated images that I actually like. Go figure, it’s for customization-based marketing.

With image generators like DALL-E and Midjourney, Realtors who aren’t also picture-perfect digital artists can change the color, furniture, and lighting of a room to suit a client’s desires. It’s a context that doesn’t pass off artificially amalgamated work as their own creation, and one that doesn’t come saddled with the reasonable expectation that a talented full-time designer should be paid for doing that work.

I love the idea of walking into a virtual pre-tour of homes tailored to inspire me specifically before I actually schlep myself around the physical locations. Imagine clients walking in, taking a quick look at their aesthetics, and hitting settings like ‘Art Goth’ or ‘Bro-core’ to make it even easier for them to fall in love with a location! 

Continue reading… “Here’s how AI and AR could transform real estate marketing”

AI learns to write computer code in ‘stunning’ advance

Snippets of code in white come from the AlphaCode artificial intelligence system, whereas the purple code snippets were written by humans trying to solve similar problems.

BY MATTHEW HUTSON

DeepMind’s AlphaCode outperforms many human programmers in tricky software challenges.

Software runs the world. It controls smartphones, nuclear weapons, and car engines. But there’s a global shortage of programmers. Wouldn’t it be nice if anyone could explain what they want a program to do, and a computer could translate that into lines of code?

A new artificial intelligence (AI) system called AlphaCode is bringing humanity one step closer to that vision, according to a new study. Researchers say the system—from the research lab DeepMind, a subsidiary of Alphabet (Google’s parent company)—might one day assist experienced coders, but probably cannot replace them.

“It’s very impressive, the performance they’re able to achieve on some pretty challenging problems,” says Armando Solar-Lezama, head of the computer-assisted programming group at the Massachusetts Institute of Technology.

AlphaCode goes beyond the previous standard-bearer in AI code writing: Codex, a system released in 2021 by the nonprofit research lab OpenAI. The lab had already developed GPT-3, a “large language model” that is adept at imitating and interpreting human text after being trained on billions of words from digital books, Wikipedia articles, and other pages of internet text. By fine-tuning GPT-3 on more than 100 gigabytes of code from GitHub, an online software repository, OpenAI came up with Codex. The software can write code when prompted with an everyday description of what it’s supposed to do—for instance, counting the vowels in a string of text. But it performs poorly when tasked with tricky problems.
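To make the vowel-counting example concrete: a plain-English prompt like “count the vowels in a string of text” might yield a function along these lines. This is an illustrative sketch of the kind of everyday task described, not actual Codex output.

```python
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in `text`, case-insensitively."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("AlphaCode"))  # 4
```

Tasks at this level are well within reach of such systems; the “tricky problems” where they falter involve multi-step algorithmic reasoning, not simple loops like this one.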

Continue reading… “AI learns to write computer code in ‘stunning’ advance”

DeepMind debuts new AI system capable of playing ‘Stratego’

BY MARIA DEUTSCHER

Alphabet Inc.’s DeepMind unit has developed a new artificial intelligence system capable of playing “Stratego,” a board game considered more complex than chess and Go.

DeepMind detailed the AI system, which it dubs DeepNash, on Thursday. The Alphabet unit says that DeepNash achieved a win rate of more than 84% in matches against expert human players.

“Stratego” is a two-player board game that is similar to chess in certain respects. Players receive a collection of game pieces that, like chess pieces, are maneuvered around the board until one of the players wins. But there are a number of differences between the two games that make “Stratego” more complicated than chess.

In “Stratego,” each player has only limited information about the other player’s game pieces. A player might know that the other player has placed a game piece on a certain section of the board, but not which specific game piece was placed there. This dynamic makes playing the game difficult for AI systems.

Another source of complexity is that there are more possibilities to consider than in chess. The number of potential tactics that players can use in a board game is measured with a metric known as the game tree complexity number. Chess has a game tree complexity number of 10 to the power of 123, while in “Stratego,” that number increases to 10 to the power of 535.

According to DeepMind, traditional methods of teaching AI systems to play board games can’t be applied well to “Stratego” because of its complexity. To address that limitation, DeepMind’s researchers developed a new AI method dubbed R-NaD that draws on the mathematical field of game theory. That method forms the basis of the DeepNash system DeepMind detailed this week.

Continue reading… “DeepMind debuts new AI system capable of playing ‘Stratego’”

AI invents millions of materials that don’t yet exist

Artistic image of a graphene bolometer controlled by electric field

By Anthony Cuthbertson

‘Transformative tool’ is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries.

Scientists have developed an artificial intelligence algorithm capable of predicting the structure and properties of more than 31 million materials that do not yet exist.

The AI tool, named M3GNet, could lead to the discovery of new materials with exceptional properties, according to the team from the University of California San Diego who created it.

M3GNet was able to populate a vast database of yet-to-be-synthesized materials instantaneously, which the engineers are already using in their hunt for more energy-dense electrodes for lithium-ion batteries used in everything from smartphones to electric cars.

The matterverse.ai database and the M3GNet algorithm could potentially expand the exploration space for materials by orders of magnitude.

UC San Diego nanoengineering professor Shyue Ping Ong described M3GNet as “an AlphaFold for materials”, referring to the breakthrough AI algorithm built by Google’s DeepMind that can predict protein structures.

“Similar to proteins, we need to know the structure of a material to predict its properties,” said Professor Ong.

“We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”

Continue reading… “AI invents millions of materials that don’t yet exist”

Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos

By Ekrem Çetinkaya

Transformers have played a crucial role in natural language processing tasks in the last decade. Their success is mainly attributable to their ability to extract and exploit contextual information across sequences.

When a certain method works well in one domain, it is natural to expect studies that try to bring it to other domains. This was the case with transformers as well, and the domain was computer vision. Introducing transformers to vision tasks was a huge success, prompting numerous similar studies afterward.

The vision transformer (ViT) was proposed in 2020, outperforming its convolutional neural network (CNN) counterparts on image classification tasks. Its main benefits appear at large scale, since transformers require more data or stronger regularization than CNNs.

ViT inspired many researchers to dive deeper into the rabbit hole of transformers and see how much further they could go in different tasks. Most focused on image-related tasks and obtained really promising results. However, the application of ViTs to the video domain remained, more or less, an open problem.

When you think about it, transformers, and more specifically attention-based architectures, look like the perfect structure for video. They are the intuitive choice for modeling dependencies in natural language and extracting contextual relationships between words. A video has the same properties, so why not use a transformer to process videos? This is the question the authors of ViViT asked, and they came up with an answer.
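The intuition above — that a video is just a longer sequence of tokens spanning space and time — can be sketched with plain self-attention over flattened patches. The shapes and helper below are purely illustrative, not ViViT’s actual architecture (which adds projections, multiple heads, and factorized attention).

```python
import numpy as np

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a (n_tokens, dim) array.
    (Query/key/value projections and multi-head logic omitted for brevity.)"""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all tokens
    return weights @ tokens                          # each token attends to every other

# A video is frames x height x width x channels; cutting each frame into
# patches and flattening yields one long token sequence across space AND time.
frames, patches_per_frame, dim = 8, 16, 64
video_tokens = np.random.randn(frames * patches_per_frame, dim)
out = self_attention(video_tokens)
print(out.shape)  # (128, 64): every patch can attend across all frames
```

The key point the sketch makes: nothing in the attention operation itself cares whether two tokens come from the same frame, so temporal relationships fall out of the same mechanism that handles spatial ones.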

Most state-of-the-art video-related solutions use 3D-convolutional networks, but their complexity makes it challenging to achieve proper performance on commodity devices. Some studies focused on adding the self-attention property of transformers into the 3D-CNNs to better capture long-term dependencies within the video. 

Continue reading… “Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos”

Real-Life ‘Invisibility Cloak’ Stops AI Cameras From Recognizing People

Scientists have developed a real-life “invisibility cloak” that tricks artificial intelligence (AI) cameras and stops them from recognizing people. 

 By PESALA BANDARA

Researchers at the University of Maryland have created a sweater that “breaks” AI human-recognition systems and makes a person “invisible” to AI cameras.

“This stylish sweater is a great way to stay warm this winter,” write the researchers on UMD’s Department of Computer Science website. “It features a waterproof microfleece lining, a modern cut, and anti-AI patterns — which will help hide from object detectors.”

The researchers note that in their demonstration, an adversarial pattern, trained on the COCO dataset with a carefully constructed target, was able to fool the YOLOv2 detector.

Continue reading… “Real-Life ‘Invisibility Cloak’ Stops AI Cameras From Recognizing People”

A breakthrough AI can track real-time cell changes, revealing a key mystery in biology

The study shows how deep learning can be used for cell image analysis.

By Brittney Grimes

Researchers have found a way to observe cell samples to study morphological changes — or the change in form and structure — of cells. This is significant because cells are the basic unit of life, the building blocks of living organisms, and researchers need to be able to observe what could influence the parameters of cells, such as size, shape, and density. 

Conventionally, scientists observed cell samples directly through microscopes, looking for morphological changes in the cell structures. Now, however, they can use artificial intelligence to make those observations. By combining computer science with deep learning, a subset of artificial intelligence, researchers can detect morphological changes in cells automatically.

The study was published in the journal Intelligent Computing.

Continue reading… “A breakthrough AI can track real-time cell changes, revealing a key mystery in biology”

MIT Researchers Discover A New, Faster AI Using Liquid Neural Networks

The “liquid” neural network allows AI algorithms to adapt to new input data.

By Jace Dela Cruz

Artificial neural networks are a method artificial intelligence uses to simulate how the human brain functions. A neural network “learns” from input datasets and produces a forecast based on the available data.

But now, MIT Computer Science and Artificial Intelligence Lab (MIT CSAIL) researchers have found a faster method to solve an equation employed in the algorithms for “liquid” neural networks, according to a report by Interesting Engineering.

Continue reading… “MIT Researchers Discover A New, Faster AI Using Liquid Neural Networks”

Flexible AI computer chips promise wearable health monitors that protect privacy

A device like this could one day monitor and assess your health.

By Sihong Wang

My colleagues and I have developed a flexible, stretchable electronic device that runs machine-learning algorithms to continuously collect and analyze health data directly on the body. The skinlike sticker, developed in my lab at the University of Chicago’s Pritzker School of Molecular Engineering, includes a soft, stretchable computing chip that mimics the human brain.

To create this type of device, we turned to electrically conductive polymers that have been used to build semiconductors and transistors. These polymers are made to be stretchable, like a rubber band. Rather than working like a typical computer chip, though, the chip we’re working with, called a neuromorphic computing chip, functions more like a human brain. It’s able to both store and analyze data.

To test the usefulness of the new device, my colleagues and I used it to analyze electrocardiogram data representing the electrical activity of the human heart. We trained the device to classify ECGs into five categories: healthy and four types of abnormal signals. Even in conditions where the device is repeatedly stretched by movements of the wearer’s body, the device could still accurately classify the heartbeats. 
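The classification task described above — sorting heartbeats into one healthy and four abnormal categories — is a standard five-class supervised problem. A minimal sketch of such a classifier follows, using synthetic features and a simple nearest-centroid rule; nothing here reflects the neuromorphic chip’s actual algorithm, and all names and feature shapes are made up for illustration.

```python
import numpy as np

CLASSES = ["healthy", "abnormal_1", "abnormal_2", "abnormal_3", "abnormal_4"]

def train_centroids(features: np.ndarray, labels: np.ndarray, n_classes: int = 5) -> np.ndarray:
    """Average the feature vectors of each class into one centroid per class."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(beat: np.ndarray, centroids: np.ndarray) -> str:
    """Assign a heartbeat's feature vector to the class with the nearest centroid."""
    distances = np.linalg.norm(centroids - beat, axis=1)
    return CLASSES[int(np.argmin(distances))]

# Synthetic stand-in data: 100 labeled beats, 20 per class, 8 features each.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 20)
features = rng.normal(size=(100, 8)) + 2 * labels[:, None]  # shift classes apart
centroids = train_centroids(features, labels)
print(classify(features[0], centroids))
```

The device’s real challenge, as the article notes, is doing this kind of inference reliably while the substrate itself is being stretched — a hardware problem, not a modeling one.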

Most of the signals from the human body, such as the electrical activity in the heart recorded by ECG, are typically weak and subtle. Accurately recording these small signals requires direct contact between electronic devices and the human body. This can only be achieved by fabricating electronic devices to be as soft and stretchy as skin. We envision that wearable electronics will play a key role in tracking complex indicators of human health, including body temperature, cardiac activity, levels of oxygen, sugar, metabolites and immune molecules in the blood. 

Analyzing large amounts of continuously acquired health data is challenging, however. A single piece of data must be put into the broader perspective of a patient’s full health history, and that is a big task. Cutting-edge machine-learning algorithms that identify patterns in extremely complex data sets are the most promising route to being able to pick out the most important signals of disease. 

Continue reading… “Flexible AI computer chips promise wearable health monitors that protect privacy”

Techman Robot launches ‘all-in-one’ AI collaborative robot series

Techman Robot has launched its “TM AI Cobot” series, describing it as a “collaborative robot which combines a powerful and precise robot arm with native AI inferencing engine and smart vision system in a complete package”.

BY MAI TAO

The company says the new machine is ready for deployment in factories and can accelerate the transition to Industry 4.0.

Techman says the TM AI Cobot works on the principle of being smart, simple and safe. By combining visual processing in the robot arm, the AI Cobot can perform fast and precise pick-and-place, AMR, palletizing, welding, semiconductor and product manufacturing, automated optical inspections (AOI) and food service preparation, among many other applications that can be accelerated by AI-Vision.

The company claims it is the only intelligent robotic arm series on the market provided with a comprehensive AI software suite. It includes TM AI+ Training Server, TM AI+ AOI Edge, TM Image Manager, and TM 3DVision, allowing companies to train and tailor the system to meet their application precisely.

Shi-chi Ho, Techman Robot president, says: “Techman Robot has redefined the future of industry robotics with the introduction of its AI Cobot series that are equipped with a native AI engine, powerful and precise robotic arm and vision system that represents a perfect combination of ‘brain, hands and eyes’.”

Continue reading… “Techman Robot launches ‘all-in-one’ AI collaborative robot series”

How Generative AI Is Changing Creative Work

By Thomas H. Davenport and Nitin Mittal

Summary. Generative AI models for businesses threaten to upend the world of content creation, with substantial impacts on marketing, software, design, entertainment, and interpersonal communications. These models are able to produce text and images: blog…

Large language and image AI models, sometimes called generative AI or foundation models, have created a new set of opportunities for businesses and professionals that perform content creation. Some of these opportunities include: 

  1. Automated content generation: Large language and image AI models can be used to automatically generate content, such as articles, blog posts, or social media posts. This can be a valuable time-saving tool for businesses and professionals who create content on a regular basis. 
  2. Improved content quality: AI-generated content can be of higher quality than content created by humans, due to the fact that AI models are able to learn from a large amount of data and identify patterns that humans may not be able to see. This can result in more accurate and informative content. 
  3. Increased content variety: AI models can generate a variety of content types, including text, images, and video. This can help businesses and professionals to create more diverse and interesting content that appeals to a wider range of people. 
  4. Personalized content: AI models can generate personalized content based on the preferences of individual users. This can help businesses and professionals to create content that is more likely to be of interest to their target audience, and therefore more likely to be read or shared.

How adept is this technology at mimicking human efforts at creative work? Well, for an example, the italicized text above was written by GPT-3, a “large language model” (LLM) created by OpenAI, in response to the first sentence, which we wrote. GPT-3’s text reflects the strengths and weaknesses of most AI-generated content. First, it is sensitive to the prompts fed into it; we tried several alternative prompts before settling on that sentence. Second, the system writes reasonably well; there are no grammatical mistakes, and the word choice is appropriate. Third, it would benefit from editing; we would not normally begin an article like this one with a numbered list, for example. Finally, it came up with ideas that we didn’t think of. The last point about personalized content, for example, is not one we would have considered.

Continue reading… “How Generative AI Is Changing Creative Work”

Researchers hope AI can alleviate interstate traffic jams

NASHVILLE, Tenn. (AP) — Researchers at Vanderbilt University and other schools around the country are conducting an experiment in Nashville next week to try to decrease the number of stop-and-go traffic jams on a local interstate. 

The new experiment will deploy up to 100 cars equipped with adaptive cruise control technology along a 4-mile stretch of Interstate 24 during morning rush hour, according to a news release from Vanderbilt. That stretch is outfitted with hundreds of ultra-high definition cameras that will give researchers a digital model of how every vehicle behaves. 

Continue reading… “Researchers hope AI can alleviate interstate traffic jams”