In the not-so-distant future, an AI companion will be seamlessly woven into your daily life, offering guidance as you go about your day. This AI will volunteer useful information in crowded stores, during visits to the pediatrician, or even when you’re grabbing a snack at home. It will mediate all aspects of your experiences, including social interactions with friends, family, colleagues, and strangers.
The term “mediate” is a euphemism for the AI’s ability to influence your actions, thoughts, and feelings. While some may find this prospect unsettling, society is on the brink of accepting the technology into everyday life, allowing continuous coaching by friendly voices that skillfully inform and guide us. Unlike traditional AI assistants such as Siri or Alexa, the next generation will add a game-changing element: context awareness.
This additional capability enables AI systems to respond not just to spoken commands but to the sights and sounds around you. Using cameras and microphones on wearable AI-powered devices, these context-aware assistants are set to debut in 2024 and promise to reshape our world within a few short years. This evolution brings both powerful capabilities and new risks to personal privacy and human agency.
On the positive side, these AI assistants will offer valuable information seamlessly integrated with your surroundings. From product specifications in store windows to identifying plants on a hike, the guidance will feel like a superpower, delivered in real time. However, the ever-present voice could also become highly persuasive, or even manipulative, especially if corporations leverage it for targeted conversational advertising.
Mitigating the risks of AI manipulation requires attention from policymakers, an aspect that has largely been overlooked so far. With multi-modal large language models (LLMs), AI systems gain eyes and ears, processing not just text but also images, audio, and video. GPT-4, released by OpenAI in March 2023, marked a significant advance in this direction, with Google’s Gemini and Meta’s AnyMAL contributing further to the space.
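To make the idea of giving an AI “eyes and ears” concrete, here is a minimal sketch of sending a photo plus a question to a multimodal chat model through the OpenAI Python SDK. The model name, file path, and exact request shape are assumptions that vary by provider and SDK version, so treat this as an illustration rather than a reference implementation.

```python
import base64
from openai import OpenAI  # assumes the openai-python v1 SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a photo, e.g. a frame captured by a wearable camera (path is illustrative).
with open("hike_photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Send the image alongside a spoken-style question in a single multimodal request.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever multimodal model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What plant is this, and is it safe to touch?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# The reply is plain text that an assistant could read aloud to the wearer.
print(response.choices[0].message.content)
```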
AnyMAL, in particular, introduces a vestibular sense of movement, going beyond seeing and hearing to consider the user’s physical state. With consumer-ready AI technology available, companies are rushing to integrate these capabilities into systems that guide users through daily interactions. Wearable devices, especially glasses with built-in sensors, are emerging as a natural choice, capturing visual and auditory inputs, and even motion cues.
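As a rough illustration of what a sensor-laden wearable might hand to such a model, here is a hypothetical “context snapshot” structure that bundles a camera frame, a microphone transcript, and a motion reading into one prompt. Every field name and the prompt format are assumptions for the sketch, not any vendor’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    """One moment of wearable context: what the user sees, hears, and is doing.

    All fields are hypothetical; real devices define their own schemas.
    """
    image_jpeg: bytes                       # latest camera frame
    audio_transcript: str                   # nearby speech, transcribed on-device
    accel_xyz: tuple[float, float, float]   # accelerometer reading (m/s^2)
    walking: bool                           # coarse activity flag from motion data

    def to_prompt(self) -> str:
        """Fold the non-visual context into text that accompanies the image."""
        activity = "walking" if self.walking else "stationary"
        return (
            f"The wearer is {activity}. "
            f"Nearby speech: \"{self.audio_transcript}\". "
            "Describe anything relevant in the attached camera frame."
        )

# Example: a snapshot taken while the wearer pauses in a store aisle.
snap = ContextSnapshot(
    image_jpeg=b"",  # placeholder; a real device would supply a JPEG frame
    audio_transcript="Do you think this one is gluten free?",
    accel_xyz=(0.1, 0.0, 9.8),
    walking=False,
)
print(snap.to_prompt())
```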
Meta, the company behind the Ray-Ban Meta smart glasses, appears to be at the forefront of this shift. It recently released a version of the glasses configured to support advanced AI models and opened early access to those AI features on December 12, introducing capabilities that promise to redefine how we interact with AI in our daily lives.
By Impact Lab