Google CEO Sundar Pichai
Google is reportedly taking steps to enhance its conversational AI abilities and compete with rivals such as OpenAI's ChatGPT. As reported in News18's article "How Google Plans to Compete with ChatGPT and Make BERT AI Chatbot Better," Google's focus is on improving its BERT (Bidirectional Encoder Representations from Transformers) model, a neural network-based approach to natural language processing (NLP). The goal is to develop a chatbot that can understand and respond to human-like conversations with greater precision and fluency.
According to the article, Google's strategy for conversational AI differs from OpenAI's. Instead of relying on a single large model like ChatGPT, Google aims to use a collection of smaller models, each handling a different aspect of conversation. The approach, called "Federated Learning," involves training these small models on users' devices rather than on a central server, which is intended to improve the chatbot's accuracy on context-specific tasks.
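To make the on-device idea concrete, here is a minimal sketch of federated averaging (FedAvg), the standard algorithm behind federated learning: each device trains a local copy of a small model on its own data, and only the resulting weights, never the raw user data, are sent back and averaged on the server. The toy linear model, learning rate, and function names below are illustrative, not Google's actual implementation.

```python
def local_train(weights, data, lr=0.1, epochs=5):
    """One device: a few gradient steps on a 1-D linear model y = w*x,
    using only that device's private data."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w

def federated_round(global_w, device_datasets):
    """Server: ship the global weights out, collect the locally trained
    weights, and average them (FedAvg). Raw data never leaves a device."""
    local_ws = [local_train(global_w, d) for d in device_datasets]
    return sum(local_ws) / len(local_ws)

# Three devices, each holding private data drawn from roughly y = 3x.
devices = [
    [(1.0, 3.1), (2.0, 6.0)],
    [(1.5, 4.4), (0.5, 1.6)],
    [(2.5, 7.4), (3.0, 9.1)],
]

w = 0.0
for _ in range(10):
    w = federated_round(w, devices)
# After a few rounds, w converges toward the shared slope (about 3).
```

The privacy and efficiency claims in the quote below follow from this structure: the server only ever sees averaged weights, and each device trains a model small enough to fit its hardware.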
Rajen Sheth, director of product management for Google Cloud’s AI and Industry Solutions, explained the advantages of this approach, stating, “By training smaller models on device, our approach is more power-efficient and data-efficient, and better preserves privacy.”
Google is also working on the chatbot's memory so it can keep track of ongoing conversations. Sheth stated, "Memory is very important when it comes to conversations… Having a chatbot remember the context of previous conversations is really critical for making conversations natural."
In conclusion, Google is striving to improve its conversational AI abilities to compete with rivals such as OpenAI's ChatGPT. The company's strategy involves using a collection of smaller models, trained through "Federated Learning," to address specific aspects of conversation. Additionally, Google aims to enhance its chatbot's memory to make conversations more natural. According to Sheth, "It's all about making the chatbot as human-like as possible."
Via The Impactlab