Purdue University engineers have found that autonomous vehicles (AVs) could soon be powered by advanced chatbots like ChatGPT, enabling them to better understand and respond to passengers’ commands. These chatbots are built on large language models (LLMs), a type of artificial intelligence model trained on vast amounts of text data to interpret and generate natural language.
The study, published on the preprint server arXiv, is set to be presented at the 27th IEEE International Conference on Intelligent Transportation Systems on September 25. It may be among the first real-world experiments testing how well AVs can use LLMs to understand and act on passenger instructions.
A New Era of Communication for Autonomous Vehicles
Ziran Wang, assistant professor at Purdue’s Lyles School of Civil and Construction Engineering and the study’s lead researcher, believes that for AVs to become fully autonomous, they need to grasp passenger commands more intuitively, even when instructions are implied rather than explicitly stated. For example, a human taxi driver would understand that “I’m in a hurry” implies the need to take the quickest route, even if it’s not directly requested.
Current AVs come equipped with communication features, but they often require passengers to issue precise, clear instructions. In contrast, LLMs like ChatGPT are designed to interpret and respond to human commands more naturally by identifying relationships within large datasets, allowing them to process complex or vague instructions.
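As an illustration of this idea, here is a minimal sketch of how an AV software stack might hand a vague passenger utterance to an LLM and get back structured parameters a motion planner can act on. The prompt, the parameter schema, and the stubbed `call_llm` function are all assumptions for demonstration, not the study's actual design; a real system would replace the stub with a call to a chat-model API.

```python
import json

# Hypothetical prompt asking the model to turn free-form speech into a
# fixed JSON schema the vehicle's planner understands. The schema below
# (route_preference, driving_style) is an illustrative assumption.
PROMPT_TEMPLATE = (
    "You are the language interface of an autonomous vehicle. "
    "Translate the passenger's utterance into JSON with keys "
    "'route_preference' (fastest|scenic|default) and "
    "'driving_style' (assertive|relaxed|default).\n"
    "Utterance: {utterance}\nJSON:"
)

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-model API call; a production system would
    # send `prompt` to a hosted LLM and return its text reply. This stub
    # fakes a plausible reply so the sketch is self-contained.
    if "hurry" in prompt.lower():
        return '{"route_preference": "fastest", "driving_style": "assertive"}'
    return '{"route_preference": "default", "driving_style": "default"}'

def interpret_utterance(utterance: str) -> dict:
    """Map an implicit request (e.g. 'I'm in a hurry') to structured
    driving parameters by prompting the language model."""
    reply = call_llm(PROMPT_TEMPLATE.format(utterance=utterance))
    return json.loads(reply)

print(interpret_utterance("I'm in a hurry"))
# → {'route_preference': 'fastest', 'driving_style': 'assertive'}
```

The key point the sketch captures is that the passenger never states "take the fastest route"; the language model infers it from the implied meaning, which is exactly the gap the researchers argue conventional command-based interfaces cannot bridge.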
Advancing Beyond Conventional Systems
Wang emphasizes the limitations of today’s AV communication systems, which rely heavily on button-pressing or audio recognition that demands clear and explicit commands. “The conventional systems in our vehicles have a user interface where you must press buttons or speak very clearly for the vehicle to understand,” Wang explained. “But with large language models, AVs can interpret more naturally what you mean, similar to how humans do.”
This ability to interact in a more humanlike manner could greatly enhance the way AVs respond to passengers’ needs, bringing fully autonomous driving closer to reality. Integrating LLMs could transform how passengers communicate with AVs, simplifying the interaction and improving the overall user experience.
By Impact Lab

