When personal computers first emerged, they were tools reserved for a select few who understood complex programming languages. Fast forward to today, and nearly anyone can check the weather, stream music, or even generate code—all with just a few keystrokes. This evolution has dramatically reshaped how people interact with technology, making powerful computational tools accessible to the general public. Now, artificial intelligence (AI) is doing the same for robotics through an innovative platform called Text2Robot.

Developed by engineers at Duke University, Text2Robot is a groundbreaking framework that enables anyone—regardless of technical background—to design and build functional robots simply by describing them in natural language. The tool will be featured at the IEEE International Conference on Robotics and Automation (ICRA 2025), held May 19–23 in Atlanta, Georgia.

The platform has already received recognition, taking first place in the innovation category at the Virtual Creatures Competition during the Artificial Life conference in Copenhagen, Denmark. The research behind the project is available on the arXiv preprint server.

“Creating a functional robot has traditionally been a slow, expensive process requiring deep expertise in engineering, AI, and manufacturing,” said Boyuan Chen, Assistant Professor at Duke University. “Text2Robot is taking the first step toward simplifying that process by enabling users to generate robots from natural language input.”

At its core, Text2Robot uses AI to transform simple text descriptions into fully realized robot designs. The process starts with a text-to-3D generative model that creates a 3D design of the robot’s body based on the user’s request. This virtual model is then enhanced with real-world manufacturing constraints, such as proper placement of electronic components and joint mechanics.
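The article describes this as a staged pipeline rather than a published API, but the flow it sketches is easy to picture in code: a prompt goes in, geometry comes out, and a second pass layers on manufacturability. The Python sketch below is a minimal illustration of that structure; every name in it (`RobotDesign`, `generate_mesh`, `apply_manufacturing_constraints`) is a hypothetical stand-in, not part of Text2Robot itself.

```python
from dataclasses import dataclass, field

@dataclass
class RobotDesign:
    """Hypothetical container for a generated quadruped design."""
    prompt: str
    mesh: list                                   # placeholder for 3D geometry
    joints: list = field(default_factory=list)
    electronics: list = field(default_factory=list)

def generate_mesh(prompt: str) -> list:
    """Stand-in for a text-to-3D generative model."""
    # A real system would invoke a learned model here; we return a stub.
    return [("body", prompt)]

def apply_manufacturing_constraints(design: RobotDesign) -> RobotDesign:
    """Second pass: embed real-world constraints such as motor placement
    and printable joint geometry into the generated body."""
    design.joints = [f"hip_{i}" for i in range(4)] + [f"knee_{i}" for i in range(4)]
    design.electronics = ["controller_board", "battery", "servos"]
    return design

def text2robot(prompt: str) -> RobotDesign:
    """Prompt in, manufacturable design out."""
    design = RobotDesign(prompt=prompt, mesh=generate_mesh(prompt))
    return apply_manufacturing_constraints(design)

if __name__ == "__main__":
    robot = text2robot("an energy-efficient walking robot that looks like a dog")
    print(robot.joints)
```

The key structural point, which mirrors the article's description, is that constraint handling is a separate pass over the generated geometry rather than something the generative model is asked to get right on its own.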

To ensure the robot can actually move and perform tasks, the system employs evolutionary algorithms and reinforcement learning to jointly fine-tune its shape, gait, and control policy.
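The article names the ingredients (evolution plus reinforcement learning) without giving the recipe. One common pattern for this kind of co-design, sketched below with toy stand-ins, is an evolutionary outer loop that mutates body parameters while an inner routine trains and scores a controller for each candidate body. The `train_controller` fitness here is a fake placeholder, not the paper's reward; the function and parameter names are assumptions.

```python
import random

def train_controller(body_params: list[float]) -> float:
    """Stand-in for reinforcement learning: return the best walking reward
    achievable with this body. Here we fake it with a simple score."""
    # Toy fitness: bodies whose parameters sit near 0.5 "walk" best.
    return -sum((p - 0.5) ** 2 for p in body_params)

def mutate(body_params: list[float], scale: float = 0.1) -> list[float]:
    """Perturb limb lengths / joint placements to propose a new morphology."""
    return [min(1.0, max(0.0, p + random.gauss(0, scale))) for p in body_params]

def evolve(pop_size: int = 16, generations: int = 30) -> list[float]:
    """Evolutionary outer loop: keep the bodies whose trained controllers
    score highest, then refill the population with mutated copies."""
    population = [[random.random() for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=train_controller, reverse=True)
        elites = scored[: pop_size // 4]          # keep the best bodies
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    return max(population, key=train_controller)

if __name__ == "__main__":
    best = evolve()
    print("best morphology parameters:", [round(p, 2) for p in best])
```

In a real system the inner call would be a full simulated training run per candidate, which is why this loop is the expensive part of the pipeline and why the article's "walking in simulation within an hour" figure is notable.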

“This isn’t just about generating cool-looking robots,” said Ryan Ringel, an undergraduate student and co-first author of the paper. “The AI understands physical laws and biomechanics, producing designs that are both functional and efficient.”

For instance, if a user types in a prompt like “a frog robot that tracks my speed on command” or “an energy-efficient walking robot that looks like a dog,” Text2Robot can deliver a design in minutes. The robot can be walking in simulation within an hour and physically assembled in under a day using a 3D printer.

“This kind of rapid prototyping changes the game,” said Zachary Charlick, another co-first author and undergrad in Chen’s lab. “All you need is a computer, a 3D printer, and an idea.”

Text2Robot opens the door to a wide range of applications. Kids could design custom robot pets. Artists might create kinetic sculptures. Homeowners could build task-specific robots—like a trash bin that navigates hallways and empties itself. Emergency responders might deploy robots tailored to handle unpredictable environments in disaster zones.

Currently, the system specializes in quadrupedal robots, but the team plans to expand to more complex forms and even incorporate automated assembly. “This is just the beginning,” said Jiaxun Liu, co-first author and a Ph.D. student. “We aim to create robots that not only think intelligently but also adapt their physical form to better serve human needs.”

While today’s robots are limited to basic motions like walking or tracking speed, future updates could integrate sensors and other hardware—making them capable of climbing stairs, navigating dynamic obstacles, and more.

As Professor Chen puts it, “The future of robotics isn’t just about building machines. It’s about collaboration between humans and intelligent, adaptable robots. With generative AI, we’re moving toward a world where anyone can turn imagination into reality.”

By Impact Lab