As launch costs continue to fall, driven by SpaceX's Starship and other heavy-lift systems, the barriers to entry for the space economy are expected to diminish significantly. That shift raises a question: what comes next? Two terms have gained prominence in the literature and offer glimpses of the future: In-Space Servicing, Assembly, and Manufacturing (ISAM) and On-orbit Servicing (OOS). In a series of articles, we will explore what these terms mean and where they are headed. To begin, let us examine the role of robots in this equation.
Space robots have been part of the aerospace landscape since 1981, when the Shuttle Remote Manipulator System (SRMS) first flew on the space shuttle with astronauts at the controls. Over the past four decades, their applications have expanded significantly: robotic arms played a crucial role in assembling the International Space Station (ISS) and, more recently, in proof-of-concept missions to service malfunctioning satellites in Earth orbit.
A recent paper published in the journal Advanced Intelligent Systems by the State Key Laboratory of Robotics and Systems at the Harbin Institute of Technology in China sheds light on the remaining challenges in achieving fully functional robots in space. The paper identifies five distinct functional areas that require further development.
The first area is vision, familiar to anyone working with autonomous robots on Earth. Although visual surroundings in space are less cluttered, it remains hard for a robot to interpret what it sees, especially when tracking a tumbling satellite. Recognizing patterns such as the docking-port markers on satellites awaiting service is particularly difficult, because the recognition algorithm must run on the robot itself. That demands more onboard computing power, which in turn raises power consumption and forces the robot to manage the heat it generates. Identifying an "uncooperative" satellite, one never designed for robotic assistance, is harder still, especially in real time.
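To make the onboard-vision task concrete, here is a toy sketch of one small piece of it: estimating a target's apparent spin from the pixel positions of two tracked docking-ring markers in consecutive camera frames. This is not from the paper; the function names and numbers are purely illustrative.

```python
import math

def marker_angle(p1, p2):
    """Bearing (radians) of the line joining two tracked marker
    centroids in the camera image plane."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def apparent_spin_rate(frame_a, frame_b, dt):
    """Estimate the target's apparent roll rate (rad/s) about the
    camera boresight from the same marker pair seen dt seconds apart."""
    da = marker_angle(*frame_b) - marker_angle(*frame_a)
    # Unwrap to the shortest rotation between the two bearings.
    da = (da + math.pi) % (2 * math.pi) - math.pi
    return da / dt

# Two fiducials on a docking ring, observed 0.5 s apart (pixel coords):
f0 = [(100.0, 100.0), (200.0, 100.0)]   # marker line at 0 degrees
f1 = [(100.0, 100.0), (186.6, 150.0)]   # rotated roughly 30 degrees
rate = apparent_spin_rate(f0, f1, 0.5)
print(round(math.degrees(rate), 1))     # ~60 deg/s apparent roll
```

A real system would track many markers, estimate a full 6-DOF pose, and do it within a tight onboard power budget, which is exactly where the computational challenge described above comes from.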
Once a robot identifies its target and understands what it must interact with, the next step is to navigate to the object and engage it effectively. This involves a collection of factors referred to as "motion and control" technologies. The paper proposes solutions to control problems unique to the setting, such as managing forces in a low-gravity environment. Vibrations excited by motion commands, particularly while manipulating a payload, can threaten the robot's structure and manipulator. Dynamic control algorithms can suppress these vibrations, but coordinating multiple arms to engage an object simultaneously remains as complex in space as it is on Earth.
When a robot or its manipulator reaches its target, another critical technology comes into play: the end-effector. End-effectors are the robot's counterpart to human hands and let it interact with objects. They are versatile, since they can be made in materials and shapes human hands cannot replicate and can be swapped out for different tasks. Unlocking their full potential, and making a robot more efficient at switching between them, requires further technical advances.
One way to improve end-effector operation is teleoperation, long a common practice in space robotics, with astronauts controlling robots from inside the shuttle or the ISS. Teleoperation consumes valuable astronaut time, however, so efforts are underway to control space robots from the ground instead. Recent experiments have also explored the reverse scenario, in which an astronaut in orbit controlled a robot on Earth, to validate the concept of operating robots on other celestial bodies such as the Moon or Mars. Teleoperation still faces challenges from time delay, which varies with the robot's orbital position. Proposed mitigations include virtual-reality control setups and force feedback, but the delay itself is an inherent limitation of long-distance communication.
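The delay itself is easy to quantify: at the speed of light, round-trip time grows with distance no matter how clever the control scheme. A minimal sketch, using approximate one-way distances and ignoring relay hops and processing time (which only add more delay):

```python
C = 299_792.458  # speed of light, km/s

def round_trip_delay_s(distance_km):
    """Round-trip signal delay over a given one-way distance."""
    return 2 * distance_km / C

# Representative one-way distances (approximate):
links = {
    "LEO, direct pass":      1_000,
    "GEO relay":            36_000,
    "Moon":                384_400,
    "Mars (closest)":   54_600_000,
}
for name, d in links.items():
    print(f"{name:18s} {round_trip_delay_s(d):10.3f} s")
```

A few milliseconds in low Earth orbit is workable for direct teleoperation; the several-minute round trip to Mars is why operating robots there demands onboard autonomy rather than a joystick.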
Even on Earth there are obstacles to overcome. High-fidelity ground verification is a complex task: proving a robot's performance in microgravity is nearly impossible, since launching a verification prototype is prohibitively expensive and testing in orbit raises countless issues of its own. Existing approaches approximate microgravity by floating the robot on air-bearing tables, flying freefall arcs in parabolic aircraft flights, or testing underwater in neutral-buoyancy tanks. Another promising technique is hardware-in-the-loop testing, in which software models the expected behavior of the robotic system and simulates a specific space environment. Building accurate models is itself difficult, however, and modeling errors can creep into the verification results. For now, no ground-based method can guarantee a robot's space operability during its development.
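One effect any such software model must capture is that a free-floating robot has no fixed base: by conservation of momentum, extending the arm pushes the spacecraft backwards. Here is a minimal one-dimensional sketch (point masses, no external forces; the masses and distances are illustrative):

```python
def free_floating_reach(base_mass, tip_mass, arm_extension):
    """How far the spacecraft base recoils, and how far the end-effector
    actually travels, when the arm extends by `arm_extension` in free
    fall. The combined centre of mass stays fixed (no external forces)."""
    total = base_mass + tip_mass
    base_shift = -tip_mass * arm_extension / total
    tip_shift = base_mass * arm_extension / total
    return base_shift, tip_shift

# A 500 kg servicer extends a 2 m arm carrying a 50 kg tool package:
base, tip = free_floating_reach(500.0, 50.0, 2.0)
print(base, tip)   # base recoils ~0.18 m; the tool advances only ~1.82 m
```

On an Earth-bound testbed the base is bolted down and this recoil never appears, which is one reason ground verification of free-flying manipulation is so hard to get right.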
Ironically, operating robots in space itself may eventually resolve this challenge. Establishing a robust infrastructure in space that enables the design and assembly of robots could pave the way for their optimization in space-specific conditions. Although this remains a distant prospect, numerous international teams are working towards making it a reality. Overcoming the aforementioned technical hurdles will be crucial in achieving this vision.
Robots are poised to play a pivotal role in the expanding in-space economy. As we delve into the realm of in-space servicing, assembly, and manufacturing, these technological advancements will continue to shape the future of space exploration and commerce, propelling us toward a new era of human achievements in the cosmos.
By Impact Lab

