Innovative technologies that enhance robots’ visual and cognitive capabilities were showcased at the 2025 IEEE International Conference on Robotics and Automation (ICRA), the world’s foremost conference dedicated to advancing the field of robotics. These achievements were developed through a collaborative effort between Professor Kyungdon Joo from the Graduate School of Artificial Intelligence and Professor Cheolhyeon Kwon from the Department of Mechanical Engineering and their research teams at UNIST.
Professor Joo’s team introduced two key innovations aimed at improving indoor spatial understanding and enabling stable multi-robot cooperation: the AiSDF technology and the Co-SLAM in Service Environments (CSE) dataset.
AiSDF is an artificial intelligence-based method that reconstructs three-dimensional maps by representing the environment as a signed distance field (SDF), which encodes, for every point in space, the signed distance to the nearest surface. This representation captures indoor geometry ranging from simple surfaces like walls and floors to complex objects such as desks and devices, enabling highly precise 3D mapping. Centimeter-level accuracy of this kind is crucial for tasks in confined spaces, such as robotic manipulation, where spatial awareness directly impacts performance.
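The idea of a signed distance field can be illustrated with a minimal sketch: negative values inside an object, zero on its surface, positive outside. This is a generic SDF for an axis-aligned box, not the AiSDF method itself; the function name and query points are purely illustrative.

```python
import numpy as np

def sdf_box(points, half_extents):
    """Signed distance from each 3D point to an axis-aligned box
    centered at the origin: negative inside, positive outside."""
    q = np.abs(points) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(q.max(axis=-1), 0.0)
    return outside + inside

# Query the field around a 1 m cube (half-extent 0.5 m in each axis).
pts = np.array([[0.0, 0.0, 0.0],   # center of the cube
                [0.5, 0.0, 0.0],   # exactly on a face
                [1.5, 0.0, 0.0]])  # 1 m outside the face
print(sdf_box(pts, np.array([0.5, 0.5, 0.5])))  # → [-0.5  0.   1. ]
```

A robot can read distance-to-obstacle directly off such a field, which is what makes the representation useful for manipulation in tight spaces.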
The CSE dataset serves as a standardized simulation resource designed to evaluate multi-robot perception and cooperation in environments modeled after real-world settings such as hospitals, offices, and warehouses. It encompasses various interaction scenarios, including following behaviors, revisiting routes, and loop closure, which are essential for practical service applications.
Researcher Inha Lee emphasized, “For service robots to operate effectively, accurate spatial perception and inter-robot collaboration are vital. This research provides a foundational platform for such capabilities.”
Meanwhile, Professor Kwon’s team presented three significant advancements, two of which focus on collision avoidance and safety in autonomous driving scenarios involving both human-driven and autonomous vehicles.
The Active Inference-Based Motion Planning approach models the variability in human driving behavior, acknowledging that human actions are not always fully rational. While autonomous vehicles typically adhere strictly to traffic signals, human drivers may act unpredictably, for example by entering an intersection on a yellow light. By incorporating a model that reflects differing levels of driver rationality, an autonomous vehicle can actively infer the intentions of nearby drivers and plan trajectories that enhance safety, particularly in complex scenarios such as intersections and merging zones.
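One common way to model "levels of rationality" is a Boltzmann-rational action model combined with a Bayesian update over the rationality parameter. The sketch below is a generic illustration of that idea, not the team's algorithm; the utility values and rationality levels are hypothetical.

```python
import numpy as np

# Hypothetical utilities a driver assigns to stopping vs. entering
# an intersection on a yellow light (illustrative numbers only).
UTILITIES = {"stop": 1.0, "go": 0.4}

def action_probs(beta):
    """Boltzmann-rational choice model: higher beta = more rational driver."""
    u = np.array([UTILITIES["stop"], UTILITIES["go"]])
    p = np.exp(beta * u)
    return p / p.sum()          # [P(stop), P(go)]

def posterior_over_rationality(observed_actions, betas, prior):
    """Bayesian update of the belief over rationality levels from observed actions."""
    post = np.array(prior, dtype=float)
    for a in observed_actions:
        idx = 0 if a == "stop" else 1
        post *= np.array([action_probs(b)[idx] for b in betas])
        post /= post.sum()
    return post

betas = [0.5, 5.0]              # near-random vs. highly rational driver
prior = [0.5, 0.5]
# A driver who repeatedly enters on yellow looks less "rational" under this model,
# so the planner can hedge against further rule-breaking behavior.
print(posterior_over_rationality(["go", "go"], betas, prior))
```

After two observed risky actions, nearly all posterior mass sits on the low-rationality hypothesis, which a planner can then use to choose a more conservative trajectory.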
The second innovation, Unsupervised Kernel-Based Learning for Prediction, is designed for high-speed autonomous driving environments such as racing. It enables vehicles to anticipate the strategies of surrounding cars by employing a kernel-based, unsupervised learning algorithm that accounts for uncertainties. This allows autonomous vehicles to respond flexibly and safely amid dynamic and unpredictable traffic behaviors.
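A simple instance of kernel-based prediction from unlabeled data is Nadaraya-Watson regression over observed state transitions: the next state is predicted as a kernel-weighted average of transitions seen in raw trajectory logs, with no labels required. This sketch is illustrative only and does not reproduce the team's algorithm; the speed values are made up.

```python
import numpy as np

def nw_predict(x_query, X, Y, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: predict the next state as a
    Gaussian-kernel-weighted average of previously observed transitions."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)
    return (w * Y).sum() / w.sum()

# Hypothetical opponent-car speeds (m/s) logged at successive timesteps.
speeds = np.array([60.0, 62.0, 64.0, 66.0, 68.0])
X, Y = speeds[:-1], speeds[1:]   # unlabeled transitions: current -> next speed

# Predict the likely next speed of an opponent currently at 67 m/s.
print(nw_predict(67.0, X, Y, bandwidth=2.0))
```

Because the kernel weights fall off smoothly with distance, the prediction degrades gracefully under uncertainty rather than committing to a single assumed behavior.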
The third development involves an advanced mapping algorithm that incorporates viewpoint-dependent visibility information during sensor data processing. This technique ensures high-precision alignment of point clouds even in situations with sparse overlap, resulting in more accurate and consistent three-dimensional maps critical for autonomous navigation.
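The alignment step underlying such mapping can be illustrated with a weighted Kabsch solve: given corresponding points from two scans, it recovers the rigid transform, with per-point weights standing in for visibility confidence. This is a textbook building block, not the team's algorithm, and the visibility weights below are hypothetical.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """Weighted Kabsch algorithm: find rotation R and translation t
    minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2
    (correspondences assumed known)."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy cloud rotated 90 degrees about z and shifted; the last point is
# downweighted as if seen only at a grazing angle (hypothetical weights).
src = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([0.5, 0.0, 0.0])
vis = np.array([1.0, 1.0, 1.0, 0.2])
R, t = weighted_rigid_align(src, dst, vis)
print(np.allclose(R, Rz), np.allclose(t, [0.5, 0.0, 0.0]))  # → True True
```

Downweighting points with poor viewpoint-dependent visibility keeps unreliable measurements from skewing the estimated transform, which matters most when two scans overlap only sparsely.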
Building upon these technological advancements, Professor Kwon’s team achieved second place at the 24th Roborace Autonomous Grand Prix, an official side event showcasing autonomous driving innovations.
The 2025 IEEE International Conference on Robotics and Automation (ICRA), regarded as the premier global forum for robotics research and development, was held from May 19 to 23 in Atlanta, bringing together leading experts and researchers from around the world to share the latest breakthroughs in the field.