Capturing a picturesque scene through reflective materials, such as glass, often results in an unintended superimposition: the captured image shows both the transmitted scene and an undesired reflected scene. While traditional reflection removal techniques have made progress, they frequently struggle with complex reflection patterns and varying lighting conditions, leaving residual artifacts that diminish image quality.
Recent advances in artificial intelligence are now providing powerful new solutions. Led by Professor Jae-Young Sim from the Graduate School of Artificial Intelligence at UNIST, researchers have developed an innovative AI model that effectively separates reflections from the transmitted scene, enabling clearer and more authentic views beyond the reflective surface.
Recognizing the limitations of existing methods, which often falter in complex and spatially heterogeneous reflection scenarios, the team designed an approach that intelligently segments the superimposed image for targeted analysis. This segmentation allows for precise removal of reflections while maintaining the integrity of the transmitted scene.
The core of this breakthrough lies in two techniques: Complementary Mixture-of-Experts (CoME) and Complementary Cross-Attention (CoCA).
CoME employs a mixture-of-experts (MoE) architecture that dynamically assigns specialized neural networks—referred to as ‘experts’—to different regions within an image based on local reflection characteristics. These experts collaboratively analyze both the transmitted and reflected layers, exchanging relevant information to improve separation accuracy, especially in regions with diverse reflection patterns.
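The gating idea behind such a mixture-of-experts can be illustrated with a minimal NumPy sketch. This is not the paper's CoME architecture; the function names, linear experts, and dimensions are illustrative assumptions chosen only to show how a soft gate blends per-region expert outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_per_region(patches, gate_w, expert_ws):
    """Route image regions to specialized 'experts' via soft gating.

    patches:   (N, D) flattened local regions of the superimposed image
    gate_w:    (D, E) gating weights producing per-region expert scores
    expert_ws: list of E (D, D) expert transforms (stand-ins for
               specialized sub-networks)
    """
    logits = patches @ gate_w                      # (N, E) per-region scores
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax gate over experts
    # Every expert processes every region; the gate weights blend the
    # outputs, so regions with different reflection characteristics
    # lean on different experts.
    outs = np.stack([patches @ w for w in expert_ws], axis=1)  # (N, E, D)
    return (weights[:, :, None] * outs).sum(axis=1)            # (N, D)

# Toy usage: 8 regions, 16-dim features, 4 experts.
N, D, E = 8, 16, 4
patches = rng.normal(size=(N, D))
gate_w = rng.normal(size=(D, E))
expert_ws = [rng.normal(size=(D, D)) for _ in range(E)]
out = moe_per_region(patches, gate_w, expert_ws)
```

In a trained model the gate learns to send, say, blurry low-contrast reflections to one expert and sharp ghosting to another, which is the adaptive allocation the paragraph above describes.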
CoCA enhances the reconstruction process by considering both strongly correlated and weakly correlated regions. Unlike traditional attention mechanisms that focus solely on highly related areas, CoCA recognizes that meaningful reflection details can also exist in less correlated regions, enabling a more comprehensive and effective separation.
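A rough way to see how attention can use weakly correlated regions is to keep a second attention map driven by dissimilarity. The sketch below is an assumption-laden illustration of that principle, not the paper's CoCA module; all names and the simple additive fusion are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def complementary_cross_attention(q, k, v):
    """Aggregate features from both strongly and weakly correlated regions.

    The standard branch weights value vectors by query-key similarity;
    the complementary branch negates the scores, so regions that are
    *weakly* correlated with the query still contribute instead of
    being suppressed.
    """
    scores = q @ k.T / np.sqrt(q.shape[1])  # scaled dot-product similarity
    strong = softmax(scores) @ v            # usual attention: correlated regions
    weak = softmax(-scores) @ v             # complementary: anti-correlated regions
    return strong + weak                    # fuse both views (toy fusion)

# Toy usage: queries from one layer attend over features of the other.
q = rng.normal(size=(5, 8))
k = rng.normal(size=(7, 8))
v = rng.normal(size=(7, 8))
fused = complementary_cross_attention(q, k, v)
```

The key contrast with a conventional attention block is the second softmax over negated scores, which gives residual reflection cues in dissimilar regions a path into the reconstruction.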
Extensive evaluations on diverse real-world datasets demonstrate that this approach surpasses existing state-of-the-art methods in both visual quality and quantitative performance. It remains robust even in challenging scenarios with intricate reflection distortions—areas where prior models often underperform.
Professor Sim remarked, “Reflections in natural scenes are inherently complex and vary widely. Traditional neural networks often struggle to handle this variability. Our approach, with its adaptive expert allocation and dual attention mechanisms, offers a more flexible and effective solution. We believe this technology has significant potential across a range of imaging applications, from photography to autonomous systems.”
Supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) and the National Research Foundation of Korea (NRF), this research has been published in the IEEE Transactions on Image Processing.
Journal Reference
Jonghyuk Park and Jae-Young Sim, “Complementary Mixture-of-Experts and Complementary Cross-Attention for Single Image Reflection Separation in the Wild,” IEEE Transactions on Image Processing, 2026.