An innovative AI model creates dynamic lighting effects in portrait images and videos from text input alone. Users can adjust the lighting with descriptive prompts such as ‘warm, freshly cooked chicken’ or ‘icy blue light,’ without the need for complicated editing tools.
Professor Seungryul Baek and his team at the UNIST Artificial Intelligence Graduate School introduced Text2Relight, an AI-driven, lighting-specific foundation model that relights a single portrait image from a creative text prompt, as shown in Figure 1.
This study, conducted in collaboration with Adobe, has been accepted to the 39th Annual AAAI Conference on Artificial Intelligence (AAAI-25), one of the leading conferences in the field, and will be showcased at the Pennsylvania Convention Center in Philadelphia from February 25 to March 4, 2025.
Figure 1: Text2Relight relights a portrait (right) conditioned on a text prompt while preserving the content of the input image (left).
The new model expresses diverse lighting characteristics, including emotional ambiance as well as color and brightness, through natural-language input alone. Notably, it adjusts the lighting of both the subject and the background simultaneously while maintaining the integrity of the original image. Existing text-based image editing models are not specialized for lighting data and often distort the image or offer only limited lighting control; Text2Relight provides a more refined solution.
To enable the AI to learn the correlation between creative text and lighting, the research team built a large-scale synthetic dataset. They used ChatGPT and text-based diffusion models to generate lighting data, and applied OLAT (One-Light-at-A-Time) and lighting-transfer techniques to cover a wide range of lighting conditions.
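The OLAT idea mentioned above is commonly used because lighting combines linearly: an image under any mix of lights can be reconstructed as a weighted sum of basis images, each lit by a single source. The NumPy sketch below illustrates that principle only; the array names, sizes, and weights are invented for illustration and are not taken from the paper's implementation.

```python
import numpy as np

# Illustrative OLAT (One-Light-at-A-Time) recombination.
# Hypothetical data: a stack of N basis images (H x W x 3),
# each rendered or captured under exactly one light source.
rng = np.random.default_rng(0)
n_lights, h, w = 8, 4, 4
olat_basis = rng.random((n_lights, h, w, 3))

def relight(basis: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-light basis images with per-light RGB weights.

    basis:   (N, H, W, 3) one image per light source
    weights: (N, 3)       intensity/color for each light
    Returns the (H, W, 3) relit image: sum over lights of
    basis[n] scaled channel-wise by weights[n].
    """
    return np.einsum("nhwc,nc->hwc", basis, weights)

# Example: a warm key light from one direction plus a faint cool fill.
weights = np.zeros((n_lights, 3))
weights[0] = [1.0, 0.8, 0.5]   # warm key light
weights[3] = [0.1, 0.15, 0.3]  # faint cool fill
relit = relight(olat_basis, weights)
print(relit.shape)  # (4, 4, 3)
```

Because the combination is linear, doubling the weights doubles the light contribution, which is what makes OLAT bases convenient for synthesizing many lighting conditions from one capture.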
In addition, the team further enhanced the model by training on auxiliary datasets for shadow removal and light positioning, thereby improving the visual coherence and realism of the lighting effects.
Professor Baek commented, “Text2Relight holds significant potential in content creation, including reducing editing time in photo and video production and enhancing immersion in virtual and augmented reality settings.”
Junuk Cha, a researcher at the UNIST Graduate School of Artificial Intelligence, is the first author of the study. The research was supported by Adobe and the Ministry of Science and ICT (MSIT).
Journal Reference
Junuk Cha, Mengwei Ren, Krishna Kumar Singh, et al., “Text2Relight: Creative Portrait Relighting with Text Guidance,” in Proc. of the 39th Annual AAAI Conference on Artificial Intelligence (AAAI-25), Philadelphia, PA, USA, 2025.