
July 28, 2025
Bodhisatta & Debshree
Blending neural computation with emotional aesthetics, Bodhisatta Maiti and Debshree Chowdhury chart a new trajectory in AI art. Their creative vision reframes technology as both mirror and medium for inner worlds.
Bodhisatta: Thank you! My background is in AI research, with a focus on vision and language. Over time, I became interested in how these models could do more than classify or generate — how they might help tell stories, express emotion, or reflect inner states. The idea of translating something as intangible as a feeling into a visual became an irresistible challenge.
Debshree: I come from a more intuitive space — I’ve always been drawn to how people express themselves non-verbally, through colours, textures, or movement. When Bodhisatta showed me the pipeline he was working on, I was fascinated by the potential to turn emotions into something visual and visceral. It felt like giving a form to something that’s usually invisible.
Debshree: With Painted Sentience, we wanted to stay close to the human face — to see how emotion could be worn, like a second layer of skin. There’s something deeply personal about seeing joy or fear translated into flowing colours and forms right on a face. But abstraction felt just as honest. Echoes of Emotion removes the person entirely, and in doing so, it lets the feeling speak for itself.
Bodhisatta: From a system design perspective, both series use the same core AI pipeline — but we wanted to test its expressive flexibility. Could the same emotion prompt lead to completely different visual interpretations depending on how we guide the generation phase? That question led directly to this split — one toward portraiture, one toward abstraction.
Bodhisatta: We start with an emotion classifier — an AI model trained to detect emotional tone from either a text input or a voice clip. Once the emotion is identified, we use a language model to describe an imaginary landscape that corresponds to that feeling.
For example, “serenity” might lead to a soft, misty forest with muted blues and greens. That description is then broken down into visual elements — colour palettes, compositional hints, and texture styles. Finally, we feed that into an image generation model to bring the emotion to life visually.
Debshree: We think of it like this: the emotion is the seed, and the language model writes a poetic scene from it. Then the image model paints that poem. And we, in between, nudge it in the right direction — shaping the output until it resonates.
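The staged pipeline described above — emotion classification, scene description, visual-element extraction, and prompt assembly for image generation — can be sketched in miniature. This is a hedged illustration, not the authors' implementation: every function here is a toy stand-in for the real models (the classifier, language model, and image generator), and the names and mappings are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class VisualElements:
    """Visual hints broken out of a scene description."""
    palette: list
    composition: str
    texture: str

# Stand-in mapping: in the real pipeline, a language model writes
# this poetic scene description from the detected emotion.
EMOTION_SCENES = {
    "serenity": "a soft, misty forest with muted blues and greens",
    "fear": "a narrow corridor of jagged shadows in cold greys",
}

def classify_emotion(text):
    """Toy stand-in for the emotion classifier over text or voice input."""
    lowered = text.lower()
    if any(word in lowered for word in ("calm", "peace", "still")):
        return "serenity"
    return "fear"

def describe_scene(emotion):
    """Toy stand-in for the language model's scene description."""
    return EMOTION_SCENES.get(emotion, "ambiguous, incomplete forms")

def extract_visual_elements(scene):
    """Break the description into palette, composition, and texture hints."""
    palette = [colour for colour in ("blues", "greens", "greys") if colour in scene]
    return VisualElements(palette=palette,
                          composition="soft, open framing",
                          texture="diffuse and misty")

def build_image_prompt(emotion, elements):
    """Assemble the final prompt handed to the image generation model."""
    return (f"an abstract landscape expressing {emotion}, "
            f"palette of {', '.join(elements.palette)}, "
            f"{elements.composition}, {elements.texture} textures")

emotion = classify_emotion("I feel calm and at peace")
scene = describe_scene(emotion)
prompt = build_image_prompt(emotion, extract_visual_elements(scene))
```

The "serenity" example above mirrors the one in the interview: the seed emotion becomes a misty, blue-green scene, which is then decomposed into hints the image model can act on.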
Debshree: I’ve always been fascinated by masks — not to hide, but to reveal. In Painted Sentience, the face becomes a canvas, and emotion becomes the brush. We didn’t want to distort the identity of the person, but rather let the emotion bloom across their features — like a thought rising to the surface.
Bodhisatta: Technically, it was a matter of conditioning the AI model to respect the facial structure while embedding abstract visual motifs tied to the emotion. The “painting” isn’t applied randomly — it's driven by colours and patterns associated with the emotion class, drawn from the generated landscape description. We fine-tuned how much the facial features should remain versus how much artistic overlay should emerge.
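One simple way to think about "how much the facial features should remain versus how much artistic overlay should emerge" is a weighted blend between the portrait and an emotion-driven texture. The sketch below is an assumption-laden simplification — the authors describe conditioning a generative model, not literal pixel blending — but it makes the trade-off concrete with a single `preserve` parameter (a hypothetical name).

```python
import numpy as np

def blend_emotion_overlay(face, overlay, preserve=0.6):
    """Blend a portrait with an emotion overlay texture.

    face, overlay: float arrays in [0, 1] with the same (H, W, 3) shape.
    preserve: fraction of the original facial features to keep;
              1.0 returns the face untouched, 0.0 returns pure overlay.
    """
    if face.shape != overlay.shape:
        raise ValueError("face and overlay must share a shape")
    return preserve * face + (1.0 - preserve) * overlay

# Toy 2x2 "images": a bright portrait and a dark emotion texture.
face = np.full((2, 2, 3), 0.8)
overlay = np.zeros((2, 2, 3))
blended = blend_emotion_overlay(face, overlay, preserve=0.5)
```

Tuning `preserve` plays the same role as the fine-tuning the authors mention: higher values keep the identity legible, lower values let the emotion dominate the frame.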
Debshree: Sometimes the emotion is too big, too raw to anchor it to a face. With Echoes of Emotion, we wanted the viewer to feel, not interpret. When there's no subject to relate to, you're left alone with the emotion itself — and that can be powerful. Abstract art invites the viewer to bring their own meaning.
Bodhisatta: From a pipeline perspective, this was a chance to see what the AI would generate when unconstrained by human form. It gave us more room to explore visual intensity, movement, and scale. The structure was looser, but that gave us more room to experiment with how emotion could ripple across a frame.
Bodhisatta: “Curiosity” was a challenge. It’s not as visually defined as, say, fear or joy. The AI often leaned toward ambiguity — strange shapes, incomplete forms — which in hindsight made sense, but it took a while to embrace that. On the flip side, “serenity” was a joy to work with — the colours and flow emerged very naturally.
Debshree: For me, “melancholy” was the most powerful. It didn’t look sad in the usual way. It was gentle, quiet, almost beautiful — like a soft fog. That surprised me. And I think that’s where AI and emotion blend best — when the image teaches us something about the feeling.
Bodhisatta: The hardest part was preserving emotional consistency between what the classifier detects, what the language model generates, and what the image model finally paints. Each step can introduce drift. So we had to monitor how prompts were worded and even tune how much randomness the AI was allowed in image generation.
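The drift problem described here — each stage reinterpreting the emotion slightly — suggests a closed-loop check: re-classify the intermediate text and, if it no longer matches the intended emotion, regenerate with less randomness. The sketch below is a guess at such a guard, with toy stand-ins for the classifier and language model; nothing here is the authors' actual tuning method.

```python
import random

def classify(text):
    """Toy classifier: keyword cues mapped back to an emotion label."""
    for emotion, cues in {"serenity": ("misty", "muted"),
                          "fear": ("jagged", "shadow")}.items():
        if any(cue in text for cue in cues):
            return emotion
    return "unknown"

def generate_scene(emotion, temperature):
    """Toy language model; higher temperature drifts off-emotion more often."""
    scenes = {"serenity": "a misty valley in muted blues",
              "fear": "jagged shadows closing in"}
    if random.random() < temperature * 0.5:
        return "an unrelated bustling market at noon"  # drifted output
    return scenes[emotion]

def generate_with_consistency(emotion, temperature=0.9, retries=5):
    """Regenerate with tightened randomness until the scene matches the emotion."""
    scene = generate_scene(emotion, temperature)
    for _ in range(retries):
        if classify(scene) == emotion:
            return scene, temperature
        temperature *= 0.5  # tighten randomness after detected drift
        scene = generate_scene(emotion, temperature)
    return scene, temperature  # best effort after retries
```

Halving the temperature on each detected mismatch is one plausible policy; the point is simply that consistency between stages can be checked automatically rather than only by eye.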
Debshree: And emotionally, it was about asking — does this image feel like the emotion? That’s hard to measure, but essential. We would often sit with the images for a while, look at them on different days, and ask others how they felt when they saw them. If the answer kept returning to the same emotional space, we knew we were close.
Debshree: It was very fluid. I would often describe how I imagined an emotion — what colours I saw, or what it felt like physically. Bodhisatta would then find ways to translate that into inputs the system could understand. We’d go back and forth — refining language, tweaking outputs — until the image felt emotionally grounded.
Bodhisatta: Exactly. It was a fusion of structure and subjectivity. The pipeline handled the sequence — but the soul came from the back-and-forth between us. I think the duality helped the work become what it is — not just a system, and not just intuition, but both in conversation.
Debshree: It’s deeply affirming. AI art can sometimes feel like an outsider — especially when it comes from emotion rather than novelty. To be recognised by a platform like the London Photography Awards, especially alongside traditional photography, makes us feel seen. It means these ideas matter.
Bodhisatta: I agree. For me, it’s a sign that creative AI work — especially when it's human-centred — can stand beside any other visual form. It’s not about replacing photography or art, but adding a new language to express what we feel.
Bodhisatta: Don’t treat AI as just a tool. Treat it like a collaborator — something that brings unpredictability, insight, and sometimes resistance. And don’t chase aesthetics alone. If there’s a strong idea, the visuals will find their way.
Debshree: Also — don’t be afraid to bring yourself into it. Even when working with algorithms, what you feel, imagine, or remember matters. Technology can do many things, but the meaning? That always starts with you.
Winning Entry
Painted Sentience | 2025 London Photography Awards
This photo series is part of an ongoing research project that explores how human emotions can be translated into visual art using AI. The system begins by detecting emotions from either text or audio using an emotion classification model. Based on the...