GenAI moves from pilot to production in fashion retail
A European fashion platform shows how image generation, systems design, and data scale drive repeat customer engagement
Zalando, a publicly traded Berlin-based online retailer active across Europe and specializing in shoes, fashion, and beauty products, is testing how generative AI can scale from pilot to production.
The company, which has more than 51 million active users across 25 European markets, is deploying customer-facing AI features designed for repeat use rather than one-off experimentation.
The fashion and lifestyle e-commerce platform has been testing a GenAI-powered feature called “Style It,” which lets shoppers visualize complete outfits on digital avatars and encourages exploration across a vast online assortment.
Rather than positioning GenAI as a novelty, Zalando has treated Style It as a product test grounded in customer behavior. The company wanted to see whether users would return after trying the feature, the key benchmark for determining whether GenAI could meaningfully enhance engagement or would remain a one-time curiosity.
“The insight was super positive with what we built. The folks who landed on it got really engaged, started to build their own outfits, and they came back,” said Tofigh Naghibi, head of Zalando Research. “That was the key hypothesis. Are they coming back, or is it just a toy?”
Naghibi said early signals showed that customers who interacted with Style It were willing to spend time experimenting with combinations, sharing outfits, and returning to the feature over time. That behavior encouraged the company to plan a broader rollout, even as technical limitations remained.
The comments were made at Momentum AI 2025, a two-day enterprise technology conference organized by Reuters in London.
The session, titled “Redefining Retail with a GenAI-Powered Style It Solution,” was a fireside discussion between Zalando’s head of research and Stephen Moody, director of data and AI for EMEA at EPAM, about how the company brought generative image technology into a live retail environment.
From lab to retail
The emergence of Style It reflects a broader inflection point in image generation technology. For years, producing realistic, controllable images at scale was considered impractical for live retail environments, particularly when both quality and speed were critical.
Naghibi traced the breakthrough back to advances in generative modeling over the past decade. Early work on generative adversarial networks demonstrated that image generation was possible, but instability made them difficult to scale. That changed with the arrival of diffusion models, which could absorb growing volumes of data and computing power without collapsing.
“With diffusion models, image quality could finally be pushed to the level we wanted for production,” Naghibi said. “Before that, we thought we couldn’t. Now we can.”
The improved stability allowed Zalando to consider deploying image generation in a customer-facing application, where visual errors and long wait times are unacceptable.
The apparent simplicity of Style It masks a complex technical architecture. What initially appeared to be a single AI task turned into a system composed of multiple interconnected models, each responsible for a specific step in the image-generation pipeline.
“What I had in mind was one AI system that takes the product image and generates a beautiful avatar,” Naghibi said. “In reality, what we built consists of seven different AI systems working together.”
Those systems include models for garment placement, pose consistency, image refinement, and automated quality assurance. The latter plays a critical role in filtering out flawed outputs before they reach customers, such as missing limbs, distorted proportions, or incorrect product imagery.
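The architecture described above, with specialized models chained together and a quality gate at the end, can be sketched in miniature. The stage names, checks, and data structures below are purely illustrative stand-ins, not Zalando's actual system:

```python
# Hypothetical sketch of a multi-stage image-generation pipeline with an
# automated QA gate, loosely modeled on the article's description.
# Stage names and checks are illustrative, not Zalando's real components.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    image: str                              # placeholder for pixel data
    flags: list = field(default_factory=list)

# Each "model" is represented by a trivial function tagging the image.
def place_garment(c): c.image += "+garment"; return c
def enforce_pose(c):  c.image += "+pose"; return c
def refine(c):        c.image += "+refined"; return c

def qa_gate(c):
    # A dedicated model would score the render for defects (missing limbs,
    # distorted proportions, wrong product); here we fake a single check.
    if "garment" not in c.image:
        c.flags.append("missing-garment")
    return not c.flags        # only unflagged images reach customers

PIPELINE = [place_garment, enforce_pose, refine]

def generate(base_image):
    c = Candidate(image=base_image)
    for stage in PIPELINE:
        c = stage(c)
    return c if qa_gate(c) else None        # reject flawed outputs
```

The design point is that the QA model sits outside the generative stages, so flawed outputs are filtered rather than requiring every upstream model to be perfect.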
The complexity forced the team to rethink how GenAI products should be engineered. Rather than waiting for a perfect end-to-end model, Zalando chose to assemble components that could compensate for the limitations of current technology.
“Technology is never ready. Technology is moving forward,” Naghibi said. “You have to meet it where it is, not where you wish it is.”
Engineering at scale
Performance was another major constraint. For Style It to function as a retail feature, images needed to be generated in seconds, not tens of seconds. Long delays would break the shopping flow and undermine engagement.
Zalando addressed the problem by using a teacher–student approach. A large, high-quality model is first trained offline, then used to train a smaller, faster model that can generate images in near real time. The result is a system capable of producing images in under five seconds.
“If you want to use image generation in a customer-facing application, you cannot wait 30 seconds,” Naghibi said. “That’s really not customer-facing.”
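The teacher–student idea can be shown with a toy numeric example: a slow, iterative "teacher" produces targets offline, and a cheap "student" is fit to reproduce them in a single step. Everything here is a deliberately simplified stand-in; the real models are large diffusion networks, not linear maps:

```python
# Minimal sketch of teacher-student distillation. The "teacher" refines
# its answer over many iterations (a stand-in for a many-step diffusion
# model); the "student" is a one-step model fit to the teacher's outputs.

import random
random.seed(0)

def teacher(x, steps=50):
    # Slow iterative refinement: each step moves y toward 2*x.
    y = x
    for _ in range(steps):
        y = y + 0.1 * (2.0 * x - y)
    return y

# Collect (input, teacher output) training pairs offline.
inputs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, teacher(x)) for x in inputs]

# "Student": a single linear map y = w * x, fit by least squares.
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def student(x):
    return w * x              # one cheap step instead of fifty

# The student matches the teacher at a fraction of the inference cost.
max_error = max(abs(student(x) - teacher(x)) for x in inputs)
```

The same trade applies at production scale: the expensive model runs only during training, while the distilled model handles customer-facing requests within the latency budget.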
The approach allowed Zalando to generate up to one million images per week during the pilot phase, supported by cloud infrastructure designed for elastic scaling.
Behind that capability sits a broader modernization of Zalando’s data platform. The retailer has been working with EPAM and Amazon Web Services since 2022 across more than a dozen programs to migrate legacy analytics and data warehouse workloads to cloud-native services. By decoupling storage and compute and shifting to AWS-based architectures, Zalando reduced its exposure to peak-load costs, improved query performance, and made it easier for dozens of internal data teams to experiment with new AI-driven features without disrupting core operations.
That foundation proved critical for Style It. Generating images in near real time required not only fast models but also reliable data pipelines, scalable orchestration, and predictable performance during traffic spikes such as promotional events. The GenAI feature was built and hosted on AWS services, including object storage, compute, and container orchestration, allowing the team to scale capacity up or down as customer demand fluctuated.
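Elastic scaling of the kind described, adding and removing capacity as demand fluctuates, often reduces to a simple policy: size the worker pool to the request backlog within a fixed band. The function below is a hypothetical sketch of such a heuristic (the helper name, the five-second window, and the limits are assumptions for illustration, with the window loosely matching the latency target mentioned earlier in the article):

```python
# Illustrative capacity-planning heuristic for an image-generation
# service: scale workers to the queue depth, clamped to a min/max band.
# All names and numbers here are hypothetical, not Zalando's policy.

import math

def workers_needed(queue_depth, images_per_worker_per_window=1,
                   min_workers=2, max_workers=200):
    # Assume one worker finishes one image per ~5 s window, so draining
    # the backlog within one window needs roughly queue_depth workers.
    ideal = math.ceil(queue_depth / images_per_worker_per_window)
    # Clamp: keep a small warm pool, never exceed the cost ceiling.
    return max(min_workers, min(max_workers, ideal))
```

In practice a managed autoscaler (e.g. container-orchestration target tracking) applies this kind of rule continuously, which is what lets a team absorb promotional traffic spikes without permanently paying for peak capacity.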
Brand protection emerged as another critical consideration. As a platform hosting thousands of fashion brands, Zalando needed safeguards to prevent visual inaccuracies that could undermine brand trust.
“You don’t want to generate an image of Adidas with four lines,” Naghibi said. “That would be devastating.”
To address this risk, the company deployed specialized AI models trained to automatically detect brand-related errors and quality failures. Even then, perfection was not guaranteed.
“What we learned is that customers and brands are more forgiving today than they were a year ago,” Naghibi said, noting that tolerance for minor AI imperfections appears to be growing as generative tools become more familiar.
Zalando also paid close attention to cultural concerns surrounding AI and creative work. The avatars used in Style It were deliberately designed to look cartoon-like rather than photorealistic, signaling that the tool was not intended to replace human models or fashion photography.
“When AI enters the domain of someone’s career, especially creative work, the reaction can be very harsh,” Naghibi said.
The company kept traditional photo shoots at the forefront of its product pages, using AI-generated visuals as a complementary layer that enables combinations and experimentation not possible with manual photography alone.
What comes next
Style It was first released in selected markets in 2024, and Zalando is now preparing to expand the feature to its main landing pages. The next phase will focus on real-time image generation at a broader scale, while continuing to refine quality controls and performance.
As generative AI tools mature, Zalando’s experience suggests that success will depend less on raw model capability and more on systems engineering, data foundations, and thoughtful product design. For retailers considering similar moves, Style It offers a case study in how GenAI can transition from experimentation to production without losing sight of customer trust and usability.