
I led the redesign of an AI image generation product built for social media advertisers. The improvements our team made resulted in a 35% increase in published campaigns that included at least one AI-generated image variation.
Industry
Ad Tech
Client
Meta
Platform
Desktop
Year
2025
Overview
My Role
I was the lead designer for this initiative. I collaborated with product management and engineering within my own team and coordinated with product designers on other teams. As an IC5, I was expected to directly influence strategy, deliver design solutions, and engage with stakeholders to gain support for proposed directions.
Process
We followed a four-step process to deliver value for customers and the business. First, we defined the problem space and project goals. Next, I designed multiple solutions. Once our most promising options were built, we ran tests to validate our hypotheses. Based on the learnings from our initial experiments, we planned further iterations.
Define
Problem
Many advertisers lack the expertise and resources to create high-performing assets at scale, limiting their ability to achieve business goals. The manual process of creating multiple creative variations is time-consuming and inefficient. With generative AI, we saw an opportunity to empower advertisers to overcome these challenges.
Design
Process
At Meta, the craft quality bar is very high, so I followed a rigorous process to meet that standard. I leveraged design system components and auto-layout to create high fidelity mockups in Figma. I built clickable prototypes and slide presentations for stakeholder alignment. I presented my work in team critiques and leadership reviews, continuously refining my designs based on feedback. I worked closely with engineering and QA to ensure pixel-perfect implementation and address edge cases before the production launch.
Zero-Click Generation
To reduce latency and improve engagement, I redesigned the flow to generate images automatically based on the advertiser’s inputs before they reached the Image Generation step in the creative setup modal. This allowed us to lead with ready-to-select image options rather than forcing manual decisions.
Composition Editing
I designed a lightweight editing interface that gave advertisers control over key composition elements. By focusing on correction instead of full customization, we avoided overwhelming users. This approach enabled media buyers to stay aligned with brand guidelines, helping to overcome a major barrier to adoption.
Feedback loop
I designed an in-product feedback experience that allowed advertisers to quickly share input on individual image variations without interrupting their workflow. By capturing granular insights at the instance level, I helped the team identify specific quality gaps and prioritize model improvements.
Validate
Implementation
We validated our hypotheses through a phased rollout to English-speaking advertisers, leveraging data to inform improvements. Key adjustments included reducing the header size to prioritize image visibility. After achieving stable results at a 25% rollout, we expanded to 50%. The feature is now live for an estimated 1.75 million users, with statistically significant adoption gains. Full rollout to English-speaking advertisers is planned for later this year, with language expansion slated for 2026.
Impact
The improvements our team made resulted in a 35% increase in published campaigns that included at least one AI-generated image variation. Campaigns with AI-generated images saw an 11% lift in click-through rate (CTR) and an 8% lift in conversion rate (CVR). We also collected over 10,000 responses through the in-product feedback survey during the first month, giving us valuable insights to improve model quality. These product enhancements helped strengthen Meta's advertising platform by keeping us competitive in an evolving AI landscape.




AI Generations selected in Ads Manager

Ad on Facebook Feed