Case Study: AI Bias in Advertising and the Definition of Happiness
Introduction
As a media and film student, I’ve always been fascinated by how visual storytelling can shape perceptions. During a class assignment, I created an advertising campaign for a fictional insurance company, For Life, using AI-generated stock images and videos. The campaign revolved around the theme “Finding the Definition of Happiness”—a concept I believed would resonate universally.
The idea was simple: showcase moments that people commonly associate with happiness—family gatherings, personal milestones, and everyday joys—then connect them to the value of insurance as a way to protect those precious experiences.
But after presenting my work, my professor asked two questions that completely changed the way I viewed my project:
“Have you noticed that most of the people in this advertisement are white?”

“Do you believe happiness is only defined by positive moments?”
I was caught off guard. Until that moment, I hadn’t even thought about who was being represented in my video or how happiness was being framed. That moment of realization pushed me to dig deeper into two critical issues:
The racial bias embedded in AI-generated media and stock content
The oversimplified, one-sided portrayal of happiness in advertising
This case study explores how my project unintentionally reflected these biases and what I learned from it.
AI Bias in Stock Media: Who Gets to Be Seen?
After my professor’s question, I spent an entire day manually browsing through 20 of the top stock image and video platforms, searching for diverse representations of people. What I found was shocking.
Over 80% of the images and videos featured white individuals; fewer than 20% included people of color, and when they did appear, they were often cast in stereotypical roles: athletes, laborers, or cultural “token” figures. Even when I searched for terms like “diverse group of people” or “happy family from different backgrounds,” the results remained overwhelmingly white.
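Out of curiosity, I also worked out how a tally like this could be reproduced more systematically. The sketch below is a minimal, hypothetical version: it assumes a hand-labeled CSV file (results.csv, with a made-up “representation” column filled in manually during an audit like mine), not any real platform export.

```python
# Minimal sketch of tallying a hand-labeled stock-media audit.
# Hypothetical input: results.csv with one row per image or video and a
# "representation" column assigned manually while browsing each platform.
import csv
from collections import Counter

counts = Counter()
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["representation"]] += 1

total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} of {total} ({100 * n / total:.1f}%)")
```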
At first, I assumed this was just a limitation of stock platforms, but as I dug deeper, I realized the problem went beyond that. The AI algorithms curating these results were reflecting the bias already present in the datasets they were trained on. This raised broader concerns about how AI, which is increasingly integrated into media and advertising, is perpetuating and even exacerbating existing racial disparities.
Why Does This Happen?
Historical Bias in Media Production
The stock media industry has long been dominated by Western, white-centric aesthetics, the result of decades of marketing strategies designed to appeal primarily to white audiences in markets such as the United States and Europe. The historical underrepresentation of people of color in advertising has created a cycle in which past biases continue to shape present media.
Moreover, photographic and film lighting techniques have historically been optimized for lighter skin tones. Color film was calibrated against Kodak’s “Shirley cards,” reference photographs of white models that treated white skin as the standard, making it more difficult to properly capture and light darker skin tones. That legacy continues to influence modern digital photography and AI image processing.
AI Reflects and Reinforces Existing Biases
AI-driven stock search tools rely on existing data; if that data is already skewed, the system will keep prioritizing certain demographics over others. In the MIT Media Lab’s Gender Shades study, researcher Joy Buolamwini found that commercial facial analysis algorithms were far less accurate for darker-skinned individuals, particularly Black women, because their training data lacked diversity. This inaccuracy isn’t just a flaw; it has real-world consequences, from misidentification in security systems to the erasure of people of color in visual storytelling.
AI bias also extends to the language and ranking models that decide which images appear for a given search query. If those systems have been trained on predominantly white datasets, they will keep surfacing white-centric imagery as the default representation of concepts like happiness, success, and family.
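To make that feedback loop concrete, here is a deliberately simplified toy model (not any real search engine): a hypothetical engine scores candidates by how common their group was in past results, so even a perfectly balanced candidate pool produces a one-sided first page.

```python
# Toy illustration of "bias in, bias out" in ranking (deliberately exaggerated;
# real systems are far more complex, but the feedback loop has the same shape).
import random

random.seed(0)

# Hypothetical historical data: 80% of past "happy family" images featured
# white subjects, mirroring the skew found in the audit above.
history = ["white"] * 80 + ["poc"] * 20
prior = {g: history.count(g) / len(history) for g in set(history)}

# The candidate pool itself is balanced roughly 50/50...
candidates = [(f"img_{i:03d}", random.choice(["white", "poc"])) for i in range(100)]

# ...but ranking by the learned prior pushes majority-group images to the top.
ranked = sorted(candidates, key=lambda c: prior[c[1]], reverse=True)
print([group for _, group in ranked[:10]])  # the first page is entirely one group
```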
Lighting and Technology Favor Light Skin
Even today, many cameras and AI-enhanced imaging tools struggle to render darker skin tones correctly, often leaving them underexposed or stripped of detail. This isn’t just a technical quirk; it’s a design flaw rooted in the fact that the people who historically developed these technologies didn’t prioritize inclusivity.
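One way to see the mechanism is a toy model of average-metering auto-exposure, where the camera picks a single gain so the whole frame averages out to mid-gray (the classic 18% target). In the sketch below (illustrative numbers, not a real camera pipeline), a bright background drives the meter, and the darker-skinned subject at the center of the frame ends up far below mid-gray.

```python
# Toy model of average-metering auto-exposure (illustrative only).
# Luminance is on a 0..1 scale; 0.18 is the classic 18% mid-gray target.
import numpy as np

scene = np.full((100, 100), 0.7)   # bright background dominates the frame
scene[30:70, 30:70] = 0.15         # darker-skinned subject in the center

gain = 0.18 / scene.mean()         # one global gain, chosen from the average
exposed = np.clip(scene * gain, 0.0, 1.0)

print(f"gain applied: {gain:.2f}")                       # ~0.29
print(f"subject luminance after: {exposed[50, 50]:.3f}")  # ~0.044, nearly black
```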
Real-World Examples of AI Bias
This issue isn’t just limited to stock media. It exists in AI-driven image generation and facial recognition systems as well.
Majid Hussain, co-founder of MiQuest AI, explained that “AI bias in skin tone analysis is largely due to imbalances in training data sets.” (Vogue Business)
Google’s Gemini image generator faced backlash in early 2024 after creating racially diverse images of historical figures, including Nazi-era German soldiers, highlighting how AI can misfire when it tries to correct for past biases. (Wall Street Journal)
This made me realize something important: My ad wasn’t just reflecting my own unconscious bias—it was exposing a systemic issue in AI-generated media. The lack of diversity in my AI-driven advertisement was not a coincidence; it was the result of deep-seated industry-wide practices that prioritize whiteness as the default.
Is Happiness Just About Positive Moments?
The second question my professor asked made me pause: “Do you believe happiness is only defined by positive moments?”
At first, I thought, Well, of course! That’s what happiness is—joyful, beautiful moments. But the more I reflected, the more I realized my ad was portraying happiness in a one-dimensional way.
My campaign only featured perfect, Instagram-worthy moments: families celebrating a birthday, a couple enjoying a sunset, and children playing in a park. But real happiness isn’t just about picture-perfect moments. It’s about overcoming obstacles, growth, and finding meaning even in difficult times.
How Advertising Skews the Idea of Happiness
Commercialized Happiness Is Unrealistic
Most ads depict happiness as constant and effortless, which pressures people to feel happy all the time and sets unrealistic expectations. That standard can be damaging because it ignores the emotional depth and resilience that real life demands.
Happiness Includes Vulnerability and Struggle
Psychologists have found that true well-being often comes from facing challenges and developing resilience. Some of the most powerful, emotional storytelling includes both joy and hardship—because that’s how happiness really works.
Missed Emotional Depth in My Ad
If I had included moments of struggle, perseverance, and emotional complexity, my campaign would have felt more genuine and relatable.
Examples I could have used:
A child falling off a bike, then getting up and trying again.
A couple going through a difficult moment, then reconnecting.
A person facing an illness, surrounded by their loved ones.
I realized my campaign was too polished, too perfect—and because of that, it lacked the depth of real human experience.
Conclusion
What started as a simple AI-generated advertisement turned into an eye-opening experience about bias in media and the way advertising distorts human emotions. My professor’s questions forced me to step back and critically evaluate my own work.
Moving forward, I want to create media that is more inclusive, emotionally honest, and socially responsible. This project didn’t just change the way I approach advertising—it changed the way I think about storytelling as a whole.