Why your next car will be grey and your next thought will be too
The decline of colour in consumer products mirrors a troubling trend in AI, where bias and homogenisation threaten intellectual diversity, societal fairness, and the integrity of human knowledge itself, says Lewis Liu
This week we’re getting a new family car. When my wife texted me the website showing the colour choices, I texted back: “I cannot tell the difference; it’s all the same”. In fact, the share of cars that aren’t greyscale – that is, not black, grey, silver or white – has halved, from 40 per cent in 2005 to just 20 per cent last year.
The collapse of colour in man-made objects is well documented across multiple studies, from clothes to teapots to cars. The most prominent sociological explanation is that neutral-coloured products appeal to a larger audience, so more neutral products get made, which feeds back into the cycle. Auto analyst Karl Brauer captures this perfectly: “If you think about it, if everyone is doing that, then all of these gray, black, white, and silver cars aren’t reflecting what everyone wants, they’re reflecting what dealers and consumers think everyone wants.”
So we’re losing our colour, but what does this have to do with AI?
Sadly, the same phenomenon could afflict AI and, by extension, our collective knowledge base as a species. The impact would be far more devastating to our civilisation than merely living in a monochrome world.
First, let’s talk about today’s AI models. They are extremely powerful (a topic I’ve covered extensively in other columns), but they are also inherently biased. AI safety experts have flagged “AI bias” as an issue, but I believe the community has done a poor job of describing why and how these biases occur. In fact, generative AI models need to be biased in order to work. Just as our human brains make generalisations about the world, so does the AI training process; it’s a mathematical necessity. Research has demonstrated that generative AI models (for both images and text) over-index on common patterns while grossly under-representing anything even somewhat uncommon.
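To make that mechanism concrete, here is a minimal Python sketch. The categories and probabilities are invented for illustration, and the squaring step is a deliberately crude stand-in for a model that favours high-probability patterns; it is not how any real system is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical categories; the 2 per cent one stands in for a real
# but uncommon pattern in the training data.
categories = ["common A", "common B", "uncommon C", "rare D"]
true_probs = np.array([0.55, 0.33, 0.10, 0.02])

# "Training": estimate the distribution from a finite sample.
sample = rng.choice(len(categories), size=500, p=true_probs)
learned = np.bincount(sample, minlength=len(categories)) / 500

# A model that over-indexes on common patterns, here caricatured by
# sharpening the learned distribution, pushes rare categories towards zero.
model = learned ** 2
model /= model.sum()

for name, p_true, p_model in zip(categories, true_probs, model):
    print(f"{name}: true {p_true:.1%} -> model {p_model:.2%}")
```

Run it and the 2 per cent category drops to a small fraction of 1 per cent: the model doesn’t deny that the rare pattern exists, it just almost never produces it.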
Let me share a personal example. I tried to generate an image of my family to show my two boys how generative AI works, but no matter what prompts I used, I could not get ChatGPT to produce the correct racial combination: an ethnically Chinese father and a white mother. Why? Because couples like my wife and me are uncommon in the US, while the reverse pairing is far more prevalent. From an AI perspective, a family like mine simply doesn’t exist in the model’s understanding of what’s “normal”. This isn’t an isolated case: Bloomberg published a comprehensive study showing how AI consistently under-represents female and non-white CEOs while over-representing certain demographics in service roles. This is the cold, mathematical reality of how generative AI models function.
Heading for model collapse
Far from being a trivial example, this pattern has serious implications. GenAI tools are already being deployed by DOGE in the US to determine staff reductions, and corporations are increasingly using these systems to make hiring decisions. The embedded biases within these models can have profound, insidious societal effects at an unprecedented scale.
Next, we must confront the escalating cycle as society increasingly relies on these models to generate content. Research indicates that 74 per cent of new internet content is already AI-generated, while Europol estimates that by next year, over 90 per cent of online content will be AI-produced. Because this new content inherits the biases of existing AI models, we face a critical problem: when this heavily biased internet content is used to train the next generation of models, it will further amplify high-probability patterns (such as white male CEOs) while further diminishing representation of less common realities (such as families like mine).
In AI development, if this feedback loop continues through successive training cycles – AI output training new AI models – the system eventually breaks down, a phenomenon known as “model collapse”, in which the model becomes completely unusable. The parallel to our increasingly monochromatic world is striking: just as colour diversity has collapsed in consumer products, we risk a collapse in the diversity of thought.
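Extending the earlier sketch shows why the loop is so dangerous. In this toy simulation, again with invented numbers and the same caricatured “sharpening” model, each generation is trained only on samples of the previous generation’s output.

```python
import numpy as np

rng = np.random.default_rng(1)
probs = np.array([0.55, 0.33, 0.10, 0.02])  # generation-zero training data

for generation in range(1, 6):
    # The current model's "output" becomes the next model's training set.
    sample = rng.choice(len(probs), size=500, p=probs)
    learned = np.bincount(sample, minlength=len(probs)) / 500

    # Each retrained model over-indexes on what was already common.
    probs = learned ** 2
    probs /= probs.sum()
    print(f"generation {generation}: {np.round(probs, 4)}")
```

Within a handful of cycles, the rare categories hit zero and the distribution piles up on a single mode. And once a pattern has dropped out of the training data, no later generation can recover it.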
The danger extends far beyond technical model collapse. While that alone would have severe consequences as our world becomes increasingly AI-dependent, the more alarming threat is the profound impact on our physical reality. As AI outputs become progressively more biased, they will begin to reshape human perceptions of what counts as “normal” or “real”. These distorted perceptions will inevitably manifest in our physical world, creating a devastating combination of what I call “human knowledge collapse” and a reinforcing cycle of toxic homogeneity.
How do we prevent this dystopian future, given that AI is here to stay?
As individuals, we must commit to engaging with original content. Don’t settle for AI summaries from ChatGPT or Perplexity; seek out primary sources and form independent judgements. Each time you choose an original source over an AI digest, you vote for intellectual diversity and help shape tomorrow’s AI. Think independently first, then use AI as a tool, not a replacement for your own intellectual work. Parents especially: ensure your children develop critical thinking skills. The ability to reason independently will be their most valuable asset in a world where pre-processed knowledge is served on demand.
As a society, we must strengthen IP protections for creators whose collective work made AI possible. Developing better AI requires safeguarding these creators through both legal frameworks and public funding. Our educational systems must integrate AI literacy alongside traditional skills, teaching students to recognise and resist algorithmic uniformity. Following precedents set in banking and insurance, where regulations have long governed algorithmic credit decisions, we need targeted oversight to prevent AI bias from infecting hiring, firing and consumer decisions.
For AI builders, the responsibility is clear: design systems that actively counter bias amplification, particularly in autonomous workflows. Create rigorous evaluation frameworks that test for representation of valid but uncommon scenarios. Let’s build products that empower the individual rather than eroding intellectual diversity. The power entrusted to us by investors, customers and employees demands nothing less than ethical vigilance. In an increasingly monochrome world, those who preserve diversity of thought will not only shine, they will safeguard humanity’s collective future.
As for the car, we’re driving home the most colourful model the dealership has: a grey-blue hybrid Volvo XC90.
Dr Lewis Z. Liu is co-founder and CEO of Eigen Technologies