Customising reality: Will we ever see the world unfiltered again?

AI and deepfakes mean the world we see through our screens is already fiction. The next step is removing screens altogether, says Paul Armstrong
Reality used to be something you could trust. What you saw, heard and experienced was, for the most part, real. But as AI-driven filters, deepfake augmented reality (AR) and immersive environments creep further into everyday life, the line between real and synthetic is dissolving. The question is no longer whether AI will distort perception; it already has. The real concern is whether we'll ever be able to switch it off, or even know that we need to.
There was a time when photo filters were just playful tweaks. Now, AI-enhanced reality is so pervasive that entire digital personas are crafted out of thin air. Instagram faces, AI-generated influencers and deepfake avatars are standard fare. News images are algorithmically “enhanced”, videos are subtly altered and AI-powered beauty filters don’t just soften wrinkles, they shift bone structures, erase ethnic markers and rewire self-image at a fundamental level. The world we see through our screens is already fiction. The next step is removing screens altogether.
Augmented reality was supposed to add layers to our experience of the world. What it’s doing instead is replacing reality itself. With deepfake AR, AI doesn’t just modify digital content, it overlays a version of reality that is entirely customisable. Real-time face-swapping, AI-powered speech modulation and personalised content feeds mean that two people can stand in the same physical location and experience completely different digital realities.
Take AI-powered video calls. It's already possible to subtly tweak your real-time appearance with smoother skin, brighter eyes and better lighting. Now push that further. Your voice can be modulated, your body language adjusted, your facial expressions rewritten in real time. The person on the other end has no idea they're interacting with an altered version of you. There are already cases of people donning AR masks to sit job interviews on someone else's behalf. And why stop at video calls? AR glasses will soon let us redesign the world as we see fit. The people around us can be digitally "enhanced" to look more attractive. Ads can be erased from view. AI can edit live interactions in real time, filtering out undesirable speech patterns or emotional responses.
If perception is dictated by algorithms, then what happens to truth? More importantly, what happens to trust? The internet fractured collective experience. AI-mediated reality seems destined to shatter it completely. Two people standing next to each other in Times Square might not see the same billboards, the same news headlines, or even the same people. One might be living in a hyper-commercialised, ad-saturated version of the world, while the other has paid to strip it all away. The advertising "poor tax" will feel more real than ever. Are we about to see a trillion-dollar reality-as-a-service industry blossom?
Dystopia?
Sadly, this isn’t hyperbole or a distant dystopia, it’s already happening. Google and co’s generative search is delivering AI-rewritten versions of the internet, personalising not just the results but the actual content users see. TikTok’s For You Page is so finely tuned that people exist in entirely different algorithmic bubbles without realising it. Now apply that logic to everything. Shopping streets where brands bid to be visible to you. Conversations where AI filters out speech patterns it deems offensive before they even reach your ears. A reality curated entirely by algorithms that never reveal what they’ve removed.
The more dangerous part? AI doesn't just distort how we see reality, it changes how we remember it. A recent study found that AI-generated content can implant false memories so convincing that people genuinely recall events that never happened. When AI edits real-time perception, it's not just shaping what we experience, it's rewriting history on the fly.
So who controls all these filters? I'll give you three guesses. AI-driven reality isn't inherently bad. There are benefits to being able to customise how we interact with the world. Imagine a person with PTSD using AR glasses to filter out distressing triggers, or an AI overlay that enhances accessibility for the visually impaired. But these tools will not remain personal choices for long.
Big Tech will control the filters, deciding which versions of reality are profitable, permissible and politically acceptable. An AR cityscape could block out independent businesses in favour of sponsored content. AI-driven news feeds could erase inconvenient narratives entirely. And once AI controls real-time perception, dissent becomes harder to prove. If the news story you saw yesterday doesn’t exist today, did it ever happen?
Can we opt out? Theoretically, yes. Practically, no. AI-mediated reality will become the default because opting out will mean exclusion. If job interviews, dating, and even casual conversation rely on AI-enhanced presentation, then refusing to engage with it becomes a disadvantage. People who choose to live in unfiltered reality might soon be seen as socially or professionally unpolished, less competitive in a world where perfection is automated.
There’s also the issue of compulsory augmentation. Employers may require AR interfaces for efficiency. Governments could mandate AI-filtered content for "safety". Businesses will design physical spaces assuming that visitors are using AR overlays. A world without AI mediation might cease to function altogether.
Businesses need to start planning for an AI-mediated future now, both to protect their own interests and to shape how these technologies evolve. In the short term, companies must audit their reliance on AI-enhanced content, ensure transparency in how they present reality to consumers, and establish ethical guidelines for AI-generated interactions. Customer trust will hinge on authenticity; businesses that are upfront about their use of AI will hold an advantage over those that quietly manipulate perception. In the long term, organisations should push for industry-wide standards on AI transparency, advocate for user control over AI filters and invest in technologies that give consumers the option to toggle between mediated and unmediated experiences. The businesses that take a proactive stance now will be the ones defining what “real” means in the decades to come.
We are at the point where reality is being rewritten in real time, often without our explicit consent. AI-mediated experiences are so subtle, so integrated into our daily lives, that most people don't even realise they are engaging with a version of reality rather than reality itself. The question isn't whether AI will reshape perception, it's whether we'll ever be able to turn it off.
The fight (back?) for an unfiltered world starts now. If businesses, regulators, and individuals don’t challenge the creeping normalisation of AI-curated reality, opting out won’t be an option. The world you see will be whatever the algorithm decides. And if you never see the alternative, you won’t even know what’s missing.
Paul Armstrong is founder of TBD Group, runs TBD+ and author of Disruptive Technologies