Can AI Be Fair Without Considering Who You Are?
As new U.S. AI policy redefines what “bias” in AI systems means, CPI explores how these shifts affect people’s perceptions of technology, identity, and fairness outcomes.
SUMMARY
We explore the cyberpsychological implications of important shifts — from identity-based metrics toward a framework of neutrality — in AI systems and algorithmic technologies.
Most people don’t spend much time thinking about bias in artificial intelligence (AI).
But whether we notice it or not, bias shapes how algorithms operate — from what content we’re shown, to what loans we’re approved for, to how medical or hiring tools interpret our data.
For those reasons, bias frameworks from organizations such as NIST and the OECD have encouraged AI systems to detect and mitigate bias based on race, gender, ability, or class. For years, these frameworks have shared a common interpretation of bias: one that accounts for how different people might be treated differently by algorithmic technologies.
Today, that interpretation of what bias is (and isn’t) in artificial intelligence is being rewritten.
Under the current administration’s new AI policy, bias is redefined to emphasize identity-blind approaches.
In other words, instead of directing AI systems to account for race, gender, or other demographic factors in their outcomes, the policy promotes a framework where such identity markers are deemphasized entirely — aiming to treat individuals as equal, regardless of background.
This kind of shift reshapes what machines learn to pay attention to and what they’re taught to ignore.
As such, U.S. AI policy now promotes a “blind” version of bias: one in which AI systems neither consider nor apply personal identity markers to any individual.
By treating users neutrally, the new policy aims to apply fairness evenly across AI systems, regardless of one’s demographic background.
Identity matters — but so can neutrality
From a cyberpsychological perspective, clear arguments have been made for both sides of this identity-versus-neutrality debate, which is no surprise: psychological experience encompasses a diverse range of ideas and perspectives.
Arguments for identity markers in AI systems
Some researchers argue that when AI systems completely ignore identity markers (like race, gender, or lived experience), users may feel unseen. This can lead to what’s known as “perceived unfairness,” a concept in psychology where people distrust systems they feel don’t understand their context — even if those systems apply the same rules to everyone.
Arguments for identity-neutral approaches
On the other hand, others believe that identity-blind systems may actually increase trust, especially among those who feel that demographic-based adjustments introduce new forms of bias or favoritism.
This view holds that when algorithms treat everyone the same — without considering race, gender, or background — it creates a sense of equal footing and transparency, which can strengthen trust in institutions that are often accused of being ideologically biased or politically motivated.
How people perceive bias and arrive at trust in AI systems can (and does) differ widely depending on their experiences, values, and expectations.
Both perspectives reflect real psychological responses, and the strength of the arguments on each side demonstrates the natural tensions that arise whenever technology is framed and designed.
Many questions, but no easy answers
Technological systems of any kind affect our wellbeing to some degree, whether cognitively, emotionally, or socio-economically.
In the context of bias in AI systems, our shared understanding of what bias means is undergoing a significant overhaul, one that will take time to adjust to, sort through, and analyze.
Questions that come to mind at this time, in light of these momentous shifts toward identity-neutral AI systems, span our three key pillars as follows:
Responsible AI
How will identity-neutral AI systems be audited for unintended disparities if demographic information is deliberately excluded from training, testing, and outcome review?
Responsibility means building systems that are safe, transparent, and accountable. If race, gender, or other identity markers are removed at the design level, it may be challenging to measure disparate impacts. This raises concerns about how we define and track harm.
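One concrete illustration of this tension: a standard disparate-impact audit (the “four-fifths rule” used in U.S. employment-selection guidance) requires group labels at evaluation time, even if the model itself never sees them. A minimal sketch, with all data and function names purely illustrative:

```python
# Illustrative sketch of a four-fifths-rule disparate-impact check.
# Note: the audit needs group labels for outcomes, even when the
# model that produced those outcomes is identity-blind.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group_label, approved: bool) pairs."""
    totals, approved = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest.
    Values below 0.8 are conventionally flagged for review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcome data: group A approved 8/10, group B 5/10.
data = ([("A", True)] * 8 + [("A", False)] * 2 +
        [("B", True)] * 5 + [("B", False)] * 5)
print(disparate_impact_ratio(data))  # 0.5 / 0.8 = 0.625, below 0.8
```

If the group column is never collected, this computation simply cannot run, which is what makes auditing identity-neutral systems an open question.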
Ethical AI
In the absence of demographic awareness, whose ethical framework determines whether an AI decision is fair, neutral, or harmful?
Ethical design isn’t just about removing bias — it’s about making value-aligned decisions. If we’re no longer using identity-based lenses, which values become dominant in deciding what’s “neutral”? And do those values reflect broad societal consensus or a narrower worldview?
Humane AI
How might identity-blind systems affect users’ psychological sense of recognition, inclusion, or trust — especially for those whose lived experiences have historically been ignored or minimized?
Technology isn’t emotionally neutral. How people feel about how they’re treated (or not) by a system influences everything from engagement to perceived legitimacy. Erasing identity from algorithmic systems may have unforeseen psycho-affective consequences, even with the best intention of treating every user as technologically “equal” to everyone else.
This domain, and the inherent questions that will continue unfolding as identity-neutral AI systems are deployed, is one we’ll be closely monitoring for months and years to come.
PS — Want to see what the current U.S. policy says about bias? See our side-by-side comparison of how traditional AI ethics frameworks define bias and how that’s changing under current federal guidance.

Mayra Ruiz-McPherson, PhD(c), MA, MFA
Executive Director & Founder
The CyberPsych Institute (CPI)
Empowering Minds for the AI Age
