AI Ethics Theater: Your Company's 'Responsible AI' Policy Is Mostly Performance Art
AI ethics has become the tech industry's version of greenwashing, except instead of saving the planet, companies claim they're protecting humanity from algorithmic harm.
This is part of a series of articles exploring artificial intelligence (AI) and its impact on our lives, told from the perspective of a technology industry veteran, though not (yet) an AI expert. If you want to start at the beginning, check out the series page.
Walk into any tech company's board meeting these days and you'll hear the same buzzwords: "responsible AI," "ethical guidelines," "trustworthy systems." Read their press releases and you'll find commitments to fairness, transparency, and accountability that sound like they were written by philosophy professors who moonlight as compliance lawyers. What you won't find in many places? Meaningful action.
The gap between corporate AI ethics statements and actual implementation has grown so wide you could drive a self-driving car through it. Actually, scratch that—the car would probably crash because nobody took responsibility for testing it properly. Which brings us to the uncomfortable truth about modern AI governance: most companies are performing ethics rather than practicing it.
Ethics Washing: The New Greenwashing
Remember when every corporation suddenly became passionate about sustainability? When fossil fuel companies started planting trees in their marketing campaigns while quietly lobbying against climate regulations? AI ethics has become the tech industry's version of greenwashing, except instead of saving the planet, companies claim they're protecting humanity from algorithmic harm.
The playbook is remarkably consistent. Step one: form an AI ethics board filled with impressive academics. Step two: publish a beautiful PDF outlining your ethical principles. Step three: continue business as usual while pointing to that PDF whenever questions arise. It's corporate theater at its finest.
The phenomenon even has a technical term: "AI washing." Companies overstate their AI capabilities or ethical safeguards, creating a facade of responsibility without substantive change. The practice has become so pervasive that regulators are starting to notice. But catching companies in the act requires understanding what they're actually doing versus what they're claiming to do.
Consider the numbers: 73% of C-suite executives believe ethical AI guidelines are important, yet only 6% have actually developed them. That's not a gap—that's a chasm. Most organizations treat AI ethics as a PR problem rather than an operational challenge.
The Greatest Hits of AI Ethical Failures
Let's examine the real-world consequences of this performance art, starting with the classics:
Bias: When Algorithms Learn Our Worst Habits
Aon Consulting created hiring assessment tools that were supposed to bring objectivity to recruitment. Instead, their Adaptive Employee Personality Test systematically discriminated against neurodivergent individuals and those with mental health conditions. The video screening component? It heightened the risks of discrimination based on race and disability. The ACLU filed complaints in 2023, alleging Aon knew about these biases but marketed the tests as completely fair anyway.
The pattern repeats across industries. A University of Washington study found significant racial and gender bias in how state-of-the-art AI models ranked job applicants. These aren't isolated glitches—they're systemic failures dressed up as innovation.
The Mobley v. Workday case took things further. In 2024, a federal court ruled that AI vendors could be held directly liable as "agents" of companies using their discriminatory screening tools. Derek Mobley applied to over 100 jobs through Workday's system and was rejected within minutes each time. The case achieved nationwide class action status in 2025, covering all applicants over 40 rejected by the system. Unlike individual human bias, a single biased algorithm multiplies discrimination across thousands of applicants.
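The irony is that catching this kind of disparity before deployment isn't exotic work. A basic pre-deployment audit can compare selection rates across demographic groups, the same logic behind the EEOC's four-fifths heuristic. Here's a minimal sketch in Python, assuming you can pull group labels and pass/fail outcomes from the screening tool's decision log; the group names and the 0.8 threshold are illustrative placeholders, not a substitute for a proper fairness review.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of applicants advanced by the screen, per group.

    `decisions` is a list of (group_label, advanced: bool) tuples,
    e.g. pulled from the screening tool's decision log.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
    for group, advanced in decisions:
        counts[group][0] += int(advanced)
        counts[group][1] += 1
    return {g: adv / total for g, (adv, total) in counts.items()}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths rule' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(rate / best, 3)
            for g, rate in rates.items() if rate / best < threshold}

# Illustrative decision log; group labels are placeholders.
log = ([("group_a", True)] * 48 + [("group_a", False)] * 52
       + [("group_b", True)] * 30 + [("group_b", False)] * 70)

print(adverse_impact_flags(log))  # {'group_b': 0.625} -> investigate before shipping
```

A ratio below 0.8 doesn't prove discrimination, but it's exactly the kind of flag that should trigger human review before the tool ever screens a real applicant.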
Privacy: The Data Nobody Agreed to Share
The Cambridge Analytica scandal remains the gold standard for AI-enabled privacy violations. Facebook allowed a political consulting firm to harvest data from millions of users without consent, creating psychological profiles for targeted political advertising. The fallout? A $5 billion FTC fine and permanent damage to public trust.
But the violations continue. Clearview AI scraped billions of photos from social media to create a facial recognition database sold to law enforcement and private entities. Google DeepMind obtained NHS patient data for healthcare AI development without patients' explicit consent. The city of Trento, Italy, became the first Italian municipality fined for improper AI use in street surveillance, having failed to anonymize data or comply with GDPR requirements.
The 2025 Stanford AI Index Report showed a 56.4% jump in AI incidents in a single year, with 233 reported cases throughout 2024. Privacy violations aren't edge cases—they're becoming the norm.
Transparency: Trust Us, It's Complicated
Perhaps the most insidious ethical failure is the deliberate opacity of AI systems. Companies claim their algorithms are too complex to explain or that transparency would compromise competitive advantage. Translation: we can't show you how it works because then you'd see the problems.
Air Canada learned this lesson the hard way. Its customer service chatbot gave incorrect fare information to a passenger. When the airline tried to disclaim responsibility, a Canadian tribunal ruled otherwise, holding the company liable for its AI's bad advice. The case established a precedent: you can't deploy AI and then disown its decisions when they go wrong.
The Regulatory Wild West: Why Nobody's Really in Charge
If you're waiting for regulators to fix this mess, you'll be waiting a while. The U.S. currently has no comprehensive federal AI legislation. Instead, we have what experts politely call a "regulatory patchwork"—a chaotic mix of executive orders, agency guidelines, and state laws that vary wildly by location and industry.
The situation got messier earlier this year. President Trump's executive order "Removing Barriers to American Leadership in Artificial Intelligence" rolled back many AI safety measures, prioritizing economic competitiveness over regulatory scrutiny. The message was clear: innovation first, oversight maybe later.
The Corporate Capture Attempt: When Industry Writes Its Own Rules
Perhaps the most brazen display of corporate influence came in May 2025, when House Republicans slipped a provision into their budget reconciliation bill that would have banned states from regulating AI for ten years. Let that sink in: a decade-long prohibition on any state or local government protecting its citizens from algorithmic harm, discrimination, or privacy violations. The provision would have overridden over 145 existing state laws passed with bipartisan support across 40+ states since 2019.
This wasn't policy-making—it was regulatory capture in its purest form. Big Tech companies, led by Google and OpenAI, lobbied intensively for the ban, claiming that "fragmented" state regulations would "stifle innovation". OpenAI CEO Sam Altman even testified to Congress that complying with 50 different state regulations would be too difficult. Translation: accountability is inconvenient, so let's ban it entirely while we figure out how to profit.
The provision faced fierce opposition from an unusual coalition: 40 state attorneys general, 17 Republican governors led by Arkansas's Sarah Huckabee Sanders, child safety advocates, and civil rights groups. The Brennan Center warned that the ban would "prevent states from protecting their residents from the threat of emerging AI technologies, with virtually no federal legislation in place to counter those harms".
Fortunately, the Senate voted 99-1 to strike the provision from the final bill, and President Trump signed the bill into law on July 4, 2025, without the moratorium. Senator Marsha Blackburn (R-TN), who led the effort to remove it, captured the absurdity perfectly: "You know who has passed [AI protections]? It is our states. They're the ones protecting children in the virtual space. They're the ones protecting our entertainers — name, image, likeness".
But make no mistake: this wasn't corporate responsibility succeeding—it was corporate responsibility never existing in the first place. When an industry's first instinct is to ban oversight for a decade while deploying transformative technology with known risks, they're not asking for reasonable regulation. They're demanding permission to operate without consequences while maximizing profit and externalizing harm.
Who's Accountable When AI Goes Wrong?
The Air Canada chatbot case cuts to the heart of the accountability problem. When AI makes decisions autonomously, who owns the consequences? The airline tried to argue that the chatbot was a separate entity. The tribunal wasn't having it.
The legal framework is evolving quickly. The Mobley v. Workday case established that AI vendors can be held liable as agents of the companies deploying their systems. This creates what one expert calls a "liability squeeze"—businesses are responsible for AI failures they cannot fully audit, control, or understand.
Corporate contracts reflect this tension. AI vendors increasingly shift liability to customers through aggressive contract terms. You deploy the AI, you own the consequences—even if the vendor won't explain how the AI reaches its decisions. It's a brilliant business model if you're selling AI. It's a nightmare if you're buying it.
The accountability gap extends beyond liability. Salesforce research on "agentic AI" notes that autonomous decision-making creates challenges in pinpointing responsibility when mistakes occur. Unlike conventional software that follows predefined rules, AI systems learn and adapt dynamically. This makes their decision-making less predictable and accountability harder to assign.
Some companies are responding by creating new roles. About 15% of S&P 500 companies now provide board-level AI oversight. Chief AI Officer (CAIO) roles are emerging to monitor and review AI system performance. But these structural changes remain the exception rather than the rule.
What Responsible AI Actually Looks Like (Spoiler: It's Boring)
Here's the part nobody wants to hear: responsible AI implementation is tedious, unglamorous work that doesn't make for compelling press releases. It involves checklists, audits, documentation, and constant monitoring. It requires saying "no" to exciting AI applications that can't meet ethical standards. It means slowing down when everyone else is racing ahead.
But it's the only approach that actually works.
Start with Real Governance
Not a committee that meets twice a year to rubber-stamp decisions. Real governance means establishing clear oversight frameworks where someone specific is accountable for each AI system's performance. It means creating cross-functional teams that include legal, compliance, ethics, and operational expertise.
Novartis created a CEO-chaired ESG committee that approves its AI framework, ensuring AI ethics are part of top-level strategic decisions. Amazon committed over $700 million to retrain 100,000 employees for AI-impacted roles. These aren't sexy initiatives, but they're substantive.
Implement Continuous Monitoring
AI systems don't "launch" and then run perfectly forever. They drift, develop biases from new data, and fail in unexpected ways. Responsible implementation requires automated dashboards that flag problematic outputs, human-in-the-loop systems that allow intervention before issues escalate, and regular audits to assess accuracy over time.
Google's approach to its Bard model included adversarial testing before launch and ongoing monitoring for harmful outputs. The company restricted certain response types after testing revealed anthropomorphization risks. The work happens in testing phases nobody sees, not in launch events everyone watches.
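To make "continuous monitoring" concrete, here's a minimal sketch of the kind of scheduled check that sits behind those dashboards: compare recent production scores against a launch-time baseline and hold for human review when they drift. The score log, the threshold, and the alerting hook are all assumptions for illustration; real monitoring tracks many more signals (score distributions, error rates by segment, counts of flagged outputs).

```python
import statistics

def drift_alert(baseline_scores, recent_scores, max_shift=0.1):
    """Compare the mean of recent model scores against the launch-time
    baseline; return an alert message if the shift exceeds `max_shift`.

    Deliberately crude: the point is that the check runs on a schedule
    and routes to a named human, not that the metric is sophisticated.
    """
    baseline_mean = statistics.mean(baseline_scores)
    recent_mean = statistics.mean(recent_scores)
    shift = abs(recent_mean - baseline_mean)
    if shift > max_shift:
        return (f"Drift detected: mean score moved {shift:.3f} "
                f"from baseline {baseline_mean:.3f}; hold for human review.")
    return None

# Hypothetical scores pulled from a prediction log.
baseline = [0.62, 0.58, 0.65, 0.60, 0.61]
recent = [0.75, 0.78, 0.72, 0.80, 0.76]

alert = drift_alert(baseline, recent)
if alert:
    print(alert)  # in production: page the owner named in your governance framework
```

The statistics aren't the point; the point is that someone specific gets paged and is empowered to pause the system.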
Prioritize Explainability and Transparency
If you can't explain how your AI reaches decisions, you shouldn't deploy it for high-stakes applications. Period. This means using interpretable models when possible, documenting decision-making processes, and maintaining transparency about limitations and risks.
Organizations need to create clear documentation about how AI systems work, what data they use, and where they might fail. Transparency isn't just ethical—it's practical risk management. The Air Canada case proved that claiming "the AI did it" won't shield you from liability.
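Documentation doesn't need to be elaborate to be useful. One lightweight pattern is a machine-readable record, loosely in the spirit of published "model card" templates, kept next to the system it describes and owned by a named person. The fields below are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, machine-readable record of how an AI system works.

    Fields are illustrative; what matters is that every deployed system
    has one, it is versioned, and a named owner keeps it current.
    """
    name: str
    owner: str                      # the specific person accountable
    intended_use: str
    training_data: str              # provenance, consent basis, date range
    known_limitations: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    last_audit: str = "never"

# Hypothetical example entry.
card = ModelCard(
    name="resume-screen-v3",
    owner="jane.doe@example.com",
    intended_use="Rank applications for recruiter review, never auto-reject.",
    training_data="Internal hiring outcomes 2019-2023; consent documented.",
    known_limitations=["Under-tested on non-US resume formats"],
    prohibited_uses=["Final hiring decisions without human review"],
    last_audit="2025-06-30",
)
print(card.known_limitations)
```

If a team can't fill in those fields honestly, that's a strong signal the system isn't ready for a high-stakes deployment.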
Build Diverse Teams
AI bias often stems from homogeneous development teams that miss how systems affect different populations. Involving diverse perspectives—cultural, social, historical, political, legal, and ethical—improves fairness and inclusivity. This includes consulting affected communities during development, not just after problems emerge.
Establish Clear Remediation Processes
When AI fails, what happens next? Organizations need structured plans for correcting errors, communicating with affected parties, providing compensation where appropriate, and preventing recurrence. These processes should be defined before deployment, not improvised during crises.
The Bottom Line: Ethics Aren't Optional Anymore
The AI ethics performance is ending. Courts are establishing liability frameworks. Regulators are—slowly—building enforcement mechanisms. Consumers are demanding accountability. Companies that continue treating AI ethics as PR theater will face legal, financial, and reputational consequences.
The path forward isn't mysterious. Organizations that integrate ethical considerations from the ground up, implement continuous monitoring, involve diverse teams, and maintain transparency will build AI systems people can actually trust. Those that publish beautiful ethical guidelines while deploying problematic systems will join the growing list of AI scandal case studies.
AI responsibility isn't about philosophy—it's about operational discipline. It's conducting bias assessments before deployment, not after lawsuits. It's establishing accountability frameworks that assign specific people responsibility for specific systems. It's saying no to AI applications that can't meet ethical standards, even when competitors are racing ahead.
The most powerful question facing every organization deploying AI isn't "what can this technology do?" It's "who owns the consequences when it fails?" Until you can answer that clearly and credibly, all the ethical guidelines in the world are just performance art.
Sources
- LinkedIn: AI Ethics - The Illusion of Corporate Responsibility
- Enzuzo: 7 AI Privacy Violations (+What Can Your Business Learn)
- GDPR Local: AI Regulations in the US: What You Need to Know in 2025
- Observer: Corporate AI Responsibility in 2025: How to Navigate AI Ethics
- LinkedIn: Real-World Examples Of AI Data Privacy Breaches
- Future of Privacy Forum: The State of State AI: Legislative Approaches to AI in 2025
- World Economic Forum: Why corporate integrity is key to shaping the use of AI
- DigitalDefynd: Top 50 AI Scandals
- Built In: As Trump Fights AI Regulation, States Step In
- Fast Company: Don't be fooled by AI companies' 'ethics washing'
- IBM: Exploring privacy issues in the age of AI
- NCSL: Summary of Artificial Intelligence 2025 Legislation
- SAGE Journals: Responsible AI in Marketing: AI Booing and AI Washing Cycle
- Kiteworks: AI Data Privacy Wake-Up Call: Findings From Stanford's 2025 AI Index Report
- White & Case: AI Watch: Global regulatory tracker - United States
- Quinn Emanuel: When Machines Discriminate: The Rise of AI Bias Lawsuits
- IAPP: US State AI Governance Legislation Tracker
- Kodexo Labs: Bias in AI | Examples, Causes & Mitigation Strategies 2025
- Brookings Institution: How different states are approaching AI
- Codica: A Guide to Responsible AI: Best Practices and Examples
- Salesforce: In a World of AI Agents, Who's Accountable for Mistakes?
- A&MPLIFY: Steps to Build an AI Ethics Framework
- Lakera AI: Embracing the Future: A Comprehensive Guide to Responsible AI
- Finextra: Who Owns AI's Mistakes? The Accountability Dilemma
- arXiv: Implementing AI Ethics: Making Sense of the Ethical Requirements
- VIDIZMO: 10 Best Practices for Responsible AI Development Services in 2025
- Jones Walker: AI Vendor Liability Squeeze: Courts Expand Accountability While Contracts Shift Risk
- Transcend: Key principles for ethical AI development
- EqualAI: 5 ways to avoid artificial intelligence bias with 'responsible AI'
- Tech Policy Press: AI Accountability Starts with Government Transparency
- Capgemini: A practical guide to implementing AI ethics governance
- 6clicks: Responsible AI: Best practices and real-world examples
- Forbes: AI Accountability: How Doing The Right Thing Builds A Better Business
- UNESCO: Ethics of Artificial Intelligence
- Galkin Law: Legal Accountability in AI Failures and Malfunctions
- Carnegie Endowment: State AI Regulation Survived a Federal Ban. What Comes Next?
- Brennan Center: Congress Shouldn't Stop States from Regulating AI
- Four Score Law: Congress Blocks AI Regulation Ban: What It Means for Businesses
- DLA Piper: Ten-year moratorium on AI regulation proposed in US Congress
- Reuters: US Senate strikes AI regulation ban from Trump megabill
- Data Innovation: Five Reasons Why Critics Were Wrong About the AI Moratorium
- McDermott Will & Emery: No state AI law moratorium in One Big Beautiful Bill Act
- Transparency Coalition: Budget bill revised to ban state AI laws for 5 years
- Quarles: No AI Moratorium for Now, but What Comes Next?
- Transparency Coalition: New version of 10-year state AI law ban tied to broadband funding
- PBS NewsHour: Senate pulls AI regulatory ban from GOP bill after complaints from states
- Ogletree Deakins: U.S. Senate Strikes Proposed 10-Year Ban on State and Local AI Regulation
- AP News: House Republicans include a 10-year ban on US states regulating AI
- Issue One: As Washington Debates Major Tech and AI Policy Changes, Big Tech's Lobbying Is Relentless