AI Governance

When Algorithms Control Real Lives: Why AI Governance Matters

AI is rapidly reshaping our world. From streamlining business operations to offering personalized recommendations, its reach feels endless. But beneath the surface hype, we rarely pause to confront a more urgent question: who governs these systems when they hold real power over our jobs, finances, and well-being?

According to Gartner’s Hype Cycle, a model that tracks how emerging technologies rise and fall in public expectations, we’re now passing the “peak of inflated expectations.” The initial euphoria surrounding AI is giving way to a more sober and necessary assessment of its real-world capabilities and risks.

And there’s a crucial conversation businesses often overlook amid the overwhelming advertising noise: the importance of governance for AI, especially when these systems have the power to affect our jobs, finances, or overall well-being.

Bluntly, we need to stop judging AI only by how much “money it saves.” These claims, often pushed by marketing teams, hide a worrying truth: quite frequently AI is simply inaccurate, and sometimes no real proof of accuracy is provided at all. And when proof is offered, it’s often as vague as those ads we’re all used to, the ones that claim “9 out of 10 people agree.”

Sounds great, right?

Until you realize the company surveyed only the people who were already likely to say yes, for example by first asking whether they wanted more free product and then inviting only those takers to fill out the survey. This isn’t honest information; it’s a trick to make more money. And we’ve accepted it as a society for decades.

The Myth of AI Predictions: A Real-World Example

Think about something many of us have seen or heard of: a company claiming its AI can screen job applicants. It sounds incredible: faster hiring, fairer decisions, less bias. However, as Arvind Narayanan, a computer science professor at Princeton, suggested in a recent talk at MIT, no company has demonstrated that its AI is actually effective for this. Often, they haven’t even tried to prove it.

You can watch the full talk here: https://www.youtube.com/watch?v=C3TqcUEFR58

In some cases, it is investigative journalists, not the companies, who have found clever ways to test these AI systems. One story, which Prof. Narayanan included in his talk, involved sending in two copies of the same interview video: one with a plain background and another with a bookshelf digitally added. The idea that a simple bookshelf could change an AI’s prediction about whether someone is right for a job isn’t just silly; it’s deeply troubling. It exposes how fragile and easily manipulated these predictive systems can be, even when their intent is noble.

You can see the interactive report here: Objective or Biased
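To make that kind of test concrete, here is a minimal sketch in Python of the same idea: score the identical candidate twice, changing only an irrelevant detail, and see whether the result moves. The score_candidate function, the file names, and the tolerance are hypothetical stand-ins for illustration, not the journalists’ actual tooling.

```python
# A minimal sketch of a perturbation test for a screening system, assuming a
# hypothetical score_candidate() callable that wraps whatever AI is under audit.

def perturbation_test(score_candidate, original_video, perturbed_video, tolerance=0.05):
    """Score two versions of the same interview that differ only in an
    irrelevant detail (e.g., a bookshelf in the background). A large gap means
    the system is reacting to noise, not to the candidate."""
    original_score = score_candidate(original_video)
    perturbed_score = score_candidate(perturbed_video)
    gap = abs(original_score - perturbed_score)
    return {
        "original": original_score,
        "perturbed": perturbed_score,
        "gap": gap,
        "fragile": gap > tolerance,  # flag if an irrelevant change moved the score
    }

# Toy usage with a dummy scorer that (wrongly) rewards the bookshelf.
dummy_scorer = lambda video: 0.82 if "bookshelf" in video else 0.61
print(perturbation_test(dummy_scorer, "plain_background.mp4", "bookshelf_background.mp4"))
```

If a change that has nothing to do with the candidate shifts the score, the system has failed the most basic sanity check, no matter how impressive its marketing.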

More recently, Laura Witlox posted about the ongoing collective action lawsuit against Workday, which alleges that its job applicant screening technology is discriminatory. You can read her full post on LinkedIn: The Workday lawsuit should be a wake-up call.


When Is It Okay for AI to Have This Power?

This leads us to a big question: When is it truly okay for an AI system to have this kind of power over a person? It’s not enough to say “it’s pretty accurate” or “it saves money.” For it to be okay, we need:

  • Proof It Works in Real Situations: The AI must show, through clear, independent tests, that it does what it claims to do reliably and fairly in the exact situation it’s used for. This means more than just a company’s own tests; outside experts need to check it.

  • No Harm and No Unfairness: We need strong steps in place to find, fix, and constantly check for unfairness or algorithmic bias that can lead to bad outcomes for people. This means using diverse data to train the AI, getting ethical reviews, and giving people ways to fix mistakes (a minimal example of one such bias check is sketched after this list).

  • Clear and Easy to Understand: People affected by AI decisions should have the right to know how the AI made that decision. They don’t need to see the code, but they should understand the main reasons and logic behind the AI’s choices. They should also have a way to escalate decisions made by AI.

  • People in Charge Who Can Say No: There must always be a real person involved who has the power to review, question, and change what the AI decides, especially in important situations. And yes, I do recognize the irony that this leads back to the question of personal bias.

  • Ways to Fix Mistakes and Hold People Accountable: If an AI makes a wrong decision that harms someone, there must be clear ways to figure out who is responsible and easy ways for people to get things corrected.
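As a concrete illustration of the “No Harm and No Unfairness” point above, here is a minimal sketch of one widely used bias check: comparing selection rates across groups, sometimes called the disparate impact or “four-fifths” test. The group labels, decisions, and 0.8 threshold here are illustrative assumptions; a real audit would use many metrics, far more data, and careful statistical review.

```python
from collections import defaultdict

def selection_rate_ratio(decisions):
    """decisions: iterable of (group_label, passed_screen) pairs.
    Returns each group's selection rate and the ratio of the lowest rate to
    the highest. Ratios below roughly 0.8 are commonly treated as a red flag."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in decisions:
        totals[group] += 1
        if passed:
            passes[group] += 1
    rates = {group: passes[group] / totals[group] for group in totals}
    highest = max(rates.values())
    ratio = min(rates.values()) / highest if highest else 1.0
    return rates, ratio

# Toy example: screening outcomes for two hypothetical applicant groups.
rates, ratio = selection_rate_ratio([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)  # group_a passes about 67% of the time, group_b about 33%
print(ratio)  # 0.5 -- well below the common 0.8 warning threshold
```

A single number like this never proves a system is fair, but it is exactly the kind of check that should be run continuously and published, rather than asserted away with “it’s pretty accurate.”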


Old Lessons: When Money Matters More Than Safety

History teaches us harsh lessons about what happens when making money becomes more important than being fair and safe. We’ve seen this in many industries, causing much damage:

  • The Tobacco Industry’s Lies: For many years, tobacco companies heavily advertised their products, playing down or simply denying what science clearly showed: their products were addictive and caused serious health problems. Chasing huge profits led to millions of early deaths and sicknesses worldwide. The dangers weren’t just ignored; they were actively hidden for money.

  • The Subprime Mortgage Crisis: In the years before the 2008 financial crash, banks gave out risky “subprime” home loans to people with bad credit, often using tricky tactics and ignoring clear signs that people wouldn’t be able to pay them back. The push for quick profits from selling these loans led to a massive housing market collapse, causing millions to lose their homes and starting a global economic crisis. Risks were completely overlooked to make more money.

These past events are reminders that new technology, when driven only by financial gain, can have terrible results.


The Path Forward for Smart AI Rules

As AI gets more powerful in our lives, the job of setting good rules falls on the people who build it, the people who use it, the lawmakers who regulate it, and all of us. In practice, that means we should:

  • Create Clear Ethical Guidelines and Laws: We need to go beyond vague ideas and establish rules that can be enforced, prioritizing human rights and well-being.

  • Pay for Independent Checks and Tests: We should require and fund external groups to test AI systems, especially those used in critical decision-making processes, and to scrutinize the accuracy claims companies make about their AI.

  • Help People Understand AI: Teach everyone about what AI can and can’t do, so they can be smart and ask good questions about it.

  • Work Together Across Different Fields: Bring together tech experts, ethics experts, lawyers, social scientists, and the people affected by AI to help shape how AI is built responsibly.

AI holds enormous promise, but precisely because of its power, we must hold it to a higher standard. Responsible AI Governance isn’t a constraint on innovation; it’s the foundation for using AI in ways that truly serve people, not just profit.

A special thank you to Laura Witlox, who graciously provided edits for this blog.
