What HR Gets Wrong About AI (And How to Fix It Before It’s Too Late)

Published September 8, 2025 in Organization & People

“AI is going to replace HR.”

You’ve probably heard some version of that line. It’s made headlines at conferences, shows up in LinkedIn discourse, and may even have crossed your mind in a quiet moment of uncertainty. When automation can draft job descriptions, sort resumes, analyze performance data, and respond to employee questions, what role remains for HR?

That question might sound like a tactical concern, but it reveals something deeper. It’s not really about which tasks HR will still own; it’s about whether the function itself will remain relevant. And when that relevance feels uncertain, it’s natural to pull back, defending the familiar instead of asking what’s changing and how we might evolve.

Unfortunately, retreating is the riskiest thing HR could do right now.

AI won’t eliminate HR, but it will change it profoundly.

That change can either enhance how we support people and design work, or it can worsen existing problems like bias, burnout, and disengagement. The outcome depends on how actively HR chooses to shape the direction of that change.

In this article, I will walk through what HR is getting wrong about AI, why the current framing is dangerous, and how to pivot before someone else defines the future of people strategy for us.

First, let’s debunk the myth

The idea that “AI will replace HR” is easy to repeat because it feels like one of those dystopian inevitabilities we don’t want to be true but don’t quite know how to argue with.

But it’s mostly wrong. And more importantly, it’s answering the wrong question.

Let’s first clarify what we mean by AI. Most of what’s being marketed today isn’t general artificial intelligence; it’s narrow machine learning. These systems don’t reason. They don’t understand context. They don’t grasp meaning. They process data.

What they do (very well) is identify patterns across massive amounts of data and make predictions based on those patterns.

That makes them an extraordinary tool for repetitive cognitive tasks, such as:

  • Scanning thousands of resumes to score alignment between job descriptions and applicant profiles
  • Analyzing sentiment trends across employee engagement surveys
  • Predicting which skills gaps will emerge on your teams based on historical data and learning behavior

But compare that to what a human HR professional can do, including:

  • Reading between the lines in an exit interview where the departing employee is clearly holding back
  • Navigating a conflict between two high performers that keeps resurfacing with slightly different symptoms
  • Weighing organizational priorities against ethical or cultural concerns in performance evaluations

No matter how sophisticated AI systems appear, they can’t interpret ambiguity like a human can. They struggle with interpersonal dynamics, ethical tradeoffs, and cultural nuance. In other words, they can handle the “what” but not always the “why” or the “should.” And left unchecked, AI doesn’t just replicate people processes; it scales the flaws already embedded in them.

And that means AI still needs a human at the helm. Not just to review outputs but to shape the questions we’re asking of AI in the first place. Because the questions determine the system. 

We have to ask ourselves:

  • Will we optimize for efficiency or for equity?
  • For retention or for belonging?
  • For reputation or for repair?

None of those tradeoffs are mathematical. Instead, they’re moral.

HR teams should be at the core of AI implementation, not on the sidelines. That means asking hard questions about how tools work beyond what they promise to do.

You aren’t cross-examining. You’re doing due diligence to ensure these tools don’t create blind spots that affect your people.

How to start leading AI ethically

Fancy strategic plans are helpful, but transformation usually begins with some really unglamorous first steps. So if you’re a VP, director, or people strategist wondering where to start, you don’t need a moonshot. You need momentum. 

Here are a handful of high-leverage, doable actions you can take this quarter:

1. Inventory the invisible

Even if you haven’t started a formal AI transformation, your organization is probably using more AI than you think. In HR alone, you may be relying on third-party vendors for recruitment, learning pathways, or engagement tracking that have AI embedded in their process flows.

Align IT, procurement, HR, and compliance to audit areas where AI systems are already in use. There may be a tool silently making decisions that nobody is validating, and this is an opportunity to uncover it.

As a bonus, this immediately improves your internal credibility with legal and data privacy teams.

2. Ask better vendor questions

It feels like every piece of software today has an AI component. As you’re bringing in new HR tools, it’s important to understand what’s happening behind the decisions those tools are making and how humans are in the loop to check them.

HR can make a big difference by starting to compile the questions that need to be asked in the vendor selection process and educating procurement about what those answers really mean. To start, your teams need to know:

  • What data was this model trained on?
  • How often is that data updated and by whom?
  • Can we access explainability or audit reports about how the model makes recommendations?
  • What happens when an employee disagrees with an AI-based decision? How are those decisions appealed? How are appeals documented?

If your internal buyers aren’t asking these questions, they can overlook the negative impacts AI can have. And if a vendor can’t answer them, that vendor may be a major ethical liability.

3. Define your minimum ethical standards in writing

Treat them like a design spec.

  • Do you require an explanation for every automated decision that affects hiring, promotion, or pay?
  • Do your vendors need to demonstrate bias mitigation?
  • Do you require human override or review protocols?

You don’t need to solve every edge case today, but having even a rough rubric helps shift your team from reactive to proactive.

Put another way: you’re starting to build culture infrastructure that can survive AI implementation without dismantling your values.

4. Pilot small, fail smart

Pick a lightweight use case, such as an AI assistant for scheduling interviews or a learning recommendation engine for compliance training, and run a low-risk pilot.

Bring the affected users into the design and feedback process. Document where it helps, where it confuses, and where bias or friction arises.

Real-world experimentation builds confidence, and feedback from the frontline is often the most insightful source of what “ethical” really looks like in practice.

This is about who gets to shape the future

AI isn’t something to fear or ignore. It’s a dynamic system already at work in many organizations, and its trajectory will shape how we hire, develop, and engage our people. That means HR has a critical role to play as a guide, helping ensure these tools align with the values that make organizations work well.

And HR is uniquely poised to lead that shift. Not because we understand machine learning (though by all means, get curious), but because we’ve long held the part of the map that technology still can’t read: how people move, grow, connect, and break when systems don’t account for who they are.

That’s what’s at stake here: whether the systems we build through AI align with the better parts of what we believe about people.

This moment asks something deeper from HR than adoption or resistance. It asks for discernment.

  • Will you speak up when a tool optimizes for performance but erodes inclusion?
  • Will you push back when leadership wants to add automation but not accountability?
  • Will you build a future that uses AI to support (not just measure) human thriving?

Because someone is going to decide how AI shapes our workplaces. The only question is whether you’re in the room when it happens. If we want the future of work to be more human, it starts with someone fighting for the humanity in it.

Having the support of an Expert who has managed these challenges before and helped organizations ethically incorporate AI can make a huge difference in ensuring you start on the right foot. By tapping into the Consulting 2.0 model, you can connect with consultants and operators who have done this work before and leverage their experience to accelerate your AI initiatives and improve your outcomes.

Ready to take an ethical look at your AI rollout?

Let’s Talk

Meet the Author

Kelle Snow is a Catalant Expert and Founder of Thornbird Consulting, where she helps organizations build strategic learning programs, strengthen internal communication, and develop leadership at every level. With more than a decade of experience working with Fortune 100 enterprises, startups, and nonprofits, Kelle takes a human-centered and data-informed approach that enables companies to improve alignment, communication, and outcomes. Kelle holds a Master of Science in Educational Psychology with a focus in cognitive science and motivation and a Bachelor of Arts in English Language and Literature from the University of Nevada, Las Vegas.