AI - Not Sentient, But Not Passive Either

A few weeks ago I wrote a quick post on LinkedIn about AI shifting from tool to tool-user.

It was a loosely formed idea, inspired by a conversation about how true sentience emerges and the role of tool use in that path. Tool use like the way a crow uses a stick, or the way humans learned to shape tools to solve problems we couldn’t reach on our own. I wasn’t thinking about agentic AI at the time. But someone commented that I’d basically described it.

At first, I wasn’t sure. I’ve always thought of agentic AI as a specific technical thing: software agents with task orientation, goal structures, maybe the ability to select tools and take action without constant prompting.

But I’ve been sitting with it. And the more I looked at how today’s systems actually operate, especially the ones creeping into real-world use, the more the comparison started to make sense. Not because I think we’re at the crow-with-a-stick, “AI is inventing tools from scratch” phase I was originally noodling on, but because we are building systems that increasingly act based on their own internal logic. AI that doesn’t just wait for input, but takes in context and starts doing things: selecting, sequencing, escalating.

Perhaps the more pressing shift isn’t tool use in the biological sense; it’s initiative. The moment a system shifts from waiting to acting on its own, based on patterns, context, or perceived relevance. Maybe that’s not just automation. Maybe that’s the beginning of agency.
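
To make that distinction concrete, here’s a toy sketch in Python. Every name in it is hypothetical, nothing from a real framework: a passive tool answers one call and stops, while an agent loop takes in context and decides for itself what to select, sequence, and escalate.

```python
# A toy sketch of the tool vs. agent distinction. Purely illustrative:
# every name here is hypothetical, not from any real framework or API.

def passive_tool(prompt: str) -> str:
    """A classic tool: waits for input, returns output, stops."""
    return f"summary of: {prompt}"

# A hypothetical registry of actions an agent could choose from.
TOOLS = {
    "parse": lambda data: f"parsed({data})",
    "flag": lambda data: f"flagged({data})" if "error" in data else None,
    "escalate": lambda data: f"escalated({data})",
}

def agent_loop(events: list[str]) -> list[str]:
    """An 'agent' in the loose sense used above: given context, it selects,
    sequences, and escalates on its own logic, with no per-step prompt."""
    actions = []
    for event in events:
        parsed = TOOLS["parse"](event)    # it decides to parse...
        flagged = TOOLS["flag"](parsed)   # ...decides what matters...
        if flagged:
            actions.append(TOOLS["escalate"](flagged))  # ...and then acts.
    return actions

print(passive_tool("incident report"))             # one call, one answer, done
print(agent_loop(["error: disk full", "all ok"]))  # it chose what to escalate
```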

Once I landed on that framing, it’s been stuck in my craw as I watch how fast AI is being integrated into the machinery of government. Especially now, with federal guidance pushing agencies to accelerate both adoption and innovation. The use cases being discussed aren’t theoretical (no, I’m not talking about Skynet); they’re things like:

  • Parsing and prioritizing incident data

  • Flagging what risks require response (and which don’t)

  • Suggesting remediation steps that get greenlit by default

And yeah, those are probably good uses. They'll save time, reduce noise, maybe even help people make better decisions. But they also move decision-making into systems that weren’t really designed for it.

That’s what keeps catching my attention. Government workflows are built on the assumption that humans are the ones making decisions. The sign-offs, approval chains, and built-in slowdowns aren’t just bureaucracy; they’re part of how trust is structured into the system.

So what happens when we hand more of that action over to AI? Even if we say “a human still approves everything,” if the system is doing the recommending, ordering, and sequencing and we’re just rubber-stamping, is that really oversight? Or are we just along for the ride?

I’m not arguing against using AI this way. Honestly, I think we’ll need to. The scale and complexity of information already outpace what people can realistically manage. AI is going to have to do more. But that shift, from tool to more independent actor, raises real questions. Who’s responsible when something goes sideways? What does “good enough” look like when a system decides what you see? Are we changing how decisions get made without realizing it?

Feels like that line between “this helps me work faster” and “this makes the call for me” is already blurry. So maybe the real question is: do we know where that line is, and are we okay with crossing it?
