My honest, overthought, still-unresolved struggle with artificial intelligence: from skeptic to reluctant user.


When I graduated, courses like Neural Networks, Modeling and Simulation, and Artificial Intelligence felt like obscure concepts pulled straight from a science fiction novel. As my career progressed from software development into cybersecurity, technology moved faster than I ever anticipated. Then COVID hit, and the pace became something else entirely: cloud services exploded, remote connectivity opened entirely new attack surfaces, and AI went from academic curiosity to something injected into almost everything.

I want to be clear about something upfront: I don't rush decisions. As I wrote in "Why Most Technology Decisions Don't Live Up to Expectations", the "New Tool Reflex" — jumping on something just because it's new — is how organizations and individuals end up in trouble. That instinct applies here too. So when AI became inescapable, part of me got nervous. Not because I'm against innovation, but because I had real questions.


What I kept asking myself

  1. If AI grows by crawling the web and storing conversation data, what does that mean for people's private lives?
  2. Does it actually benefit every field it's being injected into — or is that just optimism?
  3. How is it being protected and secured, and by whom?
  4. What's stopping it from going off the rails — intentionally or not?
  5. Where is it genuinely useful, and where are we forcing it into places it doesn't belong?

These aren't paranoid questions. They're basic human concerns, and I suspect I'm not the only one sitting with them.

The data problem is real

LLMs are, at their core, massive language processing machines trained on enormous datasets. Whether through web crawling or conversation logs, private information ends up in these systems, more than most people realize. From a cybersecurity standpoint, this keeps my attention, because no matter how strong the protections are, the exposure itself creates attack vectors. And I keep coming back to a quote I find hard to argue with:

"There are two types of companies: those who have been hacked, and those who don't know it yet." — John Chambers

I'd add: and those who eventually will be. The question isn't just how to secure AI systems — it's how to reduce what gets exposed in the first place.

AI is powerful. It is not a replacement for human judgment.

Here's where I push back on the blanket enthusiasm. AI can process language and generate responses at scale, but it cannot understand how a patient feels, weigh the emotional context of a legal dispute, or make decisions rooted in human experience. Take medicine: a diagnosis requires not just the data in front of you but the human factors around it, such as tone, context, and what the patient isn't saying. Or law: how does an AI tell that the facts presented in court are fabricated when, on paper, they read as technically accurate? The paper trail doesn't capture everything.

None of this means AI has no place in these fields. It can aggregate information, surface patterns, and support decision-making. I think of it as a massively overpowered engine — extraordinary at what it does, but not a substitute for judgment. The real question is whether we're being honest about that distinction before we deploy it.


Security: I don't have a clean answer

If you're asking how AI systems are being protected and whether your data is safe — frankly, I can't give you a guarantee. Local LLMs, strict data handling policies, human awareness training — these all help. But if you operate from a Zero Trust mindset (which I do), you accept that loopholes will be found, patches will follow, and the cycle continues. The more important question is: why are we putting sensitive information into systems where we don't fully control the exposure? Reducing what goes in is a better strategy than hoping the perimeter holds.
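
To make "reduce what goes in" concrete, here's a minimal sketch in Python of scrubbing obvious identifiers from a prompt before it ever leaves your environment. The three patterns, the scrub() helper, and the send_to_llm() hand-off mentioned in the comments are my own illustrative assumptions, not a production PII filter; a real deployment needs far more than a handful of regexes.

    import re

    # A few obvious identifier patterns; illustrative only, not exhaustive.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub(text: str) -> str:
        # Swap each match for a typed placeholder so the model still sees
        # the shape of the incident without the identifiers themselves.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    # Scrub locally first, then hand the result to whatever client you use
    # (a hypothetical send_to_llm(scrub(prompt)) call, for example).
    prompt = "User jane.doe@example.com at 10.0.0.15 reported SSN 123-45-6789."
    print(scrub(prompt))
    # -> User [EMAIL] at [IPV4] reported SSN [SSN].

The typed placeholders are a deliberate choice over blanket deletion: the system you're asking for help keeps enough context to be useful, while the actual identifiers never leave your side of the connection.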


So where does that leave me?

Still skeptical. Still using it. I was in a conversation recently with a senior executive — a candid back-and-forth about exactly this — and he said something that has stayed with me:

"AI change is happening. The question is how to say yes the right way."

That's where I've landed too. Not yes or no. Not optimism or fear. Just — how do we do this with our eyes open?

AI is a double-edged sword. It can be genuinely useful — for me personally, it's become the next-generation Stack Overflow for technical work. But it can also be misapplied, over-trusted, and exploited. What happens next depends on whether people demand thoughtful integration or just accept whatever ships.


What's your take? I'd genuinely like to know where you land on this: whether you've made peace with it, are still fighting it, or are somewhere in the messy middle like me.