Jacob Morgan | Best-Selling Author, Speaker, & Futurist | Leadership | Future of Work | Employee Experience

How Amazon’s Cybersecurity Chief Says Leaders Should Prepare for AI Threats

Want to sponsor this newsletter or other content? Reach out to me directly, Jacob[at]thefutureorganization[dot]com.

Join 40,000 other subscribers who get Great Leadership delivered directly to their inbox each week. You’ll get access to my best thinking and latest content. Sign up today.


If you’re a Chief Human Resources or Chief People Officer, then you can request to join a brand new community I put together called Future Of Work Leaders which focuses on the future of work and employee experience. Join leaders from Tractor Supply, Johnson & Johnson, Lego, Dow, Northrop Grumman and many others. We come together virtually each month and once a year in-person to tackle big themes that go beyond traditional HR.

There’s no escape from the AI revolution. Like it or not, it’s penetrating every corner of our workplaces. But here’s the uncomfortable truth: while AI holds immense promise for boosting efficiency and unlocking new capabilities, it’s also quietly opening the door to unprecedented cybersecurity risks that most leaders aren’t prepared for.

As AI tools become more sophisticated, so do the attackers using them. And the most dangerous threat isn’t the technology itself — it’s the humans behind it, exploiting AI’s power to breach systems, manipulate data, and trick even the most vigilant teams.

This is the reality Steve Schmidt, Amazon’s Senior Vice President and Chief Security Officer, knows all too well. In a world where AI can generate lifelike phishing emails, craft deepfakes that erode trust, and even execute automated actions — so-called agentic AI — the old rules of cybersecurity simply don’t cut it anymore.

In our latest episode of the Future Ready Leadership Podcast, Steve unpacks the evolving risks at the intersection of AI, cybersecurity, and leadership, sharing practical strategies that every business leader needs to hear.

Listen to the episode here on Apple Podcasts & leave a review!

Why Cybersecurity Is a Human Problem, Not Just a Tech Challenge

It’s tempting to think cybersecurity is just a tech problem. Throw enough AI at it, and it’ll sort itself out.

That’s exactly the problem.

Because while we obsess over shiny tools and automated code, the real threat slips quietly through the human cracks. The real challenge lies in understanding how people interact with these technologies, whether it’s employees misusing AI tools (shadow AI), attackers exploiting vulnerabilities, or leaders blindly trusting AI outputs without verification.

And it gets worse! With agentic AI now capable of acting on your behalf, such as booking travel or deploying code, the line between convenience and catastrophe is razor-thin. AI also lowers the barrier for phishing attacks and social engineering.

What once required a skilled hacker fluent in a target’s language or cultural nuances can now be done with the click of a button, using AI to craft convincing messages at scale. That’s why we must stop treating cybersecurity as an IT project. Start treating it as a people strategy. Because in a world where AI is the tool, humans are still the vulnerability.


This episode is sponsored by Workhuman:

These days, it feels like there isn’t much good to go around in the world of work. But Workhuman knows when we celebrate the good in each of us, we bring out the best in all of us. It’s why they created the world’s #1 employee recognition platform — and they didn’t stop there, combining rich recognition data with AI to create Human Intelligence, so you can get uniquely good insights into performance, skills, engagement, and more.

To learn more about how you can join their force for good, go to Workhuman.com, or check out their own podcast, “How We Work,” which explores the trends, issues, relationships, and experiences that shape our workplaces.


Building Guardrails: The New Leadership Imperative

The AI revolution is running at full speed, and if leaders aren’t building the right guardrails, they’re leaving the door wide open for disaster. So, what can leaders do to build a culture that’s resilient to these evolving risks? Steve says it’s not about banning AI tools outright but about putting the right guardrails in place.

  1. Authentication and authorization — At Amazon, these are treated as the basics of access control. If you don’t know who is accessing your systems and what they’re allowed to do, you’re already behind.
  2. Output validation — Steve firmly advises: don’t just trust what an AI system spits out. Always verify before acting.
  3. Compartmentalization — Steve also describes this step as the Titanic principle: when a breach happens, you want the damage to stay contained, not flood your entire system.

At Amazon, Steve and his team have used AI to speed up security reviews by as much as 80%. BUT humans are always in the loop. AI can help flag issues, but it’s not the final decision-maker. Why? Because AI is only about 65% accurate when it comes to security decisions. That’s not nearly good enough when the stakes are this high.

It’s a sobering reminder: AI can enhance your capabilities, but it can’t replace human judgment.

Listen to the episode here on Apple Podcasts & leave a review!

Fostering a Security-First Culture

You can invest in all the cutting-edge tools you want — but if your people don’t know how to think like defenders, your security strategy has a blind spot. And that blind spot is human.

The reality is, even the most sophisticated AI can’t stop an employee from clicking the wrong link — or using a risky tool they found online because it was “easier.” If leaders aren’t actively fostering a culture where security is second nature, then all the tech in the world becomes just window dressing.

A security-first culture means training employees to be skeptical, empowering them to question unusual requests, and providing internal tools that are as good — or better — than the free options they might find online. It also means asking the tough questions about AI providers:

  • Where is your data going?
  • How is it being used?
  • And can it be exploited to train someone else’s model?

And as if that wasn’t enough, looming on the horizon is quantum computing, ready to break today’s encryption like it’s a lock on a diary. Which means the time to build resilient, attack-aware AI systems isn’t later, it’s NOW.

Security isn’t just a checklist. It’s a culture. And that culture starts with leadership.

The Bottom Line: Pair AI with Human Oversight

The bottom line is clear: AI isn’t some superhero swooping in to solve all your problems. It’s a tool. A powerful one, yes. But without the right systems, thoughtful design, and human judgment behind it, it’s just another shiny object with a dangerous blind spot.

The real risk isn’t AI itself; it’s the false sense of security it creates. When leaders assume the tech has it handled, they stop asking the hard questions. They skip the training. They sideline human oversight. And that’s when the cracks start to show.

If you want to reap the benefits of AI without opening the door to new vulnerabilities, here’s the truth: You need to pair automation with intention. Blend cutting-edge tech with curious, well-trained humans. Build a culture where safety isn’t outsourced to software, but embedded in every decision.

Because in the age of AI, your competitive edge isn’t the algorithm. It’s the culture that governs it.

To dive deeper into these critical strategies and hear Steve’s full insights, listen to the full episode of the Future Ready Leadership Podcast embedded below. This is a conversation no leader can afford to miss, because in the age of AI, your organization’s security is only as strong as the culture you build around it.

Listen to the episode here on Apple Podcasts & leave a review!

🎧 Listen here

🎧 Watch on YouTube
