68% of organizations have experienced AI-related data leaks, yet only 23% have formal AI security policies. Companies are connecting powerful AI systems to their internal data faster than they're protecting it.
Source: Metomic's 2025 State of Data Security Report
There's something we can't ignore anymore. AI is everywhere making things easier, faster, even kind of magical. But underneath the excitement, there's also something missing:
Security. Awareness. And real conversations about the risks.
So I started this newsletter to help fill that gap.
Welcome to Secure AI Weekly where we explore what happens when powerful, unpredictable systems meet the real world. If you're building with AI, securing it, or just trying to keep up, this space is for you.
We'll break down complex ideas, talk openly about what's going wrong (and right), and learn together in public.
🔍 The Backstory
I've been in tech for over 25 years. I started in the days of Novell NetWare, Lotus Notes, and bare metal servers. I've coded. I've architected. I've secured cloud systems at scale.
And every time a new wave of technology hit, I saw the same pattern: People rushed to build and skipped the security conversation.
When cloud took off, folks said things like:
"It's just another data center."
They weren't malicious. Just misinformed. And suddenly, I found myself being pulled in not because I understood Cloud and Cloud security when others didn't.
That same shift is happening with AI right now.
⚠️ The Real Problem
Let me be clear this isn't about blaming corporations or mocking new builders. It's about calling out two uncomfortable truths I keep seeing:
1. 🏢 Inside orgs, people still assume "internal = safe"
"It's just an internal app." "We're not using real data yet."
But with AI, internal threats can hit harder. You can leak sensitive patterns through chat history. You can automate something dangerous through misunderstood logic. You can expose your own org through a friendly chatbot.
Internal isn't safe. It's just not attacked yet.
We need to apply Zero Trust and defense-in-depth to AI systems the same way we do for our external surfaces. And a lot of people just aren't doing that.
2. 🌐 Outside those walls, people are building without any technical foundation
AI is helping people build apps, write code, ship features with zero understanding of how the pieces fit together.
That's powerful. It's also dangerous.
If you assume AI tools work like traditional software, you'll miss the new ways they can fail and be exploited.
🎯 Why Now
Honestly? I think this is a golden moment for security practitioners.
We've been asking for years to be included earlier. Now, the whole world is moving fast, building messy, and ignoring internal threats.
For those of us who care about how systems behave this is our time. It's not just about blocking attacks. It's about helping people ask better questions while they build.
"What could go wrong if this LLM misunderstands the task?"
"What happens if this AI connects to tools with no guardrails?"
"What assumptions are we making about who (or what) is trustworthy?"
📬 What to Expect
Each week, you'll get a mix of:
🔬 Research Teardowns – Breaking down papers into real-world takeaways
📡 News & Trends – Curated updates with actual signal
🎯 Weekly Actions – Something small you can do to think or build more securely
This is for security professionals, yes. But also for AI builders, cloud architects, developers, policy folks, and the simply curious. Because at the end of the day—AI security is everyone's job now.
💡 One Challenge to Start With
This week, take a look at one AI-powered tool you use (or that your team uses). Ask yourself:
What's the worst thing this could do if it misunderstood what I meant?
That one question can open the door to better design, better controls, and better conversations.
We're just getting started.
Devon Artis
Founder, Secure AI Weekly