Conference Notes: Cybersecurity Summit 2025
Date: September 30, 2025
My Notes & Takeaways
AI & Security Team Alignment
- Gary Hayslip framed AI as a force multiplier only if the human + process layer is strong.
- Takeaway: Don’t over-invest in tools before you mature your organizational posture (roles, workflows, governance).
Reasonable Security in the Age of AI
- Matt Stamper’s talk centered on how regulations, legal exposure, and practicability must inform your threat model—not just technical aspirations.
- “Reasonable” is a shifting target—what’s defensible evolves as tools and adversaries evolve.
Data Security & Context
- Legacy DLP (data loss prevention) and data-security models failed in part due to over-filtering and blind spots.
- The “missing context” is often who, why, and when — not just what.
- AI can enhance signal-to-noise if anchored to strong metadata / usage patterns.
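The "anchor AI to who/why/when metadata" idea can be made concrete with a toy example. This is a hypothetical sketch, not any vendor's product: the baseline profiles, weights, and names are all illustrative assumptions.

```python
# Hypothetical sketch: score a data-access event using who/when context,
# not just what was accessed -- the "missing context" noted above.
# Baseline profiles and weights are illustrative assumptions.

BASELINE = {
    "alice": {"usual_hours": range(8, 18), "usual_resources": {"crm", "wiki"}},
}

def context_risk(user: str, resource: str, hour: int) -> float:
    """Return a 0..1 risk score; higher means more anomalous."""
    profile = BASELINE.get(user)
    if profile is None:
        return 1.0  # unknown identity: treat as maximum risk
    score = 0.0
    if hour not in profile["usual_hours"]:
        score += 0.5  # access outside this user's normal hours
    if resource not in profile["usual_resources"]:
        score += 0.5  # resource this user never touches
    return score

print(context_risk("alice", "payroll-db", 3))  # off-hours + unusual resource
```

Even a crude score like this illustrates the signal-to-noise point: the same "read a file" event ranks very differently depending on who did it and when.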
Agentic AI & IGA
- Key concern: Non-human identities (bots, AI agents) acting on behalf of humans need identity controls just as strict as user accounts.
- The session raised “agent impersonation” as a future threat vector.
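One way to picture "agent identities as first-class accounts" is short-lived, narrowly scoped credentials. The sketch below is a minimal illustration under assumed names and scopes; it is not how any particular IGA product works.

```python
import secrets
import time

# Hypothetical sketch: a non-human agent gets a short-lived, scoped token,
# mirroring user-account controls. Agent IDs and scopes are illustrative.

TOKENS = {}  # token -> (agent_id, scopes, expiry timestamp)

def issue_agent_token(agent_id: str, scopes: set, ttl_s: int = 300) -> str:
    token = secrets.token_urlsafe(16)
    TOKENS[token] = (agent_id, scopes, time.time() + ttl_s)
    return token

def authorize(token: str, scope: str) -> bool:
    """Reject unknown/expired tokens and out-of-scope actions -- a basic
    guard against the "agent impersonation" vector raised in the session."""
    record = TOKENS.get(token)
    if record is None:
        return False
    _, scopes, expiry = record
    return time.time() < expiry and scope in scopes

t = issue_agent_token("ticket-triage-bot", {"tickets:read"})
print(authorize(t, "tickets:read"))   # True
print(authorize(t, "tickets:write"))  # False: not in the granted scope
```

The short TTL matters: an impersonated or leaked agent credential has a small window of usefulness, just as with user sessions.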
Emerging Threats Panel
- The usual suspects: ransomware evolution, AI-powered phishing, supply chain compromises.
- Trend takeaway: More smaller, less-resourced adversaries will gain cheap access to advanced tooling (AI, APIs, LLMs).
- Defensive push: invest in automated responses, anomaly detection, and decoy systems.
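Of the defensive investments listed above, decoys are the easiest to sketch. Below is a hypothetical honeytoken check with made-up resource names; the point is the design property, not the implementation.

```python
# Hypothetical sketch of a decoy ("honeytoken") check: planted credentials
# and files have no legitimate use, so any touch is a high-confidence alert
# that complements noisier statistical anomaly detection. Names are made up.

DECOYS = {"svc-backup-legacy", "finance_q3_draft.xlsx"}

def check_event(actor: str, resource: str) -> str:
    if resource in DECOYS:
        return f"ALERT: {actor} touched decoy {resource}"
    return "ok"

print(check_event("unknown-host", "finance_q3_draft.xlsx"))
```

The design appeal: unlike anomaly detection, a decoy hit needs almost no tuning, which is why the panel paired the two approaches.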
Cyber Risk Insurance (ESET)
- AI adds both risk and opportunity for insurers: premium models may evolve with how well AI is used defensively.
- Key factors in coverage: incident history, defense maturity, and transparency in adversarial risk exposure.
Post-Quantum Threats
- “Harvest now, decrypt later” remains real: data stolen today may become readable decades later under quantum decryption.
- Crypto-agility is no longer optional — you need a migration plan now, not later.
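"Crypto-agility" as a code property means routing all algorithm choices through one switch point. The sketch below uses hash functions purely as stand-ins to show the pattern; the registry entries are not a recommendation of any post-quantum algorithm.

```python
import hashlib

# Hypothetical sketch of crypto-agility: every caller goes through one
# registry keyed by a config value, so swapping in a post-quantum algorithm
# later is a config change, not a codebase-wide rewrite. Algorithm names
# here are illustrative stand-ins.

HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,  # a future entry could be a PQC-era choice
}

ACTIVE = "sha256"  # the single switch point for migration

def digest(data: bytes) -> str:
    return HASHES[ACTIVE](data).hexdigest()

print(digest(b"hello")[:8])
```

The "harvest now, decrypt later" point is exactly why this matters for long-lived data: systems hard-wired to one algorithm can't rotate even after a replacement exists.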
Real-Time SASE & AI Threat Response
- AI-infused SASE can help shift from detection → prevention, especially in edge networks.
- But: if AI is compromised, it can amplify damage. Defenses must monitor the defender’s AI.
Reflections & Next Steps
- I saw recurring themes: AI, resiliency, and trust boundary expansion.
- The more we expose non-human agents, the more we must treat them as first-class identities.
- Quantum risk, once theoretical, is now operationally relevant for long-lifetime systems.
Action Items for Me
- Review current toolset: Where is my org early or late in AI adoption?
- Refresh vendor risk assessments with quantum-readiness and AI-resilience in mind.