Agentic AI is already in use across organizations. It is writing code, connecting to systems, and taking action, often without meaningful security involvement. The challenge is not only one of policy but of understanding: you cannot secure what you do not understand.
Security teams that cannot engage with how these systems work are often left out of the decision process. As adoption expands, agents are being granted access to internal tools, communication platforms, and codebases, increasing both their capability and their exposure. In some cases, agents can act across systems, creating new paths for misuse or unintended actions. The risk spans developer tools, vendor-integrated agents, and custom agents built across the business.
At SANSFIRE 2026, SANS Certified Instructor Ahmed Abugharbia will be teaching SEC545: GenAI and LLM Application Security, examining how agentic AI systems are built, where the real risks exist, and how to apply controls that hold up in practice. Join Ahmed at SANSFIRE 2026 to build the foundational knowledge needed for mitigating risk in an agentic AI world.