In the rapidly evolving landscape of artificial intelligence, autonomous large language model (LLM) agents are redefining how systems reason, act, and interact with the world. These agents go beyond answering queries: they execute complex workflows, leverage external tools, and maintain persistent memory to achieve goals. With this transformative power, however, come unprecedented security challenges. Agentic AI Security: Designing and Protecting Autonomous LLM Agents with Advanced Threat Models, Prompt Engineering, and Memory Safeguards is your essential guide to building and securing these next-generation AI systems.
This comprehensive book provides AI engineers, security architects, DevSecOps professionals, and responsible AI practitioners with a robust framework for safeguarding autonomous LLM agents. Across eight expertly crafted chapters, you'll explore how to mitigate risks such as prompt injection, memory poisoning, feedback-loop attacks, and self-modifying agent behaviors. Learn to design secure agent architectures, implement layered defenses, and align with emerging compliance standards to ensure your systems are both powerful and trustworthy.
Inside, you'll discover how to:
Develop agent-specific threat models using STRIDE and other frameworks tailored for autonomous systems.
Engineer schema-bound prompts and gated tool orchestration to prevent intent drift and unauthorized actions.
Implement memory integrity checks, anomaly detection, and write controls to secure agent recall and persistence.
Embed safety critics, intent modeling, and policy enforcement within the agent's reasoning loop for real-time protection.
Conduct red teaming, adversarial testing, and continuous threat simulation to proactively harden agent deployments.
Navigate compliance with NIST AI RMF, OWASP GenAI Top 10, and the EU AI Act for enterprise-grade, auditable AI systems.
Whether you're building AI agents for real-world applications or securing enterprise-grade deployments, this book equips you with practical strategies and technical patterns to address the unique vulnerabilities of autonomous systems. Stay ahead of evolving threats and build AI agents that are not only intelligent but also secure, resilient, and aligned with ethical standards. Start mastering agentic AI security today!