As artificial intelligence becomes embedded in everything from healthcare diagnostics to financial systems and autonomous vehicles, the stakes for AI security have never been higher. Adversarial AI Threat Response and Secure Model Design is your essential guide to understanding, defending against, and designing resilient machine learning systems in the face of growing adversarial threats.
Written by a leading expert in AI security and policy, this book delivers a combination of technical depth, practical implementation, and strategic insight. It begins by mapping the full landscape of adversarial threats―evasion, poisoning, model extraction, backdoors, and more―across diverse data modalities and real-world applications. From there, it equips readers with a robust toolkit of detection and defense techniques, including adversarial training, anomaly detection, and formal robustness certification.
But this book goes beyond code. It explores the organizational, ethical, and regulatory dimensions of AI security, offering guidance on risk quantification, explainability, and compliance with frameworks like the EU AI Act. With hands-on projects, open-source tools, and case studies in high-stakes domains, readers will learn to design secure-by-default systems that are not only technically sound but socially responsible.
Whether you're an AI engineer deploying models in production, a cybersecurity professional defending intelligent systems, or an educator preparing the next generation of AI talent, this book provides the clarity, rigor, and foresight needed to stay ahead of adversarial threats. It’s not just a reference―it’s a roadmap for building trustworthy AI.
What You Will Learn:
- Understand the full spectrum of adversarial threats to AI systems, including evasion, poisoning, backdoor injection, and model extraction, across vision, language, and multimodal applications.
- Apply practical detection and defense techniques using real tools and code, including adversarial training, statistical anomaly detection, input preprocessing, and ensemble defenses.
- Evaluate and balance trade-offs between accuracy, robustness, performance, and interpretability in the design of secure machine learning systems.
- Navigate the regulatory, ethical, and risk management challenges associated with adversarial AI, including disclosure practices, auditability, and compliance with emerging AI laws.
- Design, implement, and test secure-by-design AI solutions through hands-on projects and real-world case studies spanning sectors such as healthcare, finance, and autonomous systems.
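To give a flavor of the evasion attacks covered, the classic Fast Gradient Sign Method (FGSM) can be sketched in a few lines. This is an illustrative example against a hand-set logistic regression (the weights, input, and epsilon below are assumptions for demonstration), not code from the book:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM evasion attack on a logistic-regression model.

    The gradient of binary cross-entropy loss with respect to the
    input x is (p - y) * w; the attack steps in its sign direction.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model: weights chosen by hand for illustration, not trained.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])   # clean input, true label 1
y = 1.0

p_clean = sigmoid(w @ x + b)       # model is confident in class 1
x_adv = fgsm(x, y, w, b, eps=0.8)  # small signed perturbation
p_adv = sigmoid(w @ x_adv + b)     # prediction pushed toward class 0
```

The same one-step perturbation, applied to training batches instead of test inputs, is the core of adversarial training: the model is fit on `x_adv` rather than `x` so it learns to resist the perturbation.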
Who This Book Is For:
Written for technical professionals and researchers who are building, deploying, or securing machine learning systems in real-world environments. The primary audience includes machine learning engineers, AI developers, cybersecurity professionals, and graduate-level students in computer science, data science, and applied AI programs. It is also relevant for technical leads, architects, and academic instructors designing secure AI curricula or systems in regulated or high-stakes domains.
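Likewise, the statistical anomaly detection named among the defenses can be illustrated with a minimal per-feature z-score check against the training distribution. This is a sketch under assumed data and a conventional 3-sigma threshold, not the book's implementation:

```python
import numpy as np

def fit_detector(X_train):
    """Record per-feature mean and std of clean training inputs."""
    return X_train.mean(axis=0), X_train.std(axis=0)

def is_anomalous(x, mean, std, threshold=3.0):
    """Flag an input if any feature deviates more than `threshold`
    standard deviations from the training distribution."""
    z = np.abs((x - mean) / std)
    return bool(np.any(z > threshold))

# Synthetic clean training data (assumption: standard-normal features).
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 4))
mean, std = fit_detector(X_train)

clean = np.zeros(4)                          # typical input
perturbed = np.array([0.0, 0.0, 8.0, 0.0])   # one feature pushed far out
```

Simple screens like this catch crude out-of-distribution perturbations; the stronger, bounded attacks discussed in the book require the more sophisticated defenses (adversarial training, certification) it covers.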
Dr. Goran Trajkovski is Director of Data Analytics at Touro University, a Fulbright Scholar, and author of over 300 scholarly works, including 20 books. With over 30 years of experience in artificial intelligence, data analytics, and educational technology, he leads AI curriculum design, assessment innovation, and academic program development. He teaches graduate courses in AI and machine learning, and is a Pluralsight course author focused on adversarial AI and AI ethics. His research and instructional work center on AI model vulnerabilities, human-centered AI design, and practical adversarial defense strategies―making him a leader in the secure implementation of generative and adversarial AI systems.