Linkedin Learning – Security Risks In AI And Machine Learning-Categorizing Attacks And Failure Modes

English | Tutorial | Size: 330.56 MB


From predicting medical outcomes to managing retirement funds, we place a lot of trust in machine learning (ML) and artificial intelligence (AI) technology, even though we know these systems are vulnerable to attack and can sometimes fail us completely. In this course, instructor Diana Kelley draws real-world examples from the latest ML research and walks through the ways ML and AI can fail, providing pointers on how to design, build, and maintain resilient systems.

Learn about intentional failures caused by attacks, as well as unintentional failures caused by design flaws and implementation issues. Security threats and privacy risks are serious, but with the right tools and preparation you can reduce them. Diana explains some of the most effective approaches and techniques for building robust and resilient ML, such as dataset hygiene, adversarial training, and access control for APIs.

Skills you will gain
Machine learning
Artificial intelligence

Diana Kelley
CISO | Board Member | Volunteer | Executive Advisor

Buy a long-term premium account to support me and get max speed.


RAPIDGATOR:
rapidgator.net/file/59ed276ea3b2280145decdf53e03eedd/Linkedin.Learning.Security.Risks.In.AI.And.Machine.Learning-Categorizing.Attacks.And.Failure.Modes.rar.html

ALFAFILE:
alfafile.net/file/AcB7r/Linkedin.Learning.Security.Risks.In.AI.And.Machine.Learning-Categorizing.Attacks.And.Failure.Modes.rar
