
Attacks on machine learning models

Source: rnikhil.com

Machine learning models, deployed in systems such as self-driving cars and conversational chatbots, are vulnerable to a range of attacks that can degrade their effectiveness or hijack their outputs. These include adversarial examples that manipulate model predictions at inference time, data poisoning and backdoor attacks that corrupt the training data, and membership inference attacks that compromise the privacy of training examples. Models can also be threatened by extraction attacks that steal their parameters, fairwashing that masks bias behind misleading explanations, and attacks that inflate energy consumption or degrade performance. Defending these systems requires addressing vulnerabilities in both the data and the code.
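To make the adversarial-example idea concrete, here is a minimal sketch of a one-step gradient-sign attack (in the style of FGSM) against a toy logistic-regression classifier. The model weights, input, and epsilon are all hypothetical values chosen for illustration; real attacks target deep networks and use automatic differentiation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb x in the direction that increases the model's loss.

    For logistic regression with binary cross-entropy loss, the gradient
    of the loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical fixed model and input, chosen so the flip is visible.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])
y = 1.0  # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.5)

orig_class = int(sigmoid(w @ x + b) > 0.5)
adv_class = int(sigmoid(w @ x_adv + b) > 0.5)
print(orig_class, adv_class)  # the small perturbation flips the prediction
```

The perturbation has magnitude at most `eps` per feature, yet it is aligned with the loss gradient, so even a small `eps` can push the input across the decision boundary.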
