
Machine learning (ML) models can be vulnerable to adversarial attacks. These range from an attacker making the ML system learn the wrong thing (data poisoning), do the wrong thing (evasion), or reveal the wrong thing (inversion), to stealing the model itself (extraction).
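As a minimal sketch of one of these categories, the snippet below shows an evasion attack in the style of the fast gradient sign method against a toy logistic-regression model. The weights, input, and perturbation budget are hypothetical values chosen for illustration, not taken from any real system.

```python
import numpy as np

# Hypothetical toy model: logistic regression on a 2-D input.
w = np.array([2.0, -1.0])  # illustrative weights
b = 0.0

def predict(x):
    """Return P(class 1) for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.2])  # clean input; model predicts class 1
y = 1.0                   # true label

# Evasion: for logistic regression with cross-entropy loss, the
# gradient of the loss w.r.t. the *input* is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w
eps = 0.6                           # attacker's perturbation budget
x_adv = x + eps * np.sign(grad_x)   # step in the loss-increasing direction

print(predict(x) > 0.5)      # True  (clean input classified as class 1)
print(predict(x_adv) > 0.5)  # False (perturbed input evades the classifier)
```

With this budget the perturbed input crosses the decision boundary, flipping the model's prediction even though the input changed only slightly.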

Led by proven leaders in AI/ML and well-funded by top venture capitalists in cybersecurity and machine learning, Protect AI is building security solutions for machine learning.