How to do Penetration Testing for AI Models

GenAI is everywhere, but is your AI truly secure? Hackers are constantly finding new ways to exploit AI vulnerabilities, putting your security and compliance at risk.

In this session, we took a deep dive into:
- The real security risks hiding in AI models
- How attackers exploit vulnerabilities in GenAI
- Strategies to secure AI and maintain SOC 2 compliance

Led by Nikita Goman, Scytale’s Penetration Testing Team Leader, and Avi Lumelsky, AI Security Researcher at Oligo.

Summary of the Webinar

This session uncovers key insights to help businesses stay ahead of AI security threats with penetration testing best practices.