- Founders: Ian Swanson, Daryan Dehghanpisheh, Badar Ahmed
- Sector: Cybersecurity, Artificial Intelligence
- Location: Seattle, WA
Building the Future of MLSecOps: Salesforce Ventures Invests in Protect AI
As the wave of AI startups exploded in early 2023, we began asking ourselves how cybersecurity should be applied to this quickly emerging space. Securing ML systems and AI applications differs from securing typical software, because the ML pipeline goes beyond code: the dynamic ML development lifecycle spans data, unique machine learning artifacts, and a vast ecosystem of open-source libraries, packages, and frameworks. This creates the need for new tools that provide full transparency and visibility across every element of an ML pipeline.

The tailwind has only strengthened as AI adoption reached an inflection point, with growing consumer uptake and enterprises gearing up for broader deployments of AI systems, driven by interest in generative applications. Regulators are also focusing on AI security. In May of this year, White House officials hosted CEOs from leading AI companies and announced new US policies designed to mitigate AI risks. Shortly after, on June 14th, the European Parliament approved its negotiating position on the EU AI Act, which classifies AI systems by risk and mandates development and usage requirements.
As artificial intelligence continues its rapid trajectory, a new startup has emerged to help secure AI systems across the machine learning lifecycle. Salesforce Ventures is excited to announce our investment in Protect AI, the pioneers of MLSecOps.
Founded in 2022 by serial entrepreneurs Ian Swanson, Daryan Dehghanpisheh, and Badar Ahmed, Protect AI recognizes that traditional cybersecurity solutions are insufficient for the unique risks and workflows of machine learning. Their mission is to shift security left by building specialized tools for ML developers and operators.
The Need for AI-Specific Security
Machine learning introduces new attack surfaces and vulnerabilities that legacy security vendors aren’t equipped to address. ML pipelines rely on diverse data sources, rapidly evolving open source libraries, and complex model architectures. Once deployed, models become black boxes that are difficult to monitor and test.
As enterprises accelerate their AI adoption, these security gaps become urgent priorities. Protect AI’s research has uncovered rising exploits of ML systems, foreshadowing potential vulnerabilities as adoption increases. Their solutions cover the entire machine learning lifecycle to provide end-to-end protection.
Enabling Secure and Trustworthy ML
Protect AI takes a developer-first approach to ML security. Their initial open source offering, NB Defense, integrates directly into Jupyter Notebooks to provide security scanning without disrupting workflows. It checks for exposed credentials, vulnerable dependencies, and other common risks.
Their upcoming flagship product, AI Radar, delivers robust capabilities for managing and securing ML pipelines. It provides visibility into the entire ML supply chain, an auditable record of ML development, and AI-aware security testing. Together, these solutions aim to help enterprises build trust and confidence in their machine learning applications.
Beyond its technology, Protect AI also leads important community building in this nascent field. Initiatives like MLSecOps.com provide education and bring together practitioners to advance ML security.
Why We’re Excited About Protect AI
Unsurprisingly, the top reason for our conviction is the amazing team leading Protect AI. Ian Swanson, Co-founder & CEO, is a serial entrepreneur who has assembled a talented team with rare experience in both AI/ML and security. Protect AI is Ian’s third startup, having sold his first company to American Express in 2011 and his second company, DataScience.com, to Oracle in 2018. Most recently, Ian was AWS Worldwide Leader for AI and ML, responsible for all go-to-market efforts across the AWS portfolio.
The other co-founders, Daryan (Global Leader for AI/ML Solution Architects at AWS) and Badar (Head of Engineering at DataScience.com), bring deep technical capabilities in ML. Their recent CISO hire, Diana Kelley, a former cybersecurity executive at Microsoft and IBM, complements the team’s expertise.
Second, the need for a new type of security player and the increasing importance of AI security for CISOs create a large and opportune market for Protect AI. ML pipelines incorporate new tools like PyTorch and MLflow that existing vendors don’t cover. As a result, CISOs have moved ML security to a top-three budget priority in recent months, a shift we’ve consistently heard in market research and customer calls. Protect AI’s solutions address the distinct security challenges of ML systems that legacy vendors miss.
Lastly, Protect AI aims to lead the industry in MLSecOps, combining security practices with machine learning operations. Their end-to-end approach aligns with the demand for comprehensive solutions we hear from organizations adopting AI. We believe they have the vision and experience to define leading practices in this emerging category.
Partnering to Define a New Category
As a pioneer in MLSecOps, Protect AI has an enormous opportunity to shape how enterprises secure their AI futures. We believe Ian, Daryan, and Badar have the experience and vision to define leading practices in this space. That’s why we’re excited to partner with them.
At Salesforce Ventures, we look to back companies inventing the future. By integrating security into ML operations, Protect AI unlocks the next generation of trustworthy and ethical AI applications. We can’t wait to see what they build!