
Welcome, Together AI!

  • Founders: Vipul Ved Prakash, Ce Zhang, Chris Ré, Percy Liang
  • Sector: Artificial Intelligence
  • Location: San Francisco, CA

The Opportunity

2023 was a year of incredible growth and innovation in open-source AI. The year started with a meaningful gap in performance between open-source and closed-source models, but the paradigm quickly shifted when Meta introduced Llama in February, catalyzing the emergence of a host of more powerful open-source models like Falcon and Mistral/Mixtral. At Salesforce Ventures we’re big believers in open source, having worked with and invested in numerous companies focused on open source, including Hugging Face, Airbyte, Astronomer, dbt, Starburst, and MuleSoft.

We believe the AI world will be hybrid in the future, and that companies will utilize both closed-source and open-source technologies. Furthermore, having a powerful model is no longer enough. To fully unlock adoption, we need an entire stack built around the models to enable the right balance between performance and cost. With this context in mind, we searched for the right team with a bold vision that could build the best infrastructure for training and running open-source models in production, which is why we’re beyond excited to announce that we’re leading a new round in Together AI. 

The Solution

Together AI is capturing the market at exactly the right moment—open source is taking off and the need for good, performant, cost-effective GPU infrastructure is more acute than ever. While the GPU hardware supply shortage is a prominent factor, it’s not the only factor that contributes to the question of “why now.”

The GPU supply crunch makes it difficult for companies from all market segments to get their hands on GPUs. This shortage is exacerbated for companies that don’t have long-term relationships or big contracts with hyperscalers or cloud providers, given that’s where most GPU supply currently resides. 

Even for companies that do have these relationships, training and serving models well in production presents a different set of challenges. There are many layers of complexity in running training and inference workloads while meeting the criteria users care about, such as latency, throughput, a low failure rate during training, and efficient training cycles. As such, there’s a growing desire for an abstraction layer that automates training, fine-tuning, and deployment so customers don’t have to spin up their own infrastructure or hire their own teams with backgrounds in AI or systems engineering.

Together AI captures customers’ end-to-end needs for AI workloads through its full-stack approach. The company offers both compute and software, packaged in products that vary in how much software abstraction sits on top of the hardware compute resources. For customers that want dedicated clusters for training (or inference), Together AI offers GPU Clusters comprising high-end compute and networking, plus a software layer that incorporates the optimizations needed for users to run AI workloads efficiently without managing infrastructure. The Together AI team has also led the development of efficient kernels like FlashAttention for training, as well as a number of techniques that speed up inference of transformer models and can extract 2-8 times more work from GPUs for generative AI workloads.

For customers that want an additional layer of abstraction, typically for model deployments, Together AI supports an extensive list of open-source models (e.g., Llama-2, Code Llama, Falcon, Vicuna, Mistral/Mixtral, SDXL) as well as its own models (e.g., RedPajama, StripedHyena, Evo) that are at the frontier of model architecture research. Together AI will also soon support the option for customers to bring their own models. Customers can host these models using dedicated API instances built on top of the Together Inference Engine, optimized with open-source and proprietary techniques. What’s more, Together AI will soon be able to deploy in customers’ own cloud environments. The ability to bring their own models and deploy in their own environments will open up numerous use cases for enterprise customers.

Together AI also offers fine-tuning as part of this product, an attractive value-add for open-source use cases. Customers with specific needs can choose to build their own Custom Models, leveraging the Together AI software and hardware stack as well as support from the Together AI technical team in finding the optimal data design, architecture, and training recipe.

Why We’re Backing Together AI

We first met the CEO and co-founder of Together AI, Vipul Ved Prakash, back in the summer of 2023. It didn’t take long to realize Vipul had built something special, starting with an outstanding founding team. Both Vipul and co-founder Chris Ré are serial entrepreneurs with previous successful exits. Chris and fellow co-founder Percy Liang are also among the leading researchers in AI model architectures and infrastructure. Ce Zhang (CTO and co-founder) is an expert in decentralized and distributed training/learning, as well as data-centric MLOps. The team’s combined expertise and experience suggest they can build proprietary IP around optimizing the AI infrastructure stack, as well as scale talent and functional teams.

We believe Together AI’s “full-stack” product approach is the right approach to the open-source AI market. By offering a suite of products that vary in how close they are to the underlying hardware—from dedicated clusters to serverless endpoints—Together AI captures demand from all types of workloads: training, fine-tuning, and inference. There is no inference without training or fine-tuning, and we think customers will increasingly want to deploy where they fine-tune and train their models, especially given some of the features that the Together AI team are working on to increase reliability, privacy, and security. 

As training and inference workloads ebb and flow, Together AI can move the freed up GPU resources across different types of workloads, allowing the company to scale efficiently. In this sense, Together AI straddles two distinct categories of companies that have emerged in this new AI era: specialized GPU cloud providers and serverless endpoint / LLM Ops platforms. We like this approach, as it provides differentiated software on top of hardware when compared to pure GPU cloud providers. This approach also provides more control over hardware utilization and optimizations compared to pure serverless endpoint providers / LLM Ops companies.

Together AI is also, at its core, an AI research company. Two of Together AI’s co-founders are bona fide luminaries in the AI research community. In addition to having Ce Zhang as CTO, Together AI also brought onboard Tri Dao, the creator of FlashAttention and FlashAttention-2, as Chief Scientist in the summer of 2023. The power behind Together AI’s products lies in the company’s ability to rapidly bring research innovations to production.

In the spirit of open-source, Together AI has been prolific in publishing some of the optimization techniques (FlashDecoding, Medusa, Cocktail SGD) and model research (RedPajama, StripedHyena) that are garnering a lot of attention from the broader AI community. The research and models are great tools to drive demand to Together AI’s platform, creating a flywheel for growth. Together AI’s continuous research capability differentiates its platform from other GPU Cloud/serverless endpoint platforms and hyperscalers, cementing a longer-term technical moat. 

What’s ahead?

Amid the litany of 2024 AI predictions that have been circulating online, one common thread we keep seeing is how 2024 will be the year that customers go from training models to deploying models in production. While we believe this is directionally correct, we suspect that process to be much slower and more gradual than people anticipate. There’s still additional customer education to be done and infrastructure for model deployment needs to continue to improve. 

Together AI is not the only company that has realized this—the field of GPU cloud/serverless endpoints platforms keeps expanding to fill the need for better infrastructure. This entrepreneurial enthusiasm speaks to the immense market opportunity founders see in this space. Vipul and his team have a bold long-term vision that requires precise and vigorous execution to accomplish. We are confident they will realize their vision, and Salesforce Ventures will be there to support them every step of the way.

We’re thrilled to partner with Vipul, Ce, Chris, Percy, and the rest of the Together AI team. Together, we will bring the fastest and most performant cloud in generative AI to more users who are pushing the frontier of AI development. Welcome to the Salesforce Ventures family!

Welcome, Oleria!

  • Founders: Jim Alkove, Jagadeesh Kunda
  • Sector: Cybersecurity
  • Location: Seattle, WA

The Opportunity

For 30+ years, organizations around the world have utilized Identity and Access Management (IAM) solutions to enable their workforce as they incorporate more technology and devices. Over time, technology has evolved dramatically—from individual users tied to an on-premises server in a basement, to users accessing their corporate networks through multiple devices across the world while browsing the public internet. The rise of cloud computing has accelerated this change, as the server in the basement has been replaced by ephemeral cloud instances whose identities change by the minute.

This is where we see an opportunity. While IAM has evolved over the decades, an IT-centric DNA has persisted through legacy vendors. Even relatively newer vendors brand themselves as security companies while providing IT, compliance, and governance solutions that continue to fail the modern enterprise. These organizations struggle to keep up with the growing demand and complexity of modern identity infrastructure, which brings together different users, networks, micro-services, SaaS applications, and more.

The way we see it, while today’s solutions are good at provisioning access, creating basic identity workflows, and authenticating users from an IT perspective, they lack the intelligence, agility, and security context required to keep up with the rise in cyberattacks. CrowdStrike reports that 80% of all security breaches are the result of compromised identities. Further, Gartner states that “by 2026, 70% of identity-first security strategies will fail unless organizations adopt context-based access policies that are continuous and consistent.”

We believe IAM needs to be rebuilt with a security-first and cloud-native DNA. This is where Oleria comes in 🙂

The Solution

Oleria has developed an innovative approach to IAM that’s designed to provide appropriate access to the right users, at the right time, for the right duration, with a focus on great UX. 

Oleria’s Adaptive Security solution continuously assesses and validates the people, applications, and assets involved in each digital interaction, enabling organizations to adapt security to specific contexts and changing needs. With modern businesses adopting an ever-increasing number of interconnected SaaS applications and cloud infrastructure to drive global operations, Oleria represents a massive improvement over legacy security solutions that require significant manual effort to combat identity-based threats. 

Oleria’s AI-first approach, combined with a security and identity DNA, can orchestrate IAM across an organization without needing constant manual input. Oleria can understand user behavior and patterns of access to become smarter while enabling organizations to adopt a just-in-time + zero-trust philosophy at scale. 

Oleria’s value proposition is already resonating, as the company has built a strong pipeline of unicorn startups and large enterprises ready to implement Oleria’s Adaptive Security solution.

Why We’re Backing Oleria

Our excitement for Oleria stems from our confidence in the team, our optimism for the market they’re building in, and the massive problem they’re solving.

Founders Jim Alkove and Jagadeesh Kunda possess a wealth of cybersecurity expertise. Jim is the former Chief Trust Officer at Salesforce with 25+ years of experience building cybersecurity products for Microsoft and Google. Jagadeesh is the former VP of Identity at Salesforce and has 20+ years of experience in the infrastructure, security, and identity space—most recently as the CPO at JumpCloud. They bring together the right combination of product-building expertise and operational security prowess at the largest of enterprise scales. 

This problem of enterprise identity is also close to their hearts. While at Salesforce, Jim and Jagadeesh invested heavily to modernize Salesforce’s identity infrastructure. Their shared experience was the spark that has become Oleria.

The duo are entering a market that’s ripe for disruption. IAM / IGA (Identity Governance and Administration) is a $17B+ industry dominated by legacy enterprises and private equity roll-ups that have been slow to keep up with the shift to SaaS and cloud. As more and more breaches exploit gaps in identity governance, the timing for building a security-focused IAM / IGA solution is ideal.

What’s Next?

We’re very excited to back Oleria’s Series A round alongside Evolution Equity. 

This new round of funding will enable Oleria to ramp up hiring and accelerate product innovation, including its AI and GTM strategies. We look forward to supporting the team as they work to build the next-gen cybersecurity solution for enterprises.

Welcome to the Salesforce Ventures family, Oleria!

Welcome, AutogenAI!

  • Founder: Sean Williams
  • Sector: GenAI, Bid Writing
  • Location: London, UK

The Opportunity

The bid writing process is mired in the past. Many global enterprises require hundreds of staff working several months to submit a single bid, and the existing tech stack is wholly insufficient to support the needs of these organizations. The result is numerous underserved organizations wasting time, money, and effort writing failed bids.

The Solution

AutogenAI is revolutionizing the time-consuming and highly sensitive bidding process by introducing generative AI to the equation. The product is unique in being both gen AI-native and bidding-first.

The AutogenAI team builds secure, bespoke language engines for Fortune 500 companies, international government agencies, management consultancies, and construction companies, as well as charities and nonprofits applying for grant funding. These language engines harness the power of large language models to turbo-charge the sales function. By leveraging AutogenAI, organizations can draft high-quality bids quickly and accurately, giving them a significant leg up on their competition.

To date, AutogenAI has helped businesses achieve a 30% uplift in win rates and cut associated costs by up to 85%. The team has already seen immense interest from large corporations selling into the public sector—particularly amongst global consultancies, and companies in the construction and utilities sectors. AutogenAI was appointed to the UK Government’s AI Framework, allowing them to offer their services to all UK public sector organizations. AutogenAI also recently expanded services to the U.S. and Australia.

Why We’re Backing AutogenAI

AutogenAI is an excellent example of a company utilizing generative AI to address a real pain point in a verticalized market. The team’s combination of deep bid writing experience and technical expertise makes them well-equipped to build relevant solutions that address customer needs.

Our investment in AutogenAI represents our conviction that their software is a game changer for enterprises that secure business via tendering and drafting proposals. While every business is in the midst of being remade by generative AI, AutogenAI is far ahead of the market in terms of the capability of its technology.

AutogenAI is one of the fastest-growing companies we’ve encountered in the generative AI application layer space, and we’re excited to back the team as they continue to work to revolutionize the bid writing process.

What’s Ahead?

As one of AutogenAI’s customers said to us on a reference call, “This whole industry is about to change. Soon bid writers will become prompt writers. Anyone not using this technology will fall behind.”

We’re excited to partner with AutogenAI alongside Spark Capital, Blossom Capital, and other investors as the team works to further develop its bid-writing product suite to help businesses efficiently win more work.

Welcome to Salesforce Ventures, AutogenAI!

Welcome, Amini!

  • Founder: Kate Kallot
  • Sector: Sustainability
  • Location: Nairobi, Kenya

The Opportunity

Africa is the last and biggest emerging market. The continent is home to 65% of the world’s uncultivated arable land, with a population set to double to 2.5B by 2050. But Africa’s largely agrarian economy is under threat from a changing climate that’s expected to expose up to 118M people to droughts, food shortages, and extreme heat by 2030—and the region is under-equipped to handle this challenge.

Consider that Africa has just one-eighth the minimum density of weather stations recommended by the World Meteorological Organization. This scarcity of data leads to inaccurate forecasts and puts civilians and supply chains at risk. With climate change driving more frequent and extreme weather events, more and better data is needed to predict environmental trends, enhance operational resilience, and safeguard people and supply chains across the continent. 

The Solution

Amini AI was launched against the backdrop of a rapidly growing African population uniquely exposed to climate change impacts. Founded in Nairobi in 2022, Amini is building the single source of truth for verifiable, immutable, and actionable environmental data across the African continent.

Leveraging satellites, existing data sources, and AI / ML, Amini has built a data ingestion engine and proprietary unification protocol that delivers reliable, precise, and unbiased environmental, climate, and vegetative data to public and private sector stakeholders at scale. This data can provide meaningful insights to farmers, governments, corporations, financial institutions, and supply chain operators to drive informed decision-making and sustainable growth.

Amini’s data is already being utilized in crucial functions like climate resilience planning, agricultural insurance risk assessments, and supply chain analysis. Amini AI unlocks solutions previously held back by data deficiency and can serve as a starting point for Africa’s transition to climate-smart agriculture and the development of sustainable supply chains.

Why We’re Backing Amini AI

There are many reasons to be excited about Amini, but for us it starts with the team led by CEO Kate Kallot. Before founding Amini, Kate led NVIDIA’s ecosystem expansion in emerging markets and created the NVIDIA Emerging Chapters Program dedicated to supporting developers in Africa and beyond. Prior to NVIDIA, Kate led AI/ML teams at Arm and Intel, and she has been recognized as one of the 100 most influential people in AI by TIME magazine.

Kate has surrounded herself with an all-star team that’s motivated to build at the convergence of people and planet. The strength of the Amini team is evident in the company’s early success: Amini has already signed a partnership with the global brand Aon and has received inbound interest from numerous organizations that want to leverage Amini’s data to tap into the African market. 

Further, Amini is building in a large and growing market with strong tailwinds. Demand for data observation is greater in Africa than on any other continent, and the markets for digital climate solutions, earth observation data, and supply chain software are all expanding at a rapid pace.

The problem Amini is tackling is significant and widespread, but we believe this is the right time, and Amini is the right team, to get the job done. The opportunity for scale and impact is vast.

What’s ahead?

We’re thrilled to partner with Amini alongside Female Founders Fund and existing investors Pale Blue Dot and Superorganism as Kate and team build their core technology, onboard new customers, and make inroads across Africa. Amini has already started to foster a more sustainable, climate-resilient future for the African continent. In the future, Kate foresees a company that will play a meaningful role in Africa’s economic and environmental transformation.

We’re excited to be along for the journey.

Welcome to Salesforce Ventures Impact Fund, Amini!

Welcome, Hugging Face!

We are thrilled to announce our investment in Hugging Face, which has built the largest and most important open-source AI community in the world.

  • Founders: Clément Delangue, Julien Chaumond, Thomas Wolf
  • Sector: Artificial Intelligence
  • Location: Brooklyn, NY

At Salesforce Ventures, we strongly believe open source will shape the future of artificial intelligence. Open source enables collaboration, transparency, accountability, innovation, and trust. As AI becomes increasingly critical across industries, an open ecosystem will be key to ensuring AI technology develops responsibly and ethically.

That’s why today, we’re excited to announce we’re leading the Series D round in Hugging Face, the leading open source platform for data science and machine learning (ML). Hugging Face acts as the central hub connecting AI developers, researchers, and enthusiasts—like a GitHub for AI.

Founded by Clément Delangue, Julien Chaumond, and Thomas Wolf in 2016, Hugging Face began as an AI chatbot. The founders soon realized the true potential was not the chatbot itself, but the natural language processing (NLP) models powering it. We decided to back this talented founding team because of their creative vision, technical expertise, and proven ability to build passionate communities around their products. In particular, Clem’s business acumen, Julien’s engineering skills, and Thomas’ research background form a powerful combination capable of advancing the field of AI.

In 2018 and 2019, Hugging Face garnered fame and a burgeoning open-source following as it built a widely adopted PyTorch implementation of the popular Transformer architecture and created the Transformers library, which provides easy access to thousands of pre-trained models, including state-of-the-art NLP models. Since then, Hugging Face has expanded into categories beyond NLP, such as computer vision and speech, and built the world’s largest open-source community for AI, with over 2 million users interacting with its models, datasets, and other resources.

Open source has been the driving force behind much of the rapid progress in AI over the past decade. Nearly all major ML frameworks, like TensorFlow and PyTorch, are open source. Similarly, breakthrough models like LLaMA, BERT, and Stable Diffusion are open source. By transparently sharing ideas and innovations, researchers can build upon each other’s work to advance the field faster than any one organization could alone. Open source also promotes transparency and trust in AI systems, allowing researchers worldwide to inspect, audit, and improve them. This transparency is critical as AI is deployed into sensitive domains like healthcare and finance. As we hear time and again from enterprises, they want to adopt a hybrid strategy that combines the benefits of open and closed-source models.

We believe the future of AI will be driven by a thriving open ecosystem of researchers, developers, enterprises, and enthusiasts collaborating together. Hugging Face sits at the center of the open AI movement and has become the most important community for ML researchers and practitioners. 

The Hugging Face Hub brings together over 1 million repositories, including models, datasets, and app demos, in one central platform. This makes it easy for anyone to find and access the latest innovations in AI, collaborate with others, and build on top of existing work. The Hub contains essentially all major open-source AI models and is frequently the first destination for researchers to release their work – for instance, the much-discussed LLaMA 2 model from Meta, Falcon, Vicuna, and even the Salesforce research team’s BLIP model – making Hugging Face a one-stop shop for the ML community. This creates a virtuous cycle in which the platform facilitates extremely rapid innovation. This open ecosystem approach has been critical to the meteoric progress in AI over the past decade.

Beyond accelerating research, Hugging Face also provides the tools enterprises need to operationalize AI. These include inference endpoints to deploy models into production, AutoTrain for scalable, low-code model training, Spaces to easily build and share demos and ML apps, and private Hugging Face Hubs for internal collaboration and model hosting. Hugging Face supports the entire ML workflow from research to deployment, enabling organizations to go from prototype to production seamlessly. This is another vital reason for our investment: given that the platform already commands so much of ML developers’ and researchers’ mindshare, it is the best place to capture the end-to-end ML workflow of its users.

The combination of its massive open community and enterprise-ready tools for deploying AI makes Hugging Face a fundamental part of any organization’s ML strategy. For both researchers and enterprises, Hugging Face is the gateway to the future of AI. This has led to fast-growing adoption among enterprise teams, an increasing number of which have started to pay for the various ML tools on the company’s platform. We were deeply impressed by the speed of commercial growth (one of the fastest we have seen in recent history) and by the consistently outstanding feedback we received through customer conversations. Many of these customers are not the next AI-native startup or big tech company that already has ML engineering talent, but established enterprises from traditional industries such as healthcare, financial services, and automotive. This speaks to how profoundly impactful the Hugging Face platform is to companies’ journeys toward AI adoption, and the excitement from customers further solidified our belief that Hugging Face will become a generational company.

We believe open source AI, and Hugging Face’s role in catalyzing it, will fundamentally transform industries. We’re thrilled to partner with Clem, Julien, Thomas, and the entire Hugging Face team on their mission to democratize AI. Together, we’ll work to maximize the positive impacts of open AI across industries and society.

Welcome to the Salesforce Ventures family, Hugging Face!

Welcome, Protect AI!

  • Founders: Ian Swanson, Daryan Dehghanpisheh, Badar Ahmed
  • Sector: Cybersecurity, Artificial Intelligence
  • Location: Seattle, WA

Building the Future of MLSecOps: Salesforce Ventures Invests in Protect AI

As the wave of AI startups began exploding in the early days of 2023, we started asking ourselves how to apply cybersecurity to this quickly emerging space. Securing ML systems and AI applications differs from typical software development, as the ML pipeline goes beyond code. The dynamic ML development life cycle includes data, unique machine learning artifacts, and a vast ecosystem of open-source libraries, packages, and frameworks. This creates the need for new tools that provide full transparency and visibility across all elements of an ML pipeline. The tailwind has only strengthened as AI adoption reached an inflection point, with increasing consumer adoption and enterprises gearing up for more deployments of AI systems, driven by interest in generative applications. Regulators are also focusing on AI security. In May of this year, White House officials hosted CEOs from leading AI companies and announced new US policies drafted to mitigate AI risks, shortly before the European Parliament approved its position on the EU AI Act on June 14th, classifying AI systems by risk and mandating development and usage requirements.

As artificial intelligence continues its rapid trajectory, a new startup has emerged to help secure AI systems across the machine learning lifecycle. Salesforce Ventures is excited to announce our investment in Protect AI, the pioneers of MLSecOps.

Founded in 2022 by serial entrepreneurs Ian Swanson, Daryan Dehghanpisheh, and Badar Ahmed, Protect AI recognizes that traditional cybersecurity solutions are insufficient for the unique risks and workflows of machine learning. Their mission is to shift security left by building specialized tools for ML developers and operators.

The Need for AI-Specific Security

Machine learning introduces new attack surfaces and vulnerabilities that legacy security vendors aren’t equipped to address. ML pipelines rely on diverse data sources, rapidly evolving open source libraries, and complex model architectures. Once deployed, models become black boxes that are difficult to monitor and test.

As enterprises accelerate their AI adoption, these security gaps become urgent priorities. Protect AI’s research has uncovered rising exploits of ML systems, foreshadowing potential vulnerabilities as adoption increases. Their solutions cover the entire machine learning lifecycle to provide end-to-end protection.

Enabling Secure and Trustworthy ML

Protect AI takes a developer-first approach to ML security. Their initial open source offering, NB Defense, integrates directly into Jupyter Notebooks to provide security scanning without disrupting workflows. It checks for exposed credentials, vulnerable dependencies, and other common risks.

Their upcoming flagship product, AI Radar, delivers robust capabilities for managing and securing ML pipelines. It provides visibility into the entire ML supply chain, an auditable record of ML development, and AI-aware security testing. Together, these solutions aim to help enterprises build trust and confidence in their machine learning applications.

Beyond its technology, Protect AI also leads important community building in this nascent field. Initiatives like MLSecOps.com provide education and bring together practitioners to advance ML security.

Why We’re Excited About Protect AI

Unsurprisingly, the top reason for our conviction is the amazing team leading Protect AI. Ian Swanson, Co-founder & CEO, is a serial entrepreneur who has assembled a talented team with rare experience in both AI/ML and security. Protect AI is Ian’s third startup, having sold his first company to American Express in 2011 and his second company, DataScience.com, to Oracle in 2018. Most recently, Ian was AWS Worldwide Leader for AI and ML, responsible for all go-to-market efforts across the AWS portfolio.

The other co-founders, Daryan (Global Leader for AI/ML Solution Architects at AWS) and Badar (Head of Engineering at DataScience.com), bring deep technical capabilities in ML. Their recent CISO hire, Diana Kelley, a former cybersecurity executive at Microsoft and IBM, complements the team’s expertise.

Second, the need for a new type of security player and the increasing importance of AI security for CISOs create a large and opportune market for Protect AI. ML pipelines also incorporate new tools, like PyTorch and MLflow, that aren’t covered by existing vendors. As a result, CISOs have moved ML security to a top-three budget priority in recent months, which we’ve consistently heard in market research and customer calls. Protect AI’s solutions address the distinct security challenges of ML systems missed by legacy vendors.

And lastly, Protect AI aims to lead the industry in MLSecOps, combining security practices with AI operations. Their end-to-end approach aligns with the need for comprehensive solutions expressed by organizations adopting AI. We believe they have the vision and experience to define leading practices in this emerging category.

Partnering to Define a New Category

As a pioneer in MLSecOps, Protect AI has an enormous opportunity to shape how enterprises secure their AI futures. We believe Ian, Daryan, and Badar have the experience and vision to define leading practices in this space. That’s why we’re excited to partner with them.

At Salesforce Ventures, we look to back companies inventing the future. By integrating security into ML operations, Protect AI unlocks the next generation of trustworthy and ethical AI applications. We can’t wait to see what they build!

Welcome, Runway!

  • Founders: Cristóbal Valenzuela, Anastasis Germanidis, Alejandro Matamala-Ortiz
  • Sector: Artificial Intelligence
  • Location: New York, NY

The Opportunity

When we first met Cris Valenzuela, the co-founder and CEO of Runway, we were immediately captivated by his vision to build a new kind of creative suite. Cris described Runway’s technology as akin to a new kind of camera: something that could enable a new way of storytelling and creating digital content. We are proud and excited to be investing in a company developing foundational technology with the potential to create a new category.

Runway was founded in 2018 by Cris, Anastasis, and Alejandro. The initial idea came out of Cris’ thesis project at the Interactive Telecommunications Program at NYU, where he met his co-founders while researching applications of machine learning models for image and video use in the creative domains. Informed by their own experiences as artists, the Runway co-founders set out to answer the question of how a well-built digital tool could make complex ML models more approachable and remove the need for a deep technical background, giving artists and designers better access to state-of-the-art machine intelligence. Runway’s mission was to democratize access to machine learning so more people can start thinking of new and creative ways to use those models.

The Solution

Runway first launched in 2019 as a model directory that enabled others to deploy and run open-source machine learning models for a variety of use cases. As the model directory and its user base grew, the team saw a usage pattern emerge that led Runway to commit to building more deeply around ML-enabled video editing tools. Staying true to its mission, Runway created tools that require no training to use, unlike many other professional tools in the video editing and visual effects space. Today, Runway develops not only AI-powered editing tools but also text-to-image, video-to-video, and text-to-video generation products powered by a set of proprietary state-of-the-art generative models. It provides a full-stack platform from model research to end-user-facing applications, which is one of our core theses in partnering with the Runway team.

Since 2020, Runway has invested heavily in foundational research that helps power these tools. Together with LMU Munich, Runway Research co-published the foundational paper and model “High-Resolution Image Synthesis with Latent Diffusion Models,” which gave birth to Latent Diffusion in December 2021 and Stable Diffusion in August 2022. The Runway Research team’s latest breakthroughs are two video generation models: “Structure and Content-Guided Video Synthesis with Diffusion Models,” also known as Gen-1 (video-to-video, released in March 2023), and Gen-2 (text-/image-/video-to-video, released in early June 2023), which builds on top of Gen-1. Gen-2 is the only commercially available multi-modal AI system with text-to-video capability today: users can type in text prompts and generate synthesized videos in any style. Users can also supply reference or input images and videos to tune the outputs. The videos are not of final production quality yet – as Cris would put it, this is equivalent to when cameras were still in the era of black-and-white photography – but the technology holds significant potential as it moves toward higher fidelity, and Runway is continuously training these models in that direction. Overall, including these generative tools, Runway has 30+ “AI Magic Tools” that serve different aspects of the creative process.

As we talked with Runway customers, it became clear that Runway, and its users, are continuously innovating in the ways the platform can be used to solve business problems. The most immediate use case is around video editing and visual effects for the creative industries (e.g., film, animation, digital marketing) and marketing functions horizontally. For instance, visual effect editors for the award-winning movie Everything Everywhere All at Once and graphics designers for The Late Show with Stephen Colbert have used Runway to create and edit scenes and videos. Runway has done a great job of organically acquiring users at an incredible speed and converting them into paying customers, which speaks to the potential of its products. 

Why we’re backing Runway

As we look to invest across modalities at the model layer of the AI stack beyond text generation, Runway has the rare combination of a team and products that can completely reinvent the category in which they’re operating. We have been big fans of Runway’s vision since day one and are impressed by their speed of innovation and shipping new products. Over the course of our meetings, it became clear that Cris, Anastasis, and Alejandro are the right leaders to build Runway into a generational company.

Runway’s products create tangible outputs that demonstrate the creativity and ingenuity of the technology underlying them, which is the foremost reason for our investment. While the AI-powered editing tools can apply the necessary editorial steps in a matter of a couple of clicks, the truly revolutionary piece of technology is the video generation capability, and Runway is likely the leading company in the world today in building it. Its Gen-1 and Gen-2 models support high-quality artistic and stylized text-, image-, and video-to-video generation that can replace manual creative work. The company is also actively and quickly innovating to make the models more powerful and to move toward higher-fidelity videos that can be more photorealistic or follow a certain style more closely when needed. As we saw with ChatGPT, a powerful and easy-to-use product can go viral quickly, and we think Runway’s products are showing signs of that kind of virality. The speed of innovation and shipping products is also a key part of their success, as we are seeing with the cadence of product and research releases in recent months.

Runway has an amazing founding team that is product-obsessed and understands the need to innovate and build quickly. Cris is highly focused on building great products; his vision is to continuously build better and better products that will sell themselves. Anastasis (co-founder and CTO) complements Cris’ product acumen with his technical background. He was a computer vision researcher at IBM Research and is one of the co-authors behind Gen-1 and Gen-2. Lastly, Alejandro (co-founder and Chief Design Officer) completes the founding team with his experience in design. He was previously a research resident at NYU and co-founded Material, a graphic design studio, and Ediciones DAGA, an independent art book publishing house. The team understands the need for speed and for building on existing research to push the boundaries of what Runway’s products can do to really capture this market. They have already built so much with a relatively small team that is only now approaching ~50 people. Cris and his team have also referenced incredibly well among investors and other leading entrepreneurs within the Salesforce Ventures portfolio.

Runway also serves a large addressable market that will only expand as its products get better. Given the nascency of this new category of content creation and the breadth of use cases Runway touches and has the potential to address, the company’s addressable market is still evolving, but directionally and intuitively, we know the market is big. Runway sits at the intersection of content editing and visual effects, both of which are multi-billion-dollar software markets. Given the large percentage of individual creators, the company has mostly attracted “prosumers” so far, but we are beginning to see enterprise traction, and that is an area we are super excited about.

What’s ahead?

The realm of human creativity is continuously evolving and adapting to new technology. When photography was invented, it gave voice to artists who lacked the means to enroll in art schools. Similarly, video art in the 1960s allowed more female artists to become vanguards of artistic expression in America. In today’s world, video is rapidly becoming one of the most popular forms of digital content consumed globally, and everyone can be a creator. As generative AI capabilities seep into other modalities, video is due for an upgrade as well. Runway’s approach of putting AI in the hands of all creators is a very exciting mission, and one we are proud to support.

Welcome to Salesforce Ventures, Runway!

Welcome, Mnemonic!

  • Founders: Andrii Yasinetsky, Elena Ikonomovska, Ben Metcalfe
  • Sector: Web3, Data and Analytics
  • Location: San Francisco, CA

The Opportunity

Mnemonic is solving the unique and increasingly complex challenges of NFT data by building a data and analytics platform to enable developers and business users to derive insights from their data. They serve brands, builders, and enterprises creating experiences in Web3 with APIs that bring rich Web3 analytics, audience insights, and customizable audience segmentation abilities.

The Solution 

NFTs enable the tokenization of everything in both the physical and digital world, with the opportunity to reflect provenance, ownership, valuation, and other characteristics. This data can then be used to enrich existing data sets – for example, by unifying Web3 and Web2 identities to build a full customer picture of an individual. However, the complexities of the Web3 ecosystem require bespoke solutions that help enterprises of all sizes not only access relevant data but also turn it into actionable insights that fulfill their objectives and enable them to build personalized, memorable, and safe products and experiences in Web3. Mnemonic solves this by live-indexing on-chain data into real-time product data, providing a highly available and reliable platform built for enterprises. Mnemonic’s B2B API platform provides instant access to NFT data, collection analytics, and insights into billions of transactions. The platform enables developers to power Web3 experiences at scale, including Web3 wallets, social media, analytics tools, and Web3 marketing platforms.

Why we’re backing Mnemonic 

At Salesforce Ventures, we have seen enterprises increasingly exploring and requesting services to facilitate Web3 strategies, and we see the business use cases around NFTs rapidly evolving. One example is brands exploring how a commerce site may distribute NFTs. The brand can then target the community of NFT holders to create personalized, omnichannel experiences that drive greater customer engagement and long-term customer relationships. Mnemonic supports this by providing powerful audience insights and wallet segmentation capabilities to fully understand the performance of a collection and the behavior of its holders, and to identify new opportunities to reach and engage new audiences. The platform also enables creators and brands to better analyze and understand their fans by gaining insight into collection owners, with the opportunity to drive better engagement.

In supporting Mnemonic, we are also partnering with exceptional co-founders Andrii Yasinetsky and Elena Ikonomovska. The team has decades of experience building large-scale infrastructure, data science, machine learning, and big data applications. Andrii and Elena combine excellent technical backgrounds (from companies including Uber, Google, and Reddit) with a visionary understanding of the powerful potential of Web3. 

We see a huge opportunity for companies building the infrastructure and applications that will enable broader adoption of blockchain technology. Mnemonic’s data and analytics platform solves the complex challenges of NFT data and enables customers to quickly benefit from powerful analytics and intelligence capabilities across Web3. We are excited to invest in best-in-class companies building enterprise solutions in Web3 and to play a key role in supporting partners in the ecosystem. Indeed, Mnemonic was most recently announced as an official launch partner for Base, Coinbase’s L2 scaling solution, in addition to its existing partnerships with leading Web3 players including Polygon.

What’s Ahead 

This is just the start – we are only in the earliest innings of exploring the full utility of Web3 technology. Traditional enterprises and brands are increasingly engaging with NFT strategies and exploring use cases that include digital twins, metaverse commerce, supply chain inventory, and provenance, as well as advertising and marketing. Salesforce Ventures is excited about the potential for NFTs and wallet analytics to enable unique engagement between enterprises and their customers through enriched customer profiles and real-time insights into new audiences. Mnemonic is building a platform that plays a key role in enabling enterprises and brands to derive value from their Web3 strategies.

Welcome, Cohere!

  • Founders: Aidan Gomez, Nick Frosst, Ivan Zhang
  • Sector: Artificial Intelligence
  • Location: Toronto, Canada

The Cohere Vision

The invention of the Transformer model at Google Brain in 2017 was a revolutionary breakthrough that spread across Google’s product portfolio. The new model architecture was introduced in the paper “Attention Is All You Need,” which pioneered a new approach to neural-network NLP that captured the context and meaning of words more accurately than previous NLP models and serves as the underpinning of language models today. Aidan Gomez, the CEO and co-founder of Cohere, was a research intern at Google Brain at the time and one of the co-authors of the Transformer paper. Two years later, the impact and benefit of the Transformer model still hadn’t widely caught on outside of Google. As a result, Aidan, along with his co-founders Nick Frosst (who collaborated with Geoffrey Hinton as Hinton’s first employee at Google Brain’s Toronto lab) and Ivan Zhang (who worked alongside Aidan at FOR.ai, an independent research group), founded Cohere in early 2019 to bring “Google-quality AI to the masses.” Fast forward to today, and they have successfully done that, with a clear vision to build the leading AI platform for enterprises, offering data-secure deployment options in companies’ existing cloud environments, customization, and customer support.

Instead of focusing on artificial general intelligence (“AGI”) or creating large language models (“LLMs”) with the highest number of parameters, Cohere has chosen to focus on building a generative AI platform for enterprises that is easy to access, customizable, and secure. Cohere’s AI platform can be trained or fine-tuned for the specific use cases most relevant to customers, including writing content, building conversational chatbots, aggregating customer feedback and analyzing sentiment, and content moderation. Cohere has already found notable applications with leading companies, including Jasper and HyperWrite for copywriting generation tasks such as creating marketing content, drafting emails, or developing product descriptions, and LivePerson, with whom Cohere is partnering to deliver custom LLMs for customer engagement and turn conversations into live actions.

While its existing AI suite encompasses a significant amount of enterprise use cases today, Cohere continues to layer on more advanced capabilities. The company is working on retrieval-augmentation to ensure generation remains grounded in facts and action-based models that can interface with and drive external systems, allowing agents to take actions and drive processes. These models have the potential for higher quality automation and more complex workflows that customers can utilize.

The focus on being enterprise-facing has always been part of Cohere’s DNA and has instilled in the company a continuous desire to prioritize security and privacy. The platform is built to be available on every cloud provider — deployed inside a customer’s existing cloud environment, virtual private cloud (VPC), or even on-site — to meet companies where their data is. This is oftentimes a significant hurdle for customers, and Cohere addresses it head-on. The company’s vision is always to push for better and safer models that customers can easily use while putting a premium on privacy and data protection.

Why we’re backing Cohere

We are eager to invest across the AI stack, from the foundational AI layer, to tooling that helps companies train, fine-tune, and deploy models, to applications that have unique data or distribution moats. The foundational layer is an area where we see significant value accruing, and Cohere is one of the key players.

Many founders building at this foundational layer come from specific academic backgrounds, given the deep technical know-how required for research and development, which often means they don’t have as much experience with GTM and business building. That is why, when we first met Aidan and his co-founders, we were pleasantly surprised by how sophisticated and mature the team already was at speaking the language of their customers and articulating a clear vision for GTM. Over the course of our diligence, we further built conviction in the technical and commercial potential of the business, as well as a level of trust and confidence in Aidan and the leadership team he has gathered at Cohere’s helm. We believe Cohere is one of very few leading foundational model players that can capitalize on the generative AI paradigm shift. Cohere is one of the first few investments we made from the new $250 million Generative AI fund Salesforce Ventures launched in March 2023. Its distinctive enterprise-focused approach aligns with our own mission and values – we are excited to partner with Cohere and strengthen the relationship between our two companies.

Cohere is uniquely poised to capture the enterprise AI market: the company has industry-leading foundational AI, has remained independent and cloud-agnostic, ensures data security for its customers, and is backed by a top-tier team with both a strong technical background and enterprise experience. Customers have consistently referred to Cohere as one of the most advanced AI providers in the world. And while academic benchmarks have their limitations, the Stanford HELM results (as of June 2023) indicate that Cohere is currently leading the pack in accuracy and fairness. Cohere also recently released the first-ever publicly available multilingual understanding model trained on authentic data from native speakers – it is equipped to read and understand over 100 of the world’s most commonly spoken languages.

Cohere’s technical prowess comes from and is supported by a deep bench of great hires. Despite the intense competition for talent in AI/ML, Aidan and his founding team have successfully attracted well-known researchers in the NLP space, including Phil Blunsom (Chief Scientist at Cohere), who led DeepMind’s Natural Language team for 7 years (2014-2022) and has been a professor of computer science at Oxford since 2009, as well as Nils Reimers (Director of Machine Learning at Cohere), who built one of the most popular open-source models (SBERT) and brings significant experience in embeddings. Both the Chief Product Officer, Jaron Waldman, and the SVP of Engineering, Saurabh Baji, have years of experience building products for enterprise use cases from their time at Apple and AWS, respectively. Jaron himself is a two-time founder who has built amazing products and successfully sold his startups to Apple and Rakuten.

On top of that, Cohere has demonstrated early GTM maturity with a laser focus on serving enterprise customers. Cohere has shown a clear and thorough understanding of how to drive GTM strategy in this space and create value for enterprise customers. The company is highly focused on building deep partnerships with a select group of enterprise businesses to drive the flywheel effect of selling to their downstream customers. Cohere can create custom models and provide custom re-training to hone in on exactly what these enterprise customers need. It built out capabilities for AWS SageMaker and VPC or on-prem deployments early on, which is something these customers look for right away. Additionally, the GTM team is led by a veteran experienced in selling into the enterprise market: Martin Kon, President & COO at Cohere, who was formerly YouTube’s CFO and a senior partner at BCG. Cohere’s keen focus on building customer-facing products has also allowed it to be more capital-efficient with training and hosting models.

What’s ahead?

As we mentioned in a previous investment blog, we believe we are at the beginning of the generative AI revolution, and we are still in the very first inning of putting AI into production for enterprise use cases. As customers become more and more sophisticated in choosing the right AI partners, the developers of AI are constantly pushing the boundaries of what models can do and striving to build something better, faster, and more powerful. Cohere is one of the companies innovating at the frontier and is well set up in this environment, with the right team and the right focus. We are beyond excited to be investing in this team and helping them accelerate the development of Cohere’s world-class AI platform and empower enterprises around the world to build incredible products.

Welcome to Salesforce Ventures, Cohere!

Welcome, Anthropic!

The Anthropic Vision

The advancements in foundational models over the last couple of years have precipitated a paradigm shift in how everyone uses, thinks about, and builds technology. While the landscape of AI research is ever-changing, one area of focus that has persisted since day one is the safe and reliable use of AI systems. This has only received more attention in recent months as technologists, industry luminaries, and governments from around the world have come out to debate the potential benefits and harms AI can bring to humanity and the right framework for implementing regulation.

Anthropic is at the forefront of innovation driving this paradigm shift and has preempted this debate since 2019, when its founding team left OpenAI to make a concentrated bet on AI safety. Anthropic was founded by Dario Amodei (previously VP of Research at OpenAI), his sister, Daniela Amodei (previously VP of Safety and Policy at OpenAI), and a team of amazing former OpenAI researchers. Since its founding, the team has trained one of the most capable large language models (“LLMs”) in the world today, called Claude. Claude is a general-purpose model that excels at a wide range of tasks, from sophisticated dialogue and creative content generation to Q&A, coding, detailed instruction following, and reasoning. Recently, Anthropic released a lighter, less expensive, and much faster option called Claude Instant, which can handle similar tasks such as casual dialogue, text analysis, summarization, and document question-answering. Customers can request access to Claude and Claude Instant via API and try Claude in Slack.

Claude is based on Anthropic’s research breakthrough in Constitutional AI, a unique approach the company is taking toward AI safety. As AI systems become more capable, they can supervise other AI systems with self-critique and self-revision – the only human oversight is a predetermined set of principles. Claude was trained using Constitutional AI to promote the principles of helpfulness, harmlessness, and honesty in its outputs and to prevent misuse and harmful or undesirable behaviors. The self-improvement aspect of Constitutional AI also allows Anthropic to improve model safety without sacrificing model performance: Claude can take a limited amount of the highest-quality human feedback, create synthetic data modeled on this feedback, and train itself on that data. This allows Anthropic to provide increased transparency and steerability while still maintaining a high degree of natural language fluency.

Beyond its focus on AI safety and the high quality of its models, Anthropic is also winning enterprise customers’ mindshare with an increasing emphasis on customization, which will drive long-term defensibility. It enables customization in a few different ways, such as incorporating a customer’s expert human-labeled feedback into the model to make it better at specific tasks, or working with the customer to create a “constitution” based on the company’s values and branding to shape the model’s overall behavior.

Why we’re backing Anthropic

Claude’s capabilities and Dario’s long-term vision for the company quickly captivated us. As we spent more time with the Anthropic team, it was clear that Anthropic and Salesforce share a clear vision for creating innovative technology that is rooted in safety. We built strong conviction that Anthropic is one of very few foundational model players that have built the right technology and have the right team to capitalize on this paradigm shift. Anthropic is one of the first investments we made out of the new $250 million Generative AI fund Salesforce Ventures launched in March 2023. We are excited to partner with Anthropic and strengthen the relationship between our two companies. 

As we look at the AI tech stack, we believe value will accrue at the core foundational model layer. Anthropic is one of a few companies operating at this layer that has built technological differentiation and is well-positioned to maintain its lead. The technical know-how needed to train foundational models to a highly performant state is rare, and the capital needed to purchase compute for model training and hosting creates a natural barrier that prevents heavy proliferation. While the open-source side of the AI community has made more progress recently, there are still meaningful questions around safety of use and a clear and significant gap between the performance of open-source and closed-source models in production. Anthropic’s models are viewed as some of the best in the world by customers, developers, and academic institutions. The important research the Anthropic team has published around Constitutional AI, reinforcement learning from human feedback (“RLHF”), and other topics also indicates that it has the technical capabilities to compete today. Both open- and closed-source models will continue to evolve, and there will be demand for both – the future will be hybrid. In the meantime, Anthropic is well positioned to support growth and push forward the next generation of Claude.

On top of that, Anthropic has an A+ team with an incredibly strong research background. Anthropic is led by the former head of research at OpenAI, Dario Amodei, who spearheaded the GPT-2 and GPT-3 projects. The rest of the founding team and technical leadership come from OpenAI, Google Brain, and Baidu (leaders in AI research who co-authored foundational papers at OpenAI). One member of the founding and leadership team is Jared Kaplan, a theoretical physicist by training who taught at Johns Hopkins for over 10 years. Jared was a research consultant at OpenAI and developed the alignment model that is essential to Constitutional AI and Claude. A specialized research background is required to maintain a technical advantage and truly compete in a fast-moving environment where new model developments are happening weekly, if not daily. The gravitas and mindshare Anthropic holds in the research community also help it attract the right talent in an arena where competition for talent is becoming more intense.

Constitutional AI remains a key differentiator and aligns well with Salesforce values. Anthropic’s research team pioneered the concept of Constitutional AI which enables models to be trained with a set of constraints designed to promote helpfulness, harmlessness, and honesty. This approach and its continuous focus on safe AI aligns well with Salesforce’s vision for trusted generative AI and is a key differentiator as AI safety is always top of mind for Salesforce and its customers. 

What’s ahead?

We are in the first inning of generative AI adoption. A new survey of more than 500 senior IT leaders conducted by Salesforce reveals that 67% are prioritizing generative AI for their businesses within the next 18 months, with one-third naming it a top priority. Most believe generative AI is a “game changer” with the potential to help them better serve their customers, take advantage of data, and operate more efficiently. As the survey results suggest, enterprise usage of AI systems will continue to grow, and customers will only become more mature and sophisticated in evaluating AI technology. Concurrently, regulatory changes are inevitable and will help create a structure for AI system deployments. Anthropic is well positioned to address the challenges coming out of this environment, and we expect Claude to become one of the go-to LLM partners for customers.

Welcome to Salesforce Ventures, Anthropic!