The Anthropic Vision
The advancements in foundational models over the last couple of years have precipitated a paradigm shift in how we use, think about, and build technology. While the landscape of AI research is ever-changing, one area of focus has persisted since day one: the safe and reliable use of AI systems. It has only received more attention in recent months as technologists, industry luminaries, and governments around the world have debated the benefits AI could deliver to humanity, the harms it could inflict, and the right framework for regulation.
Anthropic is at the forefront of the innovation driving this paradigm shift and has anticipated this debate since its founding in 2021, when its founding team left OpenAI to make a concentrated bet on AI safety. Anthropic was founded by Dario Amodei (previously VP of Research at OpenAI), his sister Daniela Amodei (previously VP of Safety and Policy at OpenAI), and a team of former OpenAI researchers. Since its founding, the team has trained one of the most capable large language models (“LLMs”) in the world today, called Claude. Claude is a general-purpose model that excels at a wide range of tasks, from sophisticated dialogue and creative content generation to Q&A, coding, detailed instruction following, and reasoning. Recently, Anthropic released a lighter, less expensive, and much faster option called Claude Instant, which can handle similar tasks such as casual dialogue, text analysis, summarization, and document question-answering. Customers can request access to Claude and Claude Instant via API and try Claude in Slack.
Claude is based on Anthropic’s research breakthrough in Constitutional AI, the company’s unique approach to AI safety. As AI systems become more capable, they can supervise other AI systems through self-critique and self-revision – the only human oversight is a predetermined set of principles, the “constitution.” Claude was trained with Constitutional AI to promote helpfulness, harmlessness, and honesty in its outputs and to prevent misuse and other harmful or undesirable behaviors. The self-improvement loop at the heart of Constitutional AI also allows Anthropic to improve model safety without sacrificing performance: Claude can take a limited amount of the highest-quality human feedback, create synthetic data modeled on that feedback, and train on it. This lets Anthropic provide greater transparency and steerability while maintaining a high degree of natural-language fluency.
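In code, the critique-and-revision loop described above can be pictured with a schematic sketch. This is not Anthropic’s implementation – the function names and stub behaviors below are hypothetical stand-ins for real language model calls – but it shows the control flow: generate a draft, critique it against each constitutional principle, revise, and keep the final output as synthetic training data.

```python
# Schematic sketch of a Constitutional AI critique-and-revision loop.
# model_generate / model_critique / model_revise are hypothetical stand-ins
# for real LLM calls; here they are trivial stubs so the flow is runnable.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that are toxic, dangerous, or illegal.",
]

def model_generate(prompt: str) -> str:
    # Stand-in for an initial LLM completion.
    return f"Draft answer to: {prompt}"

def model_critique(response: str, principle: str) -> str:
    # Stand-in for asking the model to critique its own output
    # against one constitutional principle.
    return f"Critique of draft under principle: {principle}"

def model_revise(response: str, critique: str) -> str:
    # Stand-in for asking the model to rewrite its response
    # in light of the critique.
    return response + " [revised]"

def constitutional_refine(prompt: str, principles: list[str]) -> str:
    """Generate, then iteratively self-critique and self-revise against
    each principle. The resulting (prompt, response) pairs can later
    serve as synthetic supervised training data."""
    response = model_generate(prompt)
    for principle in principles:
        critique = model_critique(response, principle)
        response = model_revise(response, critique)
    return response
```

The key design point is that the human contribution is confined to writing the list of principles; the critique and revision steps are performed by the model itself.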
Beyond its focus on AI safety and the high quality of its models, Anthropic is also winning enterprise customers’ mindshare with an increasing emphasis on customization, which will drive long-term defensibility. It enables customization in a few ways, such as incorporating a customer’s expert human-labeled feedback into the model to make it better at specific tasks, or working with the customer to create a “constitution” based on the company’s values and branding that shapes the model’s overall behavior.
Why we’re backing Anthropic
Claude’s capabilities and Dario’s long-term vision for the company quickly captivated us. As we spent more time with the Anthropic team, it became clear that Anthropic and Salesforce share a vision for creating innovative technology rooted in safety. We built strong conviction that Anthropic is one of the very few foundational model players with both the technology and the team to capitalize on this paradigm shift. Anthropic is one of the first investments from the $250 million Generative AI fund Salesforce Ventures launched in March 2023. We are excited to partner with Anthropic and strengthen the relationship between our two companies.
As we look at the AI tech stack, we believe value will accrue at the core foundational model layer. Anthropic is one of a few companies operating at this layer that has built technological differentiation and is well positioned to maintain its lead. The technical know-how needed to train foundational models to a high level of performance is rare, and the capital required to purchase compute for model training and hosting creates a natural barrier to heavy proliferation. While the open-source side of the AI community has made real progress recently, meaningful questions remain around safety of use, and there is still a clear, significant gap between the performance of open-source and closed-source models in production. Anthropic’s models are viewed as some of the best in the world by customers, developers, and academic institutions. The important research the Anthropic team has published on Constitutional AI, reinforcement learning from human feedback (“RLHF”), and related topics also indicates that it has the technical capabilities to compete today. Both open- and closed-source models will continue to evolve, and there will be demand for both – the future will be hybrid. In the meantime, Anthropic is well positioned to support growth and push forward the next generation of Claude.
On top of that, Anthropic has an A+ team with an exceptionally strong research background. Anthropic is led by Dario Amodei, the former head of research at OpenAI, who spearheaded the GPT-2 and GPT-3 projects. The rest of the founding team and technical leadership come from OpenAI, Google Brain, and Baidu – leaders in AI research who co-authored key papers at OpenAI. One member of the founding and leadership team is Jared Kaplan, a theoretical physicist by training who taught at Johns Hopkins for over 10 years. Jared was a research consultant at OpenAI and developed the alignment approach that is essential to Constitutional AI and Claude. A specialized research background is required to maintain a technical advantage and truly compete in a fast-moving environment where new model developments land weekly, if not daily. The gravitas and mindshare Anthropic holds in the research community also help it attract the right talent in an arena where competition for talent is intensifying.
Constitutional AI remains a key differentiator and aligns well with Salesforce values. Anthropic’s research team pioneered the concept of Constitutional AI, which enables models to be trained with a set of constraints designed to promote helpfulness, harmlessness, and honesty. This approach, together with Anthropic’s continuous focus on safe AI, aligns well with Salesforce’s vision for trusted generative AI and is a key differentiator, as AI safety is always top of mind for Salesforce and its customers.
We are in the first inning of generative AI adoption. A Salesforce survey of more than 500 senior IT leaders reveals that 67% are prioritizing generative AI for their businesses within the next 18 months, with one-third naming it a top priority. Most believe generative AI is a “game changer” with the potential to help them better serve their customers, take advantage of their data, and operate more efficiently. As these results suggest, enterprise usage of AI systems will continue to grow, and customers will only become more mature and sophisticated in evaluating AI technology. Concurrently, regulatory changes are inevitable and will help create a structure for AI system deployments. Anthropic is well positioned to address the challenges of this evolving environment, and we expect Claude to become one of the go-to LLM partners for customers.
Welcome to Salesforce Ventures, Anthropic!