
In Conversation With Anthropic Co-Founder Tom Brown

Salesforce Ventures hosted a fireside chat with Anthropic co-founder Tom Brown and Salesforce President David Schmaier.

Salesforce Ventures
December 19, 2023

Salesforce Ventures recently hosted a dinner and networking event for a group of portfolio companies and Fortune 500 executives in San Francisco. The evening was highlighted by a fireside chat between Tom Brown, co-founder of Salesforce Ventures’ portfolio company Anthropic, and Salesforce President and Chief Product Officer David Schmaier. 

The duo discussed the origins of Anthropic, the imperative of AI safety, techniques for generating better LLM outputs, generative AI use cases, improving AI accuracy, the prospect of artificial general intelligence (AGI), and how to ensure AI will be used as a force for good in the world. 

Their conversation featured a ton of great insights for founders, builders, and AI enthusiasts alike. Here are a few of our top takeaways*…

*Quotes have been edited for clarity and concision

On Anthropic’s approach to creating ‘harmless’ AI…

“Many models are trained using reinforcement learning from human feedback (RLHF). The idea behind RLHF is that you reward or punish the model for doing well or poorly on the tasks you care about. We had people upvoting or downvoting how well the model does a task. That’s how you can turn the model into a harmless assistant.”

“I think we noticed that as the models were getting smarter, they started to do most tasks well. We developed constitutional AI to turn a model into the entity that upvotes or downvotes another model. A person can write up a constitution of what it means to be helpful, harmless, and honest, and then a model will read the interactions between the human and the assistant and consider if the assistant is acting in accordance with the constitution. This is a way to take a simple document and turn it into a model personality.”
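
To make the “model as judge” idea concrete, here’s a rough sketch in Python of how a written constitution could be turned into an automated reviewer. The constitution text, prompt wording, and upvote/downvote scheme below are our own illustrative assumptions, not Anthropic’s actual training setup:

```python
# Minimal sketch of constitutional AI's "model as judge" step.
# The constitution, prompt, and UPVOTE/DOWNVOTE format are illustrative
# assumptions; Anthropic's real training pipeline is more involved.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CONSTITUTION = (
    "The assistant should be helpful, harmless, and honest. "
    "It should decline requests that could cause harm."
)

def judge(user_message: str, assistant_reply: str) -> str:
    """Ask a model whether an assistant reply follows the constitution."""
    response = client.messages.create(
        model="claude-2.1",
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": (
                f"Constitution:\n{CONSTITUTION}\n\n"
                f"Human: {user_message}\n"
                f"Assistant: {assistant_reply}\n\n"
                "Does the assistant's reply follow the constitution? "
                "Answer UPVOTE or DOWNVOTE, then give a one-sentence reason."
            ),
        }],
    )
    return response.content[0].text
```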

On the impact of ‘stacking’ LLMs to generate higher-quality outputs…

“We have Claude 2.1, which is a large model, and then we have Claude Instant, which is a smaller model. Depending on the task, sometimes you’ll want a smaller model because it’s faster and cheaper. For example, Midjourney is one of our customers. Whenever you put a prompt into Midjourney to generate an image, it passes the text through Claude Instant, and Claude Instant checks whether it violates Midjourney’s terms of service. If it thinks it might, it’ll give a little message to the user saying ‘this might be against our terms. Do you want to appeal it?’ And if you hit yes, it goes to Claude Instant’s boss, which is Claude 2.1, who thinks about it a little bit longer, and maybe says, ‘Sorry, Claude Instant was totally wrong. You’re fine actually.’”
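
The pattern Brown describes is a two-tier cascade: a fast, cheap model screens every prompt, and a larger model reviews appeals. Here’s a rough sketch using the Anthropic Python SDK; the model identifiers are real, but the prompt wording and yes/no parsing are our own illustrative assumptions, not Midjourney’s actual integration:

```python
# Sketch of a two-tier moderation cascade: Claude Instant screens every
# prompt, and Claude 2.1 reviews appeals. Prompt wording and answer
# parsing are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

def violates_tos(prompt: str, model: str) -> bool:
    """Ask the given model whether a prompt breaks the terms of service."""
    response = client.messages.create(
        model=model,
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Does the following image-generation prompt violate a "
                f"typical service's terms of use? Prompt: {prompt!r}\n"
                "Answer YES or NO."
            ),
        }],
    )
    return response.content[0].text.strip().upper().startswith("YES")

def moderate(prompt: str, user_appeals: bool) -> bool:
    """Return True if the prompt should be allowed through."""
    # First pass: the small, fast model screens every prompt.
    if not violates_tos(prompt, model="claude-instant-1.2"):
        return True
    # Flagged: escalate to the larger model only if the user appeals.
    if user_appeals:
        return not violates_tos(prompt, model="claude-2.1")
    return False
```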

On building domain-specific models…

“There are two ways I think about building a domain-specific model. One is to take a large model and fine-tune it to make it better at a specific task. The other is to build a narrow model that’s good at one specific thing. Claude Instant is faster and cheaper than Claude 2.1, but less performant. You can fine-tune either one, but Claude 2.1 will still do better. So that’s my normal mental model for what’s going on with fine-tuning. You could imagine someone building a very narrow model that’s very good at one specific thing, but fine-tuning a larger model is what people have actually succeeded at, rather than making a super small model good at some narrow field.”

On new AI use cases…

“I think code is a place where the models do great. Retrieval-augmented generation (RAG) for question answering is a broad area where the models can be really useful. I feel like customer service is an area that fits that very well.”
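
For readers new to RAG, the core loop is simple: retrieve the documents most relevant to a question, then have the model answer using only that context. Here’s a minimal sketch; the toy keyword retriever and two-line knowledge base stand in for the vector-embedding search a production customer-service bot would typically use:

```python
# Minimal sketch of retrieval-augmented generation (RAG) for customer
# support. The knowledge base and keyword retriever are toy stand-ins;
# production systems usually retrieve with vector embeddings.
import anthropic

DOCS = [
    "Refunds are available within 30 days of purchase.",
    "Premium support is included with all enterprise plans.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Toy retriever: rank docs by word overlap with the question."""
    words = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def answer(question: str) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(question))
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-2.1",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.content[0].text
```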

On cutting down on AI hallucinations…

“We have a bunch of internal metrics for hallucination rates that we measure, and a team that’s fully focused on bringing that number down. Claude 2.1 came out last month, about four months after Claude 2. On our internal dashboards it cut the error rate in half. It still hallucinates, but half as often as it used to. So if, for example, you’re seeing 98% accuracy with your model now and you need to get to 99%, maybe it’ll only take four months. If you need to get to 99.9%, it might take a year or something like that.”
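
Brown’s timeline math holds together if you assume the error rate keeps halving at the same pace. A quick back-of-the-envelope check (the constant four-month halving cadence is an extrapolation, not a commitment from Anthropic):

```python
# Back-of-the-envelope check of the "error rate halves per release" math.
# Assumes a constant four-month halving cadence, which is an extrapolation.
import math

MONTHS_PER_HALVING = 4  # Claude 2 -> Claude 2.1 took about four months

def months_to_reach(current_accuracy: float, target_accuracy: float) -> float:
    halvings = math.log2((1 - current_accuracy) / (1 - target_accuracy))
    return halvings * MONTHS_PER_HALVING

print(months_to_reach(0.98, 0.99))   # 4.0 months: one halving
print(months_to_reach(0.99, 0.999))  # ~13.3 months: "a year or something"
```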

“Also, if it’s a harder task, the model is more likely to hallucinate or make a mistake. It’s the same as if a person is trying to juggle a bunch of tasks at once. You’ll make more mistakes than if you’re just focused on one thing.”

On the prospect of AGI…

“I think we already have weak AGI. You can talk with ChatGPT or Claude and it’s somewhat intelligent, and with each year it’s improving.”

“Right now you may prompt the model to build a successful startup. It’ll try to do it, but it’ll get stuck. But as we add more compute, the model gets smarter. The model’s IQ goes up. So I think it would be surprising if the models suddenly stopped getting better. And then it’s a question of how much better can they get? One thing we don’t know is how far the model has to go for it to be an AI entrepreneur that could do a great job. But every year it seems like we’re getting more IQ points so at some point I think we’ll get models that are better at engineering than I am.”

On whether AI will be used for good or evil…

“I feel like I’m cautiously optimistic in general. People aren’t angels and people aren’t devils. People are people. So I think people will use AI for all sorts of different stuff and it’s up to us as a society to make sure that the benefits outweigh the costs. I’ve been really heartened by the recent regulatory updates. I’ve always worried that the government would get involved and do things that don’t quite make sense. But the recent AI executive order seems quite sensible. So I think I’m much more optimistic now than I was a year and a half ago.”

To learn more about Salesforce Ventures’ investment in Anthropic, click here. To read more about our Generative AI fund, click here.