AI Needs Interpretability

Why we're investing in Goodfire.

February 5, 2026
  • Founders: Eric Ho, Daniel Balsam, Thomas McGrath
  • Sector: AI Infrastructure
  • Location: San Francisco, CA

More than three years into the generative AI era, enterprise customers care more than ever about the ROI of their AI investments. In a recent UBS survey of 130 IT executives, “unclear ROI” was the most commonly cited challenge to enterprise AI adoption. From our own observation, a big driver of that concern is that enterprises cannot steer AI models to behave reliably and consistently. For these reasons and more, we’re excited to partner with Goodfire as they build the foundational infrastructure for understanding and intentionally designing modern AI.

Goodfire uses interpretability to solve for AI’s lack of reliability and control. While traditional approaches treat models as “black boxes,” Goodfire inspects model internals directly. This provides a precise, legible signal of a model’s intent and reasoning, moving the industry from guessing at behavior to measuring it and shaping it deliberately, so that users can change a model’s behavior in the ways they want.
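To make the contrast with black-box methods concrete, here is a minimal sketch of what inspecting model internals can look like in practice: capturing a transformer layer’s hidden activations with a PyTorch forward hook. The model (GPT-2), layer choice, and hook are our own illustrative assumptions, not a description of Goodfire’s platform.

```python
# Illustrative sketch: read a transformer layer's hidden activations
# with a forward hook. Model and layer index are arbitrary choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

captured = {}

def save_activations(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 holds the hidden states.
    captured["layer_6"] = output[0].detach()

# Attach the hook to one transformer block (layer 6, chosen arbitrarily).
handle = model.transformer.h[6].register_forward_hook(save_activations)

enc = tokenizer("Interpretability reads the model, not just its outputs.", return_tensors="pt")
with torch.no_grad():
    model(**enc)
handle.remove()

print(captured["layer_6"].shape)  # (batch, sequence_length, hidden_size)
```

Interpretability methods analyze tensors like this one to find directions and features that correspond to legible concepts, which is what turns raw activations into a usable signal.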

This creates a powerful new feedback loop for the enterprise. Insights from Goodfire’s platform can guide model training, fine-tuning, and alignment strategies, helping teams improve reliability and safety and design model behavior without relying solely on prompts or black-box training methods. The same visibility into a model’s inner workings lets scientists extract novel discoveries directly from powerful scientific AI models, as demonstrated by Goodfire’s recent identification of a novel class of Alzheimer’s biomarkers with Prima Mente.

Recently, the company developed new methods to target specific parts of a model’s internals, enabling more exact modifications to its behavior. The ability to precisely design and debug a model’s behavior will let more AI use cases reach production and open up the possibility of working reliably across different models, including open-source ones.
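As a hedged illustration of what a targeted modification can look like, below is a sketch of one generic technique from the interpretability literature, activation steering: adding a direction vector to a layer’s hidden states at inference time. The random vector, model, and layer are placeholders of ours, and this generic technique should not be read as Goodfire’s actual method.

```python
# Illustrative sketch: steer generation by shifting one layer's hidden
# states along a fixed direction. The direction is random here; a real
# interpretability workflow would derive it from analysis of the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

steering_vector = torch.randn(model.config.hidden_size) * 0.1  # placeholder direction

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output.
    hidden = output[0] + steering_vector
    return (hidden,) + output[1:]

handle = model.transformer.h[6].register_forward_hook(steer)
enc = tokenizer("The model now", return_tensors="pt")
out = model.generate(**enc, max_new_tokens=10, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
handle.remove()
```

The appeal of this kind of edit is that it changes behavior at a specific, inspectable point in the computation rather than through opaque retraining.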

Goodfire was founded by Eric Ho, Daniel Balsam, and Thomas McGrath. Eric and Daniel built one of the first applications in applied AI through their prior company, RippleMatch, a recruitment automation platform. Thomas, Goodfire’s Chief Scientist, is a pioneer in interpretability who co-founded the interpretability team at DeepMind and remains a highly respected researcher in the field. In just a few years, Goodfire has assembled a team that includes many of the field’s leading interpretability researchers. We believe Goodfire is the best independent team doing interpretability research and building real products on top of that research, democratizing access to interpretability for enterprises and model labs alike.

Goodfire is building critical infrastructure for the next phase of AI deployment, and we’re proud to support Eric, Daniel, Thomas, and the entire team as they turn interpretability into a production-ready standard for the industry.

Welcome to Salesforce Ventures, Goodfire!