Measuring AI Impact: 5 Lessons For Teams
Best practices for determining the impact of your AI efforts.
When Kate Jensen started in her role as Head of Global Revenue at Anthropic, she was tasked with measuring AI’s impact on her customers’ work — an increasingly common challenge across the industry given how AI can fundamentally change workflows.
“We were dealing with a technology that could potentially transform entire business processes, not just improve a single KPI,” Kate said at Dreamforce last year.
To measure AI impact, her team realized they had to go beyond simple ROI calculations and embrace the transformative nature of AI. In this blog post, we’ll dive deep on how to approach this essential challenge, offering lessons for any team or organization looking to determine the impact of their AI efforts.
1. Start With Imperfect Metrics
It’s hard to know the “right” way to measure AI, and the technology is so new that waiting for guidance can mean missing crucial insights and opportunities. Anthropic’s team learned to start with imperfect metrics, refining their approach as they gathered more data and insights.
For example, when deploying Claude for customer service automation at companies like United Airlines and DoorDash, Anthropic initially focused on basic metrics such as the number of customer service requests automated. This data provided a baseline and insight into where AI value was coming from.
Advice for founders: Begin with readily available metrics, but be prepared to evolve your measurement approach as you gather more data. Consider tracking:
- Number of tasks automated by AI (e.g., “200 customer inquiries handled by AI per day”).
- Cumulative time saved on repetitive processes (e.g., “45 hours per week returned to team”).
- Average resolution time reduction (e.g., “Customer issues resolved 40% faster month-over-month”).
- Employee AI tool adoption rate (e.g., “75% of eligible employees actively using AI tools weekly”).
Establish a 30-day baseline for these metrics before implementing AI. This gives you a clear “before and after” comparison to demonstrate impact.
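As a concrete illustration, here’s a minimal sketch of how a team might compute that before-and-after comparison in Python. The metric, variable names, and daily counts are invented for illustration; substitute whatever your own systems actually log.

```python
from statistics import mean

# Hypothetical daily counts of customer inquiries resolved end-to-end by AI,
# sampled from the 30-day windows before and after rollout (a few days shown).
baseline_days = [112, 98, 105, 120, 101, 99, 117, 108]
post_rollout_days = [160, 172, 155, 168, 181, 149, 175, 166]

def percent_change(before: list[float], after: list[float]) -> float:
    """Percent change in the average daily value of a metric after rollout."""
    before_avg, after_avg = mean(before), mean(after)
    return (after_avg - before_avg) / before_avg * 100

print(f"Average inquiries handled per day: "
      f"{mean(baseline_days):.0f} -> {mean(post_rollout_days):.0f} "
      f"({percent_change(baseline_days, post_rollout_days):+.1f}%)")
```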
2. Combine Quantitative Info with Qualitative Insights
The data matters, but Anthropic also benefited from direct customer feedback. Qualitative input surfaced benefits and use cases for Claude that weren’t evident from the data alone.
Kate noted several concrete applications where Claude had delivered significant value for customers:
- United Airlines and DoorDash successfully deployed Claude to automate a substantial portion of their customer service operations.
- Sales teams leveraged Claude for more effective account planning.
- Customer success teams used Claude to improve response times.
- Large enterprises used Claude Enterprise to analyze massive datasets, including up to 100 thirty-minute sales conversations, 100,000 lines of code, and 15 financial reports simultaneously.
- GitLab implemented Claude for content creation and proposal automation.
You can collect qualitative data through in-depth interviews with key users and stakeholders, as well as through focus groups. You can also analyze customer feedback and success stories, or use AI-powered sentiment analysis on user responses to surface trends and patterns that inspire product improvements.
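To make the sentiment-analysis idea concrete, here’s a hedged sketch that asks Claude to label each piece of feedback via the Anthropic Python SDK. The feedback strings and prompt are invented, and the model name is a placeholder; swap in whichever model your team uses.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Illustrative snippets of user feedback; in practice, pull these from
# interviews, support tickets, or survey free-text fields.
feedback = [
    "Claude drafts our account plans in minutes instead of hours.",
    "It still struggles with our longest support transcripts.",
]

def label_sentiment(text: str) -> str:
    """Ask Claude to classify one piece of feedback as positive, negative, or mixed."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Classify the sentiment of the following user feedback as "
                f"positive, negative, or mixed. Reply with one word.\n\n{text}"
            ),
        }],
    )
    return response.content[0].text.strip().lower()

for item in feedback:
    print(f"{label_sentiment(item)}: {item}")
```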
3. Decide Your Key Performance Indicators (KPIs)
As Claude adoption expanded, the Anthropic team realized they could go beyond those early, imperfect metrics. With a growing user base across numerous industries, the team had enough real-world usage data to identify meaningful patterns and outcomes. This critical mass of users helped the Anthropic team understand which metrics truly mattered for driving customer value.
The team worked to create a comprehensive set of KPIs — ones that captured the multi-faceted impact AI had across their user segments.
KPIs organizations may consider leveraging include:
- Business Impact: Market share expansion, new market entry (facilitated by AI), and AI-driven product innovations.
- Operational Efficiency: Reduction in error rates, improvement in decision-making speed, and enhanced forecast accuracy.
- Customer Experience: Changes in customer retention rates, customer lifetime value, and the complexity of issues that AI can handle.
- Innovation Capacity: New AI-enabled products or services, patents filed, and reduction in time-to-market for new offerings.
- Scalability and Adaptability: How well AI solutions handle more work, adapt to new data, and extend to new use cases.
You should create a scorecard for AI initiatives that captures impact across all these dimensions, and regularly adjust these metrics as AI capabilities evolve.
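One lightweight way to implement such a scorecard is as a simple data structure that pairs each dimension with baseline and current readings. This is a sketch only; the metrics, units, and numbers below are invented, and your own dimensions will differ.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float
    current: float
    unit: str

    @property
    def change(self) -> float:
        return self.current - self.baseline

# Illustrative scorecard keyed by the dimensions above; all values are made up.
scorecard: dict[str, list[Metric]] = {
    "Business Impact": [Metric("Revenue share from AI-enhanced products", 12.0, 18.5, "%")],
    "Operational Efficiency": [Metric("Error rate", 4.2, 2.9, "%")],
    "Customer Experience": [Metric("CSAT", 71.0, 78.0, "points")],
    "Innovation Capacity": [Metric("Time-to-market", 14.0, 10.0, "weeks")],
    "Scalability and Adaptability": [Metric("Cost per transaction", 0.42, 0.31, "USD")],
}

for dimension, metrics in scorecard.items():
    for m in metrics:
        print(f"{dimension}: {m.name} {m.baseline} -> {m.current} ({m.change:+.2f} {m.unit})")
```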
Tips for tracking KPIs effectively:
- Business Impact:
  - Track revenue growth from AI-enhanced products (% increase)
  - Market penetration rates in new segments (%)
  - Conversion rates from AI-powered features
- Operational Efficiency:
  - Measure process cycle time reduction (%)
  - Error rate changes (%)
  - Track decision-making speed
- Customer Experience:
  - Compare before/after customer satisfaction surveys (CSAT, NPS)
  - Measure retention rate changes (%)
  - Track response times or issue resolution rates
- Innovation Capacity:
  - Count new product features attributed to AI
  - Measure development cycle acceleration (% reduction in time-to-market)
  - Track patent applications related to AI innovations
- Scalability:
  - Measure performance under increased load (response time at 2x, 5x, 10x normal usage)
  - Track cost-per-transaction as volume increases
  - Monitor successful adaptation to new use cases (% of attempted adaptations that succeed)
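The last scalability tip, cost per transaction as volume grows, can be computed straight from usage and billing data. Here’s a minimal sketch; the monthly volumes and spend figures are invented.

```python
# Hypothetical monthly transaction volumes and total AI spend; replace with
# real usage and billing exports. A flat or falling cost per transaction as
# volume grows is a good sign the deployment scales economically.
monthly = [
    {"month": "Jan", "transactions": 40_000, "ai_spend_usd": 18_000},
    {"month": "Feb", "transactions": 95_000, "ai_spend_usd": 33_250},
    {"month": "Mar", "transactions": 210_000, "ai_spend_usd": 58_800},
]

for row in monthly:
    cost_per_txn = row["ai_spend_usd"] / row["transactions"]
    print(f"{row['month']}: {row['transactions']:,} transactions at ${cost_per_txn:.3f} each")
```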
4. Implement an Ongoing Improvement Strategy
AI isn’t a “set it and forget it” tool. You need to check its performance continuously and make changes as needed. Claude’s success is a result of Anthropic’s commitment to continuous refinement based on real-world performance data.
We recommend the following best practices to quality-check your AI.
- Implement comprehensive monitoring systems that track both technical performance and business outcomes.
- Conduct regular model audits to check for drift or bias.
- Establish feedback loops with end-users and key stakeholders.
- Stay abreast of the latest AI research and industry trends by keeping up with newsletters, reading field publications, and participating in webinars, online conferences, and workshops.
- Update your models and retrain on new data on a regular cadence, such as monthly or quarterly.
Also set up automated alerts for any significant deviations in key metrics, and schedule regular deep-dives to look for more subtle trends or emerging issues.
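A basic deviation alert can be as simple as flagging readings that fall far outside a metric’s recent history. Here’s a hedged sketch using a z-score threshold; the metric, history, and threshold are illustrative, and in production you would wire the alert to email, Slack, or your paging system.

```python
from statistics import mean, stdev

def deviates(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Return True if the latest reading is more than z_threshold standard
    deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma > z_threshold

# Illustrative daily first-contact resolution rates (%) for the past week.
history = [92.1, 93.4, 91.8, 92.6, 93.0, 92.2, 91.9]
latest = 84.5

if deviates(history, latest):
    print(f"ALERT: resolution rate {latest}% deviates sharply from its recent range")
```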
5. Create a Transparent, Open Culture
Initially, Kate shielded her team from less-flattering metrics to protect morale. But she realized this approach was hindering the team’s progress. By embracing full transparency, Anthropic opened the doors to faster problem-solving and more innovative thinking.
Ways to share AI performance data across your organization include:
- A central dashboard for all team members.
- Regular meetings to discuss AI successes, challenges, and lessons.
- Encouraging cross-functional teams to collaborate.
- Celebrating AI wins, while creating spaces to discuss and learn from setbacks.
The Bottom Line: AI Measurement as a Strategic Imperative
As Anthropic’s journey demonstrates, measuring AI’s impact is about more than calculating ROI: it is also about driving continuous improvement, uncovering new opportunities, and staying ahead in a rapidly evolving field.
As you implement and scale AI in your organization, ask yourself:
- How is AI reshaping our core business processes and value proposition?
- Where are the unexpected benefits or challenges emerging?
- How can we use our AI measurement insights to drive new innovation?
With a comprehensive, nuanced approach to AI measurement — which embraces quantitative and qualitative insights, fosters transparency, and adapts with time — you can ensure your AI initiatives drive business value and give you a competitive advantage.
For a full guide to implementing AI effectively and efficiently in your organization, check out our new AI Implementation Playbook >>>