Building Trust in AI: 3 Approaches That Work

Lessons from Anthropic, GitLab, and AutogenAI.

Matthew Speiser
March 20, 2025

AI adoption continues to boom across industries, but recent research reveals a gap between business enthusiasm and worker trust.

According to a February 2025 Omidyar Network study, while roughly 69% of workers remain optimistic about digital technology overall, they have specific concerns about AI implementation. Only 25% of workers fully trust AI tools, and 71% worry about automation replacing humans in their roles. Meanwhile, business enthusiasm is evident — mentions of terms like “agentic AI” and “AI workforce” increased 779% in corporate earnings calls over the past year. This disconnect highlights a critical need: as companies rapidly integrate AI capabilities, they must address worker concerns through transparent implementation, proper training (currently received by only 25% of daily AI users), and equitable distribution of productivity gains.

Our portfolio companies Anthropic and AutogenAI — as well as GitLab, which leverages Anthropic’s Claude — have developed effective approaches to building trust in enterprise AI. These range from codifying trust into AI systems from the start to demonstrating it through transparency and validation.

Here, we’ll examine how these companies established trust with executives and employees as they brought AI into their products.

Anthropic: Aligning AI Behavior with a Human ‘Constitution’

From the beginning, Anthropic aimed to harness AI’s transformative potential while building systems businesses can trust. 

“You can’t tack safety or ethics on after the fact. We use constitutional AI, which are guardrails we bake into our tech from Day 1,” co-founder Daniela Amodei said at Dreamforce.

Anthropic’s constitutional AI aligns a model’s behavior with a written “constitution”: a set of principles such as avoiding harm and providing accurate information. During training, the model critiques and revises its own outputs against those principles, so ethical considerations and safety measures are embedded directly into Anthropic’s AI rather than bolted on afterward. These built-in guardrails have helped position Anthropic’s large language model (LLM) Claude as a direct competitor to OpenAI’s ChatGPT. Anthropic is now one of the most valuable startups in the world, with its latest funding round of $3.5 billion taking it to a $61.5 billion valuation.
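To make the mechanism concrete, here is a minimal sketch of the critique-and-revise loop described in Anthropic’s constitutional AI research. The `generate` function is a hypothetical placeholder for any model call; this illustrates the published technique, not Anthropic’s actual code.

```python
# Minimal sketch of a constitutional-AI-style critique-and-revise loop.
# `generate` is a hypothetical stand-in for a real model call; this
# mirrors the published technique, not Anthropic's implementation.

CONSTITUTION = [
    "Avoid responses that could cause harm.",
    "Provide accurate information and acknowledge uncertainty.",
]


def generate(prompt: str) -> str:
    """Placeholder: swap in a real LLM call here."""
    return f"[model output for: {prompt[:48]}...]"


def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way the response conflicts with this principle."
        )
        # ...then to revise the draft so it satisfies the principle.
        draft = generate(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft


print(constitutional_revision("Explain how aspirin works."))
```

In the published approach, these self-revisions serve as training data (reinforcement learning from AI feedback) rather than as a runtime filter, which is how the guardrails end up baked into the model itself.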

With a foundation of trust and security, Claude has become the tool of choice for major enterprises like United Airlines and DoorDash, which use it to automate significant portions of their customer service operations. Anthropic’s AI-driven workflows aim to handle complex customer issues from initial contact to resolution, moving seamlessly across departments.

What’s most important to enterprise clients is the knowledge that Claude will operate within clear, predefined ethical boundaries. Anthropic’s proactive move to build trust allows enterprises to deploy Claude confidently at scale. 

And its growth continues. Anthropic recently launched Claude 3.7 Sonnet (the first hybrid reasoning model on the market) and Claude Code (a command-line tool for agentic coding), both aimed at genuinely augmenting human capabilities.

Anthropic’s focus on safety and reliability has made it a go-to partner for companies that need to know they can trust their systems’ output every time.

GitLab: Transparent Processes at Every Step

Software development platform GitLab — an early adopter of Anthropic’s Claude — builds trust through transparent validation processes and clear evaluation criteria.

When GitLab began implementing AI, it focused on specific use cases like proposal automation to prove value quickly. The company created evaluation criteria measuring both accuracy and guideline adherence, then embedded those criteria into structured workflows. Its Duo Code Suggestions feature emerged from this process, using AI to help developers write code more efficiently through real-time suggestions.

The system can now tackle complex challenges like generating Tornado web servers, even for developers unfamiliar with the framework. What sets GitLab’s approach apart is visibility — developers can see exactly how the system works and verify its output. This transparency has been crucial for building trust, turning skeptics into advocates by making the entire process verifiable at every step.
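For a sense of what that looks like, here is roughly the kind of minimal Tornado server such a tool could suggest to a developer who has never used the framework (a generic illustration based on Tornado’s standard hello-world pattern, not actual Duo output):

```python
# A minimal Tornado web server, the kind of boilerplate an AI code
# assistant might suggest to a developer new to the framework.
import asyncio

import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from Tornado")


def make_app() -> tornado.web.Application:
    # Route the root URL to MainHandler.
    return tornado.web.Application([(r"/", MainHandler)])


async def main():
    app = make_app()
    app.listen(8888)  # Serve on http://localhost:8888
    await asyncio.Event().wait()  # Run until interrupted


if __name__ == "__main__":
    asyncio.run(main())
```

The point isn’t the snippet itself but that developers can inspect and verify every suggestion like this one before accepting it.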

“Throughout our evaluation of code generation models, Claude stood out for its ability to mitigate distracting, unsafe, or deceptive behaviors,” noted Taylor McCaslin, Group Manager of Product, Data Science AI/ML at GitLab. “Claude demonstrated consistent and accurate code generation throughout our testing.”

As GitLab shows, the key to building trust is to start small, measure results, and gradually expand successful applications across your organization. Iterating this way lets you check and reconfirm trust in a system on a regular basis.

AutogenAI: Validation From Clients, Success Metrics, and Partnerships 

AutogenAI, a generative AI tool for writing proposals and grants, has built enterprise credibility by delivering measurable impact to Fortune 500 companies, international government agencies, and management consultancies. Its secure, bespoke language engines are transforming how organizations approach the complex bid-writing process.

AutogenAI’s approach includes:

  • Strong security measures that protect sensitive information, which helped the company earn meetings with numerous potential enterprise clients within six months
  • Clear performance tracking showing how much time teams save and how many more bids they win when using the platform
  • A step-by-step implementation plan that helps teams gradually adopt AI at a comfortable pace while maintaining quality

To date, AutogenAI has helped businesses achieve, on average, a 30% increase in win rates while cutting associated costs by up to 85%. AutogenAI has also been appointed to the UK Government’s AI Framework, which allows it to offer its services to all UK public sector organizations, and it recently expanded to the U.S. and Australia. These real-world results and partnerships show how proving your value with concrete numbers builds the trust needed for AI adoption.

Key Takeaways: What AI Companies Can Learn 

What do these three companies’ approaches teach us about emphasizing trust when implementing AI?

Build with trust and safety in mind: As with Anthropic’s constitutional AI, embed trust and safety guardrails into your AI systems from the beginning. Don’t treat trust as an afterthought.

Create clear validation frameworks: Follow GitLab’s example by establishing transparent processes for validating AI decisions and performance.

Show measurable results: Take a page from AutogenAI’s playbook by focusing on concrete metrics, progressive validation, and partnerships with established players to build credibility.

For all three companies, transparency and quality are key. There are many ways to build trust, but they all involve clear expectations, unobscured information, and platforms that work effectively and intuitively for their users.  

By integrating trust into every process, enterprises can accelerate their AI adoption while gaining the confidence of their clients.

Ready to transform your AI implementation journey? Download our comprehensive AI Implementation Playbook and learn how industry leaders like Anthropic are building trusted AI systems that deliver real business value.