
AI-Driven Transformation: The New Era of BizOps Software

We recently published a guide for startups thinking about incorporating generative AI into their products. In part two of this series, we explore a slightly different question: how can companies leverage AI to improve their own operations?

Generative AI has the ability to fundamentally change how we work and how companies operate. We’re already seeing an influx of augmentative chatbots, AI assistants, and copilots that drive greater productivity across coding, customer service, and sales. The not-so-distant future will likely see even broader task automation and the further re-imagining of work.

We spoke to a number of our portfolio companies about how they’re thinking about leveraging AI in their businesses and how they anticipate AI transforming the nature of business operations in the long term. Here are a few of our takeaways…

Think in Terms of Tasks, Not Jobs

When considering how to leverage AI internally, CEOs must apply a new framework to think about how work gets done. Whereas one might typically think of their organization as a collection of employees with skills, successfully applying AI requires executives to think in terms of the series of tasks that must be completed for the business to execute its mission. 

By analyzing the tasks that can be automated, leadership will be better able to identify high-impact use cases for generative AI, as well as how to most effectively implement the technology alongside employees. Automating some aspects of a role also allows employees to spend more time on higher value activities.

“CEOs should think about the organization from a first principles perspective and ask how they would design their org if they were restarting today with generative AI. At Reejig, we prioritized tasks for automation based on what was costing us the most money, what was causing the business the most pain, and what had the highest potential to be successfully automated.” – Siobhan Savage, CEO, Reejig

Re-Evaluate Resource Allocation

The emergence of AI workers requires founders to reimagine their organizational structure in terms of both employees and augmentative / autonomous software solutions. Scaling an organization will require thoughtfulness around where employees can drive the most value and where AI solutions and automation can augment work to drive greater productivity. We see an opportunity for founders to drive greater capital efficiency and for employees to find greater fulfillment once they’re able to automate away repetitive and mundane tasks related to their job.

The quality of AI solutions, the mode of delivery (from copilot to AI worker), the breadth of tasks they can address, and the level of comfort with which companies deploy these tools will continue to evolve. Companies must be flexible and responsive in order to best leverage these solutions to optimize their organizations.

“In a role like customer service, 70-80% of the tasks are automatable. You should start there with an AI agent but also pay attention to where the AI falls short. This is how we think about resource allocation at Reejig.” – Siobhan Savage, CEO, Reejig

Reduce Information Silos 

For AI to drive real efficiency within an organization, it needs data that offers context across the entire business. For instance, an AI tasked with sales prospecting must be trained on historical sales and marketing data to be an effective agent. As such, AI will be a catalyst for the reduction of data silos across organizations, and it should be an organizational priority to ensure that information can flow more freely between teams. In practice, this means that companies will need to address current barriers around trust and security when leveraging their data with AI technology. The most effective companies will embed a data and AI-centric mindset across their teams to ensure the technology is being best leveraged to drive efficiencies.

“We want to add a data scientist to all of our teams to focus on leveraging data and finding ways for the AI to drive new efficiencies.” – Josef Starýchfojtů, Chief Product & Technology Officer, Mews

Be Prepared to Move Fast…

The current generation of generative AI technologies is suitable for tasks across a variety of roles and industries. For instance, summarization tools can drive better insights from customer calls, code-gen tools can significantly improve developer productivity, and AI-powered search tools can improve customer support. CEOs must move fast to take advantage of these value-adds to keep up with their competitors while simultaneously keeping an eye on the rapidly changing landscape. With the current pace of innovation, something orders of magnitude better than the current industry standard might come along in just six months. 

“We are looking at third-party vendors but we know the technology is changing very fast. So we are not signing contracts for more than a year and we are prepared to switch to the latest AI technology when the next iteration comes along.” – Tomas Dostal Freire, Director of Business Transformation, Miro

And Slow…

While there are many obvious use cases for generative AI right now, there are also many areas where AI is not yet able to drive value within an organization. Trying to implement AI where there isn’t a clear value-add is not a good use of an organization’s time or resources. Instead, organizations should exercise patience, keep an eye out for new innovations, and only leverage new AI solutions when they identify another obvious use case. Additionally, the shift toward more usage-based pricing for AI products may mean companies need to be selective about the highest ROI use cases for AI products given related costs (i.e., computing, storage, etc.).

“We look at AI less as a fix-all than as an innovation imperative. We saw AI could provide us value across marketing and GTM but was less mature for finance, legal, and accounting right now. Founders should look where AI has the fastest road to value, but don’t treat it as the hammer for every nail.” – Tomas Dostal Freire, Director of Business Transformation, Miro

Shift from Point Solutions to AI Workers

While there are many point solutions currently in the market, the future belongs to organizations that can develop AI workers that can perform a variety of complex tasks. As companies start to implement AI solutions internally, they should be preparing their tech stack and organizations for this future. 

“We value vendors that can expand from initial use cases beyond their niche, building towards agentic AI. We’re looking for collective intelligence with fewer tools.” – Tomas Dostal Freire, Director of Business Transformation, Miro

Our AI Investment Approach

At Salesforce Ventures, we’re excited about the potential of AI applications to drive efficiency and productivity gains across organizations. We encourage our portfolio companies to move fast in implementing responsible AI and identify easy wins related to AI automation in order to drive efficiency and maintain a competitive edge. 

We’re also really excited to meet with founders who are building the tools and platforms that will enable organizations to achieve these gains. Our evolving AI investment framework reflects how we think about assessing these opportunities:

  • How much value is the tool providing? Is this a use case that is worth leveraging generative AI for? Is it providing enough value, and are customers willing to pay in the long-term (i.e., no AI-tourism)? The best platforms today deliver 5-10x+ value in terms of efficiencies and/or new revenue generation opportunities for customers. 
  • What’s the moat? Some AI applications may have technical moats, having fine-tuned third-party models or even built proprietary models in-house. However, we’re also seeing the emergence of many AI applications that may not have significant technical advantages (i.e., “wrappers” on foundation models); in that case, what is the “right to win” (e.g., UX, proprietary dataset, distribution, etc.)?
  • Does the founder have an edge in understanding a customer’s needs and behavior today and how it might evolve in the future? This question is particularly relevant with AI-enabled vertical SaaS applications. We see an outsize opportunity for AI to drive value creation in industries that have historically lagged behind in technological transformation. We note that these industries often require a targeted GTM and deep understanding of the problem statement.
  • Why is this use case better resolved by a newcomer vs. an established incumbent? What is the new entrant’s edge against companies that are themselves investing in AI, and that also have the advantages of pre-existing distribution and proprietary data sets? Our perspective on this factor varies somewhat depending on the behavior of the incumbents in the segment, as well as on the potential we see for new, AI-first solutions to meet an unaddressed customer need that could then provide a wedge to build out a competitive platform.
  • How is the company thinking about evolving modes of delivery (e.g., from standalone chatbot to more opinionated ‘suggestions’ in workflows to truly autonomous agents)? Is mode of delivery a competitive differentiator? What customers are happy using today may not ultimately be the best mode of delivery in the future. Do we feel confident that this company is structured to keep up with rapidly evolving methods of delivery as the technology and user behavior evolves? 
  • Ultimately, what would it take for an AI tool to become an industry-standard platform? Salesforce Ventures admires products that are deeply embedded in core workflows and that are being used on a daily basis. We’re interested in founders who have identified opportunities to deliver significant ROI to core business needs, and that have the vision to build out broader platforms.

If you’re a founder building at the intersection of AI and BizOps, we’d love to chat! To get in touch with Laura Rowson, email her at laura@salesforceventures.com. To get in touch with Jess Bartos, email her at jess@salesforceventures.com.

The Future of Software: Embedding Generative AI in Your Product

Amid the generative AI fervor, founders and CEOs face the challenge of incorporating AI in a way that makes sense for their business. This begs a question: should companies take an incremental approach to implementing generative AI, or should they aim for re-invention? Additionally, how can they determine which AI use cases their customers value, and how can they measure success?

To answer some of these important questions, Salesforce Ventures recently hosted an exclusive workshop for our portfolio founders with Adam Evans and Cai GoGwilt. Adam is the former Co-Founder and CTO of Salesforce Ventures portfolio company Airkit, and the current SVP of Product for the Salesforce Einstein AI platform. Cai is the Co-Founder and Chief Architect at Salesforce Ventures portfolio company Ironclad — a digital contracting platform that was one of the first in its sector to incorporate generative AI.

Their workshop featured plenty of great insights for founders seeking to build generative AI into their products. Here were a few of our team’s favorite takeaways…

Move Fast in Leveraging AI

When it comes to AI transformation, startups usually have a greater appetite to experiment, more flexible and modern tech stacks, and the ability to pivot faster. Startups should view this as an advantage over incumbents, who often cannot move as quickly to incorporate AI and are typically serving larger enterprise customers who are slower to adopt new technologies.

“If you have time to make changes, do it now. It’s easier for you as a small business to do this, as opposed to when you’re at scale. The faster that you move into reinventing yourself, it will probably end up helping you as far as the future, valuations, and more.” – Adam Evans

Identify the Most Meaningful Use Cases for Your Business

There are endless potential use cases for generative AI in a given product, but businesses only have so much time and resources to dedicate to any particular project. To drive the greatest ROI with generative AI, startups must ruthlessly prioritize use cases that significantly move the needle for their product and help define the long term differentiation and success of the business.

“As we think about why our business is going to be great in 10 years, we’ve taken those thoughts and asked ourselves whether this is something that is still 3-5 to 10 years away, or is actually something that we could build today.” – Cai GoGwilt

Run Hackathons to Identify Use Cases

It’s much easier to recognize a good solution than it is to come up with one. As such, a productive way for founders to identify AI use cases and get more of their team involved in AI transformation is to organize hackathons. Ironclad runs hackathons at least quarterly for both their application and platform teams, and uses the outputs to inform their product roadmap.

“You can hack together an awesome prototype in two days, which is really good for building internal excitement and taking the burden off founders to dream up these applications. Through hackathons we’ve been able to productize prototypes and identify people on the team who have a proclivity towards generative AI.” – Cai GoGwilt

Avoid Commoditized AI

Businesses should not expect to convert commoditized AI tools — like summarization and text generation — into revenue. Many users already view these features as table stakes. Companies can and should still add these features to their products if they believe the user will benefit from it. However, startups shouldn’t expect commoditized AI to be a big revenue generator or defensible as a product moat.

“Stay away from things like summarization or formatting based on your personal history. That’s going to feel like spellcheck soon. It’s just going to be embedded in the OS.” – Adam Evans

Move Quickly From Prototype to Beta

Once the team has a viable prototype, startups should aim to share the beta with a trusted circle of users as soon as possible in order to understand the product’s defects, gather data, and accelerate development.

“You won’t know what you’re actually dealing with until you see what users are doing with it. You might see that they aren’t using it how you’d imagined, but that’s also a massive positive; your users might show you things that they can do with your product that you didn’t even realize you could do, and even give you some ideas on other applications you could build.” – Cai GoGwilt

Expand AI Teams Beyond Engineers

As companies prototype new products and features, it’s important for founders to find the right structure and membership for the team dedicated to a project. Oftentimes, AI teams need technical and non-technical members — with the latter focused on experimentation and understanding how the product is working.

“Prototyping is a multiplayer thing. Don’t just give it to engineers. You’re going to go way too slow. You need to get your product team involved.” – Adam Evans

Measure Success Through User Adoption

AI adoption can be slow and nonlinear. As such, the efficacy of any new AI tool or feature should be determined by how many users are leveraging it on a recurring basis, and for what reason.

“A lot of generative AI feature adoption doesn’t look like an explosion of users instantly. It begins with a smaller set of users finding ways that they like to use it, sometimes not ways you intended them to use it, and then listening to that feedback, iterating, and sticking with it for a while.” – Cai GoGwilt

Conclusion

AI product development is still in its infancy, so there’s no singular tried and true approach to building an awesome AI application. However, we believe fast-moving startups that focus on technological transformation with innovative use cases that speak directly to customer pain points will win the day in the coming months and years — and we at Salesforce Ventures look forward to investing in and nurturing these projects. 

_

Are you a founder building in the generative AI space? We’d love to talk! To get in touch, email Laura at laura@salesforceventures.com and Jessica at jess@salesforceventures.com.

How Can Startups Sell to Enterprises in 2024?

Virtually every startup the Salesforce Ventures team talks to wants to break into enterprise sales. These contracts can be extremely lucrative for a startup and serve as a powerful validation of the product—which, in turn, can attract additional enterprise customers. This virtuous cycle has helped countless startups hit their growth goals, achieve scale, and realize meaningful outcomes.

However, enterprise sales are much easier said than done. A successful enterprise sales motion requires a large upfront investment, discipline, patience, and continuous iteration. Founders often don’t know where to get started.

As a leader in enterprise technology, Salesforce Ventures believes it’s part of our value-add to understand how startups can sell to enterprises, and share our findings with the broader startup ecosystem. This spring we surveyed 180+ startup sales leaders about their enterprise go-to-market (GTM) motion and synthesized their responses into a 50-page report detailing every aspect of the modern enterprise sales motion.

This report is a valuable asset for any founder or sales leader angling to break into enterprise sales.

A few key findings:

Enterprise annual contract values (ACVs) are growing

Respondents to our survey on average reported a 19% increase in ACV of their enterprise clients compared to 12 months ago.

Enterprise sales reps are in demand

The headcount of enterprise sales reps at the startups surveyed has grown by an average of 18% compared to 12 months ago.

Sales teams are becoming more efficient

60% of respondents observed an increase in sales team productivity compared to 12 months ago, aided at least in part by AI-powered automation.

With enterprises generally expanding IT budgets after a prolonged period of contraction, now is a great time for startups to re-evaluate their enterprise GTM motion to capitalize on these more favorable market conditions and set themselves up for sustainable growth.

The Salesforce Ventures Enterprise GTM Report is a great asset for any founder or sales leader seeking to refine their approach and win new business. 

To download the report, click here.

Scaling in the U.S. Market: GTM Strategy

Successfully launching in any new geographic market presents unique challenges, and that’s especially true of the large, complex, and hypercompetitive U.S. market. U.S. businesses are sophisticated buyers that expect customized solutions. For many international startups, addressing this market requires that they tailor their products and sales approach to meet the unique demands of U.S. businesses.

As part of our series focused on supporting international companies scaling in the U.S., Salesforce Ventures recently hosted a workshop designed to help guide portfolio companies as they develop and launch their go-to-market (GTM) strategy for the U.S. market. We were pleased to moderate a conversation with Salesforce Ventures portfolio company leaders Eyal Feder-Levy (CEO and co-founder of AI-powered community trust platform Zencity) and Jen Abel (co-founder of founder-led sales consultancy JJELLYFISH).

Here are some of the insights Eyal and Jen shared with our founders in attendance. 

Start With Founder-Led Efforts

A key early question for founders expanding into the U.S. is whether to move there themselves or to hire a new head of region. Both Jen and Eyal encourage founders to be on the frontline leading their company’s U.S. sales launch because the founder is uniquely equipped to sell the idea of the company, not just the product. Taking part in early discussions also helps founders refine their sales pitch and better position their business to U.S. customers. Only once the business finds product/market fit (PMF) in the U.S. market should founders shift from founder-led sales to founder-managed sales (more on this later).

Founders who choose not to relocate face trade-offs. For example, Zencity took into account its strong hiring brand in Israel and decided to launch with a U.S. sales team based out of its Tel Aviv headquarters. This allowed Eyal to stay close to his leadership team and shape the workforce culture. However, the trade-off was constant travel.

“To do founder-led sales calls, you need to be on the ground a lot,” said Eyal. “I was traveling 12 to 15 times a year between Tel Aviv and the United States. And I would take late-night calls three to four nights a week.”

Develop a New GTM Strategy

Expect to create a new sales strategy and GTM plan for a U.S. launch. “A founder’s day-one vision when they come into the U.S. is almost always invalidated because the U.S. market is vast, and foreign businesses often struggle to find the best segment or sub-segment of the market to target,” Jen said. “Tactics from a business’s local market often don’t translate abroad.”

U.S. prospects tend to discount international experience and want to see product validation from U.S. customers. Additionally, U.S. buyers expect something tailored to their needs. “The customer’s mindset is: We’ve purchased a lot of tech in the past. And we want to make sure that you understand this problem, maybe better than we do,” Jen said. 

Companies that have gained traction in their local markets sometimes develop false confidence about the likelihood of U.S.-based success. “We did an analysis that found it took 1.5x longer for these companies to find PMF in the United States,” explained Jen. “That traction at home actually slowed them down.”

Identify the Local Ideal Customer Profile (ICP)

In some instances, a company comes to the U.S. market planning to serve one industry (usually its target market back home), only to find success in a completely different vertical—often one that’s much more specialized than the founder expected.

A startup Jen worked with expected its tech to support a variety of different business needs for U.S. customers. But after speaking with customers, the company learned that existing vendors were already covering their needs. The team performed deeper market research to identify more specific use cases where their tech could provide value. They ultimately re-focused their pitch on a more targeted function, and were able to find PMF through this approach.

Similarly, Eyal wasn’t sure which types of U.S. governments would be the best fit for Zencity, so he identified 10-15 industry events where Zencity could get in front of U.S. prospects. This approach helped the company identify the best prospects on which to focus its resources. Today, Zencity serves many large U.S. cities, including Los Angeles, Houston, Chicago, and New York.

Find the Right Messaging

Spend time creating a pitch that resonates with the company’s ICP. After hundreds of conversations, a sales team will get comfortable talking about the product and managing objections. They’ll start seeing similar use cases from each new lead. Break things down into simple, repetitive steps. This also makes it easier to predict sales outcomes and delegate. “Selling is so hard, you want to remove as much complexity as possible from the equation,” Eyal explained. “This kind of consistency can generate individual sales and create a positive feedback loop that brings in a steady stream of leads.”

It took time for Zencity to find the right way to present its value proposition to U.S. buyers. But once the team found the right language, Eyal said they were able to sign their first 10 U.S. customers in under eight months. 

Hire for Scrappiness

It’s a challenge to find talented U.S. salespeople when a business has little brand equity in a new market and only a handful of customers. Jen and Eyal recommend looking for “scrappy” salespeople who have a proven track record working in unstructured environments with limited marketing and enablement functions and low-budget travel. 

When a founder is ready to make the transition to founder-managed sales, Jen and Eyal recommend starting with two sales reps instead of hiring a large team, and to be prepared for churn—assume one out of every two salespeople won’t work out. A failed sales hire can be costly because the business loses the cost of the salary plus the value of the lost sales and wasted leads. Zencity’s threshold for salespeople is typically two bad quarters.

A business’s first hires should know how to create demand using tactics like scouring conference lists, searching LinkedIn, and cold calling or cold emailing. With limited resources, startups may not be able to hire business development reps (BDRs) for lead generation. Zencity relied on account executives (AEs), waiting to hire BDRs until the company had achieved $1M in annual recurring revenue (ARR).

Jen noted that many salespeople have never been trained to generate demand—they’ve always been given leads. As such, when interviewing candidates, Jen recommends asking salespeople about their average deal value and sales cycle, and exploring the ways their leads were generated. Did they source the deal? Did they close the deal?

Remember that some salespeople will be inherently better at the top or the bottom of the funnel. Others may have a higher “likeability factor” with decision makers in a given vertical—and it’s not always who you expect. For example, organizations that skew older may prefer working with younger salespeople because they offer a perspective that isn’t currently represented internally. 

A business’s sales incentive structure is also critical. Encourage AEs to do everything—from prospecting to closing—by offering twice the commission for any leads they create.

Learn more about hiring U.S.-based talent in our recent blog post: Scaling in the U.S. Market: How to Build a Winning Team.

Shift to Founder-Managed Sales

To achieve scale, an organization must gradually shift from founder-led to founder-managed sales. Before this shift can occur, a sales team needs to generate healthy conversion and win rates. A good rule of thumb is 10-20% conversion rates for outbound sales and 30-50% for inbound sales—although this rate may vary by industry and where the business is in sorting out PMF. Jen recommends the following “gate checks” to track a company’s progress:

  1. Can the BDRs generate demand? Are they hitting the number of leads and conversion rates that the company needs to meet its goals?
  2. Can the AEs take the intro call? Are they able to convert prospects into sales-qualified leads at a similar rate to the founder?

When founders are ready to make the transition, they should first move out of the sales process at the top of the funnel and then work their way down. If numbers fall after the sales team begins leading the process, the founder should get back into the sales cycle.

“Whenever we had challenges in our sales team, I stepped in and cleared my schedule of other things,” Eyal said. “Because if this doesn’t work, the rest of the business isn’t viable.”

Develop a New GTM Plan for Each Vertical

Companies entering the U.S. market should wait for consistent performance in one vertical market before expanding to another. “Every vertical is going to have different messaging, different buyers, different willingness to spend, different sales cycles, and different specific problems they want to solve,” explained Jen. “You can’t serve them all the same way.”

Once a business moves to a new vertical, it will need a new GTM plan. Make sure to assign new AEs to each new vertical so they become experts in that industry.

Conclusion

Selling to U.S. businesses as an international startup is challenging, but often worth it: Almost every startup that achieves scale has a U.S. presence. However, breaking into the U.S. market requires a rock-solid strategy, continuous iteration, and patience. “It’ll take twice as long as you think to truly gain a foothold,” Jen said. “But if you can weather the storm, the payoff can be huge.” 

_

Salesforce Ventures hosts frequent workshops with experts and industry thought leaders designed to address our portfolio companies’ recurring challenges and to support them on their path to success. To learn more about Salesforce Ventures, visit our website.

Scaling in the U.S. Market: How to Build a Winning Team

For many international early-stage companies, the U.S. market represents a huge commercial opportunity. To truly address this opportunity, many businesses aim to establish a local presence in order to drive brand recognition and to make inroads with U.S.-based customers. Indeed, the number of American workers hired by international companies grew 62% between 2022 and 2023. For startups looking to launch and grow their businesses in the U.S., hiring strong U.S.-based talent is an important first step.

However, for many early-stage startups based outside the United States, it can be challenging to attract, hire, and manage U.S. employees. Doing so requires navigating a myriad of regulations, including federal and state employment laws and payroll taxes. Companies seeking to gain a foothold in the United States also need to understand employees’ unique expectations around compensation, benefits packages, and corporate culture.

As the first in a series of workshops focused on supporting companies scaling in the U.S., Salesforce Ventures recently hosted an event to help guide portfolio companies as they expand their workforce globally. I was pleased to moderate conversations with a group of Salesforce Ventures portfolio company leaders and Salesforce executives: Tony Jamous (CEO and Co-Founder of Oyster) joined Kate Hallick (VP of Recruiting at Salesforce) to discuss how to find and attract new talent worldwide. Amanda Buck (Senior Director, Global Talent Acquisition at BigID) and Rachel Cochran (SVP of People at Algolia) talked about how to manage a global workforce.

The workshop offered great insights for founders seeking to expand into U.S. markets. In this article I’ll share some of my favorite takeaways. For more, I encourage you to watch the full video of the workshop.

Start With Compliance

When hiring U.S.-based talent, international startups have a choice between setting up a legal entity or hiring an employer of record (EOR). It’s important to understand the different obligations these choices place on the employer. If a startup sets up a legal entity in the United States, the company will be directly responsible for managing payroll and complying with state and local employment laws. It can be costly and time-consuming to establish an entity, but the business will also have more autonomy with this option. In the long run, setting up an entity may be better as the business scales or if the business plans to hire a large number of employees.

The EOR route is popular because an EOR can manage background checks, hiring, and onboarding. EORs can also help with payroll, including tax withholding and social security contributions, currency conversions, and visa assistance, if needed. They can provide compliant employment agreements, offer insurance coverage, and help you research salary ranges for specific roles in different locations.

Find the Right Partners to Tap into U.S. Talent

Partners like EORs and benefit brokers can also help international startups map out the U.S. talent landscape and distinguish their company’s brand. BigID, for example, relies on an EOR to help stay up to date on changes to regulations and the hiring environment. 

Partners can also help startups develop a thoughtful compensation philosophy for their U.S. workers. For example, they can help determine positioning: Do you want to pay a premium for talent? How do you want to distribute equity? Rachel explained that a lack of thoughtfulness and clear communication about how compensation is determined can damage the company culture and distract teams from their core responsibilities. Startups should document their company philosophy, and take steps to ensure everyone conveys the same message to each employee and candidate.

Your First U.S. Hires Are Critical

The first employees a business hires in the United States should have the energy and network to jump-start the U.S. talent search. Ideally, they’ll also have the skills to bring people together. Startups should be sure that each new employee aligns with their company’s mission and culture. They should also recognize that hiring in a new market is an opportunity to hire from more diverse backgrounds.

When hiring internationally, Tony looks for a characteristic he calls “ego flexibility,” or the ability to adapt and quickly adjust to new ways of working. He explained that employees need to be open, flexible, and agile as the workplace evolves in new ways, and this trait is particularly important for first hires in new geographies.

Most U.S. workers don’t have the mandatory probation period that may be common in other regions. This means there is less flexibility around deciding mutual fit with potential employees. Startups may be tempted to rush to hire their first few U.S. candidates, but Tony cautions that employers should balance the cost of an empty seat against quickly hiring someone who isn’t quite right for the job. Businesses should be specific about the type of talent they need. Define the skills, attributes, and competencies that will make a great hire. Ensure new U.S. hires align with the company’s key priorities and operating plan.

Businesses should also make sure each new hire understands what success looks like for themselves, their teams, and the company—and connect the three. Develop detailed objectives and key results (OKRs) for each.

Establish Your Brand Where Top Talent Resides

U.S.-based talent may not be comfortable applying for a job unless they have a clear sense of the employer’s brand. As such, a first step international startups should take when hiring in the U.S. is defining their employer brand. “The smaller the company, the more competitive the market in getting your name out there,” said Kate. She urged companies to think about “how you can build your brand in the region or location where you’re going to be hiring.” Establishing a strong employer brand also has the additional benefit of improving a company’s reputation, which can attract high-quality customers, partners, and investors.

Startups should consider creating a dedicated U.S. career website to communicate a clear mission or value statement in a way that’s tailored to the local market. The information shared should reflect the culture of the business inside and outside the workplace, as well as local cultural nuances. The business’s regional leader should be a key part of the company’s local brand, and should be out front promoting its mission. For example, Salesforce Ventures’ London-based portfolio company Autogen AI appointed Elizabeth Lukas as U.S. CEO to support their expansion in the U.S. market. Elizabeth regularly posts news about Autogen AI on social media channels.

Be Transparent About the Opportunity

Create transparency by describing the business’s corporate culture in each online job posting. This allows new employees to understand the values and purpose of the business. This is particularly important if the company can’t offer a higher level of compensation than its competitors. The U.S. recruiting site might answer common candidate questions, such as: How can I be successful at the company? How informal is the work environment? What’s the pay structure? Startups should also consider including employee photos and quotes about why they enjoy their jobs. These kinds of details make it easier for prospective employees to learn what it’s like to work at the company.

When Oyster is hiring for its own workforce, Tony asks Oyster’s employees to post job reviews on recruiting and social networking sites, a practice he calls “open sourcing.” He explained during the panel that Oyster has grown in just four years to 500 employees in 70 countries thanks in part to open sourcing. Oyster’s career website lists its core values and highlights the employee review scores on Glassdoor and Culture Amp.

Create a Global Corporate Culture…

Startups must foster a global culture to attract talent. Part of this shift involves letting go of the business’s national identity and giving all employees access to the same career opportunities, regardless of where they live. Rachel says this approach allows everyone to have an equal “share of voice” because teams aren’t just accommodating executives at the global headquarters.

“You’re not a French company anymore,” said Tony. “You’re not a British company or an American company. You’re simply a company.”

…While Cultivating a Regional Comradery

The business’s U.S.-based employees can still create and sustain a local culture while being part of a global team. Group messaging channels for each city or region of the country facilitate comradery and make it easy for colleagues to connect and plan events. Algolia offers a Community Leaders program where each location with 10 or more employees can arrange events such as team dinners, virtual happy hours, and office volunteer days. They’ve found this offering raises morale.

Make a Plan for Asynchronous Collaboration 

It can be tricky for teams across the globe to collaborate, especially if they don’t have overlapping work hours. So it’s important to perfect asynchronous work. Algolia and BigID leadership spend time building internal processes that govern how work gets done, regardless of location.

Startups should map out a clear process with workflows and deadlines for each time zone. For example, a team in France should add their remarks to a document by 3PM GMT+2 so the New York team can incorporate their edits when they start work at 9AM ET. Trust becomes critical when employees are working in different time zones. Companies must assume the best intent to help create psychological safety. Rachel explained that it’s important to have “empathy for the teams that are experiencing different things across the globe.”

Businesses should choose common communications tools, whether that’s Slack, email, or messaging apps. All employees should post their locations, time zones, working hours, and vacation schedules on a platform where colleagues can easily find them.

Limit meetings that will inconvenience particular regions. When meetings are necessary, ensure that leads send specific pre-meeting documentation at least 24 hours ahead of time so everyone can review them. Consider sending translated meeting summaries to employees who may be non-native speakers, and employing AI-powered tools for real-time captioning. Ensure employees across regions can find ways to connect directly for further training, onboarding, or mentoring. 

“Async work is not always faster, but I think it gives you a big opportunity to be inclusive,” said Rachel. “Train your managers on best practices when interacting with colleagues in other time zones and hold them accountable.”

Foster Growth Through U.S. Expansion

The U.S. market represents a phenomenal growth opportunity for many international startups. Having high-quality U.S.-based talent can help a business attract new customers, find new investment, and improve brand perception. It was a pleasure to host a group of business leaders who’ve accomplished this challenging expansion. I hope their insights prove valuable to other founders seeking to grow their business internationally.

_

Salesforce Ventures hosts frequent workshops with Salesforce experts and industry thought leaders designed to address our portfolio companies’ recurring challenges and to support them in their path to success. To learn more about Salesforce Ventures, visit our website.

DISCLAIMER

The information provided in this article does not, and is not intended to, constitute legal or financial advice; instead, all information, content, and materials available are for general informational purposes only. Readers should contact their attorney or financial representative to obtain advice with respect to any particular legal or financial matter. Opinions of the referenced presenters and/or author are their own and do not necessarily reflect the official position of Salesforce.

AI Infrastructure Explained

Innovative applications of AI have captured the public’s imagination over the past year and a half. What’s less appreciated or understood is the infrastructure powering these AI-enabled technologies. But as foundational models get more powerful, we’ll need a strong technology stack that balances performance, cost, and security to enable widespread AI adoption and innovation.

As such, Salesforce Ventures views AI infrastructure as a crucial part of the market to build and invest in. Within AI infrastructure, a few key elements are most critical: Graphics Processing Units (known as “GPUs”), the software that enables usage of GPUs, and the cloud providers that link the hardware and software together.

Understanding these three elements—how they work, how they’re delivered, and what the market looks like—will help founders and innovators execute more effectively and identify new opportunities. In this write-up, we present our analysis of the AI infrastructure stack, starting with the basics of GPUs and software components and moving to the ways products are delivered and how the market is segmented. 

GPU Hardware


GPUs are the hardware that powers AI. 

While many traditional servers utilize central processing units (CPUs), these processors aren’t designed to support parallel computing, which is needed for specialized tasks like deep learning, video gaming and animation, autonomous vehicles, and cryptography. The key difference between GPUs and CPUs is that CPUs have fewer processing cores, and these cores are generalized for running many types of code. GPU cores are simpler and specialized for data-parallel numerical computations, meaning they can perform the same operation on many data points in parallel.
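
To make the distinction concrete, here is a minimal sketch in PyTorch, assuming the torch package is installed (it falls back to CPU if no GPU is present). It is purely illustrative; actual speedups depend on the GPU, data size, and the cost of moving data into device memory.

```python
# A minimal sketch of the CPU vs. GPU difference described above, using PyTorch.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# One million data points; the same multiply-add is applied to every element.
x = torch.randn(1_000_000)

# On CPU, relatively few general-purpose cores work through the data.
start = time.time()
y_cpu = x * 2.0 + 1.0
cpu_seconds = time.time() - start

# On GPU, thousands of simpler cores apply the identical operation in parallel.
x_dev = x.to(device)
start = time.time()
y_dev = x_dev * 2.0 + 1.0
if device == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
dev_seconds = time.time() - start

print(f"device={device}, cpu={cpu_seconds:.4f}s, accelerated={dev_seconds:.4f}s")
```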

GPUs are organized into nodes (a single server/computing unit), racks (enclosures designed to house multiple sets of computing units and components so they can be stacked), and clusters (a group of connected nodes) within data centers. Users access the GPUs in these data centers through virtualization into cloud instances. Data centers that house GPUs must be configured differently than traditional CPU data centers. This is because GPUs require much higher bandwidth for communication between nodes during distributed training, making specialized high-speed interconnects such as InfiniBand necessary.

The density of GPU servers and their high power draw call for careful planning around power provisioning, backup, and liquid cooling to ensure uptime. Further, the topology (physical interconnection layout) of a GPU deployment also differs from that of CPUs: GPU interconnect topologies are specially designed to deliver maximum bisection bandwidth to satisfy the communication demands of massively parallel GPU workloads, while CPU interconnects focus more on low-latency data sharing.

In terms of GPU manufacturers, NVIDIA currently dominates with an 80%+ market share. Other noteworthy players include AMD and Intel. Demand for GPUs currently far outstrips supply, prompting intense competition among manufacturers and encouraging customers to experiment with building their own AI chips.

Aside from big tech companies, there are also a few startups attempting chip design. These new entrants are oftentimes focused on optimizing the design for specific use cases or AI workloads. For example, Groq’s LPU (Language Processing Unit) is solving for cost-effective latency, and Etched’s Sohu chip is designed to run transformer-based models efficiently. We’re excited to see the innovation in the space and market supply and demand dynamics at work. 

GPU Software

Supporting these GPUs are the various software solutions that interact directly with the GPU clusters and are installed on different levels (i.e., nodes, racks, or clusters). The following is a non-comprehensive list of the types of software associated with GPU workloads:

  • Operating systems: The operating system handles the scheduling of processes and threads across CPUs and GPUs. It allocates memory and I/O appropriately. Examples of GPU operating systems include CentOS, RHEL, and Ubuntu. 
  • GPU drivers and runtimes: These are vendor-provided software layers that control the GPU hardware and expose it as a parallel computing platform to applications. Examples include NVIDIA’s CUDA and the open-source AMD ROCm.
  • Cluster management / job scheduling: Cluster management software allocates GPUs to submitted jobs based on constraints and availability, distributes batch jobs and processes across the cluster, manages queues and priorities for diverse workloads, and integrates with provisioning tools. Examples include Kubernetes and Slurm.
  • Provisioning tools: Provisioning tools provide containers / isolated environments for applications or jobs to run on the cluster and allow for portability to different environments. Examples include Docker and Singularity. 
  • Monitoring software: Monitoring software tracks specialized metrics and data specific to AI operations. Examples include Prometheus, Grafana, and Elastic Stack.
  • Deep learning frameworks: Frameworks that are specifically designed to take advantage of GPU hardware. These are essentially libraries for programming with tensors (multi-dimensional arrays that represent data and parameters in deep neural networks) and are used to develop deep learning models. Examples of deep learning frameworks include TensorFlow and PyTorch.
  • Compilers: Compilers translate GPU code into optimized executables. Examples include the NVCC compiler for NVIDIA’s CUDA code and the HCC/HIP compilers for AMD ROCm GPU code.

Together, these software layers help infrastructure teams provision, maintain, and monitor GPU cloud resources.
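
As a small illustration of how these layers stack, the sketch below (assuming PyTorch is installed) shows the deep learning framework querying the driver and hardware layers beneath it; the values printed depend entirely on the machine it runs on.

```python
# An illustrative sketch: the framework layer (PyTorch) surfacing information
# supplied by the vendor driver stack underneath it.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("CUDA runtime seen by PyTorch:", torch.version.cuda)
    print("Visible GPUs:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # Name, memory, and compute capability all come from the driver layer.
        print(f"  GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.1f} GiB, "
              f"compute capability {props.major}.{props.minor}")
```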

The hardware/software combination matters a lot for the type of AI workload being performed. For example, distributed training (splitting model training across many GPUs so it completes as quickly and effectively as possible) typically requires multiple servers with best-in-class GPUs and high node-to-node bandwidth. Meanwhile, production inference (serving a trained model to live application traffic) needs GPU clusters configured to handle thousands of requests simultaneously, usually relying on optimized inference engines like TensorRT, vLLM, or proprietary stacks, such as Together AI’s Inference Engine.
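
On the inference side, here is a minimal sketch of what an optimized engine looks like to a developer, using vLLM’s offline API. It assumes the vllm package and a CUDA GPU are available, and the small open model named below is only an example; this is not a description of any particular provider’s production stack.

```python
# A minimal, illustrative sketch of batched inference with vLLM.
from vllm import LLM, SamplingParams

# vLLM batches and schedules prompts to keep the GPU saturated, which is what
# allows one cluster to serve many requests concurrently.
llm = LLM(model="facebook/opt-125m")  # example small model; swap for any supported one
sampling = SamplingParams(temperature=0.7, max_tokens=64)

prompts = [
    "Explain what a GPU cluster is in one sentence.",
    "Why does distributed training need high node-to-node bandwidth?",
]

outputs = llm.generate(prompts, sampling)
for output in outputs:
    print(output.prompt)
    print("->", output.outputs[0].text.strip())
```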

In the next section, we provide an overview of what we consider the current landscape of cloud infrastructure that can accommodate various AI workloads. 

The 3 Types of GPU Cloud Providers

Salesforce Ventures’ view of the market is that there are currently three types of GPU cloud providers. Each provider has its own benefits depending on the desired use case. 

Hyperscalers

The first type of GPU cloud provider is the Hyperscaler: a cloud computing provider that operates massive, globally distributed data centers and cloud infrastructure and offers a wide array of computing services. While Hyperscalers haven’t historically focused on GPUs, they’ve recently expanded their offerings to GPUs to capture the immense market demand for AI technologies. Notable Hyperscalers include household names like AWS, Google Cloud, Azure, Oracle, and IBM Cloud.

In terms of infrastructure, Hyperscalers own their GPUs but co-locate these GPUs with colocation data center operators or maintain their own data centers where their chips are housed. Software provided by Hyperscalers is largely product dependent—some might have lower-level software just for managing GPU clusters, while others provide higher-level abstractions that are MLOps focused.

For example, AWS offers EC2 instances with different types of GPUs (e.g., T4, A10, A100, H100) as well as deep learning software solutions (e.g., AWS Deep Learning AMIs) that include the training frameworks, dependencies, and tools developers can utilize to build and deploy models. Meanwhile, AWS Bedrock offers API endpoints for popular open-source and closed-source models, abstracting away the interim steps of deployment.
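
As a hedged illustration of that higher level of abstraction, the sketch below calls a hosted model through Bedrock with boto3. It assumes AWS credentials and a region are configured, that the account has been granted access to the example model ID shown, and that the Converse request/response shape is as written; check the current AWS documentation before relying on it.

```python
# A hedged sketch of calling a managed model endpoint via Amazon Bedrock.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID; use any enabled model
    messages=[
        {"role": "user", "content": [{"text": "Summarize what a GPU cluster is."}]}
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.5},
)

# The managed endpoint hides instance selection, drivers, and serving software;
# the caller only handles the request and the returned text.
print(response["output"]["message"]["content"][0]["text"])
```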

Specialized Cloud Providers

The second type of GPU cloud provider is what we call “Specialized Cloud Providers.” Unlike Hyperscalers, these organizations are focused on providing GPU-specific infrastructure for AI and high-performance computing workloads. Examples of specialized cloud providers include CoreWeave, Lambda Labs, Massed Compute, Crusoe, and RunPod.

Like Hyperscalers, Specialized Cloud Providers own their GPUs, but either co-locate them with colocation data center operators or operate their own data centers. These providers either offer bare-metal GPU clusters (hardware units networked together) or GPU clusters with a basic software layer that enables users to operate the clusters and virtualization layers to spin up cloud instances—similar to the EC2 instances at AWS.

Both Hyperscalers and Specialized Cloud Providers require massive upfront capital outlays to buy and install the GPUs in their data centers, and sometimes build and operate the data centers themselves (if they’re not “colocated”).

Inference-as-a-Service / Serverless Endpoints

The third type of GPU cloud provider encompasses a broader array of companies that we bucket under “Inference-as-a-Service” or “Serverless Endpoints.” These are newer entrants to the market that offer software abstraction on top of GPU clouds so users only interact with the API endpoints where models are fine-tuned and deployed for inference. Examples of companies in this space include Together AI, which we recently led a round of financing in, Fireworks.ai, Baseten, Anyscale, Modal, OctoML, Lepton AI, and Fal, among others.

Most Inference-as-a-Service providers get their GPUs from Hyperscalers or Specialized Cloud Providers. Some rent GPUs and make a margin on top of the unit cost, some form revenue-sharing partnerships, and some are pure passthrough (i.e., revenue goes directly to the GPU supplier). These companies typically have a software layer with the highest level of abstraction so that users likely don’t have visibility into what GPU cloud provider or specific SKU of GPU/networking configuration is utilized. Users also rely on these companies to perform MLOps-type value-adds such as autoscaling, resolving cold starts, and maintaining the best performance possible. Oftentimes, Serverless Endpoint companies build their own proprietary stack of optimizations to improve cost and performance (often a balance between latency and throughput). 

Inference-as-a-Service has gained traction with the new wave of generative AI because these providers take away many steps around provisioning and maintaining infrastructure. If a startup opted to use a product from a cloud vendor, it’d likely need to:

  • Select a GPU instance (based on model performance requirements), 
  • Launch that instance, 
  • Install and set up the necessary software (e.g., GPU drivers, deep learning libraries), 
  • Containerize the model, 
  • Transfer the container to the EC2 instance, 
  • Install serving software depending on model format and deployment requirements, and
  • Configure the serving software to load the model and expose it as an API endpoint. 
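
To make the final step above concrete, here is a minimal, hypothetical sketch of exposing a model as an API endpoint on self-managed infrastructure, using FastAPI and Hugging Face Transformers with a small model purely for illustration. Everything earlier in the list (instance selection, drivers, containerization, scaling) would still be the startup’s responsibility.

```python
# A minimal, hypothetical sketch of serving a model as an API endpoint yourself.
# Assumes fastapi, uvicorn, pydantic, and transformers are installed.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # small model, loaded at startup


class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50


@app.post("/generate")
def generate(prompt: Prompt):
    # Each request runs inference on the locally loaded model.
    result = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": result[0]["generated_text"]}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000
```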

However, if a startup were to deploy an LLM with a provider like Together AI, all it’d need to do is select the relevant model from the Together Playground, launch the model using the serverless endpoint provided by Together AI, and build the inference endpoint into a generative AI application using the API key and Python/JavaScript SDK. Together AI also performs maintenance for its users.
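
For comparison, a hedged sketch of that serverless flow is below, calling Together AI’s OpenAI-compatible chat completions endpoint over HTTPS. It assumes the requests package is installed, an API key is set in the TOGETHER_API_KEY environment variable, and that the model string shown is one currently listed in the Together Playground.

```python
# A hedged sketch of the serverless-endpoint flow described above.
import os
import requests

resp = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",  # example model name
        "messages": [
            {"role": "user", "content": "Give one sentence on why serverless inference helps startups."}
        ],
        "max_tokens": 100,
    },
    timeout=60,
)
resp.raise_for_status()

# The provider handles GPU selection, autoscaling, and cold starts behind this endpoint.
print(resp.json()["choices"][0]["message"]["content"])
```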

Also note that some players in this broad category offer products targeting distributed training workloads, including Together AI and Foundry. Given the need for larger GPU clusters to train models (vs. serving/running inference), these products have a different form factor from Serverless Endpoints.

Organizing AI Infrastructure

We hope our organization of the current AI infrastructure stack can inform founders on how to orchestrate their own infrastructure and spark ideas for how to innovate and improve upon the current technologies. In our next post, we’ll dive deeper into the viability of the various types of GPU cloud providers, and detail where we see opportunities for innovation.

If you’re a founder building in AI, we’d love to talk. Salesforce Ventures is currently investing in best-in-class AI tooling and horizontal or vertical applications. To learn more, email me at emily@salesforceventures.com.

Selling to Utilities: A Guide for Early-Stage Climate & SaaS Startups

Utilities sit at the center of the clean energy transition. These enterprises own the energy grid infrastructure that powers our homes and businesses. Importantly, they must meet sustainability targets while ensuring reliable and affordable electricity supply to customers.

At Salesforce Ventures Impact Fund, we’ve been investing in climate tech for the past seven years. We understand how critical utility innovation is to a cleaner future. At the same time, we recognize the difficulties startups face selling these pioneering technologies to utility companies. 

For many early-stage startups, earning a contract with a utility provider is a major breakthrough that can set a business on a path to success. However, the process of selling to utilities can be beset by long sales cycles, numerous decision-makers, varying regulations, and a culture that can be risk-averse toward emerging technologies.

For these reasons, Salesforce Ventures recently hosted a workshop to support portfolio companies selling to regulated utilities. The session featured Larry Goldstein, Senior Director of Product Strategy for Salesforce’s Energy & Utilities Cloud, and Leo Trudel, Director of Innovation and Technology at Indigo Advisory Group, a digital strategy consulting firm for electric utilities.

Larry and Leo shared practical advice for startups hoping to break into the utilities market. In this guide, we present an overview of their framework, supplemented with our own insights and best practices for a utility go-to-market (GTM) motion. 

Understand your ideal customer profile (ICP)

The ideal customer profile defines the qualities of the stakeholders being targeted by a sales pitch. Utilities are large enterprises with multiple stakeholders involved in purchasing decisions. Selling into utilities requires a strategic selling approach that includes mapping these stakeholders, their key drivers, and their influence on the decision making process. While every utility company is different, there are generally five groups of stakeholders that startups should focus on during the sales process, each with their own set of priorities:

  1. C-suite: Utility company executives are primarily concerned with achieving their key performance indicators (KPIs). These KPIs may pertain to customer satisfaction, net zero goals, safety, reliability, or operational efficiency. Startups should aim to present the C-suite with proof points of other utility companies that hit their KPIs using the startup’s solution.
  2. Business users: Business users will look to integrate a new product or service if they believe it will help them achieve success on a project or program. The business user’s need can help push an IT organization that otherwise may not be interested in integrating an emerging technology from an early-stage company. 
  3. IT: IT will evaluate new software based on a number of criteria. IT will likely first consider whether an existing software product can address the problem or whether the team can build the solution internally. IT will then consider ease of integration with the utility’s existing workflows and systems. Next, IT will conduct a security review process, with specific criteria for cloud-hosted software. It’s important to remember that large utilities typically have hundreds of applications, so they will be comparing new solutions to existing platforms. Also note that IT stakeholders are heavily influenced by their third-party systems integrators (SIs). SIs have their own set of priorities, which sometimes differ from those of the utility and third-party vendors. When selling to IT, the business user will be the startup’s best ally throughout the sale, as IT ultimately serves the interests of the business user. Pushback from IT typically occurs when the team feels it can build the solution internally, when the software poses security or skill-set concerns (i.e., IT doesn’t understand the software well enough to support it and doesn’t have an SI that does), or when there are budget or timing constraints.
  4. Procurement: The procurement department oversees all software acquisitions. Given the lengthy cycles associated with utility sales, these stakeholders are most concerned with vendor viability (i.e., how well-capitalized the vendor is and who its customers are).
  5. Regulatory affairs: Regulatory affairs is focused on meeting the utility’s regulatory requirements and managing regulatory stakeholders. The regulatory team also leads utility filings and proceedings related to funding, including the general rate case (the primary regulatory funding proceeding for any investor-owned utility) as well as program-specific filings such as those for energy efficiency and electric vehicle programs (more on rate cases further down). Engaging the regulatory team can help startups understand whether and how software and implementation costs can be covered by various funding sources. For example, startups can explore whether the cost of the software license can be capitalized (CapEx) or treated as an operational expense (OpEx). The regulatory team has an agenda and goals for each of its filings; if a startup’s solution supports that agenda, the startup can earn a strong advocate within the utility.

Because every utility company operates somewhat differently, a best practice is to tailor the ICP to each individual utility provider being sold into. 

“Be rigorous in mapping your stakeholders—the people who make decisions, influence decisions, or who can shut you down,” Goldstein said. “Understand who they are, what their title is, what their concerns are, and what their motivations are.”

While utility sales cycles can be lengthy, generating interest and excitement from internal customer groups can be accomplished using ordinary sales motions. If a startup presents a solution that solves a business problem the utility can’t address on its own, requires little training, offers good data transparency, and has a positive market reputation, it will be able to generate interest. 

Learn the culture of utilities

Utility companies must deliver safe and reliable service to customers. As such, these organizations move cautiously and are often slow to change as a means of managing risk. Utilities are also commonly siloed, with communication between departments restricted by longstanding cultural norms and mandatory security protocols. Further, utility companies tend not to be as tech-forward as enterprises in other industries, meaning they may be less familiar with SaaS solutions and how those solutions can support the business.

Given this culture, startups selling to utilities must wear three different hats:

  1. Evangelist: Startups must promote their product and find or develop internal champions who will promote it to colleagues (e.g., business users, regulatory affairs). Because utility companies are siloed, startups must take the initiative to move horizontally through the organization to ensure all potential stakeholders are aware of the solution and see value in it. Note that utilities prioritize reliability in virtually every purchase they make, and they often associate reliability with brands that have solid, long-standing reputations. For this reason, utility companies may avoid doing business with startups, so it can be advantageous to emphasize any high-profile partnerships (including well-known investors, affiliates, and systems integrators) to build credibility with stakeholders. Because regulated utilities do not directly compete with one another, they tend to communicate and share information with each other and often look to other utilities as references for a proposed solution.
  2. Educator: Startups must explain how to think about the solution and how stakeholders should evaluate it. Operators at utility companies have typically been in their roles for many years and may not know how best to gauge the value of a SaaS tool. If, for example, the product in question uses computer vision to assess asset health, explain to Procurement that the best way to evaluate their options is to administer tests graded on processing time and accuracy (a simple sketch of such a test follows this list). 
  3. Sales Engineer / Systems Architect: A startup should be able to propose an architecture for how its solution fits into the utility’s existing environment, from both a systems-replacement and an integration perspective. Because utilities are risk-averse, startups should expect roadblocks and come to the table with ideas for how to navigate them. 
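Below is a minimal, hypothetical sketch of the kind of graded test described in the “Educator” role above: scoring candidate asset-health models on accuracy and processing time against a small labeled sample supplied by the utility. The function names and data are placeholders, not any specific vendor’s evaluation process.

```python
# Hypothetical benchmark harness for the "graded test" idea described above:
# score each candidate model on accuracy and processing time using a small
# labeled sample of the utility's own inspection images.
import time

def grade_model(model, labeled_images):
    """model: any callable mapping an image to a predicted condition label.
    labeled_images: list of (image, true_label) pairs supplied by the utility."""
    correct = 0
    start = time.perf_counter()
    for image, true_label in labeled_images:
        if model(image) == true_label:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(labeled_images),
        "seconds_per_image": elapsed / len(labeled_images),
    }

# Usage (illustrative): compare candidate vendors on the same benchmark set.
# results = {name: grade_model(m, sample_images) for name, m in candidates.items()}
```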

It’s important for startups to learn the “idiosyncrasies” of the utility in order to engage in the most effective fashion.

“Each utility is different, and you’re not going to figure out how these organizations buy software unless you do your research and talk to a bunch of internal stakeholders,” Trudel said. “And then once you understand those dynamics, it’s up to you to execute a strategy that gets everybody on board and the deal moving forward.”

Understand where the money comes from

Utilities receive the majority of their funding through regulatory filings and proceedings. The largest of these, as previously mentioned, is the general rate case.

A rate case is the process by which a utility sets the rates it charges customers so that those rates cover the costs incurred to provide service. The process involves preparing and filing a rate case with the regulatory body that governs the utility, gathering evidence, and holding several rounds of public hearings (this video provides a good overview of the rate case process).

Having a rate case approved essentially enables the utility to pass on increases in costs (say, from a new SaaS tool) to customers. Supporting a utility provider with its rate case can help a startup secure the sale. Because the rate case process is different for every utility (utilities report to different federal and state regulators), startups should look into whether their target customers have rate-cased SaaS products in the past. If they have, seek to understand how the utility successfully rate-cased the software, as this can provide insight into how future SaaS expenses can be passed along to customers. Note that rate case eligibility typically applies to CapEx and not to ongoing expenses, such as recurring SaaS fees. 

“One thing that can be really helpful is looking at old rate cases that have included SaaS software, and then asking the regulatory affairs folks how they handled that process,” Trudel said. “You can then play the role of sales engineer and figure out what you can do to get your software included in the rate case based on the rules the utility company must abide by.”

There are 50 different state regulators, as well as numerous federal regulators, governing more than 5,000 utility providers in the U.S. Each of these regulators has a different level of regulatory oversight and compliance-related requirements. What’s more, within each utility there are numerous individuals with varying knowledge of regulatory processes and compliance procedures. 

This mosaic of compliance rules and stakeholder interpretations means there’s no single formula for winning rate case approval that applies to every customer. SaaS companies should engage with stakeholders to understand each client’s interpretation of its regulatory framework, share best practices and examples of other utilities that won rate case approval for SaaS products, and determine whether there’s a path to securing funding via rate increases rather than from the utility’s profit margins. 

Because the rate case process is complex and time-intensive, startups should also consider other sources of capital that may be used to compensate vendors. Again, this requires understanding the idiosyncrasies of the utility provider. Regulatory programs for energy efficiency, electric vehicles, and income-qualified customers provide funding specific to meeting those programs’ goals.

Understanding which funding source(s) are relevant and can be applied to software license and implementation costs is critical to getting a SaaS purchase funded, as is understanding how those costs are allowed to be treated (i.e., CapEx vs. OpEx).

Utility sales in summary

Selling to utilities is a complex enterprise sales cycle that can feel daunting to a startup. But startups that take the time and effort to understand the organization, stakeholder dynamics, and funding mechanisms described above can find success. The value of these contracts, both financial and reputational, often makes utility sales a worthwhile investment. 

Climate tech startups have incredible solutions from which utilities could benefit. We hope the sales strategies we’ve detailed in this guide can facilitate increased adoption of these solutions in the coming years.

Salesforce Ventures Impact Fund hosts frequent workshops with Salesforce experts designed to address our portfolio companies’ recurring challenges and chart a path to success. The Impact Fund is currently investing in enterprise software startups that drive measurable social and environmental impact. To learn more about the Salesforce Ventures Impact Fund, visit our website.

Disclaimer

The information provided in this article does not, and is not intended to, constitute legal or financial advice; instead, all information, content, and materials available are for general informational purposes only. Readers should contact their attorney or financial representative to obtain advice with respect to any particular legal or financial matter. Opinions of the referenced presenters and/or author are their own and do not necessarily reflect the official position of Salesforce.

Salesforce Ventures Founder John Somorjai on the ‘Ask More of AI’ Podcast

John Somorjai, Chief Corporate Development and Investments Officer at Salesforce Ventures, recently sat down with Clara Shih, CEO of Salesforce AI, on the “Ask More of AI” podcast for a wide-ranging conversation on the origins of Salesforce Ventures, Salesforce Ventures’ AI investment portfolio, how Salesforce Ventures applies its values in all investment decisions, and much more.

Here were a few of our favorite takeaways*.

*Quotes have been edited for clarity and concision.

On what inspired John to launch Salesforce Ventures… 

“We started talking about creating a venture firm towards the end of 2008. It was the middle of the financial crisis, and we had many AppExchange partners who were struggling to raise funding. We had created this incredible ecosystem of partners that would tightly integrate with our products and it was really important for our business and our customers that they be able to access these solutions.”

“So we worked with Marc (Benioff) to create a venture arm that would fund within the broader Salesforce ecosystem and allow Salesforce to become the anchor investor of a fundraising round and pull in other investors. Companies we invested in during that time include DocuSign and Box.”

“From our origins, we could have never imagined the success Salesforce Ventures has had. The world changed and enterprises realized they could run their companies more efficiently by moving their systems to the cloud. So Salesforce, and all the companies that grew up around us, really benefited from changes in the industry.”

On what differentiates Salesforce Ventures from other VC firms…

“Through that early experience, we realized we have a few advantages as investors. One is we understand buying signals and have a good sense for the types of companies customers would want to buy from. Additionally, we have in-house experts that understand enterprise software and how to create the best products for the cloud that are secure, reliable, and scalable. So we can leverage the expertise of our entire organization to make better investment decisions.”

On launching Salesforce Ventures’ Generative AI Fund…

“Generative AI is one of the most transformative technologies we’ve seen in a long time in our industry. When we started to see interesting business models take off with this technology, we decided we wanted to be at the forefront of deploying capital into the industry to make sure we’re getting into the best companies while building an ecosystem of partners around our internal AI efforts.”

On Salesforce Ventures’ investment in application-layer AI companies…

“Notable application layer investments include Runway, Typeface, and AutogenAI. What all of these companies have in common is how much productivity they add back to the company. McKinsey says AI is going to save $4 trillion every year for companies. That’s an enormous productivity uplift. I think these companies are really on the cusp of breaking through and becoming very, very successful large businesses because when you can drive that much efficiency for a business, who’s not going to want to buy?”

On how Salesforce Ventures leads with its values… 

“One of the greatest things about Salesforce is how much we value giving back. Our 1-1-1 model stipulates 1% of employee time is dedicated to volunteerism, 1% of our product is given away for free, and we put 1% of our equity into a 501(c)(3) foundation, which has now been able to give away over $700M in grants. We’ve brought our 1-1-1 model to our portfolio, and now have 190+ companies using it. They’re building on our success and helping their communities.”

“Another example is our focus on diversity and inclusion. We invest in many minority founders, and fund the Black Venture Institute, which trains Black operators to become check writers. So I think everything we do with our values internally, we try to bring that to the companies we work with.”


Watch John’s full conversation with Clara here:

In Conversation With Anthropic Co-Founder Tom Brown

Salesforce Ventures recently hosted a dinner and networking event for a group of portfolio companies and Fortune 500 executives in San Francisco. The evening was highlighted by a fireside chat between Tom Brown, co-founder of Salesforce Ventures’ portfolio company Anthropic, and Salesforce President and Chief Product Officer David Schmaier. 

The duo discussed the origins of Anthropic, the imperative of AI safety, techniques for generating better LLM outputs, generative AI use cases, improving AI accuracy, the prospect of artificial general intelligence (AGI), and how to ensure AI will be used as a force for good in the world. 

Their conversation featured a ton of great insights for founders, builders, and AI enthusiasts alike. Here were a few of our top takeaways*…

*Quotes have been edited for clarity and concision.

On Anthropic’s approach to creating ‘harmless’ AI…

“Many models are trained using reinforcement learning from human feedback (RLHF). The idea behind RLHF is you reward and punish the model for doing well or not doing well on the paths you care about. We had people upvoting or downvoting how well the model does a task. That’s how you can make the model become a harmless assistant.” 

“I think we noticed that as the models were getting smarter, they started to do most tasks well. We developed constitutional AI to turn a model into the entity that upvotes or downvotes another model. A person can write up a constitution of what it means to be helpful, harmless, and honest, and then a model will read the interactions between the human and the assistant and consider if the assistant is acting in accordance with the constitution. This is a way to take a simple document and turn it into a model personality.”
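For intuition, here is a minimal sketch of the judge-and-revise pattern Brown describes: one model call drafts a reply, a second call scores that draft against a written constitution, and a failing draft is revised. In Anthropic’s actual approach the constitution drives training-time feedback rather than an inference-time loop, and the `call_model` helper and constitution text below are placeholders of our own, not Anthropic’s code.

```python
# Illustrative sketch of a constitution-driven feedback loop (not Anthropic's
# actual training setup). `call_model` is a placeholder for any LLM API call.
CONSTITUTION = (
    "The assistant should be helpful, harmless, and honest. It should refuse "
    "requests that could cause harm and should avoid deception."
)

def call_model(prompt: str) -> str:
    """Placeholder for a real model call via whichever SDK you use."""
    raise NotImplementedError

def critique_and_revise(user_prompt: str) -> str:
    draft = call_model(user_prompt)
    # A second model call plays the "voter": it reads the exchange and judges
    # the draft against the written constitution instead of a human upvote.
    verdict = call_model(
        f"Constitution:\n{CONSTITUTION}\n\n"
        f"User: {user_prompt}\nAssistant: {draft}\n\n"
        "Does the assistant's reply follow the constitution? "
        "Answer YES or NO, then explain briefly."
    )
    if verdict.strip().upper().startswith("YES"):
        return draft
    # Otherwise, ask for a revision that addresses the critique.
    return call_model(
        f"Revise the reply below so it follows the constitution.\n"
        f"Constitution:\n{CONSTITUTION}\n\nCritique: {verdict}\n\nReply: {draft}"
    )
```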

On the impact of ‘stacking’ LLMs to generate higher-quality outputs…

“We have Claude 2.1, which is a large model, and then we have Claude Instant, which is a smaller model. Depending on the task, sometimes you’ll want a smaller model because it’s faster and cheaper. For example, Midjourney is one of our customers. Whenever you put any prompt into Midjourney to generate text, it’ll pass it through Claude Instant and Claude Instant checks if it’s violating Midjourney’s terms of service. And if it thinks it might be, it’ll give a little message to the user saying ‘this might be against our terms. Do you want to appeal it?’ And if you hit yes, it goes to Claude Instant’s boss, which is Claude 2.1, who thinks about it a little bit longer, and maybe says, ‘Sorry, Claude Instant was totally wrong. You’re fine actually.’”
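The Midjourney example describes a two-tier routing pattern: a small, fast model screens every prompt, and a larger model reviews only the cases a user appeals. A rough sketch of that escalation logic is below; the helper functions are hypothetical stand-ins for the actual model calls, not Midjourney’s or Anthropic’s real code.

```python
# Illustrative two-tier screening flow, as described in the quote above:
# a fast, cheap model checks every prompt; a larger model reviews appeals.
# Both helpers are hypothetical wrappers around whatever LLM API is in use.

def small_model_flags(prompt: str) -> bool:
    """Return True if the fast model thinks the prompt may violate the terms of service."""
    raise NotImplementedError

def large_model_upholds(prompt: str) -> bool:
    """Return True if the slower, more capable model agrees the prompt violates the terms."""
    raise NotImplementedError

def moderate(prompt: str, user_appeals: bool) -> str:
    if not small_model_flags(prompt):
        return "allow"   # most traffic never reaches the large model
    if not user_appeals:
        return "block"   # the user accepted the warning
    # Only appealed cases escalate, so the expensive model sees a small fraction of traffic.
    return "block" if large_model_upholds(prompt) else "allow"
```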

On building domain-specific models…

“There’s two different ways I think about building a domain-specific model. One is that you take a large model and fine tune it to make it better at a specific task. The other is you build a narrow model that’s good at one specific thing. Claude Instant is faster and cheaper than Claude 2.1, but less performant. You can fine tune either one, but Claude 2.1 will still do better. So I think that’s my normal mental model for what’s going on with fine tuning. You could imagine someone building a very narrow model that’s very good at one specific thing, but it feels like fine tuning a larger model is the thing people have successfully done rather than making a super small model good at some narrow field.”

On new AI use cases….

“I think code is a place where the models do great. Retrieval augmented generation (RAG) for quality assurance is a broad area that the models can do and be really useful at. I feel like customer service is an area that fits that very well.”

On cutting down on AI hallucinations…

“We have a bunch of internal metrics for hallucination rates that we measure and a team that’s fully focused on bringing that number down. Claude 2.1 came out last month, about four months after Claude 2. On our internal dashboards it reduced the error rate by 2x. It still has some hallucinations, but there are half as many as there used to be. So if, for example, you’re seeing 98% accuracy with your model now and you need to get to 99%, maybe it’ll only take four months. If you need to get 99.9%, it might take a year or something like that.”

“Also, if it’s a harder task, the model is more likely to hallucinate or make a mistake. It’s the same as if a person is trying to juggle a bunch of tasks at once. You’ll make more mistakes than if you’re just focused on one thing.”
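Brown’s timeline follows from a rough rule of thumb: if each release cycle roughly halves the error rate (as he says Claude 2.1 did relative to Claude 2, about four months apart), you can estimate how long a given accuracy target takes. The four-month cadence and the steady-halving assumption below are ours, for illustration only.

```python
# Back-of-envelope version of the timeline above, assuming the error rate
# halves once per release cycle. The ~4-month cadence is our assumption,
# based on the gap between Claude 2 and Claude 2.1 mentioned in the quote.
import math

def months_to_reach(current_accuracy: float, target_accuracy: float,
                    months_per_halving: float = 4.0) -> float:
    current_error = 1.0 - current_accuracy
    target_error = 1.0 - target_accuracy
    halvings = math.log2(current_error / target_error)
    return halvings * months_per_halving

print(months_to_reach(0.98, 0.99))    # 1 halving    -> ~4 months
print(months_to_reach(0.98, 0.999))   # ~4.3 halvings -> ~17 months
```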

On the prospect of AGI…

“I think we already have weak AGI. You can talk with ChatGPT or Claude and it’s somewhat intelligent, and with each year it’s improving.”

“Right now you may prompt the model to build a successful startup. It’ll try to do it, but it’ll get stuck. But as we add more compute, the model gets smarter. The model’s IQ goes up. So I think it would be surprising if the models suddenly stopped getting better. And then it’s a question of how much better can they get? One thing we don’t know is how far the model has to go for it to be an AI entrepreneur that could do a great job. But every year it seems like we’re getting more IQ points so at some point I think we’ll get models that are better at engineering than I am.”

On whether AI will be used for good or evil…

“I feel like I’m cautiously optimistic in general. People aren’t angels and people aren’t devils. People are people. So I think people will use AI for all sorts of different stuff and it’s up to us as a society to make sure that the benefits outweigh the costs. I’ve been really heartened by the recent regulatory updates. I’ve always worried that the government would get involved and do things that don’t quite make sense. But the recent AI executive order seems quite sensible. So I think I’m much more optimistic now than I was a year and a half ago.”

To learn more about Salesforce Ventures’ investment in Anthropic, click here. To read more about our Generative AI fund, click here.

FY23 Year-in-Review
