📝

What you need to believe: Anthropic at $170B

At the time of writing, Anthropic is raising $5B at a valuation near $170B (TC), shortly after OpenAI raised $8.3B at a $300B valuation (TC). Given the blistering pace of growth in both consumer and enterprise adoption of these products, and the revenue that comes with it, I wanted to see “what you need to believe” to invest at these valuations. I’ll break this into two considerations: (1) top-line revenue expectations and (2) strategic considerations. Together these should cover a basic but holistic (and naive) perspective on future valuation. This is 99% a for-fun exercise at midnight before I go to sleep, so don’t take it too seriously.

Top-line revenue expectations

image

OpenAI has grown from $3.5M in 2020 to $12.7B projected by EO2025 (Reuters)

Anthropic similarly from $10M in 2022 to $9.0B projected by EO2025 (Reuters)
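As a back-of-envelope sanity check, here are the forward revenue multiples implied by the headline valuations and the projected EOY2025 revenue figures above (the multiples are my own arithmetic, not figures from either company):

```python
# Implied forward revenue multiples from the figures cited above.
# Valuations and projected EOY2025 revenues in billions of USD.
valuation = {"OpenAI": 300.0, "Anthropic": 170.0}
projected_revenue_2025 = {"OpenAI": 12.7, "Anthropic": 9.0}

for company in valuation:
    multiple = valuation[company] / projected_revenue_2025[company]
    print(f"{company}: ~{multiple:.1f}x forward revenue")
# OpenAI: ~23.6x forward revenue
# Anthropic: ~18.9x forward revenue
```

So Anthropic’s round actually prices in a somewhat lower forward multiple than OpenAI’s, despite the smaller absolute revenue base.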

Revenue acceleration is clear across the burgeoning industry of foundational LLMs and their specialized use cases. The data is evolving fast, and reports become outdated in a matter of months. As of July 31, 2025, Menlo Ventures’ Mid-Year LLM Market Update estimated GenAI spend to be in the $14B range, with model API spending having already doubled in six months, from ~$3.5B to ~$8.4B.

LLMs are a unique combination of consumer and enterprise use cases. OpenAI seems to be taking the direction of a “do-it-all” foundational model, primarily focused on consumer search, while Anthropic is taking an enterprise-first, secure-infra-first approach, as evidenced by its focus on code generation and its growing share of enterprise API calls.

image

For this reason, my mind jumps to juxtaposing Anthropic with the growth curve of cloud computing, a similarly developer-forward, pay-as-you-go model that improved developer productivity and capability. AWS first launched in 2006 as a storage solution and then added other features like compute. Comparatively, LLM adoption is clearly outpacing cloud adoption of that era.

💡

It took AWS 9 years to reach $4.6B ARR in 2014 (source), while it took OpenAI <4 years from first revenue and Anthropic <3 years.
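To put that pace in rough numbers, here is the implied compound annual growth rate from the revenue figures cited above (my own arithmetic, treating the projected EOY2025 figures as actuals):

```python
# Rough CAGR implied by the revenue trajectories cited above.
# Format: (starting revenue $M, ending revenue $M, years elapsed)
trajectories = {
    "OpenAI (2020-2025)": (3.5, 12_700, 5),
    "Anthropic (2022-2025)": (10, 9_000, 3),
}

for name, (start, end, years) in trajectories.items():
    cagr = (end / start) ** (1 / years) - 1
    print(f"{name}: ~{cagr:.0%} CAGR")
```

Both work out to several hundred percent per year, a pace cloud computing never approached even in its early hypergrowth phase.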

Today, in 2025, cloud computing is a >$300B revenue market, with ~66% of share held by the top 3 players (AWS: ~$110B, Azure: ~$80B, GCP: ~$54B). What you need to believe for further growth and market-share dominance by leading frontier-model developers like Anthropic:

  1. The market has to keep growing. Adoption needs to continue until LLMs become one of the most compulsory purchases across multi-modal and multi-surface use cases, with new technology built on language-model generation. That would make it a top purchasing priority for CTOs, similar to cloud computing, driving top-line demand for the market as a whole.
  2. Customer use cases have to get much more clearly economical. The value a product delivers has to grow commensurately with the product’s ability to monetize that value: LLM use needs to lead to more acquisition, more retention, or better pricing for a business. Otherwise, businesses are just signing up for margin compression, which I find hard to believe. This means models like Anthropic’s Claude have to remain dominantly valuable (and thus economically productive for their users) to maintain share in a currently fragmented market.
  3. Model-owner pricing needs to stabilize. Today, usage is highly variable, which causes problems for a prosumer subscription business model, as evidenced by the new rate limits Anthropic imposed in July 2025. It’s not clear yet whether the winning pricing model is cost-per-token.
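On the pricing point, a tiny sketch of why highly variable usage strains a flat subscription. The token volumes and per-token prices below are hypothetical placeholders for illustration, not Anthropic’s actual rates:

```python
# Hypothetical illustration: flat subscription vs. metered API-equivalent cost.
# All numbers are assumed for illustration, not actual Anthropic pricing.
SUBSCRIPTION_PRICE = 200.0   # assumed flat monthly plan, $
PRICE_IN_PER_MTOK = 3.0      # assumed $ per million input tokens
PRICE_OUT_PER_MTOK = 15.0    # assumed $ per million output tokens

def api_equivalent_cost(input_mtok: float, output_mtok: float) -> float:
    """Metered cost in $ for a month of usage (token counts in millions)."""
    return input_mtok * PRICE_IN_PER_MTOK + output_mtok * PRICE_OUT_PER_MTOK

light_user = api_equivalent_cost(input_mtok=10, output_mtok=2)   # $60: profitable
heavy_user = api_equivalent_cost(input_mtok=50, output_mtok=10)  # $300: loss-making
print(light_user < SUBSCRIPTION_PRICE < heavy_user)  # True
```

A heavy user on a flat plan can cost more to serve than they pay, which is exactly the dynamic behind usage caps like the July 2025 rate limits.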

I think these are pretty easy to believe: the market will grow, users will find more valuable use cases, and pricing will reach an equilibrium between those who find it worth the cost for what they’re working on and those who find it too expensive. Top-line growth is inevitable at this point; it’s just not clear exactly where the ceiling is. But, maybe like cloud computing, we don’t have to worry about getting there for quite a few years.

Strategic considerations

Market fragmentation leading to an alternative is a real possibility, maybe an inevitability. In leading research, we observe low-latency, high-throughput small language models starting to proliferate in more practical use cases. Distillation, quantization, and other pruning techniques seem to produce 80-90% of model performance at almost 10% of the model size or complexity. It’s a certainty that new competitors to the Claude model suite will try to penetrate Anthropic’s market share.

The capital cost of training at-the-frontier performant models is prohibitive. New entrants have to overcome (1) the high capital expense of training a performant model, or the time to fine-tune one to parity (the latest Claude training run is estimated at ~$500M), and (2) a first-mover advantage that compounds as more and more companies use Anthropic for code generation in their own codebases, compared to an off-the-shelf model without fine-tuning. The flip side is that Anthropic has to keep paying those training costs with each new model generation. Still, new entrants will definitely be fighting uphill: a model fine-tuned on a company’s own codebase, as Claude is, intuitively feels extremely sticky.

Offering additional value to remain dominant. It’s possible that eking out that last 10-20% of model performance really is that important to companies, since sloppy code could actually slow an organization down more than it speeds it up. However, I will run with the assumption that model performance will be a small moat. Smaller models may produce lower-quality or more security-vulnerable code, so enterprise security will grow as a value proposition against rising competition. Anthropic has realized this and is already positioning itself as a leader in the space via the Model Context Protocol (MCP) and by being infra/compliance forward.