Anthropic
Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Founded in January 2021 by former OpenAI executives Dario Amodei and Daniela Amodei along with several colleagues, Anthropic is best known for developing the Claude family of large language models. The company is organized as a public-benefit corporation and describes its mission as conducting research and building AI systems that are "safe, beneficial, and understandable."[1]
Anthropic is considered one of the leading AI alignment research organizations and, alongside OpenAI and Google DeepMind, is regarded as one of the three frontier AI laboratories driving the development of general-purpose AI systems in the mid-2020s.
History
Founding (2021)
Anthropic was founded in January 2021 by Dario Amodei, who had served as Vice President of Research at OpenAI, and his sister Daniela Amodei, previously OpenAI's Vice President of Safety and Policy. The founding team included several other former OpenAI researchers, among them Tom Brown (lead author of the GPT-3 paper), Sam McCandlish, Jack Clark, Jared Kaplan, and Chris Olah. The departures were widely reported to reflect disagreements over OpenAI's direction following its 2019 partnership with Microsoft and concerns about the prioritization of AI safety research.[2]
The company closed a $124 million Series A funding round in May 2021, led by Jaan Tallinn with participation from James McClave, Dustin Moskovitz, Eric Schmidt, and the Center for Emerging Risk Research.
Growth and Claude (2022–2023)
Throughout 2022, Anthropic operated primarily as a research organization, publishing influential papers on topics including reinforcement learning from human feedback (RLHF), Constitutional AI, mechanistic interpretability, and scaling laws. In December 2022, the company introduced Claude, its first large language model assistant, which entered closed beta in early 2023 and was publicly released in March 2023 as a direct competitor to OpenAI's ChatGPT.
Anthropic received a $300 million investment from Google in early 2023 and, in September 2023, announced that Amazon would invest up to $4 billion in the company, making Amazon Web Services Anthropic's primary cloud provider. Google subsequently committed a further $2 billion in late 2023.[3]
Frontier models and expansion (2024–2025)
In March 2024, Anthropic released the Claude 3 model family, consisting of the Haiku, Sonnet, and Opus variants, positioned at different points on the capability–cost curve. Claude 3 Opus briefly topped several public benchmarks, including the MMLU and GPQA evaluations, marking the first time a non-OpenAI model had clearly led frontier benchmarks since the release of GPT-4.
The Claude 3.5 Sonnet model, released in June 2024, introduced the "Artifacts" feature and was followed in October 2024 by a new version with computer-use capabilities, allowing the model to operate a virtual desktop by interpreting screenshots and generating mouse and keyboard actions. The Claude 4 family—including Opus 4 and Sonnet 4—launched in 2025 and further expanded agentic capabilities and long-context reasoning, with context windows of up to one million tokens on select versions.
By 2025, Anthropic was reported to be generating several billion dollars in annualized revenue, driven primarily by API access to Claude and its Claude Code developer tool.
Research
Anthropic's research output spans both capabilities and safety, with a stated emphasis on the latter. Notable research contributions include:
- Constitutional AI — a training technique in which a model critiques and revises its own outputs according to a written "constitution" of principles, reducing reliance on human preference labels.[4]
- Mechanistic interpretability — Anthropic's interpretability team, led by Chris Olah, has published widely on reverse-engineering the internal computations of transformer models, including work on induction heads, superposition, and the extraction of monosemantic features using sparse autoencoders.
- Scaling laws and alignment — extending earlier work by Kaplan et al. on neural scaling laws, and investigating how alignment techniques generalize as models scale.
- Responsible Scaling Policy (RSP) — a self-imposed framework introduced in 2023 committing Anthropic to pause or restrict the deployment of models that meet certain capability thresholds until corresponding safety mitigations are in place.
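The Constitutional AI procedure described above can be illustrated with a minimal sketch. The loop below is a simplified, hypothetical rendering of the critique-and-revision phase only (the published technique also includes a reinforcement-learning stage); the `model` function is a placeholder stub standing in for a real language-model call, and the principles shown are illustrative, not Anthropic's actual constitution.

```python
# Sketch of a Constitutional AI critique-and-revision loop.
# Assumption: `model` is any text-in/text-out language-model call;
# the stub below only simulates one so the sketch is self-contained.

CONSTITUTION = [
    "Avoid responses that could help someone cause harm.",
    "Prefer answers that are honest about uncertainty.",
]

def model(prompt: str) -> str:
    """Placeholder for a language-model call (illustrative stub)."""
    # A real implementation would query an LLM API here.
    if "Revise" in prompt:
        return "Revised answer (acknowledging uncertainty)."
    if "Critique" in prompt:
        return "The draft should note its uncertainty."
    return "Initial draft answer."

def constitutional_revision(question: str, rounds: int = 1) -> str:
    """Draft an answer, then repeatedly critique and revise it
    against each principle in the constitution."""
    draft = model(question)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # Ask the model to critique its own draft against one principle.
            critique = model(
                f"Critique this answer against the principle "
                f"'{principle}':\n{draft}"
            )
            # Ask the model to revise the draft to address the critique.
            draft = model(
                f"Revise the answer to address this critique:\n"
                f"Critique: {critique}\nAnswer: {draft}"
            )
    return draft
```

The key design point the sketch captures is that the critique and revision signals come from the model itself, steered by the written principles, rather than from per-example human preference labels.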
Products
Claude
Claude is Anthropic's flagship product line of conversational large language models. It is offered through a consumer chat interface at claude.ai, a developer API, and integrations with platforms including Amazon Bedrock and Google Vertex AI.
Claude Code
Released in 2025, Claude Code is a command-line agent for software engineering that runs in a developer's terminal and can read, edit, and execute code across a project.
Corporate structure
Anthropic is incorporated as a Delaware public-benefit corporation. Its governance includes a separate entity, the Long-Term Benefit Trust, a body of independent trustees that holds a special class of stock and has the power to elect a portion of Anthropic's board of directors. The arrangement is intended to insulate the company's long-term safety mission from ordinary shareholder pressure.
Reception and criticism
Anthropic has been broadly credited with elevating the profile of AI safety research within commercial AI development and with producing some of the most carefully documented model releases in the industry. Critics have argued that, despite its safety-focused branding, Anthropic's rapid deployment of frontier models contributes to the same race dynamics its founders originally warned about. Dario Amodei has responded that a safety-focused laboratory must remain at the frontier in order to credibly influence norms and policy.