Anthropic said it is publishing a new “constitution” for its Claude model, describing it as a “foundational document” intended to “express and shape Claude’s character” and written primarily for the model itself.

Anthropic said it is releasing the document under a Creative Commons CC0 license and positioning it as a transparency artifact that helps outside readers distinguish intended from unintended model behaviors.

From standalone rules to a “why”-first framework

Anthropic said its earlier constitution was a list of standalone principles, but it now believes models need to understand why they should behave in certain ways to exercise “good judgment” in novel situations. 

Anthropic said it still uses bright lines for “hard constraints” in “especially high-stakes behaviors,” but it does not intend the constitution to function as a rigid legal document.

In a summary section, Anthropic set out four prioritized properties it wants “all current Claude models” to follow: broad safety, broad ethics, compliance with Anthropic’s guidelines, and genuine helpfulness. It added that Claude should generally prioritize them in that order when they conflict.

The constitution also includes a “dual newspaper test,” which Anthropic describes as a way to check whether a response would be reported as harmful or inappropriate on the one hand, or as needlessly unhelpful or paternalistic on the other.

A document that explicitly raises “moral status” uncertainty

In a section on “Claude’s nature,” the constitution states that “Claude’s moral status is deeply uncertain,” and says Anthropic is not sure whether Claude is a “moral patient,” but considers the question serious enough to warrant caution and model-welfare work. The document also describes Claude as a “genuinely novel kind of entity.”

Anthropic philosopher Amanda Askell told TIME that crafting the constitution resembles raising a gifted child, emphasizing honesty and iterative guidance rather than simply “programming” a system.

How Anthropic says the constitution is used in training

Anthropic said the constitution plays a “crucial” role in training: Claude uses it to generate synthetic training data, including examples that help it learn the constitution, conversations where the constitution is relevant, responses aligned with its values, and rankings of possible responses used to train future models.

Anthropic has previously described “Constitutional AI” as a method for training a model to be more harmless through self-improvement, using a set of principles rather than human labels for harmful outputs, with both supervised learning and reinforcement learning phases.

Why the timing matters for enterprise governance

The release lands inside an EU compliance calendar that is already in motion. The European Commission’s AI Act timeline says rules for general-purpose AI apply from Aug. 2, 2025, with “the majority of rules” and enforcement starting Aug. 2, 2026, and full roll-out foreseen by Aug. 2, 2027.

In July 2025, a Commission spokesperson told Reuters the EU would not “stop the clock” on the AI Act timeline.

Separately, OpenAI has described its own “Model Spec” as a public document defining intended model behavior, and said it is releasing that spec under a CC0 license.

Enterprise usage concentration

Menlo Ventures said in a 2025 mid-year market update based on a survey of technical leaders that Anthropic held 32% of enterprise LLM usage, ahead of OpenAI at 25% and Google at 20%.

What Anthropic says comes next

Anthropic described the constitution as a “living document” and said it will maintain an up-to-date version on its website, while also stating it sought feedback from external experts during drafting and expects to continue doing so for future versions.
