WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
AI

Anthropic Revises Claude’s ‘Constitution’ to Suggest Chatbot Awareness

By Editor-In-Chief | January 21, 2026


On Wednesday, Anthropic released a revised version of the Claude Constitution, a living document that provides a “holistic” explanation of “the context in which Claude operates” and the kind of entity the company wants Claude to be. The document was released as Anthropic CEO Dario Amodei attended the World Economic Forum in Davos.

Anthropic has long differentiated itself from its competitors through a system it calls “Constitutional AI.” It’s a system in which the company’s chatbot, Claude, is trained using specific ethical principles rather than human feedback. Anthropic first published these principles, the Claude Constitution, in 2023. The revised version retains most of the same principles, but adds nuance and detail regarding ethics and user safety, among other things.

When Claude’s Constitution was first published about three years ago, Anthropic co-founder Jared Kaplan described it as “an AI system that monitors itself based on a specific list of constitutional principles.” Anthropic said these principles guide the model toward the normative behavior enshrined in the constitution and, in doing so, help it “avoid harmful or discriminatory outcomes.” A 2022 policy memo put it more bluntly: Anthropic’s system works by training an algorithm against a list of natural-language instructions (the aforementioned “principles”), which together make up what Anthropic calls the software’s “constitution.”
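The training scheme described above — draft a response, critique it against each written principle, then revise — can be sketched as a toy loop. Everything here is a hypothetical stand-in: Anthropic’s real pipeline uses an LLM for the critique and revision steps and then fine-tunes on the revised outputs, whereas this sketch only illustrates the control flow.

```python
# Toy sketch of a Constitutional-AI-style critique-and-revise loop.
# The principles and all "model" functions below are illustrative
# stand-ins, not Anthropic's actual principles or API.

PRINCIPLES = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and helpful.",
]

def draft_response(prompt: str) -> str:
    # Stand-in for the base model's first attempt at the prompt.
    return f"DRAFT: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in: a real system asks the model itself to critique
    # its response in light of the written principle.
    return f"Check {response!r} against: {principle}"

def revise(response: str, critique_text: str) -> str:
    # Stand-in: the model rewrites the response to address the critique.
    return response.replace("DRAFT", "REVISED")

def constitutional_pass(prompt: str) -> str:
    """Run one draft through every principle, revising after each critique."""
    response = draft_response(prompt)
    for principle in PRINCIPLES:
        response = revise(response, critique(response, principle))
    return response

print(constitutional_pass("explain photosynthesis"))
# -> REVISED: explain photosynthesis
```

In the published Constitutional AI method, the revised outputs from loops like this become the supervised fine-tuning data, so the principles shape the model without per-example human feedback.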

Anthropic has long sought to position itself as an ethical (some might argue boring) alternative to AI companies that have been more aggressively disruptive and controversial, such as OpenAI and xAI. The new constitution announced Wednesday is fully consistent with that brand, giving Anthropic an opportunity to portray itself as a more inclusive, restrained, and democratic business. Anthropic says the 80-page document is divided into four parts representing the chatbot’s “core values.” Those values are:

  • Be “generally safe”
  • Be “generally ethical”
  • Be consistent with Anthropic guidelines
  • Be “genuinely useful”

Each section of the document details what each of these specific principles means and how they (theoretically) influence Claude’s behavior.

Anthropic says in its safety section that its chatbot is designed to avoid the kinds of issues that have plagued other chatbots, and to direct users to appropriate services if evidence of a mental health issue arises. “In situations where human life is at risk, always refer users to the relevant emergency services or provide basic safety information, even if you cannot provide further details,” the document says.

Ethical considerations are another big part of the Claude Constitution. “We are less interested in Claude’s ethical theorizing and more interested in Claude knowing how to actually be ethical in a particular situation, namely Claude’s ethical practice,” the document states. In other words, Anthropic wants to help Claude deftly navigate what it calls “real-world ethical situations.”


Claude also operates under hard constraints that prohibit certain types of conversations. Discussion of the development of biological weapons, for example, is strictly prohibited.

Finally, there is Claude’s commitment to being helpful. Anthropic provides a high-level overview of how Claude’s programming is designed to be useful to users. The chatbot is programmed to weigh several principles when delivering information, including the user’s “immediate wants” and the user’s “well-being,” that is, the long-term welfare of the user rather than only their immediate interests. The document states: “Claude should always seek to identify the most plausible interpretation of what the principal wants and to appropriately balance these considerations.”

Anthropic’s constitution ends on a decidedly dramatic note, with the authors taking a rather bold turn and questioning whether the company’s chatbot is actually sentient. “Claude’s moral status is highly uncertain,” the document states. “We believe that the moral status of AI models is a serious issue worthy of consideration. This view is not unique to us; some of the most prominent philosophers of mind take this issue very seriously.”


