If Elon Musk’s posts are to be believed, X is the latest social network to roll out the ability to label edited images as “manipulated media.” However, the company has not said how it makes that determination or whether images edited with traditional tools such as Adobe’s Photoshop will be included.
So far, the only details about the new feature have come from Musk’s cryptic “Warning about edited visuals” post on X, which reshared an announcement from the anonymous account DogeDesigner. That account is often used as a proxy for introducing new X features, with Musk reposting it to share news.
Details of the new system remain sparse. DogeDesigner’s post claimed the feature could “make it more difficult for traditional media groups to spread misleading clips and images.” The post also claimed the feature is new to X.
Before it was acquired and renamed X, the company then known as Twitter labeled tweets containing manipulated, deceptively altered, or fabricated media rather than deleting them. That policy was not limited to AI; it covered things like “selective editing, cropping, slowing down, overdubbing, or manipulating subtitles,” as then head of site integrity Yoel Roth explained in 2020.
It’s unclear whether X is adopting the same rules or making significant changes to tackle AI. X’s help documentation says it has a policy against sharing inauthentic media, but the rule is rarely enforced, as the recent deepfake fiasco in which users shared nonconsensual nude images showed. Even the White House has begun sharing manipulated images.
Labeling something “manipulated media” versus “AI imagery” also carries different connotations.
Additionally, users should know whether there is any dispute-resolution process beyond X’s crowdsourced Community Notes.
As Meta discovered when it introduced AI image labeling in 2024, detection systems can easily go wrong. In that case, Meta was found to be incorrectly labeling real photos as “Created with AI” even though they were not made with generative AI.
That happened as AI capabilities were increasingly being integrated into the creative tools photographers and graphic artists use. (Apple’s new Creator Studio suite, released today, is one recent example.)
As it turned out, those tools were confusing Meta’s detection systems. In one case, flattening an image with Adobe’s cropping tool before saving it as a JPEG triggered Meta’s AI detector. In another, Adobe’s Generative Fill, used to remove small distractions such as shirt wrinkles and unwanted reflections, was causing photos that had merely been retouched with AI tools to be labeled “Created with AI.”
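To see why a light retouch can trip a detector, here is a minimal, hypothetical sketch of the kind of coarse metadata check that produces this failure mode. It scans a file for the IPTC “digital source type” markers that editing tools can embed when generative features are used, so a photo that was only touched up with Generative Fill gets the same label as a fully synthetic image. Neither Meta nor X has published its actual detection logic; the file name and logic below are illustrative assumptions.

```python
# Illustrative only: a blunt "AI detector" that keys off the mere presence of
# IPTC digital-source-type markers in a file's bytes. Real systems are more
# sophisticated, but this shows why a minor Generative Fill edit can end up
# labeled the same way as a fully AI-generated image.
from pathlib import Path

def naive_ai_label(image_path: str) -> str:
    """Return an AI label if the file contains IPTC digital-source-type markers."""
    data = Path(image_path).read_bytes()
    # Marker written when generative AI contributed to only part of an
    # otherwise real photo (e.g., removing a wrinkle or a reflection).
    if b"compositeWithTrainedAlgorithmicMedia" in data:
        return "Created with AI"
    # Marker for media produced entirely by a generative model.
    if b"trainedAlgorithmicMedia" in data:
        return "Created with AI"
    return "No AI markers found"

if __name__ == "__main__":
    # A real photograph that was merely retouched carries the "composite"
    # marker and gets the same label as a wholly synthetic image -- the
    # false positive Meta ran into.
    print(naive_ai_label("retouched_photo.jpg"))  # placeholder file name
```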
Eventually, Meta changed the label to “AI info” and no longer overtly labels images as “Created with AI.”
There is now a standards body dedicated to verifying the authenticity and provenance of digital content, the Coalition for Content Provenance and Authenticity (C2PA), along with related initiatives such as the Content Authenticity Initiative (CAI) and Project Origin, all focused on attaching tamper-evident provenance metadata to media content.
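For a sense of what that provenance metadata looks like in practice, the sketch below shells out to c2patool, the CAI’s open-source command-line utility, and prints whatever C2PA manifest an image carries. It assumes c2patool is installed and on your PATH, and the file name is a placeholder.

```python
# A minimal sketch, assuming the CAI's open-source `c2patool` CLI is available.
# By default the tool prints an image's C2PA manifest report -- who signed it,
# which tool produced it, and what edits were recorded.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the parsed C2PA manifest report for `path`, or None if the file
    carries no readable provenance data."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_c2pa_manifest("photo.jpg")  # placeholder file name
    if manifest is None:
        print("No C2PA provenance data found.")
    else:
        print(json.dumps(manifest, indent=2))
```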
Presumably, X’s implementation will follow some known process for identifying AI content, but X owner Elon Musk has not said what it is. He also hasn’t said whether he means AI imagery specifically, or any media other than photos uploaded to X straight from a smartphone’s camera. It’s not even clear whether the feature is entirely new, as DogeDesigner claims.
X is not alone in grappling with manipulated media. In addition to Meta, TikTok labels AI content, and streaming services like Deezer and Spotify are expanding their efforts to identify and label AI-generated music. Google Photos uses C2PA to show how photos on the platform were created. C2PA’s steering committee includes Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and many other companies as members.
X is not currently listed as a member, but we have reached out to C2PA to ask whether that has changed recently. X typically does not respond to requests for comment, but we have asked for one anyway.
