Digital Forensics

C2PA Content Credentials: What They Are and How to Remove Them

C2PA Content Credentials are cryptographically signed manifests now baked into photos by the Google Pixel 10, Adobe, OpenAI, and others. Here's what they contain, who embeds them, why social platforms strip them anyway, and how you can remove them.

MetaClean Team
May 15, 2026
13 min read

Short Answer

C2PA Content Credentials are cryptographically signed metadata manifests embedded inside image, video, and PDF files. They can record the device that captured an image, the software that edited it, the timestamp, the GPS location, and whether AI was involved in generating it. The Google Pixel 10 signs every photo at the hardware level by default. Adobe Firefly, OpenAI DALL-E 3, and Sora embed them automatically. The EU AI Act's Article 50 effectively mandates them for AI-generated content from August 2026. And you can remove them — though doing so from content you don't own raises legal questions worth understanding.

What Is C2PA — and Why Your Phone Now Signs Every Photo

Most people learned about hidden photo metadata when someone got caught sharing a geotagged selfie that revealed their home address. That metadata — EXIF data — has been baked into digital photos for decades: GPS coordinates, camera model, capture timestamp. Standard metadata. Manageable once you know it exists.

C2PA is a different animal. The Coalition for Content Provenance and Authenticity is an open technical standard co-developed by Adobe, Microsoft, Intel, the BBC, and Arm — and it doesn't just record where a photo was taken. It creates a cryptographically signed chain of custody: who created the file, with what tool, what edits were applied, whether AI generated any part of it, and whether anyone tampered with it afterward. A broken signature means the manifest was altered. That's the point.

The standard moved from industry experiment to daily reality faster than almost anyone predicted. When Google shipped the Pixel 10 in late 2025, it became the first consumer smartphone to sign every single photo by default using hardware keys stored inside the Titan M2 security chip — keys that never leave the device. Not just AI-edited images. Every photo. Taken by the native camera app. Signed. Stamped. Verifiable.

⚠️

This Is Already Happening

As of 2026, C2PA content credentials are embedded by default in photos taken on Google Pixel 10 devices, all images generated by Adobe Firefly, all images and videos from OpenAI DALL-E 3 and Sora, images from Bing Image Creator, and increasingly in output from other major AI generators. If you've used any of these tools recently, your files already carry a manifest.

What's Actually Inside a C2PA Manifest

A C2PA manifest — formally called a Content Credential — is a structured data payload embedded directly in the file using a format called JUMBF (JPEG Universal Metadata Box Format). Think of it as a signed, tamper-evident container attached to the image itself. Here's what it can contain:

Hard binding. A cryptographic hash of the asset's pixel data, so any modification to the image invalidates the credential. This is required in every valid C2PA manifest.

Actions assertion. A log of what happened to the file: "created," "edited," "transcoded," "AI-generated." Each action carries a timestamp and identifies the software that performed it. Also required in every manifest.

Device metadata. Camera make and model, firmware version, lens information, serial number — the same fields you'd find in traditional EXIF, but now cryptographically bound and verifiable rather than freely editable.

GPS coordinates. Location data, if captured at creation time and included in the manifest. The C2PA spec allows GPS to be asserted alongside all other capture metadata.

AI generation assertions. Whether any AI tool was used to generate or significantly alter the content, and which model or system was involved. This is the field that EU AI Act compliance depends on.

Identity assertion (optional). The signing certificate — an X.509 digital certificate that identifies the organization or individual who signed the manifest. This is where C2PA's privacy implications get serious: the World Privacy Forum's technical review found that "the claim generator's signing identity is always present as an X.509 certificate, and you can't remove it without invalidating the manifest." Once your identity is embedded and the file circulates, it can't be retroactively unlinked from copies already in distribution.

None of this is encrypted. C2PA metadata is publicly readable by any conformant reader — c2patool, Adobe's Content Credentials viewer, or any application that implements the standard. You don't need any special access to read it.
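To make the container concrete: in JPEG files, the JUMBF manifest travels in APP11 (0xFFEB) marker segments, separate from the APP1 segment that carries EXIF. The Python sketch below — a simplified scanner, not a conformant C2PA reader — walks a JPEG's marker segments and reports which APP segments are present:

```python
def list_app_segments(jpeg_bytes: bytes) -> list[tuple[str, int]]:
    """Walk JPEG marker segments, return (marker_name, payload_length) pairs.

    C2PA manifests travel in APP11 (0xFFEB) segments as JUMBF boxes;
    EXIF lives in APP1 (0xFFE1). A file can carry both independently.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    segments = []
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:              # SOS: compressed image data follows
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if 0xE0 <= marker <= 0xEF:      # APP0..APP15
            segments.append((f"APP{marker - 0xE0}", length - 2))
        i += 2 + length                 # skip marker + length-prefixed payload
    return segments


def has_c2pa_container(jpeg_bytes: bytes) -> bool:
    """True if any APP11 segment is present (the carrier C2PA uses in JPEG)."""
    return any(name == "APP11" for name, _ in list_app_segments(jpeg_bytes))
```

Finding an APP11 segment tells you a JUMBF container is present; actually validating the manifest and its signature chain requires a conformant reader such as c2patool.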

20+
standard assertion types are defined in the C2PA specification — covering everything from capture device details to AI generation flags to editorial history. Custom assertions can be added beyond these.

Who's Embedding C2PA — Hardware, Software, and AI Tools

Understanding the C2PA ecosystem matters because the source of the credential determines exactly what metadata gets embedded — and how verifiable it is.

Google Pixel 10 (hardware-level signing). This is the most significant implementation to date. The Pixel 10 generates and stores C2PA signing keys inside the Titan M2 hardware security chip using Android StrongBox. The keys never leave secure hardware — even a fully compromised Android OS can't extract them. Each photo is signed with a unique certificate. The implementation achieved Assurance Level 2 certification from the C2PA Conformance Program, the highest currently defined. Unlike Samsung's Galaxy S25, which only applies C2PA credentials to AI-edited photos, the Pixel 10 signs every photo from the native camera by default.

Adobe Firefly. Adobe is a founding member of the C2PA coalition, and Firefly has embedded content credentials in every generated image since launch. The manifest records that the image was AI-generated using Firefly, the date, and Adobe's signing certificate. Firefly was the platform that popularized the "cr" (content credentials) icon visible in Adobe products. For a detailed breakdown of how different AI image tools handle metadata — not just C2PA but also embedded prompts, seeds, and model identifiers — see our guide to AI-generated image metadata.

OpenAI DALL-E 3 and Sora. OpenAI signs every DALL-E 3 image with a C2PA manifest identifying it as AI-generated by OpenAI. Sora-generated videos carry the same treatment. OpenAI also applies its own invisible watermark (conceptually similar to Google's SynthID) as a secondary layer — watermarks survive the metadata stripping that removes embedded manifests.

Bing Image Creator and Microsoft tools. As a C2PA founding member, Microsoft applies content credentials through Bing Image Creator and Azure AI services.

Camera manufacturers. Leica, Nikon (select models via firmware update), and Sony have shipped or announced C2PA support. The journalism industry — which needs verifiable photo provenance for news authenticity — has been a major driver of camera-side adoption.

The practical result: if you've generated an AI image with a major tool, taken a photo on a Pixel 10, or exported from Adobe Creative Cloud in the past year, your files almost certainly carry a C2PA manifest. Most people have no idea this is happening.

💡

How to Check for C2PA Credentials

Visit contentcredentials.org and upload your image to see if it carries a manifest. Adobe's Content Credentials viewer shows the full history of actions, the signing identity, and any AI generation flags. Alternatively, run c2patool inspect <filename> from the command line if you have the open-source tool installed.

The EU AI Act Connection — Why C2PA Is Now Compliance Infrastructure

C2PA isn't just a technical curiosity anymore. The EU AI Act's Article 50 — which takes full effect on August 2, 2026 — establishes transparency obligations that effectively mandate machine-readable labeling for AI-generated content. Providers of generative AI systems must mark outputs in formats that are both human-readable and machine-detectable.

C2PA isn't named explicitly in the legislation, but it's the most technically mature pathway to satisfying Article 50's requirements. The European Code of Practice for General-Purpose AI lists C2PA among recommended technologies for synthetic content marking. The regulation prescribes a multi-layer approach: metadata embedding, invisible watermarking, and logging — C2PA covers the metadata embedding layer, SynthID and similar tools cover the watermarking layer.

The penalty structure gives this teeth: violations of Article 50 transparency obligations start at €7.5 million or 1.5% of global annual turnover, whichever is higher. For major AI providers, that's a meaningful enforcement pressure.

The practical effect is that C2PA is transitioning from an optional industry best practice to regulatory infrastructure for any AI image or video tool operating in or serving the EU market. If you're running an AI content pipeline and you're not embedding C2PA credentials, you may be non-compliant as of August 2026. For users on the receiving end of AI-generated content, C2PA is increasingly the mechanism that platforms and regulators will rely on to distinguish authentic captures from AI-generated synthetic content.

For a closer look at how the EU AI Act reshapes content metadata requirements, see our upcoming deep-dive at /blog/eu-ai-act-2026-ai-content-metadata.

The Privacy Implications — What C2PA Reveals About You

Here's the tension at the heart of C2PA: it's designed to establish trust in content, but doing so requires embedding information about content creators. Information that, in many cases, people would prefer not to share.

The World Privacy Forum's technical review identified this directly. Once an organization implements identity assertions in C2PA — and many implementations do by default — every piece of signed content exports the creator's identity. At scale, that means verified identity, location metadata, device identifiers, and publication timestamps across every file leaving that pipeline. "You've got a serious surveillance surface," the report concluded.

For everyday photographers using a Pixel 10, the immediate concern is GPS coordinates embedded in the manifest. Just as traditional EXIF GPS data can reveal where you live, work, or regularly spend time, C2PA-embedded location data carries the same risk — with the added wrinkle that the manifest is cryptographically signed, making it more "official" than easily editable EXIF fields.

For professionals, the concerns are more specific. Journalists using C2PA-signing cameras to photograph sensitive situations create a verifiable record of their presence — potentially useful for accountability, but also potentially dangerous if the manifest ends up in adversarial hands. Photographers working under pseudonyms find that the signing certificate can identify the organization behind the certificate, even if the individual's name isn't directly listed.

There's also a surveillance infrastructure concern. C2PA's design requires a public key infrastructure (PKI) to verify signatures. That PKI involves trust anchors — centralized authorities that validate signing certificates. The Center for Democracy and Technology has raised questions about who controls these trust anchors and what accountability mechanisms exist around them. Content provenance, by definition, requires attribution. The question of who controls the attribution infrastructure matters.

Our complete guide to photo metadata privacy covers the broader landscape of what's embedded in your files — C2PA is a new layer on top of the EXIF and XMP metadata that photographers have been navigating for years.

The Identity Lock-In Problem

The World Privacy Forum found that once identity data is embedded in a C2PA manifest and the file is distributed, it cannot be retroactively removed from copies already in circulation. You can strip credentials from files before sharing — but once a credentialed file is shared, those copies carry the manifest permanently. Pre-sharing removal is the only effective privacy protection.

The Platform Irony — Why Social Media Strips the Credentials It Claims to Support

Here's where C2PA's 2026 reality gets genuinely strange. Nearly every major social media platform — Meta (Instagram, Facebook, Threads), X/Twitter, LinkedIn, TikTok — has publicly committed to supporting C2PA and displaying content credentials to users. The platforms want the trust signal. They want to show users when content is AI-generated or verified as authentic.

But in practice, most of these platforms strip C2PA manifests during upload processing. When you upload a photo to Instagram, the platform recompresses the image, converts it through its CDN pipeline, and in doing so destroys the JUMBF container that carries the C2PA manifest. The same reprocessing that happens to EXIF GPS data — which is why Twitter/X strips EXIF from public posts — also strips C2PA credentials.

The result is a paradox: the content that most needs verifiable provenance (content shared virally on social media) is precisely the content most likely to lose its C2PA metadata during distribution. The platforms that support C2PA most loudly are the same platforms whose infrastructure makes C2PA non-functional at scale.

C2PA 2.0 introduced a partial solution: "soft bindings," specifically invisible watermarking technology that embeds provenance data into the pixel values themselves rather than in detachable metadata. A soft-bound credential survives recompression and platform processing. Google's SynthID watermark (used on the Pixel 10) and OpenAI's invisible watermarks both function as soft bindings layered alongside the manifest. But soft binding requires watermark-reading infrastructure on the verification side — infrastructure that no major social platform currently has deployed at scale.

The manifest store approach — where credentials are stored in a cloud repository and linked to the file via an identifier rather than embedded directly — offers another path around platform stripping. But it requires platforms to actively query the manifest store rather than simply reading embedded metadata, which again requires infrastructure investment they haven't made.

For a broader look at how social media handles all forms of photo metadata, see our analysis of what EXIF data is and how it works.

🔒

Platform C2PA Status (2026)

  • Instagram/Facebook/Threads: Publicly committed to showing C2PA labels — but upload recompression strips manifests in most cases
  • X/Twitter: Strips all metadata — EXIF, XMP, and C2PA — during public post processing
  • LinkedIn: Similar stripping behavior to Instagram despite C2PA commitments
  • TikTok: AI-generated content labels use platform-native detection, not C2PA manifest reading
  • YouTube: Working on C2PA integration but not yet reading manifests for user-uploaded content at scale

C2PA vs EXIF vs XMP — Understanding the Metadata Stack

C2PA doesn't replace EXIF and XMP — it overlays them. Understanding how these three metadata systems relate to each other is essential for understanding what's actually in your files.

EXIF (Exchangeable Image File Format) is the classic camera metadata layer: GPS coordinates, camera model, aperture, shutter speed, ISO, capture timestamp. It's embedded in JPEG and TIFF files, has been around since 1995, and is entirely unprotected — any software can read, modify, or delete EXIF fields without leaving a trace. Our complete EXIF guide covers the full field set.

XMP (Extensible Metadata Platform) is Adobe's extensible metadata standard — a broader container that can hold creator information, copyright, keywords, editing history, and custom fields. XMP overlaps with EXIF in some fields (GPS, timestamp) and extends into creative workflow territory (Lightroom develop settings, Photoshop history). Also unprotected; freely editable.

C2PA adds a layer above both: cryptographic protection. The manifest can reference EXIF and XMP values, assert that they're accurate, and bind them into the signed structure. If someone changes the EXIF GPS coordinates after C2PA signing, the signature breaks. That's the fundamental difference — C2PA doesn't just record metadata, it makes tampering detectable.
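The hard-binding idea can be illustrated with a few lines of standard-library Python. This is a deliberate simplification — real C2PA hashes defined byte ranges of the asset and signs the manifest with an X.509 key, whereas this sketch just hashes the raw bytes — but the tamper-evidence property it demonstrates is the same:

```python
import hashlib


def make_hard_binding(asset: bytes) -> str:
    """Record a hash of the asset at signing time (simplified stand-in:
    real C2PA hashes defined exclusion ranges of the file and signs the
    manifest containing that hash with an X.509 certificate)."""
    return hashlib.sha256(asset).hexdigest()


def verify_hard_binding(asset: bytes, recorded: str) -> bool:
    """Any modification to the asset bytes breaks the binding."""
    return hashlib.sha256(asset).hexdigest() == recorded


pixels = bytes(range(256))                      # stand-in for image data
binding = make_hard_binding(pixels)
tampered = bytes([pixels[0] ^ 1]) + pixels[1:]  # flip a single bit

assert verify_hard_binding(pixels, binding)        # intact file verifies
assert not verify_hard_binding(tampered, binding)  # any edit is detectable
```

The signature layer that C2PA adds on top of this hash is what prevents an attacker from simply recomputing the hash after editing — doing so would require re-signing with a trusted key.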

What this means practically: stripping EXIF and XMP from a file (something tools have done for years) does not necessarily remove a C2PA manifest. The manifest uses its own JUMBF container, embedded differently from EXIF headers. A tool that only strips EXIF tags won't touch a C2PA manifest. You need a tool that explicitly handles all three layers.
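A sketch of what "handling all three layers" means at the byte level: the function below copies a JPEG while dropping the APP1 (EXIF/XMP), APP13 (IPTC), and APP11 (C2PA JUMBF) segments. It illustrates the principle, not a replacement for a hardened tool — production strippers also handle other container formats, trailing data, and many edge cases:

```python
# Markers carrying the three metadata layers discussed above:
STRIP_MARKERS = {0xE1, 0xED, 0xEB}  # APP1 (EXIF/XMP), APP13 (IPTC), APP11 (C2PA)


def strip_metadata_segments(jpeg: bytes) -> bytes:
    """Copy a JPEG, dropping the APP segments that carry EXIF/XMP, IPTC,
    and C2PA. Everything from the SOS marker onward (the compressed
    image data) is copied verbatim."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:              # SOS: keep the rest of the file as-is
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker not in STRIP_MARKERS:
            out += segment              # keep e.g. APP0 (JFIF), tables
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Because the loop drops every occurrence of each marker, multi-segment EXIF or JUMBF payloads are removed as well — which is the behavior a C2PA-aware stripper needs, since large manifests span several APP11 segments.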

How to Remove C2PA Content Credentials — and When You Should

Removing C2PA manifests from your own files is legal in virtually all jurisdictions. You have the right to manage metadata in content you created or own. The caveats: removing C2PA from AI-generated content you created may technically remove a required transparency label under EU AI Act Article 50 if you're distributing that content commercially in the EU. Removing C2PA from content you don't own — to strip an AI label and misrepresent synthetic content as authentic — raises both terms-of-service and potential fraud concerns.

But for the legitimate use cases — removing your device signature and GPS from personal photos before sharing them online, stripping AI generation metadata from images before commercial distribution, or cleaning credentials from your own professional photography workflow — removal is straightforward.

The manual method: ExifTool + c2patool. The open-source c2patool (from the Content Authenticity Initiative) can read and write C2PA manifests. But removing the manifest invalidates the credential chain — the tool won't strip credentials while preserving a valid manifest, by design. For most users, ExifTool with the appropriate options strips all metadata containers including JUMBF. Command: exiftool -all= -overwrite_original yourphoto.jpg. This removes EXIF, XMP, IPTC, and the C2PA JUMBF container in one pass. Verify the result with c2patool inspect yourphoto.jpg — a clean file shows no manifest.

The easier method: MetaClean. If the command line isn't your thing, MetaClean's browser-based tool handles the full metadata stack — EXIF, XMP, IPTC, and C2PA manifests — entirely client-side. Nothing is uploaded to any server. Your files are processed in your browser and downloaded clean. For photographers, journalists, or anyone wanting to strip credentials before sharing, this is the workflow: drop the file in, download the clean version, share the version without the manifest.

MetaClean supports JPEG, PNG, HEIC, WebP, PDF, and several video formats — the same formats C2PA supports embedding in. Our free image metadata remover handles all three metadata layers in one step, and our guide to client-side vs server-side processing explains why processing in the browser matters for privacy — your files never leave your device.

One important nuance: if you've already shared a credentialed file, the copies that have already circulated carry the manifest. You can clean your local copy and all future shares, but you can't retroactively strip credentials from files already in other people's possession. Pre-sharing removal is the only complete protection. This is why building C2PA removal into your workflow before sharing — not after — is the approach that actually preserves privacy.

Key Takeaway

C2PA removal works best as a pre-sharing step. Strip manifests before uploading, before sending via DM, before submitting to stock libraries. MetaClean processes your files client-side — nothing leaves your browser — and handles C2PA, EXIF, and XMP in one pass.

What C2PA Doesn't Do — the Honest Limitations

C2PA is genuinely useful for specific problems. For photojournalism, where editors need to verify that a submitted photo wasn't AI-generated or manipulated, a hardware-signed C2PA manifest from a camera or Pixel 10 provides real assurance. For news agencies, it's a meaningful tool for source verification.

But C2PA has real limitations worth understanding before treating it as a comprehensive solution to misinformation:

C2PA proves authenticity when present — it doesn't prove inauthenticity when absent. A photo without a manifest might be an unmanipulated capture from a 2019 camera that never supported C2PA, or it might be a manipulated image with the manifest stripped. You can't conclude anything from the absence of credentials.

Social media distribution breaks the chain. As we covered above, the platforms where misinformation spreads fastest are the same platforms that strip C2PA manifests during upload. The verification layer breaks at the distribution point that matters most.

The credentials can be stripped and re-signed. Someone with the technical knowledge and a valid signing certificate could strip an original manifest and re-sign the file with fabricated assertions. C2PA's trust model depends on the integrity of the PKI and the trustworthiness of certificate holders — both assumptions that require ongoing institutional maintenance.

It doesn't address the underlying authenticity of the content. C2PA can tell you that a photo was taken on a Pixel 10 at a certain GPS coordinate on a certain date. It can't tell you whether the scene in the photo accurately represents reality. A signed, credentialed photo can still be selective, misleading, or context-dependent.

The Hacker Factor blog — which has done extensive technical analysis of C2PA implementations — documented significant real-world failures in the Pixel 10's implementation, including cases where legitimate photos failed verification and edge cases where the trust model broke down under specific conditions. C2PA is an improvement, not a solution.

Frequently Asked Questions

What is C2PA and who created it?

C2PA stands for Coalition for Content Provenance and Authenticity. It's an open technical standard developed by Adobe, Microsoft, Intel, the BBC, Arm, and other organizations to establish verifiable provenance for digital media. The standard defines how cryptographically signed manifests — called Content Credentials — are embedded in image, video, and document files to record their creation history and any AI involvement.

Does every phone now put C2PA in photos?

Not every phone — but it's becoming common. Google Pixel 10 (released late 2025) signs every photo taken with the native camera app by default, using hardware keys in the Titan M2 chip. Samsung Galaxy S25 applies C2PA only to AI-edited images. Other Android manufacturers and iPhones do not yet embed C2PA manifests by default, though this is expected to expand as the standard matures and EU AI Act compliance pressure increases.

Can I remove C2PA content credentials from my own photos?

Yes. Removing metadata from files you own is legal in virtually all jurisdictions. You can use ExifTool from the command line (exiftool -all= yourphoto.jpg) or a browser-based tool like MetaClean, which processes files entirely client-side without uploading them anywhere. Note that EU AI Act Article 50 may require AI-generated content distributed commercially in the EU to carry AI labeling — removing credentials from AI-generated images you're selling or publishing commercially carries regulatory risk.

Do social media platforms read and preserve C2PA credentials?

Mostly no, despite public commitments. Instagram, X/Twitter, LinkedIn, and TikTok all reprocess uploaded images through compression pipelines that strip metadata — including C2PA manifests. The platforms have announced plans to display content credentials, but their upload infrastructure destroys the manifests before they can be displayed. This is one of C2PA's most significant real-world limitations in 2026.

What's the difference between C2PA and regular EXIF metadata?

EXIF metadata is unprotected — any software can read, modify, or delete it without leaving a trace. C2PA manifests are cryptographically signed: any modification to either the file or the manifest after signing breaks the signature, making tampering detectable. C2PA can also reference and bind EXIF and XMP values into its signed structure. The practical effect is that C2PA is EXIF with cryptographic accountability layered on top.

Is C2PA required by law?

Not directly, but it's becoming de facto regulatory infrastructure. The EU AI Act's Article 50, fully effective August 2, 2026, requires machine-readable labeling of AI-generated content. C2PA is not named explicitly, but it's the most technically mature implementation of what Article 50 requires. The European Code of Practice for General-Purpose AI lists C2PA among recommended technologies. For AI content providers operating in the EU, C2PA implementation is increasingly the expected compliance pathway.

Free Online Tool
Remove Metadata Now

Strip EXIF data, GPS location & hidden metadata from your photos and PDFs — instantly. Files never leave your device.