The EU AI Act (Regulation (EU) 2024/1689) entered into force in August 2024, and its obligations phase in through August 2026. Article 52 of the original proposal, renumbered Article 50 in the final text, creates transparency obligations for AI systems that interact with natural persons or generate content. For publishers, media companies, and content creators, this creates specific compliance requirements that many organizations are still working to understand. This guide explains what Article 52 requires and how to comply.
What Article 52 requires
Article 52 creates three distinct transparency obligations:
1. AI system interaction disclosure. Providers of AI systems intended to interact with natural persons must ensure those systems disclose that the user is interacting with an AI, unless it is obvious from context. This applies to chatbots, virtual assistants, and AI-generated customer service interfaces.
2. Deepfake disclosure. Persons using AI to generate or manipulate image, audio, or video content that appreciably resembles existing persons, objects, places, or events, and that could falsely appear authentic, must disclose that the content has been artificially generated or manipulated. For evidently artistic, satirical, or fictional works, the obligation is limited to disclosing the existence of generated or manipulated content in a way that does not hamper display or enjoyment of the work.
3. AI-generated text disclosure. Deployers of AI systems that generate text published with the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated. In practice this covers news articles, opinion pieces, and analysis published online. An exception applies where the content has undergone human review and a natural or legal person holds editorial responsibility for its publication.
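As a rough illustration (not legal advice), the three obligations above can be expressed as a simple triage helper. The category names and rules below are simplified assumptions drawn from the summaries in this guide, not from the Act's text:

```python
# Illustrative triage of content against the three transparency obligations
# summarized above. Simplified assumptions for demonstration only; a real
# compliance decision needs legal review.

def required_disclosure(content_type: str,
                        interacts_with_user: bool = False,
                        informs_public: bool = False) -> str:
    """Return which transparency obligation likely applies."""
    if interacts_with_user:
        return "ai-interaction-disclosure"  # chatbots, virtual assistants
    if content_type in {"image", "audio", "video"}:
        return "deepfake-disclosure"        # generated/manipulated media
    if content_type == "text" and informs_public:
        return "ai-text-disclosure"         # news, opinion, analysis
    return "none"

print(required_disclosure("text", informs_public=True))  # ai-text-disclosure
```

A CMS could run a check like this at publish time and block publication until the matching disclosure has been attached.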
What constitutes adequate disclosure
The EU AI Act does not prescribe the exact form of disclosure. Acceptable approaches include machine-readable C2PA content credentials (an open provenance standard well suited to the Act's machine-readable marking requirement), visible labels on published content, and platform-level labeling systems for user-generated content. The disclosure must be clear and distinguishable no later than the point at which the content is consumed, not buried in footnotes or terms of service.
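A minimal sketch of pairing a visible label with a machine-readable marker on an article page might look like the following. The meta-tag name "ai-generated" is a placeholder convention of this example, not a standard; in practice the machine-readable marking would typically be a C2PA credential embedded in the asset itself:

```python
# Sketch: attach both a visible disclosure label and a machine-readable
# marker to article HTML. The meta-tag name is an illustrative placeholder,
# not a recognized standard.
from html import escape

def add_disclosure(article_html: str, tool_name: str) -> str:
    """Prepend a machine-readable marker and a visible label to the page."""
    meta = '<meta name="ai-generated" content="true">'  # placeholder name
    label = (
        '<p class="ai-disclosure">This content was generated with '
        f'{escape(tool_name)} and reviewed before publication.</p>'
    )
    return f"{meta}\n{label}\n{article_html}"

page = add_disclosure("<article>...</article>", "ExampleLLM")
```

Placing the label before the article body keeps the disclosure visible before the content is consumed, which is the point the Act's transparency rules stress.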
Enforcement and penalties
Violations of Article 52 transparency requirements can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher. National market surveillance authorities designated by each member state are the primary enforcement bodies, while the European AI Office supervises providers of general-purpose AI models and coordinates enforcement in cross-border cases.
Practical compliance steps for publishers
Publishers should implement AI content tracking in their CMS to record which articles or assets were generated or substantially edited by AI tools. TruthScan's verification certificate provides a tamper-evident record that an AI detection check was performed on a given asset, which is useful documentation when demonstrating compliance. For content that is AI-generated, the C2PA badge generator at AIGeneratedIt creates embeddable credentials that can serve as the machine-readable disclosure under Article 52.
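The tracking record a CMS stores per asset could be as simple as the sketch below. The field names are illustrative assumptions, not a TruthScan or C2PA schema:

```python
# Minimal sketch of a per-asset AI-content audit record for a CMS.
# Field names are illustrative assumptions, not any vendor's schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIContentRecord:
    asset_id: str
    ai_generated: bool        # generated or substantially edited by AI
    tool: str                 # name of the generation tool used
    disclosure_applied: bool  # visible label and/or C2PA credential attached
    checked_at: str           # UTC timestamp of the verification check

def make_record(asset_id: str, ai_generated: bool, tool: str,
                disclosure_applied: bool) -> dict:
    """Build a serializable audit record for storage alongside the asset."""
    return asdict(AIContentRecord(
        asset_id=asset_id,
        ai_generated=ai_generated,
        tool=tool,
        disclosure_applied=disclosure_applied,
        checked_at=datetime.now(timezone.utc).isoformat(),
    ))
```

Storing these records alongside published assets gives the publisher a dated trail showing, for each piece of content, whether AI was involved and whether the required disclosure was applied.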