The EU AI Act is no longer a future concern – it’s current law with real deadlines. The transparency obligations under Article 50 take effect on August 2, 2026, and fines under the Act can reach €35 million or 7% of global annual turnover for the most serious violations.
If you’re building AI systems that serve European users or clients, here’s what your development team needs to know – and do – before that deadline.
What the EU AI Act Actually Requires
The EU AI Act classifies AI systems into four risk categories, each with different obligations:
Unacceptable Risk (Banned)
- Social scoring by public authorities or private actors
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Manipulation of vulnerable groups
- Emotion recognition in workplaces and schools
High Risk (Heavy Regulation)
- AI in critical infrastructure (energy, transport, water)
- Educational and vocational training systems
- Employment and worker management tools
- Credit scoring and insurance pricing
- Law enforcement and border control
- Healthcare diagnostic and treatment systems
Limited Risk (Transparency Obligations)
- Chatbots and conversational AI
- AI-generated content (text, images, video, audio)
- Emotion recognition systems (where permitted)
- Biometric categorisation
Minimal Risk (No Specific Obligations)
- Spam filters
- AI-assisted video games
- Inventory management systems
Most enterprise AI systems fall into the limited risk or high risk categories. The first step is knowing where yours sits.
The August 2026 Deadline: Article 50
Article 50 transparency obligations apply to all AI systems that interact with people or generate content. This is the deadline most development teams need to focus on.
What Article 50 Requires
1. Chatbots and conversational AI must disclose they are AI
If your system interacts with people, they must be informed that they’re communicating with an AI – unless it’s “obvious from the circumstances.”
Engineering implication: Add clear disclosure banners, conversation starters, or UI indicators. Don’t hide it in terms of service.
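One way to make the disclosure hard to miss is to seed every new conversation with an explicit AI notice. Here’s a minimal sketch in Python; the `Message` type and `start_conversation` function are illustrative assumptions, not a prescribed API:

```python
# Minimal sketch: open every conversation with an unavoidable AI disclosure.
# `Message` and `start_conversation` are illustrative names, not a real API.
from dataclasses import dataclass


@dataclass
class Message:
    role: str      # "assistant" or "user"
    content: str
    is_disclosure: bool = False


AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)


def start_conversation() -> list[Message]:
    """Begin each conversation with a user-visible disclosure message."""
    return [Message(role="assistant", content=AI_DISCLOSURE, is_disclosure=True)]
```

The `is_disclosure` flag lets your UI render the notice as a persistent banner rather than an ordinary chat bubble that scrolls out of view.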
2. AI-generated content must be marked
Any text, image, audio, or video generated by AI must be labelled as such in a machine-readable way. This includes:
- Synthetic media (deepfakes, AI-generated images)
- AI-generated articles, reports, and summaries
- AI-synthesised audio and voice
Engineering implication: Embed metadata markers (the C2PA standard is emerging as the preferred approach) and add visible labels in your UI.
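As a starting point, you can embed a machine-readable marker directly in generated images. The sketch below uses Pillow’s PNG text chunks purely as a stand-in for illustration; a production pipeline would attach full C2PA content credentials instead:

```python
# Minimal sketch: embed a machine-readable "AI-generated" marker in a PNG
# via a tEXt chunk using Pillow. This is a stand-in for illustration only;
# production systems should emit C2PA content credentials instead.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_ai_marker(image: Image.Image, path: str, model_name: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")   # machine-readable flag
    metadata.add_text("generator", model_name)  # provenance hint
    image.save(path, pnginfo=metadata)
```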
3. Emotion recognition and biometric systems require explicit notice
If your AI detects emotions or categorises people based on biometric data, affected individuals must be clearly informed.
4. Deepfakes must be labelled
Any AI-generated or manipulated image, audio, or video that appears authentic must carry a clear disclosure that it is artificial.
Compliance Checklist for Development Teams
Phase 1: Classification (Do This Now)
- Inventory all AI systems – List every AI model, pipeline, and feature in your products (a structured inventory sketch follows this list)
- Classify each system by risk level – Use the four-tier framework above
- Identify systems affected by Article 50 – Any system that interacts with users or generates content
- Map data flows – Document what data enters and exits each AI system
- Identify high-risk systems that need conformity assessments
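A lightweight way to keep this inventory honest is to make it a data structure your CI can check. The sketch below is one possible shape; the field names are assumptions for illustration, not anything the Act prescribes:

```python
# Illustrative sketch of an AI system inventory entry; field names are
# assumptions, not prescribed by the EU AI Act.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: RiskLevel
    interacts_with_users: bool   # triggers Article 50 disclosure duties
    generates_content: bool      # triggers Article 50 content marking
    data_inputs: list[str] = field(default_factory=list)
    data_outputs: list[str] = field(default_factory=list)

    @property
    def article_50_applies(self) -> bool:
        return self.interacts_with_users or self.generates_content


support_bot = AISystemRecord(
    name="support-chatbot",
    purpose="Answer customer support questions",
    risk_level=RiskLevel.LIMITED,
    interacts_with_users=True,
    generates_content=True,
    data_inputs=["customer messages"],
    data_outputs=["generated replies"],
)
assert support_bot.article_50_applies
```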
Phase 2: Architecture Changes (Q2 2026)
- Add AI disclosure to conversational interfaces – Clear, unavoidable notice that users are interacting with AI
- Implement content marking – Machine-readable metadata on AI-generated content (explore C2PA)
- Add visible labels – User-facing indicators on AI-generated text, images, and summaries
- Build human oversight mechanisms – For high-risk systems, implement human review checkpoints
- Implement logging and audit trails – Every AI decision should be traceable and explainable (see the logging sketch after this list)
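For the audit trail, an append-only JSON-lines log with one record per AI decision is a reasonable baseline. This is a minimal sketch with assumed field names; note that it logs summaries rather than raw personal data, which keeps it GDPR-friendlier:

```python
# Minimal sketch of an append-only audit trail for AI decisions, written
# as JSON lines. Field names are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone


def log_ai_decision(log_path: str, system: str, model_version: str,
                    input_summary: str, output_summary: str,
                    human_reviewed: bool) -> str:
    """Append one traceable record per AI decision; returns the record id."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "input_summary": input_summary,    # summaries, not raw personal data
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```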
Phase 3: Documentation (Before August 2026)
- Create AI system cards – Technical documentation describing each system’s purpose, capabilities, limitations, and risk assessment
- Write data flow diagrams – Visual documentation of how data moves through your AI pipelines
- Prepare conformity assessments – For high-risk systems, complete the formal assessment process
- Update privacy policies – Reflect AI processing, transparency measures, and user rights
- Document training data provenance – Where your training data came from and how it was processed
Phase 4: Verification & Testing
- Test AI disclosure visibility – Can users actually see and understand the AI notices?
- Validate content marking – Is metadata correctly embedded in generated outputs? (A test sketch follows this list.)
- Audit logging completeness – Are all AI decisions being captured?
- Review human oversight flows – Do human reviewers actually have the information and authority to intervene?
- Run adversarial tests – What happens when someone tries to circumvent your transparency measures?
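Content-marking validation is easy to automate in your test suite. This pytest-style sketch assumes the `save_with_ai_marker` helper from the earlier content-marking example is importable (the module name is hypothetical):

```python
# Pytest-style sketch: verify the AI marker survives a save/load round trip.
from PIL import Image

from content_marking import save_with_ai_marker  # hypothetical module


def test_generated_png_carries_ai_marker(tmp_path):
    image = Image.new("RGB", (8, 8))
    out = tmp_path / "generated.png"
    save_with_ai_marker(image, str(out), model_name="example-model-v1")

    reopened = Image.open(out)
    assert reopened.text.get("ai-generated") == "true"
```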
High-Risk Systems: Additional Requirements
If your AI system falls into the high-risk category, you face substantially more obligations:
Technical Requirements
- Risk management system – Ongoing identification and mitigation of risks
- Data governance – Training data must be relevant, representative, and free of errors
- Technical documentation – Comprehensive docs covering design, development, and testing
- Record-keeping – Automatic logging of system operations
- Transparency – Clear instructions for downstream deployers
- Human oversight – Mechanisms for human intervention and override (see the sketch after this list)
- Accuracy, robustness, and cybersecurity – Appropriate levels for the intended purpose
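Human oversight works best when it’s enforced in code rather than in policy documents. One possible pattern is to gate low-confidence or adverse decisions behind a review queue; the class names and threshold below are illustrative assumptions:

```python
# Illustrative sketch: gate risky AI decisions behind a human review queue.
# `ReviewQueue`, `Decision`, and the threshold are assumptions, not a spec.
from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    outcome: str       # e.g. "approve" or "reject"
    confidence: float


class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[Decision] = []

    def submit(self, decision: Decision) -> None:
        self.pending.append(decision)


def apply_decision(decision: Decision, queue: ReviewQueue,
                   auto_threshold: float = 0.95) -> str:
    """Route low-confidence or adverse outcomes to a human reviewer."""
    if decision.confidence < auto_threshold or decision.outcome == "reject":
        queue.submit(decision)
        return "pending_human_review"
    return "auto_applied"
```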
Conformity Assessment
High-risk systems must undergo a conformity assessment before they are placed on the market. For most systems this is an internal self-assessment; certain biometric systems require assessment by an independent notified body.
Penalties
The EU AI Act has a tiered penalty structure:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35M or 7% of annual turnover |
| High-risk system violations | €15M or 3% of annual turnover |
| Providing incorrect information | €7.5M or 1% of annual turnover |
Fines are calculated as whichever is higher of the fixed amount or the percentage of worldwide annual turnover, and they apply to both providers and deployers of AI systems. A provider with €600 million in annual turnover, for example, faces up to €42 million (7%) for a prohibited-practice violation, not €35 million.
Common Mistakes to Avoid
1. Assuming “low risk” means “no obligations”
Article 50 transparency requirements apply even to otherwise low-risk systems. If your system talks to people or generates content, you have obligations.
2. Treating compliance as a legal problem
The EU AI Act requires engineering changes – content marking, logging, human oversight mechanisms. This isn’t something your legal team can handle with a terms-of-service update.
3. Waiting until July 2026
Content marking, AI system cards, and human oversight mechanisms take months to implement properly. Start now.
4. Ignoring the GDPR overlap
The EU AI Act doesn’t replace GDPR – it adds to it. Your AI systems need to comply with both. Data minimisation, consent management, and right-to-erasure still apply. Read our guide on building GDPR-compliant GenAI systems for the full picture.
See how EU AI Act compliance applies to your industry:
- EU AI Act for healthcare — High-risk classification and conformity assessments
- EU AI Act for fintech — Automated decision-making under GDPR Article 22
How We Can Help
At HASORIX, we build AI systems with EU AI Act compliance engineered from the architecture layer – not bolted on before a deadline. From risk classification to Article 50 content marking to comprehensive documentation, we handle the engineering so your team can focus on building great products.
Learn about our compliance engineering approach or start a conversation about your AI system.