The EU AI Act is the world's first comprehensive legislation specifically aimed at artificial intelligence. Since February 2025, an important component has been in effect: Article 4 on AI literacy. But what does this mean concretely for your team?
Why this law?
AI systems are increasingly being used in daily work, from ChatGPT for writing texts to AI tools for data analysis and decision-making. This technology offers enormous opportunities, but it also brings risks:
- Discrimination: AI can unintentionally disadvantage certain groups
- Manipulation: Misleading AI-generated content that is difficult to distinguish from the real thing
- Privacy: Careless handling of sensitive data
- Safety: Flawed decisions based on unverified AI output
The EU AI Act is designed to protect citizens while allowing innovation to continue.
Article 4: AI Literacy
The most direct consequence for organizations is Article 4. It requires that everyone who uses or develops AI systems has a sufficient level of AI literacy, and it makes organizations responsible for ensuring this.
Concretely, this means:
- Employees who use AI tools must understand how they work
- They must know the limitations (such as hallucinations and bias)
- They must know which data can and cannot be shared
- They must be able to critically assess AI output
Risk classification: Which AI falls under what?
The EU AI Act divides AI systems into four risk categories:
Prohibited AI (Unacceptable risk)
Some AI applications are simply prohibited in the EU. Think of social scoring systems or AI that manipulates human behavior. You cannot use these, period.
High-risk AI
AI systems used for important decisions about people fall under strict rules. Examples include:
- HR selection systems and CV screening
- Credit assessments
- Medical diagnosis support
Strict requirements apply to these systems regarding documentation, transparency, and human oversight.
Limited risk
Chatbots and AI systems that generate content usually fall here. The main requirement: transparency. Users must know they are communicating with AI.
Minimal risk
Most AI tools that teams use daily – such as spam filters, spell checkers, and recommendation systems – fall into this category. No specific requirements apply, but AI literacy remains important.
Practical checklist for your team
What can you do now to prepare?
- Inventory: Which AI tools does your team use? Make a list.
- Classify: Which risk category does each tool fall into?
- Train: Ensure team members have sufficient AI literacy.
- Document: Record important decisions made with AI support.
- Review: Check AI output before using it for important matters.
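The first two checklist steps – inventory and classify – can be sketched as a simple script. This is a minimal illustration, assuming a hypothetical inventory format; the tool names and risk assignments below are made-up examples, not legal assessments.

```python
# Sketch of an AI tool inventory grouped by EU AI Act risk category.
# Tool names and category assignments are illustrative examples only.

RISK_CATEGORIES = ["prohibited", "high", "limited", "minimal"]

# Step 1 (Inventory): list the AI tools your team uses.
inventory = [
    {"tool": "CV screening assistant", "use": "HR selection", "risk": "high"},
    {"tool": "Customer chatbot", "use": "Support", "risk": "limited"},
    {"tool": "Spam filter", "use": "Email", "risk": "minimal"},
]

# Step 2 (Classify): group each tool by its assigned risk category.
def tools_by_risk(inventory):
    """Return a mapping from risk category to the tools assigned to it."""
    grouped = {category: [] for category in RISK_CATEGORIES}
    for entry in inventory:
        grouped[entry["risk"]].append(entry["tool"])
    return grouped

grouped = tools_by_risk(inventory)
print(grouped["high"])  # → ['CV screening assistant']
```

Even a spreadsheet works just as well here; the point is that the high-risk column is the one that triggers the documentation and review steps that follow.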
Important deadlines
The EU AI Act is being implemented in phases:
- February 2025: AI literacy mandatory (Article 4) – now active!
- August 2026: Most provisions fully applicable
- August 2027: Rules apply to high-risk AI systems embedded in regulated products
The time to act is now. Organizations that wait until the deadline risk not only fines but also reputational damage and missed opportunities.
Developing AI literacy
The good news: AI literacy is relatively easy to develop. It's not about becoming an AI expert, but about:
- Understanding what AI can and cannot do
- Learning to prompt effectively
- Recognizing and avoiding risks
- Critically evaluating output
Our AI Basics Training is specially designed to teach teams these skills. In one day, you learn the fundamentals of AI literacy, including a complete module on the EU AI Act.
Conclusion
The EU AI Act is not a bureaucratic hurdle, but an opportunity to use AI responsibly and effectively. By investing in AI literacy now, you prepare your team for the future – and comply with legal requirements.
The question is not whether your team will use AI, but how well they do it. Make sure they're ready.
Written by

Merijn Visman
Certified Scrum Trainer
For over 15 years, I have been helping professionals and organizations work more effectively with Agile and Scrum. My training sessions are practical, interactive, and immediately applicable to your daily work.
More about the trainer →