April 6, 2026
Artificial intelligence is fundamentally transforming business. Since August 2024, the EU AI Act has been in force – the world's first comprehensive AI regulation. For companies operating in or serving the European market, the message is clear: if you develop or deploy AI, you need to act. This article explains the key rules, deadlines, and concrete steps to take.
Many mid-sized companies assume that AI regulation is only relevant to big tech. This is a dangerous misconception. The EU AI Act doesn't just regulate the development of AI systems – it explicitly covers their deployment as well. If your company uses an AI-powered tool – whether it's a customer service chatbot, a RAG-based internal knowledge system, or a recruiting tool with automated screening – you potentially fall under the regulation.
What matters is not the size of your company, but the role you play in the AI ecosystem and the level of risk posed by the systems you use.
At its core, the EU AI Act distinguishes two main roles:
Provider: Any entity that develops an AI system or places one on the market under its own name. Providers carry the heaviest obligations – from technical documentation and risk management to conformity assessments for high-risk systems.
Deployer: Any entity that uses an AI system in the course of its professional activity. Deployers have fewer obligations but must still use the system in line with its intended purpose, maintain human oversight where required, and comply with transparency requirements.
Particularly relevant for IT service providers and system integrators is Article 25: anyone who substantially modifies an existing AI system, markets it under their own name, or changes its intended purpose becomes a provider in the eyes of the law – with all associated obligations. This "quasi-provider" trap typically affects companies that build custom solutions on top of third-party models and sell them to clients.
The EU AI Act follows a risk-based model with four tiers:
Unacceptable risk (prohibited): Certain AI applications are banned outright – including social scoring, manipulative systems, and real-time remote biometric identification in public spaces without a qualifying exception. These prohibitions have been in effect since February 2025.
High risk: Systems in sensitive areas such as human resources, credit scoring, critical infrastructure, or education are subject to stringent requirements. These include a risk management system, data quality management for training data, technical documentation, logging, human oversight, and a conformity assessment.
Limited risk: Systems such as chatbots or deepfake generators are primarily subject to transparency obligations – users must be informed that they are interacting with AI or that content is AI-generated.
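In practice, the transparency duty for a chatbot can be as simple as a clear disclosure before the first substantive reply. A minimal sketch, purely illustrative – the function name and wording are our own, not prescribed by the Act:

```python
# Illustrative disclosure text; the Act requires that users be informed,
# but does not mandate specific wording.
AI_DISCLOSURE = (
    "Please note: you are chatting with an AI assistant, not a human agent."
)

def open_chat_session(first_bot_message: str) -> list[str]:
    """Start a chat session, ensuring the AI disclosure appears
    before any substantive bot reply (sketch only)."""
    return [AI_DISCLOSURE, first_bot_message]

messages = open_chat_session("Hello! How can I help you today?")
```

The point is architectural: the disclosure belongs in the session-opening path, not left to individual prompt templates where it can silently be dropped.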
Minimal risk: The vast majority of AI applications fall into this category and remain largely unregulated. Voluntary codes of conduct are encouraged.
Implementation of the EU AI Act is phased. The key milestones:

August 2024: The regulation entered into force.

February 2025: The prohibitions on unacceptable-risk systems and the AI literacy obligation (Article 4) apply.

August 2025: The governance rules and the obligations for providers of general-purpose AI models apply.

August 2026: Most remaining provisions apply, including the bulk of the high-risk requirements.

August 2027: The extended transition period ends for high-risk AI embedded in products covered by existing EU product legislation.

Against this timeline, companies should prioritise the following steps.
Gain a complete overview of all AI systems in your organisation. This includes not only standalone tools but also embedded AI features within existing software – such as AI-powered analytics in CRM or ERP systems.
For each identified system, clarify: what role does your company play (provider or deployer)? Which risk category does the system fall into? Special caution is required for system integrations and branded solutions – Article 25 can apply quickly in these scenarios.
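The inventory and classification steps above can be captured in a structured record per system. A minimal sketch – field names and sample entries are illustrative, the risk tiers mirror the four categories described earlier, and the actual classification of a real system requires legal assessment:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str                   # intended purpose as actually deployed
    role: Role                     # your company's role for this system
    risk_tier: RiskTier            # preliminary classification, to be validated
    substantially_modified: bool   # Article 25 trigger: may make you a provider

    def needs_provider_review(self) -> bool:
        """Flag systems where the 'quasi-provider' trap (Art. 25) may apply:
        nominally a deployer, but the system was substantially modified."""
        return self.role == Role.DEPLOYER and self.substantially_modified

# Illustrative inventory entries (all names are made up):
inventory = [
    AISystemRecord("CV screening tool", "ExampleHR", "recruiting shortlist",
                   Role.DEPLOYER, RiskTier.HIGH, substantially_modified=False),
    AISystemRecord("Support chatbot", "built in-house", "customer service",
                   Role.PROVIDER, RiskTier.LIMITED, substantially_modified=False),
    AISystemRecord("Customised analytics add-on", "third-party model",
                   "sales forecasting",
                   Role.DEPLOYER, RiskTier.MINIMAL, substantially_modified=True),
]
flagged = [s.name for s in inventory if s.needs_provider_review()]
# flagged == ["Customised analytics add-on"]
```

Even a flat record like this forces the two decisive questions – role and risk tier – to be answered explicitly for every system, including embedded AI features in CRM or ERP software.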
The obligation under Article 4 is already in force. Train all employees who use, develop, or oversee AI systems. This covers technical teams as well as business units and management. Nearshore and offshore teams operating in an EU context must also be included.
Define the distribution of roles under the EU AI Act in your contracts with clients, suppliers, and AI vendors. Who assumes which compliance obligations? Who bears liability in case of violations? Clean contractual delineation is the single most important lever for risk mitigation.
Don't wait for August 2026. High-risk compliance requires extensive technical and organisational measures – risk management systems, data quality processes, logging infrastructure, documentation. Start planning now and prioritise based on your risk exposure.
A common misconception: open-source AI models are exempt from regulation. While the EU AI Act does contain certain concessions for open-source providers, these only apply under narrow conditions – and crucially, not to companies that build commercial products on top of open models and distribute them to customers. If you integrate an open-source model into a commercial solution and deliver it to clients, you become a provider under the regulation.
The EU AI Act presents companies with new requirements – but it also creates clarity and trust. Those who analyse their AI landscape early, define roles clearly, and establish compliance processes gain a competitive advantage. Particularly in the B2B space, demonstrable compliance with the AI regulation is increasingly becoming a deciding factor in procurement decisions.
This regulation doesn't just affect the big players. It affects everyone who uses or offers AI. The good news: the phased deadlines provide sufficient time to prepare – provided you start now.
This article is for general informational purposes only and does not constitute legal advice. For an individual assessment of your AI systems, we recommend consulting specialised legal counsel.