In just over a hundred days, on 2 August 2026, the largest block of the EU AI Act becomes applicable, Ireland's new National AI Office stands up to coordinate enforcement, and a network of thirteen existing sectoral regulators — across banking, health, data protection, and other domains — takes on new powers to police AI use in the organisations they oversee. The penalty framework is notified to Brussels the same week. For general non-compliance, the headline numbers run to €15 million or 3% of global turnover, whichever is higher. For prohibited uses, they run to €35 million or 7%.
It is a watershed moment for AI regulation in Europe. And yet most of the organisations we speak to across Ireland and the UK have no active compliance programme, no clear accountability for who owns it, and, critically, no training in place for the one provision that affects them most directly.
That provision is Article 4. And it has been law for fourteen months.
What Article 4 actually says
Article 4 of the EU AI Act — the AI literacy obligation — came into force on 2 February 2025. Unlike the provisions taking effect this August, it applies already, and it applies to almost every organisation operating in the European Union.
In plain language, Article 4 requires that any organisation deploying AI systems must take measures to ensure "a sufficient level of AI literacy" among the staff and other persons operating those systems on its behalf. That means the people using Copilot. The people using ChatGPT. The people using the embedded AI features in Salesforce, Canva, Adobe, their CRM, their LMS, their coding environment, and the fifty other tools that have quietly added AI over the last eighteen months.
The obligation is on the deployer — the organisation putting the tools in front of its people — not on the vendor. It cannot be contracted out. It cannot be satisfied by a vendor's training module buried in a help centre. And it is not discharged by a lunch-and-learn once a year.
Why the recent headlines misled everyone
In March 2026 the European Council agreed to push the high-risk AI rules out to December 2027. The mainstream business press read this as "the EU slows down the AI Act," and a great many CEOs and general counsels saw those headlines and concluded they had more time.
They do not. The delay applied to the high-risk systems provisions — a specific and complex category covering AI use in hiring, credit decisions, critical infrastructure and similar. Article 4 was not moved. Neither were the 2 August 2026 applicability dates for most other provisions. The literacy obligation has been live for more than a year, and the enforcement infrastructure stands up this August regardless of what happened with the high-risk rules.
This mismatch between what the press reported and what actually changed is one of the reasons we keep meeting organisations who think they have two years, when in fact the most relevant deadline for them has already passed.
What "sufficient AI literacy" actually looks like
Article 4 deliberately does not define literacy in detail, because it applies to organisations of every shape and size. But the European Commission's guidance, combined with the way regulators across member states are signalling their interpretation, points clearly to a role-based definition.
Sufficient literacy for an executive using AI to draft board papers is not the same as sufficient literacy for a customer service representative using AI to handle calls, which is not the same as sufficient literacy for a developer building AI features into a product. Each role has its own risks, its own common failure modes, and its own set of judgements the person needs to be able to make. A one-size-fits-all "what is AI" module satisfies none of them.
What good looks like, in our experience, has five components. An honest audit of where AI is already in use across the organisation — almost always more places than leadership realises. A segmentation of the workforce into risk-weighted groups based on how they use AI and what the consequences of a poor judgement would be. Role-specific learning programmes for each group, designed around the actual decisions those people make rather than around generic concepts. A programme of refreshers, because the tools are changing every quarter. And documented evidence of all of the above, because the regulator will ask.
None of this is exotic. All of it is work.
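For teams that want to track this work somewhere more durable than a slide deck, the five components can be sketched as a simple status record. This is an illustrative structure of our own, not anything prescribed by the Act — the component names and fields below are assumptions about what a minimal, evidenceable tracker might hold.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical tracker for the five programme components described above.
# "evidence" holds paths or links to the documents a regulator would ask for.
@dataclass
class ProgrammeComponent:
    name: str
    complete: bool = False
    evidence: list = field(default_factory=list)
    last_reviewed: Optional[date] = None

components = [
    ProgrammeComponent("AI use audit"),
    ProgrammeComponent("Workforce segmentation"),
    ProgrammeComponent("Role-specific learning programmes"),
    ProgrammeComponent("Refresher programme"),
    ProgrammeComponent("Documented evidence pack"),
]

# Example: marking the audit complete with its evidence attached.
components[0].complete = True
components[0].evidence.append("audits/ai-tool-inventory-2026.xlsx")
components[0].last_reviewed = date(2026, 4, 20)

# Anything still outstanding is the to-do list for the planning window.
gaps = [c.name for c in components if not c.complete]
print("Outstanding:", ", ".join(gaps))
```

The point of the structure is the evidence field: each component carries its own paper trail, so the documentation requirement is satisfied as a by-product of doing the work rather than as a separate exercise afterwards.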
The penalty picture — and the civil liability one
The headline penalty numbers are significant, but they are not the most immediate risk for most organisations. The regulator is unlikely to arrive at a mid-sized Irish firm in September looking to impose a €15 million fine for Article 4 non-compliance. What is far more likely is that Article 4 non-compliance becomes a factor in a different kind of proceeding — an employment tribunal, a discrimination claim, a data protection complaint, a professional negligence action — where the fact that staff were not adequately trained to use the AI systems involved becomes a contributory finding.
We have seen this pattern play out before. The legal liability that makes compliance training non-negotiable is rarely the headline penalty. It is the civil case where an untrained workforce becomes the reason the organisation loses.
What to do with the hundred days
For any organisation that has rolled out AI tools in the past two years and not paired that rollout with deliberate training, the hundred days between now and 2 August are a planning window, not a training window. It is too late to design and deliver a full role-based literacy programme in that time. What is achievable is to get the foundations in place: the audit, the segmentation, the first wave of training for the highest-risk roles, and a documented plan for the rest.
The organisations that will be in the strongest position in August are not the ones that have completed their literacy programmes. They are the ones that can show a regulator — or a tribunal — a coherent, evidenced plan, already in motion.
A free template to get the first two steps moving
To make the foundational work more practical, we have built an AI Use Inventory and Literacy Risk Register — a single working spreadsheet that covers the first two of the five components above. The inventory tab walks the team through mapping every AI tool currently in use across the organisation, with data-sensitivity tagging and training-status flags. The register tab groups the workforce by role, scores the impact of poor AI judgement against current and required literacy levels, and auto-calculates an action priority for each group. A summary tab rolls the two up into an executive-ready one-page view of where the gaps are most urgent.
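The register's scoring logic can be sketched in a few lines. The formula below — impact multiplied by the gap between required and current literacy — is our illustrative assumption, not necessarily the exact calculation in the template; the role names and the 1-to-5 scales are likewise hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class RoleGroup:
    name: str
    impact: int             # consequence of a poor AI judgement, 1 (low) to 5 (high)
    current_literacy: int   # assessed literacy today, 1 to 5
    required_literacy: int  # literacy the role actually needs, 1 to 5

    def action_priority(self) -> int:
        # Assumed formula: impact weighted by the literacy gap.
        # A group already at or above the required level scores zero.
        gap = max(self.required_literacy - self.current_literacy, 0)
        return self.impact * gap

groups = [
    RoleGroup("Customer service (AI-assisted calls)", impact=4, current_literacy=1, required_literacy=4),
    RoleGroup("Executives (AI-drafted board papers)", impact=5, current_literacy=2, required_literacy=4),
    RoleGroup("Developers (AI features in product)", impact=5, current_literacy=3, required_literacy=5),
]

# Highest priority first — this ordering is what drives the first wave of training.
for g in sorted(groups, key=lambda g: g.action_priority(), reverse=True):
    print(f"{g.name}: priority {g.action_priority()}")
```

Whatever the exact weighting, the design choice matters: priority is driven by the gap, not by literacy alone, so a high-impact group that is already well trained drops down the list while a modestly trained group in a high-consequence role rises to the top.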
It is not a training programme. It is the pair of foundation documents a regulator, tribunal, or board will expect to see first, and the ones that most organisations do not yet have. Free, ungated, and available alongside our other resources on the resources page.
What this means for professional institutes
Every professional institute that operates a CPD framework has an opportunity here that most have not yet recognised. Institute members — solicitors, accountants, directors, HR professionals, marketers, finance leaders — are among the most affected by Article 4. They use AI in work that carries explicit professional duties. Their employers are obligated to ensure their literacy. And the credential most likely to satisfy a regulator, a tribunal, or a disciplinary panel is an accredited, profession-specific one, delivered by the body that sets the standards for the profession.
The institutes that move first to accredit role-specific AI literacy as part of their CPD offering will own that territory. The ones that wait will find it filled by generic providers with no professional standing. The opportunity is visible now, the demand is already in the room, and the August deadline provides the urgency that normally takes years to generate.
For institutes still thinking about how to digitise their core programmes, AI literacy is a natural first move — smaller in scope than the flagship programme, urgent in demand, and a credential members actively want to display rather than quietly file.
Where we come in
At LearnFrame, we work with organisations and professional institutes on exactly this kind of design and delivery — from the literacy audit through to the role-based learning programmes that follow. Three decades of digital learning experience, a senior strategic team in Dublin, and a development capability that delivers enterprise-quality work at a fraction of typical agency cost — a structural advantage we have built deliberately, not a compromise on quality.
If your organisation or institute is somewhere on this curve and you would value an experienced second opinion on where to start, we would welcome a conversation.