August 2026: The Rest of the EU AI Act Goes Live. Is Your Generated Code Ready?
On August 2, 2026, the remainder of the EU AI Act becomes applicable. The first wave is already in force: prohibitions and AI literacy obligations since February 2, 2025, and the GPAI regime since August 2, 2025. The second wave is the larger one: the high-risk obligations, the conformity assessments, the documentation duties, the operational requirements that touch how systems are built, not just how they are deployed.
That leaves 95 days between today and August 2.
Most CTOs reading the regulation focus on whether their product is in scope — whether their AI system is classified as high-risk under Annex III, whether they will need a conformity assessment, what their CE marking obligations look like. Those questions matter and have been answered in countless compliance briefings.
The question that has been answered less often is the one that will arrive uninvited: when the auditor opens the engineering pipeline, what does the trail look like for code that was generated by AI? The audit will not ask for the code. The code is in the repo. The audit will ask how the code was decided. And that answer, for most teams, does not yet exist.
Software is still the expression of human intent. The speed of generation has changed. The need for intent has not.
What changes on August 2
The relevant shift, for engineering organizations, is not a single clause. It is the consolidation of an evidence regime. Several of the obligations that take effect — risk management systems for high-risk AI, technical documentation, automatic record-keeping, human oversight, accuracy and robustness, post-market monitoring — share a structural assumption: that the organization can reconstruct, on demand, how a system came to be the way it is.
For traditional software, this assumption was uncomfortable but manageable. Engineers wrote code; commits had authors; design documents existed; decisions were retrievable through reasonable forensic effort.
For AI-generated code shipped over the last 18 months without governance, the assumption breaks. The "author" of a function is a model that does not appear in the audit chain. The "design decision" was a prompt that was never persisted. The "rationale" was a developer's tacit reaction to an accepted diff. None of this is recoverable forensically. None of it satisfies an evidence request.
This is the gap. And ISO/IEC 42001:2023 — the AI management system standard published in late 2023 and increasingly treated by auditors as the operational complement to the EU AI Act — makes the gap explicit: regulators want to see not only what the system does, but how the organization governs the production of what the system does.
The four questions the audit is going to ask
Independent of which clause is being tested, the auditor's investigative path on AI-generated code converges on the same four questions. A team that can answer them quickly has a defensible posture. A team that cannot is rebuilding history from Slack threads.
One. What was the declared intent for this generation? Not "what does the code do" — that is observable from the code. What was the human-stated purpose at the moment of generation? If the answer is "we don't capture that," the regulator's next question writes itself: how do you know what was built was what was intended?
Two. Who is accountable for the rules this code encodes? When the model decides that an edge case rounds down rather than up, that a retry budget is three rather than five, that a particular customer category is excluded — those are business rules. They have owners outside engineering. The audit wants to see that the owner approved the rule, not that the model picked it.
Three. What context did the model have, and what did it not? Bad generations are often good models with missing context. The audit cares because context absence is a controllable risk. Teams that documented context inputs at generation time have a story; teams that did not are left with a question they cannot answer.
Four. Who reviewed and signed? Not "did someone merge the PR" — who explicitly attested that the generation matched the intent and respected the rules? A merge is not a signature. A signature has a name, a timestamp, and an artifact it points to.
These four questions map directly onto GAISD.2.1 (intent as gate), GAISD.4.2 (end-to-end traceability), GAISD.4.4 (event logging), and the broader principle of human accountability. They are not new. They are the regulatory framing of controls the framework already specifies.
Why "we have audit logs" is not the answer
A common reflex from engineering leadership is to point at existing observability — "we log everything, the trail is there." This is technically true and operationally insufficient. Conventional audit logs capture what happened at runtime: requests, responses, errors. They do not capture what was intended at build time, who decided, or what alternatives were rejected.
Regulators reading Annex IV of the EU AI Act are looking for the second category. The technical documentation requirements are explicit about decisions, design choices, and the rationale connecting them. ISO 42001 makes the same demand more operational: an AI management system is, in part, the evidence that the organization decides things on purpose and can show its work.
A team whose only artifact for AI-generated code is the runtime log is in the position of a chemical plant whose only audit artifact is the temperature sensor. Useful for operations. Not what the auditor came for.
The evidence stack a defensible team has on August 2
The good news is that the evidence regime, properly designed, is not heavy. It is a small stack of artifacts produced as a byproduct of the engineering loop — not as an additional process layered on top.
An intent record per generation. Structured, signed, queryable. What was being built, what guarantees it had to hold, what business rules it had to respect, who signed off. This is the same artifact described in GAISD.2.1; for the auditor, it is the answer to question one.
A rule registry with named owners. Every business rule the system encodes is registered, with an owner outside engineering. When code is generated that touches a rule, the generation cites the rule id and the owner is in the review chain. This is GAISD.2.3 — the governed functional spec — meeting the audit's accountability requirement.
A context manifest per generation. A list of the files, ADRs, schemas, and rule ids the agent had access to when producing the output. Context absence is auditable when context presence is recorded.
A signature chain per merge. Not just a merge approval — an attestation that the person merging has reviewed the diff against the intent and the cited rules. Two named humans minimum on anything touching a high-risk classification.
An out-of-band event log. All of the above, written to an append-only log that the engineering system itself cannot modify. This is the trail that survives the incident, the personnel turnover, and the audit two years later.
A team running this stack does not have to scramble in August. The evidence is being produced every day, automatically, as part of the normal generation workflow. The audit becomes a query, not an investigation.
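As a sketch of how light this stack can be, the snippet below gives one hypothetical shape for the intent record and the out-of-band event log. The field names and the `EvidenceLog` class are illustrative, not part of the GAISD specification; what they demonstrate is the property the audit needs: entries are written at generation time, each entry commits to the previous one, and any later tampering breaks verification.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class IntentRecord:
    """One generation's evidence record (illustrative fields)."""
    intent: str             # human-stated purpose at generation time
    guarantees: list        # invariants the generated code must hold
    rule_ids: list          # registered business rules the code may touch
    context_manifest: list  # files, ADRs, schemas the model could see
    signed_by: list         # named humans attesting intent and review
    timestamp: float = field(default_factory=time.time)


class EvidenceLog:
    """Append-only, hash-chained log: each entry commits to the previous
    one, so any later edit to any entry breaks verification."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, record: IntentRecord) -> str:
        data = asdict(record)
        payload = json.dumps(data, sort_keys=True)
        entry_hash = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"prev": self._prev, "hash": entry_hash, "record": data})
        self._prev = entry_hash
        return entry_hash

    def verify(self) -> bool:
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

In production the log would live outside the engineering system entirely, in a WORM bucket or transparency log, precisely so the system that writes the code cannot rewrite its own history.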
The 95-day plan
For a CTO who reads this and recognizes the gap, three movements, in priority order.
First, classify your exposure honestly. Not every system you ship is high-risk under the EU AI Act. But the systems that are — and the systems that will become so as your product evolves — need the evidence stack now, not in August. A two-week classification exercise across your portfolio, mapping each system to Annex III categories and to the ISO 42001 control areas, tells you where to apply the framework first. Without this, you will either over-invest (governance everywhere, including where it is not required) or under-invest (governance nowhere, including where it is required).
Note that the obligations for high-risk AI systems classified under Annex I (products and safety components subject to existing EU harmonisation legislation) apply from August 2, 2027, not 2026. The 2026 deadline is the one that bites for Annex III systems — biometrics, employment, education, essential services, law enforcement, migration, justice — and that is where most engineering organizations have exposure.
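The output of that classification exercise can be as simple as a triage table. A minimal sketch, with hypothetical systems and category labels that paraphrase Annex III:

```python
# Illustrative portfolio triage. The systems are hypothetical and the
# category labels paraphrase Annex III; a real exercise would also map
# each system to ISO/IEC 42001 control areas.
PORTFOLIO = [
    {"system": "cv-screening",    "annex_iii": "employment (point 4)"},
    {"system": "churn-dashboard", "annex_iii": None},  # not in scope
    {"system": "credit-scoring",  "annex_iii": "essential services (point 5)"},
]


def triage(portfolio):
    """Split the portfolio: Annex III systems get the evidence stack first,
    the rest are monitored for scope creep as the product evolves."""
    first = [s["system"] for s in portfolio if s["annex_iii"]]
    monitor = [s["system"] for s in portfolio if not s["annex_iii"]]
    return first, monitor
```

The point is not the data structure; it is that the split decides where the next six weeks of engineering effort go.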
Second, install the intent and traceability layer in your highest-exposure pipeline. Pick the one product line, the one repository, the one team where the regulatory exposure is sharpest. Wire in the intent gate, the rule registry, the context manifest, the signature chain, the out-of-band log. Treat this as the reference implementation. Six weeks is enough if the team is committed; the artifacts are small and the integration points are well-known.
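One hedged sketch of what the intent gate can look like at the merge point. The function name, field names, and thresholds below are assumptions for illustration, not an interface the framework prescribes:

```python
def check_merge_gate(intent_record: dict, diff_rule_ids: set, high_risk: bool) -> list:
    """Return the list of gate violations for a proposed merge; empty means pass.
    Fields and thresholds are illustrative, not prescribed by GAISD."""
    violations = []

    # Question one: was there a declared intent for this generation?
    if not intent_record.get("intent"):
        violations.append("no declared intent for this generation")

    # Question two: every rule the diff touches must be cited in the intent,
    # which puts the rule's owner in the review chain.
    uncited = diff_rule_ids - set(intent_record.get("rule_ids", []))
    if uncited:
        violations.append(f"diff touches uncited rules: {sorted(uncited)}")

    # Question four: a merge is not a signature; high-risk work needs
    # two named humans attesting against the intent.
    signers = set(intent_record.get("signed_by", []))
    required = 2 if high_risk else 1
    if len(signers) < required:
        violations.append(f"needs {required} named signature(s), has {len(signers)}")

    return violations
```

Called from CI before merge, a non-empty result blocks the merge and is itself written to the event log, so failed gates leave a trail too.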
Third, document the program, not just the controls. ISO 42001 — and the Annex IV expectations of the EU AI Act — want to see the management system, not just the technical controls. Who is the AI accountability owner at the executive level? How are risks reviewed? What is the post-market monitoring loop? What is the procedure when a generation fails the intent gate? This documentation is what the auditor reads first; the technical controls are what they verify against it.
A team that completes these three movements by August has a defensible answer when the question arrives. A team that delays will be doing forensics on commits made in 2025.
The invitation
The August 2, 2026 deadline is not a sudden surprise. It is the slow-rolling consequence of a regulation that was published, debated, and phased in over years. What is sudden is the realization, inside many engineering organizations, that the trail of AI-generated code does not exist in any form a regulator would recognize.
This is the moment when Conformity, Audit, and Standards stops being a compliance topic and becomes an engineering topic. The controls that satisfy the regulator are the same controls that prevent the silent architectural drift, the unowned business rules, and the irreproducible decisions that have been accumulating in AI-assisted codebases.
A defensible engineering organization is not one that resists the regulation. It is one whose normal practice already produces the evidence the regulation asks for — because it was already producing the evidence good engineering asked for.
If the trail does not exist in your pipeline today, the next 95 days are the runway. Sign the manifesto at gaisd.dev/sign, adopt the GAISD framework as the practical mapping between principle and audit, and build the evidence stack while there is still time to do it on purpose rather than in panic.
Architecture is what the system can be defended as. Defensibility is decided now.