If You Use AI to Hire, New Rules Are Coming in 105 Days
The EU AI Act's high-risk hiring rules kick in on August 2. If your business uses any AI tool to screen resumes, rank candidates, or target job ads — even from outside Europe — you have about three months to get compliant or face massive fines.
By Troy Brown
If your business uses any kind of AI to help with hiring, a clock just started that you cannot afford to ignore. On August 2, 2026, the EU AI Act's high-risk rules for employment decisions go into full effect. That is 105 days from today.
This week, EU regulators confirmed the exact audit scope, documentation requirements, and compliance framework that businesses must meet by that date. It is no longer a future problem. It is a this-quarter problem.
Here is the plain version. Any AI system used to screen resumes, rank candidates, score interviews, or target job ads is now classified as high-risk under the EU AI Act. That means annual third-party bias audits, full technical documentation, human oversight at key decision points, and transparency disclosures to every candidate.
The part that trips people up is the reach. This does not only apply to European companies. If your business is based in the US, Canada, Australia, or anywhere else and you use AI tools to evaluate candidates who are located in the EU, you are covered. A remote job listing that is open to applicants in Germany or France brings your hiring stack under EU jurisdiction.
Think about what that includes. Your applicant tracking system's resume-ranking algorithm. The AI feature in your recruiting platform that scores candidates. The automated screening that filters out applications before a human ever sees them. The AI-powered job ad targeting that decides who sees your listing. All of it qualifies.
The penalties are not symbolic. Non-compliance can cost up to fifteen million euros or three percent of your global annual turnover, whichever is higher. For a small business, even the lower end of that range is existential.
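To make the "whichever is higher" formula concrete, here is a quick sketch. The €15 million floor and 3% rate come from the Act; the turnover figures below are made up for illustration:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """EU AI Act cap for high-risk non-compliance: EUR 15 million
    or 3% of global annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# A EUR 20M-turnover business is still exposed to the EUR 15M floor,
# because 3% of 20M is only EUR 600,000.
small_biz_cap = max_fine_eur(20_000_000)    # 15_000_000

# At EUR 600M turnover, the 3% figure takes over: EUR 18M.
larger_biz_cap = max_fine_eur(600_000_000)  # 18_000_000.0
```

The point of the floor is exactly what the article says: the cap does not scale down gracefully for small companies.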
So what do you actually need to have in place before August 2? Four things.
First, a written risk assessment. You need to document what AI tools you use in hiring, what decisions they influence, what data they process, and what risks they create. This does not need to be a hundred-page report. It needs to be honest and specific.
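One practical way to start that inventory is a structured record per tool. This is a sketch only — the field names and example entries are my assumptions, not an official EU AI Act template — but it shows the level of specificity an assessment needs:

```python
from dataclasses import dataclass

@dataclass
class HiringAITool:
    """Illustrative risk-assessment record for one AI hiring tool.
    The schema is an assumption, not a prescribed regulatory format."""
    name: str
    decisions_influenced: list  # e.g. "resume ranking", "auto-rejection"
    data_processed: list        # e.g. "CVs", "interview recordings"
    risks: list                 # e.g. "gender bias in training data"
    human_reviews_output: bool  # is a human in the loop before decisions?

inventory = [
    HiringAITool(
        name="ATS resume ranker",
        decisions_influenced=["resume ranking", "shortlisting"],
        data_processed=["CVs", "application form answers"],
        risks=["historical bias in past-hire training data"],
        human_reviews_output=True,
    ),
]

# Flag any tool whose output is acted on with no human in the loop.
unreviewed = [t.name for t in inventory if not t.human_reviews_output]
```

Even a short list like this, kept honest, is the backbone of the written assessment.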
Second, documentation of your training data sources. If your AI tool was trained on historical hiring data, you need to know what that data looked like. Was it balanced across gender, age, and ethnicity? Were there gaps? Your auditor will ask, and the answer cannot be "we do not know."
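If you can export the historical data your tool was trained or tuned on, even a rough distribution check answers the auditor's first question. A minimal sketch using only the standard library — the field name and the 30% representation threshold are assumptions for illustration, not regulatory values:

```python
from collections import Counter

def group_shares(records, field):
    """Share of records per group for one demographic field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_imbalance(shares, min_share=0.3):
    """List groups below an (assumed) minimum-representation threshold."""
    return [group for group, share in shares.items() if share < min_share]

# Toy historical hiring data; a real export would have many fields.
training_rows = [
    {"gender": "female"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"},
]
shares = group_shares(training_rows, "gender")
# shares == {"female": 0.25, "male": 0.75}
underrepresented = flag_imbalance(shares)  # ["female"]
```

A table of those shares, per protected attribute, is exactly the kind of artifact that turns "we do not know" into a documented answer.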
Third, a human review process for rejected candidates. AI tools cannot make hiring decisions on their own under these rules. A human has to review the AI's recommendations before a final decision is made, especially for rejections. If your system auto-rejects candidates without any human in the loop, that is a compliance problem starting August 2.
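In code terms, the rule turns an auto-reject path into a review queue. This sketch is a shape, not an implementation — the score and the 0.3 threshold are hypothetical — but it shows the flow regulators want: the AI recommends, a human decides.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # hypothetical 0-1 score from a screening tool

def triage(candidates, reject_threshold=0.3):
    """AI only *recommends*. Low scorers go to a human review queue
    instead of being auto-rejected."""
    advance, review_queue = [], []
    for c in candidates:
        if c.ai_score < reject_threshold:
            review_queue.append(c)  # a human must confirm any rejection
        else:
            advance.append(c)
    return advance, review_queue

pool = [Candidate("A", 0.9), Candidate("B", 0.2)]
advance, review_queue = triage(pool)
# "B" lands in the review queue; nobody is rejected without a human.
```

The compliance difference is that one branch: if the low-score path ends in a rejection rather than a queue, you have the auto-reject problem described above.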
Fourth, a signed audit engagement. This is the one that catches people off guard. You need a certified third-party auditor to review your AI hiring tools, and the pool of qualified auditors is small and filling up fast. If you wait until July to book one, you may not find one in time. Companies targeting August 2 compliance need an engagement letter signed now.
The EU is not the only place moving on this. In the United States, the patchwork is already here. New York City's Local Law 144 already requires annual bias audits for automated employment decision tools. Illinois now requires employers to notify applicants when AI is used in hiring decisions. California amended its fair employment laws to regulate automated decision systems in hiring, with data retention requirements stretching back four years.
The trend is clear. Whether the rules come from Brussels, Sacramento, or New York, the direction is the same: if you use AI to make decisions about people's jobs, you are going to have to prove those decisions are fair, documented, and supervised by a human.
For small business owners, the instinct might be to assume this does not apply to you. You are not a Fortune 500 company. You are not using a custom-built AI model. You just use the features that come with your hiring platform.
But that is exactly the point. Many popular hiring tools — the ones built into platforms like LinkedIn Recruiter, Workday, Greenhouse, Lever, and dozens of others — use AI to rank, filter, or recommend candidates under the hood. You may be using a high-risk AI system without realizing it, simply because it was a default setting in your recruiting software.
The single most useful thing you can do this week is ask your ATS or hiring platform vendor a direct question: is your tool classified as a high-risk AI system under Annex III of the EU AI Act? If they cannot answer clearly, that tells you something important about how prepared they are — and how exposed you might be.
There is a practical reason this matters beyond compliance. AI hiring bias is not a theoretical risk. Studies have repeatedly shown that AI resume screeners can disadvantage candidates based on gender, age, ethnicity, and even the neighborhood in their address. A tool that quietly filters out qualified people is not just a legal problem. It is a business problem. You are missing talent you would have hired if a human had looked at the application.
The honest advice for a small business owner is this. You do not need to panic. You do not need to rip out every AI tool in your hiring process. You need to know what you are using, understand what it does, make sure a human reviews the important decisions, and get your documentation in order before the deadline.
If you hire in Europe or have remote roles open to EU candidates, start the audit process now. If you only hire domestically in the US, check whether your state has its own rules — and assume that more are coming. The direction of regulation globally is unmistakable.
The takeaway is simple. AI made hiring faster. Regulation is about to make it accountable. The businesses that get ahead of this will not just avoid fines — they will build hiring processes that are fairer, more transparent, and better at finding the right people. The ones that ignore it will find out the hard way that the rules apply to them too.
Subscribe
Get the next issue in your inbox.
Join The AI Signal for clear weekly notes on tools, workflows, and the handful of AI developments that are actually worth your attention.