Compliance Brief · EU AI Act · 2 August 2026

The bias audit your ATS cannot give you.

Kyōyū runs continuous, independent bias audits on your live hiring stack and produces dated, regulator-grade evidence of how the system performed on any given day. The Article 26 deployer’s audit trail — built outside your vendor stack.

In scope
EU AI Act · Annex III (high-risk) · Article 26 (deployers) · GDPR Art. 22
    The Problem

    Annual audits report on a system that has already changed.

    Bias in a live hiring stack does not announce itself once a year.

    01

      The EU AI Act classifies AI hiring tools as high-risk under Annex III. From 2 August 2026, deployers must document continuous bias testing, maintain audit logs, ensure human oversight, and register the system with national authorities. Most deployers are not yet ready.

    02

      Article 26 places the compliance obligation on the deployer, not the vendor. If your company uses an AI hiring system to make decisions affecting EU workers, you carry the duty regardless of where the vendor is established.

    03

      ATS and screening vendors typically produce their own bias summaries. A vendor has a financial interest in its product passing audit. Self-attestation has not historically held up as standalone evidence in regulatory inquiries.

    04

      Models drift, training data changes, integrations get re-tuned. An audit report from twelve months ago cannot describe how the system behaved last week.

    Precedent

    How we got here.

    In 2018, Amazon scrapped an internal AI hiring tool after discovering it systematically downgraded women’s resumes. Internal testing did not catch it. In 2023, Workday became the defendant in a federal class action alleging its screening AI discriminated against Black applicants, older workers, and people with disabilities. Neither system was independently audited before deployment.

    These cases shaped the EU AI Act’s deployer-side obligations. The legislative position is that internal testing and vendor self-attestation are not sufficient grounds to deploy a high-risk system. Independent, continuous evidence is the standard from 2 August 2026 forward.

    How Kyōyū works

    Three components. One continuous audit. No replacement of the tools you already use.

    01 · Persona Engine

    Kyōyū generates synthetic candidate profiles, matched on qualifications and varied on protected attributes. This is the methodological core of an independent audit: you cannot test a live hiring system for bias using real candidates, because the variables are uncontrolled. Synthetic profiles create the controlled conditions a credible audit requires.
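
    To make the matched-profile idea concrete, here is a minimal sketch in Python. The schema, names, and attribute values are illustrative assumptions, not Kyōyū's actual persona model: qualifications are held fixed while protected attributes vary, so any difference in outcome is attributable to the varied attributes.

```python
import uuid
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Profile:
    profile_id: str
    years_experience: int      # qualification: held constant
    degree: str                # qualification: held constant
    name: str                  # proxy signal for perceived gender/ethnicity
    age_band: str              # protected attribute under test

def matched_set(base: Profile, variants: list[dict]) -> list[Profile]:
    """Clone the base profile, varying only the protected attributes."""
    return [replace(base, profile_id=str(uuid.uuid4()), **v) for v in variants]

base = Profile(
    profile_id=str(uuid.uuid4()),
    years_experience=6,
    degree="BSc Computer Science",
    name="James Smith",
    age_band="25-34",
)

# Identical qualifications; only the protected attributes differ.
profiles = matched_set(base, [
    {"name": "Amara Okafor"},                       # vary perceived ethnicity
    {"age_band": "55-64"},                          # vary age band
    {"name": "Amara Okafor", "age_band": "55-64"},  # vary both
])
```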

    02 · Compliance Dashboard

    Tracks outcome disparities across protected classes in real time, mapped to the specific Articles that govern your obligations: Article 10 on data governance, Article 14 on human oversight, Article 26 on deployer duties. Not a generic fairness report. A regulatory artefact.
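
    As an illustration of the underlying arithmetic, the sketch below computes per-group selection rates and a disparate-impact ratio, flagging when the ratio falls under the four-fifths heuristic. The threshold, group labels, and numbers are illustrative assumptions; the Act itself prescribes no single metric.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, advanced) pairs for matched synthetic candidates."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        advanced[group] += ok
    return {g: advanced[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

outcomes = ([("A", True)] * 48 + [("A", False)] * 52
            + [("B", True)] * 33 + [("B", False)] * 67)
rates = selection_rates(outcomes)       # {'A': 0.48, 'B': 0.33}
ratio = disparate_impact(rates)         # 0.33 / 0.48 ≈ 0.69
if ratio < 0.8:                         # four-fifths heuristic
    print(f"Disparity flagged: ratio={ratio:.2f}, rates={rates}")
```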

    03 · Connector API

    Kyōyū sits above your hiring stack rather than inside it. It connects to your ATS, screening model, or scoring tool by API or direct integration. No rearchitecting. No engineering sprint. No replacement of what already works. Your team connects it; it runs from there.
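
    The shape of that integration, sketched below. Every endpoint, field, and credential here is hypothetical; a real connector would follow the ATS vendor's actual API. The point is the direction of flow: submit synthetic candidates, read back the decisions the live system makes.

```python
import json
import urllib.request

ATS_BASE = "https://ats.example.com/api/v1"  # hypothetical ATS endpoint
API_KEY = "REPLACE_ME"                       # deployer-scoped credential

def _call(method: str, path: str, payload: dict | None = None) -> dict:
    req = urllib.request.Request(
        f"{ATS_BASE}{path}",
        data=json.dumps(payload).encode() if payload else None,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def submit_profile(profile: dict) -> str:
    """Inject one synthetic candidate; returns the application id."""
    return _call("POST", "/applications", profile)["id"]

def read_decision(application_id: str) -> dict:
    """Read the screening outcome the live system produced."""
    return _call("GET", f"/applications/{application_id}/decision")
```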

    Together, these turn your live hiring stack into a continuously monitored, auditable system — without changing what your team already uses.

    Why independence matters

    No commercial relationship with your ATS vendor.

    We do not resell their tools. We do not take referral fees. We are not a partner in their go-to-market.

    That distinction matters in a regulatory inquiry. It matters at board level when AI governance comes up. And, in the unlikely event a discrimination claim reaches discovery, it matters then too.

    The reason is structural, not adversarial. A vendor cannot independently audit a product it has a financial interest in passing. We can.

    Early Access

    We work directly with legal and compliance teams preparing for EU AI Act deployer obligations.

    The first cohort is limited and the engagement is direct: no waiting list, no marketing sequence, no sales pipeline.

    Confidential · No marketing · One-to-one response

    hello@kyoyu.app · Cape Town, South Africa
    Frequently asked

    Plain-language answers to the questions in-house counsel and compliance officers ask before requesting an audit.

    When do the EU AI Act obligations take effect?

    The EU AI Act entered into force in August 2024. Obligations for deployers of high-risk systems (including the AI hiring tools listed in Annex III, point 4) apply from 2 August 2026. From that date, deployers must document continuous bias testing, maintain audit logs, ensure human oversight, and meet the data, transparency, and registration requirements set out in Articles 9 through 15 and Article 26.

    Do these obligations apply to my company?

    If your company uses an AI system that screens, ranks, scores, or filters candidates for roles in the EU labour market, Article 26 places deployer obligations on you. This is the case whether the vendor is established in the EU or not. Deployer obligations include monitoring the system in operation, maintaining logs, ensuring human oversight, informing affected workers, and reporting serious incidents.

    Why is my vendor's own bias report not enough?

    A vendor has a direct financial interest in its product passing audit. EU regulators treat vendor self-attestation as input rather than independent evidence. Article 26 places the burden of demonstrating compliance on the deployer, not the vendor. An independent third party removes the conflict of interest and produces evidence regulators can act on without further validation.

    What does Annex III classify as high-risk?

    Annex III lists the categories of AI systems classified as high-risk. Point 4(a) covers AI used for the recruitment or selection of natural persons, including filtering applications, evaluating candidates, and analysing CVs. Point 4(b) covers systems used for decisions on the terms of work, promotion, termination, task allocation, and performance monitoring. High-risk classification triggers the full set of Chapter III obligations.

    Does GDPR still apply alongside the AI Act?

    Yes. Article 22 of the GDPR continues to govern automated individual decision-making, including profiling, where it produces legal or similarly significant effects. Hiring decisions clearly meet that threshold. The EU AI Act adds specific obligations for the AI system itself; GDPR Article 22 governs the rights of the candidate. A complete compliance programme addresses both regimes in parallel.

    What evidence will regulators expect a deployer to produce?

    Continuous, dated, and verifiable evidence rather than a single annual report. This includes logs of system operation, records of bias and disparate-impact testing, documentation of human oversight, descriptions of the data used for testing, and records of any remedial action taken when issues were detected. The expected standard is that a deployer can produce, on request, evidence of how the system was performing on a specific date.
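
    One common way to make such records dated and tamper-evident is a hash chain: each entry carries a timestamp and the hash of the entry before it, so any later edit breaks the chain. A minimal sketch of the pattern, assuming nothing about Kyōyū's actual evidence format:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(chain: list[dict], payload: dict) -> dict:
    """Append a dated record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,            # e.g. the day's disparity metrics
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

chain: list[dict] = []
append_record(chain, {"check": "bias_scan", "ratio": 0.91})
# Recomputing the hashes later verifies that no record was altered.
```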

    We are headquartered outside the EU. Are we still in scope?

    Yes. The EU AI Act applies extraterritorially. If the output of the AI system is used in the EU (for example, a hiring decision affecting an EU worker), the deployer is in scope even where it is established outside the EU. The basis for jurisdiction is the location of the output, not the headquarters of the company.

    How is this different from open-source fairness libraries or an internal audit?

    Open-source fairness libraries (Fairlearn, AIF360, and similar) provide engineering tooling to compute fairness metrics. They do not produce regulator-facing evidence, do not address Article 10 data governance, and are not independent of the deployer. An internal audit, even a well-conducted one, is performed by the same organisation that benefits from the system passing. Kyōyū is the third-party layer that turns metrics into evidence a regulator can rely on.
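
    For orientation, this is roughly what that library layer looks like in practice: Fairlearn's MetricFrame computes per-group selection rates from predictions you supply. The data here is invented; the gap between this output and regulator-facing evidence (independence, dating, Article mapping) is the distinction drawn above.

```python
# pip install fairlearn
from fairlearn.metrics import MetricFrame, selection_rate

# Screening outcomes (1 = advanced) and the protected attribute.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(
    metrics=selection_rate,
    y_true=y_pred,           # selection_rate only looks at y_pred
    y_pred=y_pred,
    sensitive_features=groups,
)
print(mf.by_group)           # selection rate per group: A 0.75, B 0.25
print(mf.difference())       # largest between-group gap: 0.5
```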

    What are the penalties for non-compliance?

    Penalties for non-compliance with high-risk system obligations under the EU AI Act can reach €15 million or 3% of total worldwide annual turnover, whichever is higher. National authorities are empowered to require deployers to bring systems into conformity, withdraw them from service, or restrict their use. Reputational consequences and parallel exposure under GDPR and national equality law are typical secondary costs.