AI regulation turns real
In 2025, the AI debate was mostly rhetorical. It lived in hearings, executive orders, and open letters. In 2026, it becomes operational. Not because Congress finally passed a sweeping national AI law. It did not. Instead, the first wave is arriving the way American regulation often arrives, through state and local hooks that can actually be enforced.
A city requires bias audits for hiring algorithms. A state privacy regulator finalizes rules for automated decision tools. Another state enacts a broad “high-risk AI” framework, then has to postpone implementation because the definitions and compliance costs are unsettled. That is what AI regulation looks like at the start. It is not one statute. It is a patchwork of audits, disclosures, and risk assessments built on existing state power.
That patchwork creates two realities at once. It creates more accountability in the places AI already bites, including hiring, credit, insurance, health care, and consumer profiling. It also creates a compliance maze that rewards incumbents and pushes the country toward a preemption fight. This piece explains what the first wave actually requires, why it is happening at the state level, what is working so far, and what is failing.
Executive summary
States are regulating AI because they can. They already have enforcement tools in privacy, civil rights, consumer protection, and employment. The most important early rules are governance rules rather than model rules. They focus on documentation, disclosure, audits, and the right to challenge outcomes.
New York City’s hiring rules are the clearest live example of how algorithmic oversight works in practice, because they translate bias audits and notice into an enforceable compliance regime. California is pushing automated decision governance through privacy regulation, which is one pathway for “California effect” standards to become national without a federal AI law. Illinois is showing how simple employment disclosure requirements can spread quickly because they are easy to legislate and easy to enforce. Colorado’s ambitious high-risk AI approach previews the next phase, but delays and redesign debates show why broad AI statutes are hard to implement.
The big tradeoff is patchwork compliance. It pressures firms toward one national standard, but it also raises costs and can entrench large players.
1) Why states moved first
Federal AI legislation is hard for the same reason every big national technology law is hard. It forces Congress to choose among values that do not reconcile cleanly. Innovation competes with safety. Free expression competes with harm prevention. Trade secrets compete with public accountability. Civil rights enforcement competes with “do not slow down the industry.” When the coalition is unstable, Washington stalls.
States and cities do something different. They regulate where harms are legible and where authority already exists. Hiring discrimination is a civil rights and employment problem. Consumer profiling is a privacy problem. Biometrics is a consent problem. Health care tools are a licensing and patient safety problem. You do not need a national AI bill to act on those. You need an agency with authority, a jurisdiction willing to use it, and a standard that is enforceable without requiring regulators to understand a model’s architecture.
That last point explains the shape of the first wave. Most early AI rules do not try to force companies to disclose every detail of how their models work. They try to do something more operational. They require disclosures at the point of use, independent testing for bias, documentation of risk management, and mechanisms for people to challenge outcomes. The logic is to regulate the system as deployed rather than the model in the abstract.
2) The first wave, as it actually exists
It is tempting to build a “50 states are passing AI laws” narrative. The more accurate story is narrower. A small number of jurisdictions are producing the first enforceable templates, and other jurisdictions are watching what holds up.
A) New York City: hiring audits as the live template
New York City’s Automated Employment Decision Tools regime (Local Law 144) is the best real-world example of algorithmic oversight that is already live. In plain terms, it operationalizes independent testing through bias audits, notice to affected people, and a form of public accountability through reporting.
The significance is not that New York City solved algorithmic bias. The significance is that it turned a fuzzy demand, “make hiring AI accountable,” into a compliance program that a regulator can enforce and a company can implement. That is what the first wave looks like.
B) California: privacy regulation becomes automated decision regulation
California’s privacy regime is already the closest thing the U.S. has to a de facto national standard, because companies would rather comply once than maintain dozens of variants. The California Privacy Protection Agency is extending that logic into automated decision systems through regulations that cover automated decisionmaking technology, risk assessments, and cybersecurity audits.
The details and timelines matter, but the high-level effect is straightforward. California is building a governance layer around automated decisions using privacy authority. That is one of the most plausible ways the U.S. gets practical AI regulation without a federal AI law.
C) Illinois: employment disclosure as an easy-to-spread rule
Illinois amended its Human Rights Act through House Bill 3773 (Public Act 103-0804), effective January 1, 2026. The amendment adds an employer notice requirement when AI is used in covered employment actions and an explicit AI-related nondiscrimination hook.
Some rules spread because they are comprehensive. Other rules spread because they are simple. Employment disclosure requirements are simple. They do not require regulators to inspect models. They require employers to tell workers when an AI tool is used in specified contexts. That is a low-cost rule to legislate and a low-cost rule to enforce, and it is politically durable because it sounds like basic fairness.
D) Colorado: the next phase, and why it is hard
Colorado’s high-risk AI approach matters because it represents the next phase: a broad, cross-sector law aimed at preventing algorithmic discrimination across multiple domains. If you want to regulate AI across employment, housing, credit, and health care, you end up with a law like that.
But broad laws run into hard implementation questions quickly. What counts as high-risk. What counts as meaningful human oversight. What is the audit standard. Who pays for compliance infrastructure. The reported delay is not just political. It is a design story. Broad AI statutes are hard because they have to define technical concepts tightly enough to enforce while still remaining workable for employers, vendors, and regulators.
3) What the rules tend to require (the shared core)
Even when the laws differ, the compliance logic converges. The first wave is building a common toolkit.
A) Impact and risk assessments
Impact and risk assessments are a modern version of “show your work.” The regulator’s goal is not to force full transparency of code. It is to force deployers to document what the system is for, what data it uses, what the risks are, what safeguards exist, and how errors and complaints are handled. The principle is simple. If you want to automate consequential decisions, you do not get to treat the system as a black box that nobody is responsible for.
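To make that concrete, here is a minimal sketch of what such documentation can look like in machine-readable form. The field names and example values are hypothetical, not drawn from any particular statute; the point is that an assessment is structured documentation about the deployed system, not disclosure of the model itself.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: these field names are hypothetical and do not
# track any specific statute or regulation. The idea is that an
# impact/risk assessment is structured "show your work" documentation.
@dataclass
class RiskAssessment:
    system_name: str          # the deployed tool being assessed
    purpose: str              # the consequential decision it supports
    data_sources: List[str]   # categories of input data
    known_risks: List[str]    # e.g., disparate impact, error modes
    safeguards: List[str]     # e.g., human review, appeal path
    complaint_channel: str    # how affected people raise errors
    last_reviewed: str        # ISO date of the most recent review

assessment = RiskAssessment(
    system_name="resume-screening-tool-v2",
    purpose="Rank applicants for interview selection",
    data_sources=["resume text", "application form fields"],
    known_risks=["disparate impact by sex or race", "proxy features"],
    safeguards=["annual bias audit", "recruiter review of rejections"],
    complaint_channel="hr-appeals@example.com",
    last_reviewed="2026-01-15",
)

print(assessment)
```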
B) Bias audits and disparate impact testing
Bias audits are the most concrete form of algorithmic accountability because they translate civil rights concepts into testing requirements. The goal is not to prove a model is unbiased in a philosophical sense. The goal is to catch and reduce measurable disparate impacts.
The hard part is that standards are still emerging. Which metrics count. What data is required. How to test for intersectional harms. What threshold defines a material disparity. Early regimes often start with a requirement to test and disclose, then refine standards through practice.
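New York City’s rules, for example, center on impact ratios: the selection rate for each demographic category divided by the rate for the most-selected category. The sketch below shows that arithmetic with invented numbers and simplified categories. Real audits use historical applicant data and more granular, including intersectional, categories, and the audit rules themselves do not set a pass/fail threshold; the 0.8 figure used here is the traditional civil rights rule of thumb.

```python
# Sketch of an impact-ratio calculation in the spirit of bias audits
# like NYC's Local Law 144. Numbers and group labels are made up.
applicants = {"group_a": 400, "group_b": 300, "group_c": 150}
selected   = {"group_a": 120, "group_b":  60, "group_c":  24}

# Selection rate: share of each group's applicants who were selected.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each group's selection rate relative to the group
# with the highest rate. A ratio well below 1.0 flags possible
# disparate impact (the traditional rule of thumb is 0.8).
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in sorted(impact_ratios.items()):
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}{flag}")
```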
C) Notice and disclosure
The lowest bar for accountability is that people should know when an automated system is making or shaping consequential decisions about them. Disclosure requirements are imperfect. People can get notice fatigue. But disclosures are often the first enforceable step, and they create the precondition for complaints, audits, enforcement, litigation, and political scrutiny.
D) Human review and redress
A core fear about automated systems is not only that they make mistakes. It is that they make mistakes at scale, and the institution cannot be reached. That is why many frameworks converge on the same demand: a path to appeal, a path to a human, and a path to correction.
In practice, the hard part is avoiding rubber-stamp humans. Meaningful human review is a staffing and governance requirement. It costs money. That is why it becomes a real constraint and not only a symbolic promise.
4) What is working so far
It is easy to mock early AI regulation as fragmented and bureaucratic. Some of it is. But some of it is producing real gains.
First, it is creating a compliance vocabulary. Before the first wave, many firms had an “AI ethics” posture that was mostly internal. Now the same concepts show up in operational checklists, including risk assessments, audit reports, disclosure language, and escalation pathways. Regulation often works by making practices repeatable. This is what repeatable looks like.
Second, it is forcing attention where incentives are weakest. Left alone, automated systems tend to optimize for cost reduction, speed, and operational convenience. The first wave is forcing attention toward disparate impact, error rates, and remediation. That is the margin where institutions underinvest without external pressure.
Third, it is creating a market for compliance tooling. Governance requirements produce a new layer of vendors and infrastructure, including audit services, documentation tooling, privacy-preserving evaluation, and decision logs. Some of this is rent-seeking. Some of it is the infrastructure you need if you want to deploy automated systems responsibly.
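To make the last item concrete, one hypothetical shape a decision log can take is an append-only record per automated decision, so that audits, appeals, and regulators have something to reconstruct later. The field names below are invented for illustration, not taken from any vendor or statute.

```python
import json
from datetime import datetime, timezone

# Hypothetical decision-log entry. The goal is that every consequential
# automated decision leaves a record that an auditor, a human reviewer,
# or an appeal process can later reconstruct.
def log_decision(system: str, subject_id: str, outcome: str,
                 model_version: str, inputs_summary: dict,
                 human_reviewer: str | None = None) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "subject_id": subject_id,          # pseudonymous identifier
        "model_version": model_version,    # which version produced the outcome
        "inputs_summary": inputs_summary,  # features used, not raw data
        "outcome": outcome,
        "human_reviewer": human_reviewer,  # None means no human in the loop
    }
    return json.dumps(entry)

print(log_decision(
    system="credit-line-adjuster",
    subject_id="applicant-00042",
    outcome="declined",
    model_version="2026.01.3",
    inputs_summary={"utilization_band": "high", "tenure_years": 2},
))
```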
5) What is failing (or likely to fail)
The first wave is generating predictable problems.
Patchwork compliance is real. The U.S. does not have one AI law. It has local hiring rules, state privacy rules, state employment disclosure rules, and emerging high-risk AI rules. For national firms, the choice becomes maintaining a jurisdiction-by-jurisdiction compliance program or adopting the strictest standard nationwide. That second response is the California effect. But it is not free. It raises costs even in states that did not enact the rule.
Enforcement capacity is also thin. Regulators often do not have sufficient staff, technical expertise, or budget. In the early stage, enforcement is often driven by complaints, investigative journalism, and litigation. That can be messy, but it is often how new regimes find their footing.
Definitions remain unstable. Early rules often rely on terms like high-risk, meaningful human oversight, and consequential decision. Those terms decide what is covered, and they are not yet settled. That is why broad statutes face more turbulence than narrow disclosure rules.
The biggest long-run risk is entrenchment. A mature compliance program is easier for a large company than a small one. That is true in finance, health care, and privacy. It will likely be true in AI. Audits, legal review, and documentation can become a competitive moat. That is the uncomfortable tradeoff. AI regulation can reduce harm while consolidating market power.
6) The federal void, and the preemption fight that follows
The logic of the patchwork pushes toward one of two endings. One path is a strong federal law that harmonizes requirements while preserving enforcement. The other path is a weak federal law that preempts strong state laws.
In practice, industries often push for the second. Not because they oppose regulation in principle, but because they want a single standard, and they want it to be light. The political problem is that the U.S. is trying to regulate a fast-moving technology in a polarized environment.
That is why the near-term path is unlikely to be a grand federal settlement. It is more state and local action, followed by legal fights over preemption, commerce clause questions, and the boundary between transparency and compelled speech.
Conclusion: the experiment has begun
The first wave of AI regulation is not a single national framework. It is a set of enforceable rules that grew out of state and local authority. That wave is messy, partial, and uneven, but it is real. Once rules are real, the debate changes.
Companies have to build programs, not just statements. Regulators have to enforce, not just warn. The country also has to decide what it actually wants: fragmented accountability now, or harmonized accountability later. Either way, the era of “AI regulation is coming” is over. The era of AI regulation as real policy has begun.
Sources (starter set)
Primary / official
- California Privacy Protection Agency (CPPA) announcement on approved regulations covering automated decisionmaking technology (ADMT), risk assessments, and cybersecurity audits (effective Jan 1, 2026): https://cppa.ca.gov/announcements/2025/20250923.html
- New York City DCWP: Automated Employment Decision Tools (Local Law 144) overview: https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
- Illinois General Assembly: Public Act 103-0804 (HB3773) (effective Jan 1, 2026): https://www.ilga.gov/legislation/publicacts/fulltext.asp?Name=103-0804
Neutral trackers
- NCSL: Artificial intelligence legislation tracker: https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation
Credible secondary explainers (use sparingly, label as analysis)
- Mayer Brown (Jan 2026): CPPA amendments re ADMT, cybersecurity audits, and risk assessments: https://www.mayerbrown.com/en/insights/publications/2026/01/updates-to-the-ccpa-regulations-what-businesses-need-to-know-now-about-automated-decision-making-cybersecurity-audits-and-risk-assessments
- Seyfarth Shaw: Colorado delay and Illinois disclosure effective Jan 1, 2026: https://www.seyfarth.com/news-insights/artificial-intelligence-legal-roundup-colorado-postpones-implementation-of-ai-law-as-california-finalizes-new-employment-discrimination-regulations-and-illinois-disclosure-law-set-to-take-effect.html
Federal context
- NIST: AI Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework
- White House OSTP: Blueprint for an AI Bill of Rights (2022): https://www.whitehouse.gov/ostp/ai-bill-of-rights/
