The March 11 Deadline That Could Reshape AI Regulation in America

Five Days Until the Rules Change
On March 11, two of the most powerful federal agencies in the United States will publish policy statements that could fundamentally alter how AI is regulated across the country. The Federal Trade Commission must release a formal statement explaining how existing consumer protection law applies to AI models. The Secretary of Commerce must publish an evaluation identifying state AI laws deemed too burdensome and worthy of federal override. Both deadlines stem from Trump's December 2025 executive order on AI governance, which gave every federal agency 90 days to clarify its enforcement posture on artificial intelligence.
This isn't a theoretical exercise. Companies deploying AI in the United States are about to get concrete guidance on what the federal government considers acceptable, what crosses the line into deception, and which state regulations Washington intends to challenge. The stakes are enormous, and the countdown is nearly over.
What the FTC Statement Will Cover
The FTC's policy statement will define how Section 5 of the FTC Act, which prohibits unfair and deceptive practices, applies to AI models and AI-powered products used in commerce. This matters because Section 5 is one of the broadest consumer protection tools in American law. It covers everything from chatbots that make things up to hiring algorithms that discriminate.
There's a particularly provocative dimension to this: the executive order specifically asks the FTC to address whether state laws that "require alterations to the truthful outputs of AI models" are preempted by the federal prohibition on deceptive practices. Read that carefully. The administration is essentially asking the FTC to declare that certain state AI regulations, particularly those requiring bias corrections or content adjustments, could themselves constitute a form of mandated deception.
The FTC also plans to apply existing statutes like COPPA (children's privacy), the Fair Credit Reporting Act, and the Equal Credit Opportunity Act to AI applications without waiting for new AI-specific legislation. Enforcement could begin immediately after the March 11 statement drops. Companies using AI in advertising, credit decisions, hiring, or customer-facing applications should be paying very close attention.
The State Preemption Battle
Here's where it gets politically explosive. At least 17 states have passed AI-related laws that took effect on January 1, 2026. Colorado's landmark AI Act requires impact assessments for "high-risk" AI systems. Illinois has biometric privacy rules that affect AI facial recognition. California has disclosure requirements for AI-generated content. These aren't abstract proposals; they're enforceable law.
The Commerce Department's evaluation on March 11 will flag which of these state laws "conflict with federal policy" and recommend them for referral to the newly established Federal Task Force. The executive order specifically targets state laws that require AI models to alter truthful outputs, compel disclosures or reporting that the administration views as violating the First Amendment, or impose testing and certification requirements the White House considers unnecessarily burdensome.
Legal experts are skeptical about how far this can actually go. The Supreme Court has a long-standing "presumption against preemption," meaning federal law doesn't automatically override state law unless Congress clearly intended it to. A policy statement from the FTC isn't a regulation, and it's certainly not legislation. Courts are unlikely to accept Section 5 as a basis for overriding duly enacted state laws. But the mere act of the Commerce Department publicly flagging certain state laws as problematic sends a powerful signal to companies about which rules Washington intends to contest and which it considers legitimate.
The Paradox: More Uncertainty, Not Less
The administration framed the December executive order as reducing regulatory burden on AI companies. The actual effect, as multiple law firms have pointed out, is the opposite.
Before the executive order, companies knew they had to comply with state laws in the states where they operated. That was burdensome, but at least the obligations were clear. Now they face a three-way collision: federal agencies will challenge state laws, state attorneys general will defend their authority, and courts will adjudicate the competing claims. The result is a period in which nobody is entirely sure which rules apply.
Norton Rose Fulbright's analysis called it "regulatory uncertainty increasing, not decreasing." King & Spalding noted that the executive order "signals disruption" but doesn't actually repeal any state law. Companies operating across multiple states now need to track not just each state's AI requirements, but also the federal government's evolving position on which of those requirements it considers valid.
For smaller companies without large legal departments, this is a nightmare. For the tech giants with lobbying operations in Washington, it's an opportunity to shape the rules in their favor. The gap between those two realities is part of what makes this deadline so consequential.
What This Means for AI Companies
The practical implications break down differently depending on who you are.
If you're OpenAI, Google, Anthropic, or Meta, the March 11 statements are broadly favorable. The executive order's emphasis on "minimally burdensome" regulation and its skepticism toward state-level requirements aligns with what major AI labs have lobbied for: a light federal touch that prevents a patchwork of aggressive state rules.
If you're a company using AI in hiring, lending, healthcare, or advertising, you need the specifics. The FTC statement will tell you exactly what the commission considers an unfair or deceptive AI practice. Until that guidance exists, you're making compliance decisions based on inference and legal opinions. After March 11, you'll have a clear federal benchmark, even if state-level obligations remain murky.
If you're a state regulator, you're about to get publicly called out by the Commerce Department. Colorado Attorney General Phil Weiser has already signaled that his state intends to enforce its AI Act regardless of federal commentary. The stage is set for legal confrontation.
The Bigger Question Nobody's Asking
Lost in the regulatory process is a fundamental question: can the existing legal framework actually handle AI? The FTC is trying to squeeze artificial intelligence into laws written decades before anyone imagined large language models. Section 5's prohibition on "deceptive practices" was designed for misleading advertisements, not hallucinating chatbots. COPPA was written for websites collecting children's email addresses, not AI tutors processing everything a child says and does.
The administration has explicitly rejected the idea of new AI-specific legislation, preferring to work within existing authorities. That's a bet that 20th-century consumer protection law can adequately govern 21st-century technology. Legal scholars are deeply divided on whether that bet will hold up.
What's clear is that March 11 isn't the end of the conversation. It's the opening move in what will likely be years of litigation, enforcement actions, and political fights over who gets to set the rules for AI in America. The FTC and Commerce Department are about to put their cards on the table. Everyone else in the AI ecosystem needs to be ready to respond.
What to Watch
The March 11 statements themselves are the immediate event. Read them carefully, because the specific language around "truthful outputs" and state preemption will signal how aggressively the administration intends to push back against state regulation.
After that, watch for state attorney general responses. Colorado, Illinois, and California are the most likely to publicly defend their laws. Any formal legal challenge would create the first major federal court test of AI regulatory authority.
For companies, the window between March 11 and any potential court injunctions is the critical period. Federal guidance is only useful until a court says otherwise, and the first legal challenges could land within weeks of the statements being published. The smart move is to comply with the most restrictive applicable rules while tracking the federal position closely. Easier said than done, but that's the regulatory reality of AI in America right now.
References
- FTC AI Policy Deadline March 11: Compliance Guide - DigitalApplied
- The FTC's AI Preemption Authority is Limited - TechPolicy.Press
- New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption - King & Spalding
- A view from DC: Can the FTC preempt state AI laws? - IAPP
- The federal government weighs in on artificial intelligence governance - Norton Rose Fulbright