Part 5 of a 6-part series on TN SB 1493 / HB 1455

Digital illustration of a humanoid robot in handcuffs between two police officers, standing in front of a cracked judge’s gavel and a torn U.S. Constitution, with storm clouds, lightning, the Tennessee state flag, and the Tennessee Capitol building in the background, symbolizing Tennessee’s AI bill colliding with constitutional protections.

First Amendment, void-for-vagueness, Commerce Clause – pick your poison. This bill violates them all.

As highlighted in part 4 of this series, Tennessee’s SB 1493 demonstrably creates enforcement nightmares. It’s also a constitutional minefield that courts will dismantle piece by piece for the same reason they’ve struck down past overreaches in tech regulation: it runs headlong into constitutional principles developed over centuries.

Let’s examine why this bill is unlikely to survive its first serious legal challenge.

First Amendment: Code as Speech

Federal courts have consistently held that computer code constitutes protected speech under the First Amendment.

Key precedent: Bernstein v. U.S. Department of State (N.D. Cal. 1996), later affirmed by the Ninth Circuit in 1999 as Bernstein v. U.S. Department of Justice, established that source code is speech protected by the First Amendment, even when that code has functional applications. District Judge Marilyn Hall Patel wrote:

“This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French…. Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it…”

Application to SB 1493: AI training constitutes:

  • Writing code that processes information
  • Selecting datasets that represent language and ideas
  • Creating systems that generate expressive content

Tennessee’s bill criminalizes creating AI that can hold “open-ended conversations” – conversations that are themselves speech. The state is criminalizing the creation of tools that enable expression because Tennessee doesn’t like some of the expression those tools might enable.
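The conduct the bill reaches is, concretely, the writing of code. Here is a deliberately toy sketch of what “training a system to hold open-ended conversations” amounts to at its simplest. The function names, data, and approach are invented for illustration and bear no resemblance to production LLM training, which uses gradient descent over billions of parameters – but the legal point is identical either way:

```python
import random
from collections import defaultdict

# Toy illustration only: "training an AI to hold open-ended
# conversations" is, at bottom, writing and running code like this.

def train(transcripts):
    """Learn which word tends to follow which from example dialogue."""
    model = defaultdict(list)
    for line in transcripts:
        words = line.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def reply(model, prompt_word, max_words=8, seed=0):
    """Generate an 'open-ended' continuation from the learned table."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(max_words):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

transcripts = [
    "how are you feeling today",
    "i am feeling okay thanks for asking",
    "that is good to hear",
]
model = train(transcripts)
print(reply(model, "how"))
```

Under Bernstein’s reasoning, every line above is language: it communicates information both to a computer and to anyone who can read it.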

Tennessee residents remain free to speak elsewhere – on social media or in in-person gatherings – even when some would classify their words as extreme, harmful, violent, or radical. Under this bill, conversations with large language models (LLMs) would become the one exception to that rule.

The constitutional problem: Content-based restrictions on speech face strict scrutiny – the highest level of judicial review. Tennessee must demonstrate:

  1. A compelling government interest
  2. A law narrowly tailored to achieve that interest
  3. The least restrictive means available

Tennessee’s challenge: “Protecting people from emotional support” isn’t a compelling interest in the legal sense. It looks more like paternalistic thought control: the state unilaterally deciding which forms of emotional support are acceptable – or dangerous – for every one of its residents.

And even if courts accepted it as compelling, the bill fails narrow tailoring by criminalizing vast swaths of beneficial AI development to prevent potential harms.

In First Amendment doctrine, SB 1493 is not a neutral safety rule; it is a content-based regulation of speech. A law is content-based when it singles out speech because of what it communicates – its topic, message, or viewpoint. The Supreme Court has repeatedly held that such laws are “presumptively unconstitutional” and subject to strict scrutiny because they allow government to favor some ideas while suppressing others.

SB 1493 draws exactly that kind of line. It does not regulate all AI output equally. It targets only those systems that engage in certain kinds of conversation: “emotional support,” “companionship,” or interactions that make a user “feel” they could form a relationship. Neutral factual answers are allowed; emotionally resonant ones become a Class A felony. That is textbook content-based regulation: the legal consequences turn on whether the speech sounds supportive, relational, or human-like.

Void-for-Vagueness Doctrine

The Fifth and Fourteenth Amendments require criminal laws to define prohibited conduct with sufficient clarity that ordinary people can understand what’s illegal. Vague laws violate due process.

The void-for-vagueness test requires laws to:

  1. Provide notice to citizens about what conduct is prohibited
  2. Prevent arbitrary and discriminatory enforcement

SB 1493’s vagueness problems:

“Emotional support”: What distinguishes emotional support from:

  • Helpful information that reduces stress
  • Patient explanation that builds confidence
  • Accessible assistance that provides relief

“Open-ended conversations”: What makes a conversation “open-ended”?

  • Any multi-turn dialogue?
  • Conversations without predetermined endpoints?
  • Adaptive responses based on context?

“Develop an emotional relationship”:

  • Is relationship determined by user feelings or AI behavior?
  • Does remembering preferences constitute relationship development?
  • Can an AI “develop” something it doesn’t experience in the same way a human does?

“Feel that the individual could develop a friendship”:

  • Who determines what feelings constitute potential friendship?
  • Is this based on user statements or prosecutor interpretation?
  • Does “could develop” mean might, possibly, or definitely?

“Mirror interactions that a human user might have with another human user”:

  • All human conversations are different – which “interactions” are prohibited?
  • Does explaining concepts “mirror” how teachers interact with students?
  • Does answering questions “mirror” how experts interact with clients?

Google search mirrors how a librarian helps you find information. Autocorrect mirrors how an editor fixes your spelling and grammar. GPS mirrors how a local gives you directions. Spell check mirrors how an elementary school teacher corrects your homework. Email auto-complete mirrors how an administrative assistant finishes your sentences. A scientific calculator mirrors how a mathematician solves equations.

The provision is so vague that it arguably bans every useful function an AI could perform – conversation or no conversation – because every helpful AI behavior “mirrors” some human interaction pattern.

That’s what helpful means: providing assistance in ways humans recognize as assistance.
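To see how far “mirroring” stretches, consider a hypothetical spell-corrector of a few lines (the word list is invented for illustration). It holds no conversation and offers no emotional support, yet it plainly “mirrors how an editor fixes your spelling”:

```python
# A trivial spell-corrector. Under SB 1493's "mirror" language, this
# already mirrors a human interaction pattern -- no chatbot, no
# open-ended conversation, no emotional content required.
# (Hypothetical sketch; the correction table is invented.)

CORRECTIONS = {"teh": "the", "recieve": "receive", "adress": "address"}

def correct(text):
    """Replace known misspellings, exactly as a human proofreader would."""
    return " ".join(CORRECTIONS.get(w, w) for w in text.split())

print(correct("teh package will recieve a new adress"))
# -> "the package will receive a new address"
```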

If this bill passes, an AI engineer in California could face 15-25 years in a Tennessee prison for doing the job they were hired to do – and nothing in the statutory language would tell them whether teaching a model to have patient, helpful conversations puts their freedom in jeopardy.

Writing about secondary copyright infringement in the Ninth Circuit back in 2010, former Electronic Frontier Foundation (EFF) intern Paul Szynol observed: “One of the principal problems with this approach, however, is the fact that the boundaries of secondary liability are not precisely set, and, short of extreme cases, it is not at all clear under what circumstances a product manufacturer will be liable for secondary infringement.”

The same is true of SB 1493 as it currently stands.

Commerce Clause Violations: The “Indivisible Market” Defense

The Commerce Clause assigns the regulation of interstate commerce to Congress, and its dormant corollary limits state laws that burden that commerce. While the Supreme Court’s 2023 decision in National Pork Producers Council v. Ross allows states to regulate products sold within their borders, SB 1493 crosses the line by imposing a Class A felony on the very process of interstate digital production.

1. The “Regulatory Patchwork” Doctrine

Under the Dormant Commerce Clause, states cannot impose regulations that create a “patchwork” so inconsistent that it grinds national commerce to a halt.

  • The Problem: AI training is geographically indivisible. Unlike physical goods that can be labeled “Not for Sale in Tennessee,” a large language model is a single artifact – the same weights serve users in every state.
  • The Impact: If Tennessee bans “emotional simulation” while California mandates “empathetic safety alignment,” a developer cannot comply with both. This creates an unconstitutional “inconsistent regulatory burden” on a national industry.
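The compliance conflict can be stated as simple set logic: if one state forbids a property of the single shared model that another state requires, no configuration of that model satisfies both. A toy formalization, where the feature label and the two rules are hypothetical stand-ins for the TN/CA conflict described above:

```python
# One model serves every state; contradictory state rules leave no
# lawful configuration. (The feature label is a hypothetical stand-in.)
TN_FORBIDS  = {"empathetic_responses"}   # Tennessee: model must NOT have this
CA_REQUIRES = {"empathetic_responses"}   # California: model MUST have this

def satisfies_tn(features):
    return TN_FORBIDS.isdisjoint(features)

def satisfies_ca(features):
    return CA_REQUIRES.issubset(features)

# Exhaustive check over both possible configurations of the feature:
possible = [set(), {"empathetic_responses"}]
print(any(satisfies_tn(f) and satisfies_ca(f) for f in possible))  # -> False
```

Because the check is exhaustive, the developer’s only options are to withdraw from one state’s market or to fragment the product itself – the definition of an inconsistent regulatory burden.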

2. The Pike Balancing Test (Still Alive)

Under Pike v. Bruce Church (1970), a state law is unconstitutional if its burden on interstate commerce is “clearly excessive” in relation to its putative local benefits.

  • The Burden: SB 1493 threatens out-of-state engineers with 15 to 25 years in prison for standard development practices that are legal in the other 49 states.
  • The Lack of Benefit: Tennessee’s interest in preventing “AI friendships” can be achieved through less restrictive means, such as mandatory disclaimers or “robotic-mode” toggles, rather than criminalizing the act of training the model itself.

3. Discrimination Against Interstate Developers

SB 1493 specifically exempts local-style uses (like customer service bots and video game characters) while targeting “foundational” AI models.

  • The Constitutional Problem: By exempting the AI applications most common to Tennessee’s local businesses while felonizing the specific technologies produced by out-of-state “Big Tech,” the bill functions as de facto economic protectionism.

Application: The “Slippery Slope” Re-imagined

If Tennessee can criminalize the training of software in California because that software is “human-like,” then:

  • Texas could criminalize the coding of encryption in Washington.
  • Florida could criminalize the development of social media algorithms in New York.
  • The Result: The U.S. digital economy would fragment into 50 “splinternets,” destroying the single national market the Commerce Clause was designed to protect.

Overbreadth Doctrine

Even if Tennessee has legitimate interests in preventing harmful AI, the First Amendment prohibits laws that sweep so broadly they chill protected speech.

The overbreadth analysis: Does the law prohibit substantially more protected conduct than unprotected conduct?

SB 1493’s overbreadth:

Prohibited under the bill:

  • Accessibility tools for disabled users
  • Grief counseling applications
  • Language learning tutors
  • Elderly companionship systems
  • General-purpose conversational AI

Potential harms the bill targets:

  • Predatory programming in chatbots encouraging self-harm
  • Manipulative systems that have been programmed to purposely exploit vulnerable users
  • AI that actively isolates users from their human support network (the bill implicitly assumes every Tennessee resident has such a network – an assumption that cannot hold for every resident of the state)

The ratio: The bill criminalizes thousands of beneficial applications to prevent dozens of potentially harmful ones. That’s unconstitutionally overbroad.

Equal Protection Concerns

The Fourteenth Amendment requires states to apply laws equally. Arbitrary classifications without rational basis violate equal protection.

SB 1493’s arbitrary distinctions:

Video game NPCs get a carve-out, but grief counseling apps don’t. Why?

  • Both involve “emotional” interactions with AI
  • Both create experiences users might describe as relationships
  • Both “mirror human interactions”

Customer service bots are exempted, but accessibility tools aren’t. Why?

  • Both provide conversational assistance
  • Both adapt to user needs
  • Both could provide “emotional support” by reducing frustration

The bill’s exemptions reveal arbitrary line-drawing without rational basis. If AI companionship is dangerous enough to warrant 15-25 year sentences, why do video games get a pass? If emotional support from AI is harmful, why exempt customer service interactions that provide relief and support?

Tennessee’s Legal Exposure

If challenged, Tennessee faces:

Preliminary Injunctions: Courts will likely block enforcement before trial, finding plaintiffs are likely to succeed on constitutional grounds.

Attorney’s Fees: Under 42 U.S.C. § 1988, Tennessee must pay prevailing plaintiffs’ attorney fees in civil rights cases. Well-resourced tech companies and civil liberties groups will not be shy about litigating, and Tennessee may ultimately pay millions in attorney’s fees.

Prolonged Litigation: Tennessee taxpayers will fund years of appeals through state and federal courts, with a likely outcome being that the courts strike the law down and order fee payment.

Political Damage: National attention on Tennessee criminalizing emotional support, accessibility tools, and grief counseling is likely to create backlash far beyond AI policy debates.

Similar Laws Struck Down

Courts have repeatedly rejected overly broad technology regulations:

  • Brown v. Entertainment Merchants Association (2011), 564 U.S. 786. California tried to ban the sale of violent video games to minors, arguing that interactive media posed special psychological risks. The Supreme Court struck the law down, holding that video games are fully protected speech and that the state cannot restrict access to protected expression simply because it disapproves of the content. The Court emphasized that alleged harms to children and parental concerns were not enough to justify a broad content-based restriction when less restrictive alternatives existed.
  • Reno v. American Civil Liberties Union (1997), 521 U.S. 844. Congress’s Communications Decency Act made it a crime to transmit “indecent” or “patently offensive” online content that minors might see. The Supreme Court unanimously invalidated key provisions as vague and overbroad, warning that in trying to protect children, the government had effectively criminalized a vast amount of lawful adult speech and burdened the entire medium of internet communication. The Court held that the internet is entitled to full First Amendment protection and that fear of online harms does not justify sweeping, content-based criminal prohibitions.

Together, these cases draw a clear boundary Tennessee is likely to crash into. Legislatures do not get to criminalize broad categories of protected speech – whether violent games or “emotionally supportive” conversations – on the theory that some subset of that speech might be harmful. When a law targets specific types of content, reaches far beyond the conduct it claims to prevent, and offers no realistic way to confine enforcement to truly harmful cases, federal courts have not hesitated to strike it down.

What This Means

Tennessee lawmakers wrote an unconstitutional policy that courts would be able to dismantle on multiple grounds:

  • First Amendment protection for code as speech
  • Void-for-vagueness violations of due process
  • Commerce Clause constraints on interstate regulation
  • Overbreadth doctrine preventing speech suppression
  • Equal protection issues with arbitrary classifications

The timeline:

  1. Bill passes
  2. Tech companies / civil liberties groups immediately file suit
  3. Courts grant preliminary injunction blocking enforcement
  4. Years of expensive litigation Tennessee might very well lose
  5. Final court order striking law + ordering attorney fee payment
  6. Tennessee taxpayers fund millions in legal fees for nothing

That is the fate that typically awaits laws that are unconstitutionally vague, overbroad, and content-based.


Next in this series: Part 6 examines what good AI regulation actually looks like – targeted harm prevention, clear standards, enforcement mechanisms that work.