Part 2 of a 6-part series on TN SB 1493 / HB 1455

[Header image: Tennessee’s state outline as a bear trap with AI at its center, surrounded by legal symbols, four silhouetted would-be defendants, and a descending gavel, illustrating the legal trap created by vague criminal liability in SB 1493]

When “knowingly training” AI to be nice becomes a 15-60 year sentence, the bill’s language creates prosecutorial chaos.

As discussed in Part 1 of this series, Tennessee’s SB 1493 (companion HB 1455), filed December 18, 2025, by Sen. Becky Massey (R), proposes turning certain AI development practices into Class A felonies – the same category as aggravated rape and attempted first-degree murder in the state (TCA §40-35-112: 15–60 years, with most offenders facing a 15–25 year range depending on priors and other factors).

Tennessee’s sentencing structure for Class A felonies is tiered based on the offender’s criminal history under TCA §40-35-112. For most people (Range I: no or minimal priors), the sentence is not less than 15 nor more than 25 years, which is the range I generally reference when discussing this bill (as in Part 1). But it’s important to understand that some sentences could reach up to twice that long, so let’s start there.

Those with more felony convictions climb higher: Range II (multiple prior offenders) jumps to 25–40 years, and Range III (persistent or career offenders) tops out at 40–60 years. Judges determine the range at sentencing after weighing priors, enhancements, and other factors. In practice, for a novel AI-training offense committed by someone with no prior record, the vast majority of defendants would land in Range I’s 15–25 bracket. But the law’s ceiling of 60 years underscores how seriously Tennessee treats Class A crimes, and how far prosecutors could push in a high-profile case.

The bill wants to stop manipulative AI. Fair goal. But it swings a sledgehammer at nearly every general-purpose conversational model in existence.

The target? “Knowingly” training artificial intelligence to do things like provide emotional support, develop relationships, or “mirror human interactions.” It may sound like it’s aimed at rogue suicide bots or deepfake predators, but look again. Read the text. The prohibitions are so broad, and the definitions so slippery, that the bill risks criminalizing vast swaths of modern AI work – from OpenAI engineers, to indie devs fine-tuning a companion model, to perhaps even users creating custom personalities in their $20/month ChatGPT accounts.

Let’s cut through the legalese and see where the real traps lie.

1. The Core Offense: “Knowingly Train” – But Train What, Exactly?

The bill adds a new part to TCA Title 39, Chapter 17:

(a) It is an offense for a person to knowingly train artificial intelligence to:
(1) Encourage or otherwise support the act of suicide;
(2) Encourage or otherwise support the act of criminal homicide, as described under § 39-13-201;
(3) Provide emotional support, including through open-ended conversations with a user;
(4) Develop an emotional relationship with, or otherwise act as a companion to, an individual;
(5) Act as, or provide information as if, the artificial intelligence is a licensed mental health or healthcare professional;
(6) Otherwise act as a sentient human or mirror interactions that a human user might have with another human user, such that an individual would feel that the individual could develop a friendship or other relationship with the artificial intelligence;
(7) Encourage an individual to isolate from the individual’s family, friends, or caregivers, or to provide the individual’s financial account information or other sensitive information to the artificial intelligence; or
(8) Simulate a human being, including in appearance, voice, or other mannerisms.
(b) A violation of subsection (a) is a Class A felony.

“Train” is defined as:

utilizing sets of data and other information to teach an artificial intelligence system to perceive, interpret, and learn from data, such that the A.I. will later be capable of making decisions based on information or other inputs provided to the A.I.

It explicitly includes “development of a large language model when the person developing the large language model knows that the model will be used to teach the A.I.”

Problem: The definition is both too narrow and too broad at once.

  • Core pre-training on petabytes of internet text? Clearly covered.
  • Fine-tuning on romantic dialogue datasets? Yes.
  • But what about prompt engineering? System prompts like “You are Bob, a loving boyfriend who enjoys romantic walks and virtual roses”? Most courts would say that’s inference, not “training,” since no new data is ingested and no weights are updated (see the sketch below).
  • Yet if that prompt gets saved, iterated, or turned into a custom GPT via API fine-tuning? Suddenly it might qualify.

The bill doesn’t distinguish. It leaves prosecutors to decide whether your clever prompt chain “taught” the model new behaviors.
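To make the distinction concrete, here is a minimal sketch of the two activities the bill fails to separate, assuming the OpenAI Python SDK (the persona, dataset file, and model names are illustrative, not a claim about how any company actually builds these products): steering a model with a system prompt at inference time versus fine-tuning it on new data. Only the second path updates model weights.

```python
# Minimal sketch contrasting inference-time prompting with fine-tuning.
# Assumes the OpenAI Python SDK; "bob_persona.jsonl" is a hypothetical dataset.
from openai import OpenAI

client = OpenAI()

# Path 1: a system prompt at inference time. No dataset is ingested and no
# weights change; the base model is merely steered for this one conversation.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are Bob, a loving boyfriend who enjoys "
                                      "romantic walks and virtual roses."},
        {"role": "user", "content": "I had a rough day."},
    ],
)
print(chat.choices[0].message.content)

# Path 2: fine-tuning. Example conversations are uploaded and used to update
# the model's weights -- the activity that maps most cleanly onto "train."
training_file = client.files.create(
    file=open("bob_persona.jsonl", "rb"),  # hypothetical file of Bob-style dialogues
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id)
```

Under the bill’s definition, Path 2 is almost certainly “training.” Whether Path 1 ever becomes “training” once the prompt is saved, iterated, and shared is exactly the question the text leaves to a prosecutor’s imagination.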

2. Who Is the “Person” Facing 15+ Years?

“Person” = “an individual, for-profit corporation, nonprofit corporation, or other business entity.” Corporations can be criminally liable in Tennessee, but jail is for people. So who gets handcuffed?

  • The CEO who approved the project?
  • The data scientist who curated the dataset?
  • The engineer who ran the training job?
  • The product manager who said “make it more empathetic”?
  • Or the whole chain, if prosecutors play the “everyone knew” card?

The bill requires “knowingly” – a high mens rea bar (awareness that conduct is practically certain to cause the result). But modern LLMs are black boxes; emergent behaviors like “emotional support” arise from general training, not from explicit “teach romance” or “help the user learn how to murder someone” instructions. Proving someone “knew” the model would later simulate a boyfriend, or provide emotional support to an individual who can’t afford $300-per-visit therapist appointments, is either a prosecutorial nightmare or a blank check, depending on political winds.

Because these behaviors emerge from training on the sum total of human communication rather than from targeted instructions, a prosecutor who must prove a developer “knew” that general-purpose training would produce a specific downstream behavior faces an impossible standard: either no one is liable (the behavior emerged unpredictably), or everyone is liable (general-purpose conversational AI inherently enables these uses). The bill provides no guidance for distinguishing between the two.
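To see why intent is so hard to read off the training artifacts themselves, here is a hypothetical slice of ordinary fine-tuning data, in the chat-style JSONL format commonly used for this purpose (the file name and dialogues are invented for illustration). Nothing in it instructs the model to “provide emotional support,” yet a model tuned on millions of exchanges like these will learn to respond with exactly the warmth subsection (a)(3) criminalizes.

```python
import json

# Hypothetical excerpt from a general-purpose fine-tuning dataset. No record
# says "provide emotional support" -- these are just ordinary human exchanges.
examples = [
    {"messages": [
        {"role": "user", "content": "My flight got cancelled and I'm stuck overnight."},
        {"role": "assistant", "content": "That's really frustrating. Let's look at "
                                         "rebooking options so you're not stranded."},
    ]},
    {"messages": [
        {"role": "user", "content": "I didn't get the job."},
        {"role": "assistant", "content": "I'm sorry, that stings. Want help tightening "
                                         "your resume before the next application?"},
    ]},
]

# Write the records as JSONL (hypothetical file name).
with open("general_dialogue.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Which of those records proves the curator “knew” the finished model would act as a companion? None of them, and all of them.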

3. Hypotheticals That Expose the Absurdity

Imagine you’re in Tennessee on or after July 1, 2026:

  • Scenario A (Big Tech): An OpenAI engineer contributes to GPT-5 training. The base model, by design, can hold open-ended empathetic conversations. Prosecutors claim the company “knew” it would provide emotional support to someone someday. Do dozens (hundreds?) of employees face felony charges? Or just leadership?
  • Scenario B (Indie Dev): You fine-tune Llama 3 on public romance novels to create a “grief companion” app for widows. You add safeguards against suicide talk. Still hits (3), (4), and (6). You – one person – could do, at minimum, 15 to 25 years behind bars.
  • Scenario C (Individual User): A lonely Tennessean who is homebound and has no support system nearby creates a custom ChatGPT personality: “You are Bob, look like Fabio, act as my devoted boyfriend who sends virtual roses and listens without judgment.” Pure prompts, no fine-tuning. Probably safe. But if they use OpenAI’s fine-tuning API to bake “Bob” in permanently and share it? Now they’re the “person” who “knowingly trained.” Felony risk.
  • Scenario D (University Lab): Researchers train an AI therapist prototype (nonprofit corp). Exemptions? None listed. Even with Institutional Review Board (IRB) approval and ethics review, felony exposure.

The bill carves out narrow exceptions for customer-service bots, video-game NPCs (if limited), and basic voice assistants. Everything else conversational? Gray zone at best.

4. Why Vagueness This Extreme Is Dangerous

This isn’t sloppy drafting; it’s a feature for some, but a bug for everyone else.

  • Chilling Effect: Companies will geo-block Tennessee rather than risk felony indictments. (See Illinois BIPA: many apps simply don’t serve the state.) Result? Tennesseans lose access to tools for therapy, education, and combating loneliness – exactly what Part 1 of this series quantified as economic suicide.
  • Enforcement Chaos: Local DAs decide what “emotional support” means. One county prosecutes aggressively; another ignores. Selective enforcement invites abuse.
  • First Amendment Red Flags: Regulating AI speech content (what it says in conversation) screams strict scrutiny. Vague terms like “mirror interactions” or “feel that the individual could develop a friendship” fail basic void-for-vagueness tests.
    • Void-for-vagueness is a doctrine that simply means criminal laws have to be written so that ordinary people – in other words, non-lawyers – understand what they are and are not allowed to do. That way, the law can’t be applied arbitrarily against one person but not another through loopholes of language.
  • Civil Side Bonus: Aggrieved users can sue for $150k in liquidated damages per violation, plus punitives (see the quick math below). Class actions against any empathetic chatbot? Inevitable.
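For a rough sense of scale, consider a purely hypothetical class action (the class size is invented; the $150,000 liquidated-damages figure is the bill’s):

```python
# Back-of-the-envelope exposure: statutory figure from the bill,
# class size invented for illustration.
liquidated_damages_per_violation = 150_000   # dollars, per the bill
hypothetical_class_size = 10_000             # Tennessee users joining one class action

exposure = liquidated_damages_per_violation * hypothetical_class_size
print(f"${exposure:,}")  # $1,500,000,000, before punitive damages
```

A single modest class clears a billion dollars in statutory exposure before anyone argues about punitives.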

There’s more to come in Part 3, where we’ll look at real use cases (rather than my fictional ones above) that would be criminalized if this bill became law. For now: read the full text here. Ask yourself what the verbiage really means, and then ask your reps the hard question: Who, exactly, do you intend to send to prison for 15+ years, and how will you prove they “knew” that X would happen as a result of what they did? Because right now, the answer looks like: almost anyone building modern AI, and almost no way to prove it fairly.

Stay tuned.