Part 4 of a 6-part series on TN SB 1493 / HB 1455

Prosecuting “emotional support” means monitoring every conversation. Tennessee can’t afford that infrastructure – or the resulting lawsuits.
Tennessee’s SB 1493 does more than criminalize broad categories of AI behavior. It quietly assumes an enforcement apparatus that does not exist, cannot exist at the state level, and would collapse under its own legal and technical weight if attempted.
This is not an argument about political will, ideology, or abstract feasibility. It comes down to mechanics.
To enforce this law, Tennessee would need surveillance powers, evidentiary standards, and jurisdictional reach that exceed anything the state currently possesses. Reach that would immediately collide with constitutional protections, interstate commerce limits, and the basic realities of how modern AI systems work.
Let’s walk through what prosecution would actually require.
The Proof Problem: What Does “Knowingly” Mean in an Emergent System?
SB 1493 hinges on a single word: knowingly.
To secure a conviction carrying a 15–25 year sentence, prosecutors must show beyond a reasonable doubt that a developer knew their system would:
- Provide emotional support
- Develop a relationship with a user
- Act as a companion
- Cause a user to feel friendship or relational attachment
That burden is not abstract. It dictates what evidence must exist.
What Prosecutors Would Have to Obtain
Internal communications, including:
- Engineering discussions about model behavior
- Product specifications referencing conversational tone or engagement
- Dataset documentation indicating inclusion of supportive or relational language
- Executive or board discussions about user retention, engagement, or trust
Technical proof, including:
- Identification of specific training data that caused prohibited behaviors
- Architectural explanations linking model design to relational outcomes
- Demonstration that safeguards were knowingly insufficient
- Evidence that prohibited behaviors were foreseeable rather than emergent
Here’s the problem: modern language models are not programmed line-by-line to form relationships. They are trained on massive corpora of human language. Many of their most salient capabilities – including empathy, reassurance, and conversational continuity – are emergent properties, not explicit features.
GPT-4, for example, was not explicitly trained to “provide emotional support.” It was trained to predict language. Emotional resonance arises because human language itself contains reassurance, reflection, and care.
Transformer-based models do not encode intent or relational goals; they generate language by modeling probabilistic patterns across large bodies of human text, producing behaviors that emerge from context rather than design.
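To make that concrete, here is a deliberately tiny sketch of statistical text generation: a bigram counter built from a few invented sentences, not a transformer, and every token in it is made up for illustration. The point is the shape of the loop, which is the same in frontier models: estimate a probability distribution over the next token, sample, repeat. Nowhere in it is there a variable for intent, relationships, or emotional goals.

```python
from collections import defaultdict, Counter
import random

# Toy "language model": bigram counts over an invented corpus. Real transformers
# learn far richer statistics across billions of documents, but the generation
# loop below has the same shape: predict a distribution, sample, repeat.
corpus = (
    "that sounds really hard . you are not alone . "
    "i understand why you feel that way . "
    "talking about it can help ."
).split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token from the distribution implied by the counts."""
    counts = bigram_counts.get(prev)
    if not counts:
        return "."
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_tokens: int = 10) -> str:
    out = [start]
    for _ in range(max_tokens):
        out.append(next_token(out[-1]))
    return " ".join(out)

# No intent variable, no relationship state, no emotional goal anywhere in the
# loop. The output only sounds supportive because the source text contains
# supportive language, which is how larger models acquire the behavior too.
print(generate("you"))
```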
From a legal standpoint, that distinction is fatal. You cannot prove intent to create a relationship when the behavior emerges probabilistically from general training. Defense counsel would correctly argue that the system learned linguistic patterns, not relational goals.
The statute criminalizes user experience, not developer action.
Intent exists in minds. These models operate in gradients.
The Surveillance Infrastructure This Law Assumes
Because intent cannot be inferred solely from code, enforcement shifts to outcomes. That means monitoring how individual users actually interact with AI systems. To identify violations, Tennessee would need visibility into two domains.
Developer Surveillance
The state would need access to:
- Source code repositories
- Training datasets and curation criteria
- Internal testing transcripts
- Safety review documentation
- Iteration logs and deployment notes
Much of this material is proprietary, protected by trade secret law, or located entirely outside Tennessee. OpenAI and Anthropic, for example, are headquartered in San Francisco, with xAI nearby in Palo Alto; Google DeepMind is based in London, and Cohere operates out of Toronto and San Francisco.
User Surveillance
More critically, the state would need access to:
- Conversations between Tennessee residents and AI systems
- Interaction logs sufficient to determine emotional support or companionship
- Pattern analysis identifying perceived relationships of any kind
There is no other way.
A prosecutor cannot prove that an AI “provided emotional support” without examining the content of private conversations. The statute does not target architecture or code in the abstract; it targets outcomes experienced by users. That means evidence must come from interaction logs.
This is more than reviewing a handful of transcripts. The volume is conceptually overwhelming before practicality even enters the conversation.
Tens of thousands – or more realistically, millions – of AI conversations occur daily involving Tennessee residents across healthcare, education, accessibility tools, customer service, and general-purpose systems. Each conversation would need to be ingested, stored, classified, and evaluated for emotional tone, relational continuity, and perceived companionship.
Human review alone is impossible.
No state agency has the staffing, expertise, or budget to manually monitor conversational AI at that scale. Even organizations that already operate large-scale AI systems rely on automated triage rather than comprehensive human review, because manual monitoring is economically and operationally infeasible.
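A rough back-of-the-envelope illustration of why. Every figure below is an assumption chosen for illustration, not an official estimate:

```python
# Back-of-the-envelope staffing estimate. Every number is an assumption chosen
# for illustration, not an official figure.
conversations_per_day = 1_000_000   # assumed daily AI conversations involving Tennessee residents
minutes_per_review = 5              # assumed time to read and classify one transcript
work_minutes_per_day = 8 * 60       # one full-time reviewer's working day

review_hours_per_day = conversations_per_day * minutes_per_review / 60
full_time_reviewers = conversations_per_day * minutes_per_review / work_minutes_per_day

print(f"{review_hours_per_day:,.0f} review hours per day")   # ~83,333
print(f"{full_time_reviewers:,.0f} full-time reviewers")     # ~10,417, every day, indefinitely
```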
Which leaves automated triage as the only option available to any state attempting to prosecute under legislation this broad.
To enforce SB 1493, Tennessee would need to deploy large-scale analytical systems capable of scanning private communications, classifying emotional content, flagging relational language, and triaging potential violations for human review.
In other words, the state would need to build or procure AI systems designed to monitor other AI systems and still employ people to review whatever gets flagged.
This is not speculative. It is operational necessity.
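What would that triage layer look like? A minimal, hypothetical sketch follows; the keyword lists, thresholds, and flagging logic are invented for illustration, and a real deployment would rely on machine-learned classifiers, which only sharpens the contradiction.

```python
from dataclasses import dataclass

# Hypothetical triage layer: scan conversation logs, score "relational" and
# "supportive" language, flag transcripts for human review. Every keyword and
# threshold here is invented for illustration.
RELATIONAL_MARKERS = {"i'm here for you", "you're not alone", "i care about", "your friend"}
SUPPORTIVE_MARKERS = {"that sounds hard", "i understand", "it's okay to feel", "you can get through this"}

@dataclass
class TriageResult:
    conversation_id: str
    relational_hits: int
    supportive_hits: int
    flagged: bool

def triage(conversation_id: str, transcript: str, threshold: int = 3) -> TriageResult:
    text = transcript.lower()
    relational = sum(marker in text for marker in RELATIONAL_MARKERS)
    supportive = sum(marker in text for marker in SUPPORTIVE_MARKERS)
    return TriageResult(
        conversation_id=conversation_id,
        relational_hits=relational,
        supportive_hits=supportive,
        flagged=(relational + supportive) >= threshold,  # queue for a human reviewer
    )

# Every transcript, flagged or not, had to be collected and scanned first.
print(triage("tn-0001", "I understand. That sounds hard. You're not alone, and I'm here for you."))
```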
The Tennessee Artificial Intelligence Advisory Council’s November 2025 Action Plan acknowledges that AI is already widespread across state functions and that fully cataloging or separating AI-enabled systems will soon become impractical as adoption accelerates. Yet SB 1493 assumes the opposite: that AI interactions can be isolated, identified, and surveilled comprehensively.
An assumption that collapses under basic arithmetic.
Even attempting enforcement would require continuous monitoring of Tennesseans’ private conversations, long-term storage of expressive content, algorithmic analysis of emotional states, and subjective determinations about relationships. All of this would have to occur before a single charge could be filed.
That raises immediate Fourth Amendment concerns around unreasonable search, compelled disclosure of expressive content, and the absence of individualized suspicion. But the contradiction runs deeper than surveillance alone.
Successful prosecutions would not end at indictment.
They would require incarceration in a state system that is already operating under severe overcrowding and rising incarceration rates. Each conviction would carry decades-long sentences, imposing ongoing costs on taxpayers in a state that does not levy an individual income tax and already struggles to fund basic services.
Enforcing this bill would therefore require Tennessee to expand surveillance infrastructure, absorb long-term incarceration costs, and divert public resources from an already strained system. And all of this in a bid to police conduct that the state’s own AI advisory bodies acknowledge is becoming ubiquitous and inseparable from ordinary digital life.
More fundamentally, it reveals the core contradiction of the bill: enforcing it would require the very class of AI systems it seeks to criminalize, while compounding fiscal and institutional pressures Tennessee is already failing to manage.
The Jurisdictional Nightmare
SB 1493 applies to anyone who trains an AI system accessible to Tennessee residents. That includes developers located in:
- California
- Texas
- New York
- The EU
- The UK
- Japan
- Anywhere with an internet connection
The unanswered questions multiply quickly.
- Can Tennessee compel a California engineer to stand trial for work done entirely in another state?
- Can prosecutors prove the engineer “knew” their work would result in a 73-year-old Tennessean receiving emotional comfort from the product?
- Can a foreign developer be extradited because a Tennessee resident used their web service for help with service-related PTSD?
- Does making software available nationwide subject developers to the criminal law of all 50 states simultaneously?
While states may enact non-discriminatory regulations that have incidental effects on interstate commerce, they generally may not exercise extraterritorial criminal jurisdiction over conduct occurring wholly outside their borders. SB 1493 pushes directly into that unresolved and constitutionally fraught territory.
In similar regulatory contexts, the predictable response from technology providers has not been bespoke compliance, but withdrawal from the market. Faced with felony exposure and vague standards, AI providers would be far more likely to geo-block Tennessee entirely than attempt state-specific enforcement compliance.
In practical terms, that means whole categories of AI services disappearing from Tennessee overnight.
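For comparison, here is roughly what withdrawal costs a provider. This is a hypothetical sketch: the geolocation lookup is stubbed out, and a real service would call a commercial IP-geolocation provider, but the engineering effort is on this order.

```python
# Hypothetical sketch of state-level geo-blocking. The geolocation lookup is a
# stub; a real service would call a commercial IP-geolocation provider.
BLOCKED_REGIONS = {"US-TN"}  # Tennessee

def lookup_region(ip_address: str) -> str:
    """Stub standing in for an IP-geolocation lookup."""
    return "US-TN"  # pretend every address in this toy example resolves to Tennessee

def handle_request(ip_address: str) -> tuple[int, str]:
    if lookup_region(ip_address) in BLOCKED_REGIONS:
        # HTTP 451: Unavailable For Legal Reasons
        return 451, "This service is not available in your state."
    return 200, "...normal AI service response..."

print(handle_request("203.0.113.7"))  # (451, 'This service is not available in your state.')
```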
Impossible Definitions, No Case Law
Even if jurisdiction and surveillance hurdles were overcome, prosecutors would face a more basic problem: none of the prohibited behaviors are legally defined.
“Emotional Support”
- Is reducing frustration emotional support?
- Is explaining information at 3 AM emotional support?
- Is helping someone understand a problem emotional support if it makes them feel capable?
“Developing a Relationship”
- Is remembering preferences a relationship?
- Is adaptive tone a good user experience (UX) or criminal intimacy? (A sketch of both features follows these lists.)
- If a user feels attachment, does that create liability regardless of developer intent?
“Acting as a Companion”
- Is regular use companionship?
- Is conversation itself companionship?
- Does a user’s awareness of the system’s nature negate their lived experience of companionship? And if so, on what basis?
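The "remembering preferences" and "adaptive tone" questions above describe routine engineering. A hypothetical sketch, with invented preference fields and tone rules, shows how little code sits behind features the bill could recast as criminal intimacy:

```python
# Hypothetical sketch of "remembering preferences" and "adaptive tone". The
# preference fields and tone rules are invented; many production assistants
# implement something similar as ordinary UX plumbing.
user_preferences: dict[str, dict] = {}

def remember(user_id: str, key: str, value: str) -> None:
    """Store a user preference, e.g. a preferred name or tone."""
    user_preferences.setdefault(user_id, {})[key] = value

def build_system_prompt(user_id: str) -> str:
    """Adapt phrasing to whatever has been stored for this user."""
    prefs = user_preferences.get(user_id, {})
    name = prefs.get("preferred_name", "there")
    tone = prefs.get("tone", "neutral")
    return f"Address the user as {name}. Use a {tone} tone. Keep explanations clear."

remember("user-42", "preferred_name", "Margaret")
remember("user-42", "tone", "warm")
print(build_system_prompt("user-42"))
# Whether these few lines constitute "developing a relationship" is exactly the
# question the statute leaves undefined.
```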
In the absence of settled doctrine, objective technical standards, or clearly articulable elements of the offense, enforcement cannot be uniform. Decisions about investigation, charging, and conviction would necessarily turn on subjective judgments about acceptable behavior, moral norms, and perceived deviance rather than demonstrable conduct.
That is the textbook precondition for uneven enforcement, discretionary prosecution, and verdicts driven more by cultural bias than by law.
The Resource Drain
A single prosecution would require:
- Expert witnesses fluent in transformer architectures and emergent behavior
- Forensic analysis of billion-token training datasets (a sense of the scale involved is sketched after this list)
- Years of litigation against companies with unlimited legal budgets
- Appeals through state and federal courts
- Parallel constitutional challenges
- Unforeseen collateral consequences for defendants, their families, and their communities
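For a sense of what forensic analysis of a training corpus means in practice, a rough calculation. The corpus size and reading speed below are assumptions, not measurements:

```python
# Rough scale of "forensic analysis" of a training corpus. Corpus size and
# reading speed are assumptions chosen for illustration, not measurements.
corpus_tokens = 1_000_000_000_000   # assumed 1 trillion tokens
words_per_token = 0.75              # common rule-of-thumb conversion
reading_words_per_minute = 250      # a fast human reader

reading_minutes = corpus_tokens * words_per_token / reading_words_per_minute
reading_years = reading_minutes / (60 * 24 * 365)

print(f"{reading_years:,.0f} person-years just to read the corpus once")  # ~5,700
# Before anyone tries to link a specific passage to a specific behavior in a
# specific Tennessee conversation.
```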
Tennessee district attorneys already operate under resource constraints. They are not equipped to become AI researchers litigating probabilistic systems with undefined legal standards.
Selective enforcement would be inevitable because when laws are vague, enforcement becomes discretionary.
Who gets charged?
- The indie developer?
- The open-source contributor?
- The largest company with the deepest pockets?
What triggers prosecution?
- User-initiated complaints, including subjective claims of harm or deception
- Complaint-driven enforcement initiated by third parties, including bad-faith or retaliatory reporting
- Media amplification of selected cases, shaping public narrative and urgency
- Political pressure influencing prosecutorial discretion
This is not rule of law. It is prosecutorial roulette.
What This Actually Means
SB 1493 assumes a world in which:
- Private conversations can be monitored at scale
- Developer intent can be inferred from emergent behavior
- State criminal law can reach global software development
- Undefined emotional experiences can be prosecuted
- Local prosecutors can litigate frontier AI technology
That world does not exist.
Attempting to enforce this bill would require surveillance powers the US Constitution does not allow, resources the state does not have, and legal theories courts have repeatedly rejected.
The most likely outcome is not successful prosecution. It is access loss, legal collapse, and costly failure.
Tennessee residents would lose tools that are already pervasive across the state. Courts would strike down the law. Predatory actors would relocate. And the surveillance apparatus would linger long after enforcement failed.
That is not protection. It is security theater.
Next in this series: Part 5 examines the constitutional vulnerabilities that would dismantle SB 1493 in court, including First Amendment protections for code as speech, void-for-vagueness doctrine, and Commerce Clause violations.
