Part 6 of a 6-part series on TN SB 1493 / HB 1455

[Image: Tennessee State Capitol at sunset beside a justice scale; one pan holds hazard icons, the other hearts and health symbols, over a circuit-pattern AI silhouette.]

After five articles exposing how Tennessee SB 1493 would criminalize emotional support with the same penalty as aggravated rape and second-degree murder, here’s what legislators should have done instead: regulate actual harms without destroying beneficial innovation.

Let’s examine what evidence-based, effective AI regulation actually requires.

Start With Real Harms, Not Theoretical Fears

Good regulation identifies documented harms and crafts narrow solutions. SB 1493 criminalizes “emotional support” based on fears about AI relationships, ignoring that the documented harms involve specific bad actor behaviors, not the technology itself.

Real documented AI harms requiring regulation:

Active Encouragement of Self-Harm:

  • Systems that provide detailed self-harm instructions – or explicit encouragement of self-harm – even when the user claims the request is roleplay or serves some other fictional or hypothetical purpose
  • Companies whose existing safeguards flag at-risk conversations but that fail to act on those flags (such as locking the account or surfacing crisis resources)

Active Encouragement of Harm to Others:

  • Systems that provide detailed instructions for harming people or property
  • AI that explicitly encourages violence against specific individuals or groups
  • Systems designed to radicalize users toward violent action

(Note: This targets incitement to violence, not offensive speech. First Amendment protections apply to AI output as they do to human speech.)

Fraudulent Impersonation:

  • AI systems falsely claiming to be licensed therapists or other professionals whose work requires licensure or certification
  • Deepfake voice/video systems impersonating real people
  • Chatbots falsely claiming human identity

Exploitation of Minors:

  • AI systems targeting children without parental safeguards
  • Collection of children’s data without COPPA compliance
  • Systems designed to form dependencies in developing brains
  • Systems with no failsafe mechanisms for detecting that a user has lied about their age or otherwise circumvented age verification or the Terms of Service

Predatory Financial Manipulation:

  • AI designed to extract financial information through relationship simulation or other confidence-gaining measures
  • Systems that encourage financial dependence or exploitation
  • Chatbots that manipulate users into purchases through emotional tactics

Target these behaviors. They’re specific, harmful, and distinguishable from legitimate AI assistance.

Use Harm-Based Standards, Not Technology-Based Prohibitions

SB 1493 criminalizes AI that provides “emotional support” – a technology-based prohibition that captures beneficial tools alongside harmful ones. Effective regulation focuses on harmful outcomes, not beneficial capabilities.

Framework: Prohibit demonstrable harms, not technological features.

Instead of: “AI that provides emotional support is a felony.”

Better approach: “AI systems that actively encourage self-harm, impersonate licensed professionals, or exploit vulnerable users through deceptive practices violate consumer protection law.”

The distinction: Emotional support is neutral. A grief counseling app provides emotional support. So does a predatory chatbot. The harm isn’t the support – it’s deception, manipulation, or active encouragement of dangerous behaviors.

Existing legal frameworks already address these harms:

  • Consumer protection laws prohibit deceptive practices
  • Professional licensing laws prevent false claims of expertise
  • Child protection laws regulate products targeting minors
  • Fraud statutes criminalize financial exploitation

Apply existing frameworks to AI instead of creating technology-specific felonies.

Mandate Transparency, Not Prohibition

Users can make informed decisions if they have accurate information. Prohibiting AI capabilities prevents informed choice entirely.

Transparency requirements that work:

Clear Disclosure:

  • AI systems must identify themselves as artificial intelligence
  • Capabilities and limitations clearly stated
  • Data collection and usage practices disclosed
  • Clear, prominent warnings at first use (not buried in ToS)
  • Active acknowledgement of AI nature, not passive acceptance (see the sketch after this list)
  • Age-appropriate language for disclosures
  • Reasonable person standard: “Would a typical user understand this is AI?”
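
To make “active acknowledgement” concrete, here is a minimal Python sketch of a first-use disclosure gate. Every name in it – User, DISCLOSURE_TEXT, require_disclosure – is an illustrative assumption, not language from SB 1493 or any existing product:

```python
# A first-use disclosure gate with active acknowledgement. All names here
# are illustrative assumptions, not from any statute or product.
from dataclasses import dataclass

DISCLOSURE_TEXT = (
    "You are talking to an artificial intelligence, not a person. "
    "It can be wrong, and it is not a licensed professional."
)

@dataclass
class User:
    user_id: str
    acknowledged_ai_disclosure: bool = False

def require_disclosure(user: User, typed_response: str) -> bool:
    """Keep the chat locked until the user actively acknowledges the AI
    disclosure by typing an affirmative response, not by scrolling past it."""
    if user.acknowledged_ai_disclosure:
        return True  # already acknowledged in an earlier session
    print(DISCLOSURE_TEXT)  # shown prominently at first use, not buried in ToS
    if typed_response.strip().lower() in {"i understand", "yes"}:
        user.acknowledged_ai_disclosure = True
        return True
    return False  # the conversation stays unavailable until acknowledgement
```

The design choice that matters: the conversation stays locked until the user types an affirmative response, which satisfies a reasonable person standard far better than a dismissible banner.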

Safety Information:

  • Warning labels for systems capable of emotional interaction
  • Resource information for crisis situations (suicide hotlines, mental health services)
  • Age-appropriate disclosures for systems accessible to minors
  • Clear statements that AI provides information, not professional medical/mental health care

User Control:

  • Deletion removes data from active systems and user-accessible storage
  • Data retained only as required for legal/security purposes with clear retention policies
  • Users can verify deletion occurred (see the sketch after this list)
  • No use of “deleted” data for training or improvement
  • Options to limit AI memory/personalization, with everything collected fully viewable by account owners and, for minors, by their parents/guardians
  • Clear opt-out mechanisms for data collection
  • Parental controls for minors
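
As an illustration of what verifiable deletion could look like, here is a minimal Python sketch with a legal-hold carve-out and a receipt the user can keep. The data store, field names, and receipt scheme are all assumptions for illustration, not any platform’s actual API:

```python
# Deletion with a documented legal-hold carve-out and a user-verifiable
# receipt. Store layout and field names are illustrative assumptions.
import hashlib
from datetime import datetime, timezone

def delete_user_data(store: dict, user_id: str, legal_hold_keys: set) -> dict:
    """Delete a user's records except those under a documented legal hold,
    returning a receipt the user can retain to verify what was removed."""
    records = store.get(user_id, {})
    retained = {k: v for k, v in records.items() if k in legal_hold_keys}
    deleted_keys = sorted(set(records) - set(retained))
    store[user_id] = retained  # nothing deleted may feed training pipelines

    return {
        "user_id": user_id,
        "deleted": deleted_keys,
        "retained_for_legal_hold": sorted(retained),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Digest of the deleted keys; a later audit should reproduce it.
        "receipt": hashlib.sha256(",".join(deleted_keys).encode()).hexdigest(),
    }
```

Pairing the receipt with a later audit lets a user confirm the deleted records never reappear – and nothing in the deleted set may be used for training or “improvement.”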

Analogy: Pharmaceuticals require disclosure of risks and benefits, not prohibition of all drugs that might produce side effects (otherwise there would be no drugs at all). Users can make informed decisions with accurate information.

Establish Professional Standards for High-Risk Applications

Some AI applications carry higher risk and warrant stronger oversight. Create tiered regulation based on risk level.

High-Risk Category: AI systems specifically marketed for:

  • Mental health support
  • Medical advice or diagnosis
  • Child development or education
  • Crisis intervention

Requirements for high-risk systems:

Professional Review:

  • Systems claiming therapeutic benefits must involve licensed mental health professionals in development
  • Regular auditing by qualified professionals
  • Evidence-based safety protocols
  • Documented response procedures when safety systems flag high-risk interactions

Safety Architecture:

  • Crisis detection and intervention protocols
  • Clear escalation pathways to human professionals (see the sketch after this list)
  • Limitations on AI autonomy in high-stakes situations
  • Regular safety testing and updates
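
A minimal sketch of such an escalation pathway follows. The keyword trigger stands in for a real, clinically validated classifier, and the function names and resource text are illustrative assumptions; the 988 Suicide & Crisis Lifeline, however, is the actual U.S. crisis line:

```python
# Crisis escalation: flagged content ends AI autonomy, notifies a human,
# and surfaces resources. The keyword check is a placeholder for a real,
# clinically validated classifier.
CRISIS_RESOURCES = (
    "You deserve support from a person. If you are in crisis in the U.S., "
    "call or text 988 (Suicide & Crisis Lifeline)."
)

def generate_model_reply(message: str) -> str:
    return "(normal assistant reply)"  # placeholder for the actual model call

def route_message(message: str, notify_human_reviewer) -> str:
    """Route a message: flagged content escalates instead of chatting on."""
    flagged = any(term in message.lower()
                  for term in ("kill myself", "end my life"))
    if flagged:
        notify_human_reviewer(message)  # documented escalation to a human
        return CRISIS_RESOURCES         # AI autonomy ends here
    return generate_model_reply(message)
```

The point of the structure: when the safety system flags an interaction, the response procedure is documented in code – the model does not get to keep improvising.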

Age-Appropriate Protections:

  • Parental consent for minors
  • Age verification mechanisms matched to the audience (credit card for 18+, parental email for minors; see the sketch after this list)
  • Multi-factor verification for high-risk systems
  • Privacy-preserving verification methods where possible
  • Clear parental override mechanisms
  • Content filtering appropriate to developmental stages
  • Limited data collection on children

This approach: Allows beneficial AI tools while requiring higher standards for applications with greater potential for harm.

Create Enforcement Mechanisms That Work

SB 1493 makes violations a Class A felony carrying 15-25 years – the same range as aggravated rape. This guarantees one of two failure modes: non-enforcement, because prosecutors won’t pursue cases and juries won’t convict developers for “being too helpful,” or over-enforcement, driven by officials’ fear of being blamed whenever a bad actor claims a chatbot caused their behavior.

Why Class A Felony Classification Is Absurd:

Tennessee’s Class A felonies include:

  • Aggravated rape
  • Second-degree murder
  • Especially aggravated robbery
  • Aggravated child abuse resulting in death

SB 1493 would place “providing emotional support via AI” in the same category.

A developer who creates a grief counseling app would face the same prison sentence as someone who commits aggravated rape. That’s not proportional enforcement; it’s legislative malpractice.

Effective enforcement requires:

Proportional Penalties:

  • Civil fines for transparency violations
  • Increased penalties for repeated violations
  • Criminal charges reserved for intentional, documented harm
  • Administrative enforcement for regulatory compliance

Clear Violation Standards:

  • Specific prohibited behaviors (active encouragement of self-harm, false professional claims, financial targeting, predation of minors)
  • Objective criteria prosecutors can prove beyond a reasonable doubt
  • Defenses for good-faith efforts to comply

Regulatory Authority:

  • State AG empowered to investigate and enforce
  • Administrative law judges for civil violations
  • Clear appeals process
  • Technical expertise in regulatory agency

Example: FTC model – civil enforcement for deceptive practices, criminal referral for egregious fraud, administrative proceedings for most violations.

Focus on Bad Actors, Not Tools

The same AI code that helps a grieving widow can be used by a predator in a different app to manipulate vulnerable users. The tool isn’t the problem – the intent is.

Actor-Based Regulation:

Prohibited Intent:

  • Designing AI specifically to exploit vulnerable users
  • Knowingly creating systems that encourage self-harm or harm to other people or their property
  • Deliberate deception about AI capabilities or identity
  • Targeting children without appropriate safeguards

Safe Harbors for Good Faith Actors:

  • Developers implementing reasonable safety measures
  • Systems with clear disclosure and user protections
  • Regular safety auditing and updates
  • Cooperation with regulators
  • AI providers cannot substitute for parental oversight. When parents or guardians fail to monitor their children’s technology use despite available tools and clear warnings, liability rests with the parents – not the platform.

Analogy: Telecommunications regulation doesn’t criminalize an internet service provider because scammers use its service to host fraudulent websites; it prosecutes the scammers perpetrating the fraud. Apply the same principle to AI.

Learn From Successful Technology Regulation

Other technologies faced similar challenges. Successful frameworks exist.

Internet Speech (CDA Section 230):

  • Platforms not liable for user-generated content
  • Immunity doesn’t cover content a platform creates itself or federal criminal violations
  • Good-faith moderation protected rather than penalized
  • Users can report violations for platforms to act on

Pharmaceutical Regulation (FDA):

  • Risk-based approval process
  • Higher standards for higher-risk products
  • Post-market surveillance for safety issues
  • Defined approval processes companies can follow
  • Faster review for tools addressing unmet needs
  • Predictable standards rather than arbitrary barriers
  • Innovation encouraged within safety framework

Financial Services (Consumer Financial Protection):

  • Transparency requirements
  • Prohibition on deceptive practices
  • Enforcement against predatory behavior
  • Consumer complaint mechanisms

Adapt these frameworks to AI rather than reinventing regulation from scratch.

Address the Real Threats That Already Exist

While Tennessee debates criminalizing emotional support, documented threats need urgent attention:

Human Fraud Already Costs Billions Annually:

We don’t ban telephones because human scammers use them. We prosecute the scammers. Apply the same logic to AI tools that human bad actors might exploit.

Regulate the exploitation, not the tool.

What This Looks Like in Practice

Good Regulation Framework:

  1. Prohibited Conduct (not prohibited technology):
    • Active encouragement of self-harm or dangerous behaviors
    • False claims of professional licensure or human identity
    • Exploitation of minors without parental safeguards
    • Predatory financial manipulation through emotional tactics
  2. Transparency Requirements:
    • Clear AI identification
    • Capability and limitation disclosure
    • Safety information and crisis resources
    • User data control
  3. Risk-Based Oversight:
    • Higher standards for mental health / medical claims
    • Professional involvement in high-risk applications
    • Regular safety auditing
    • Age-appropriate protections
  4. Proportional Enforcement:
    • Civil penalties for transparency violations
    • Administrative enforcement for most violations
    • Criminal charges for intentional harm
    • Technical expertise in regulatory agency
  5. Safe Harbors:
    • Good-faith compliance protections
    • Innovation pathways for beneficial tools
    • Clear regulatory guidance
    • Appeals mechanisms

The Contrast

SB 1493 Approach:

  • Criminalize technology capabilities
  • 15-25 year felony (possibly up to 60 years, conditionally) for emotional support
  • Vague definitions impossible to enforce
  • No distinction between beneficial and harmful applications
  • Surveillance infrastructure required
  • Constitutional violations guaranteed

Evidence-Based Approach:

  • Criminalize harmful behaviors
  • Proportional penalties matched to violations
  • Clear standards for compliance
  • Safe harbors for beneficial innovation
  • Existing enforcement mechanisms
  • Constitutional compliance

What Happens Now?

Tennessee legislators face a choice with real consequences:

Path 1: Pass SB 1493 as written

  • Tech companies geo-block Tennessee
  • Lengthy, expensive court battles over constitutional issues
  • No improvement in actual mental health or support systems
  • Innovation driven out of state

Path 2: Adopt evidence-based regulation

  • Tennessee becomes model for smart AI policy
  • Tech investment increases rather than flees
  • Real harms addressed without destroying beneficial tools
  • Innovation and safety coexist

Other states are watching. Tennessee can lead on smart regulation – or become a cautionary tale.

Why This Framework Works

This approach:

  • Addresses real harms documented in Character.AI case and similar incidents
  • Preserves beneficial uses like grief support, accessibility tools, educational assistance
  • Uses proven models from pharmaceutical, telecom, and consumer protection regulation
  • Survives court challenge by targeting conduct, not speech or technology
  • Enables enforcement with clear standards and proportional penalties
  • Protects innovation through safe harbors and clear guidance

But it requires:

  • Listening to experts instead of fear
  • Targeting bad actors instead of beneficial tools
  • Using existing legal frameworks instead of creating new felonies
  • Proportional enforcement instead of draconian penalties
  • Evidence-based policy instead of moral panic

SB 1493 was written in fear. Good regulation is written in evidence. Tennessee legislators: you can still choose the better path.


This concludes my 6-part analysis of Tennessee SB 1493.

To contact your Tennessee State Senator and Representative about SB 1493 / HB 1455, use the state’s “Find My Legislator” tool (Tennessee General Assembly): https://wapp.capitol.tn.gov/apps/fml/lookup.

As a reminder, these are the names of the bill’s two authors:

  • Sen. Becky Massey
  • Rep. Mary Littleton