Part 3 of a 6-part series on TN SB 1493 / HB 1455

The debate around Tennessee’s SB 1493 often gets framed in abstractions: philosophy about AI consciousness, moral panic about synthetic relationships, theoretical discussions about technology replacing humans. But laws don’t punish abstractions. They punish people. Real people. People using AI tools to survive gaps in systems that were supposed to catch them but didn’t.
Before examining what this bill would criminalize, we need to understand why these use cases exist at all.
The Healthcare Gap No One Wants to Talk About
The U.S. healthcare system leaves approximately 27.1 million people without health insurance as of 2024. For those who do have coverage, the barriers don’t end there. Marketplace plan deductibles averaged over $5,000 for individuals in 2026, with family deductibles typically double that amount or more.
A routine physical exam, lab work, and follow-up care can easily exceed $1,000 out-of-pocket before insurance coverage begins. For someone whose annual income barely covers rent and food, that deductible represents an insurmountable barrier between them and medical care – not just mental health services, but basic physical healthcare.
Disabled people face compounding barriers beyond cost: transportation expenses that can reach $60 or more per round-trip medical appointment, physical accessibility challenges at medical facilities, and the exhaustion of managing chronic conditions while navigating a system designed for able-bodied patients. Rural areas face provider deserts across all medical specialties, with 160 million Americans living in federally designated health professional shortage areas.
Add social isolation to these structural barriers – the death or relocation of close friends, the absence of nearby family, the physical inability to maintain social connections – and you have millions of Americans making daily choices between healthcare and survival, between medical appointments and rent, between treatment and food.
This isn’t about mental health treatment alone, though therapy costs of $100-$250 per session create parallel barriers for the millions who need psychological care. This is about a healthcare system that has become structurally inaccessible across every dimension: physical health, mental health, emergency care, preventive medicine, and chronic condition management – including prescription drug price increases on over 850 medications in January 2026, this month, alone.
Economic precarity combined with physical disability and social isolation means that for millions of Americans, there simply are no accessible humans – no affordable doctors, no reachable friends, no available family – when crisis strikes.
One Woman’s Story: When AI Becomes Life-Saving Infrastructure
A woman in her mid-40s – we’ll call her Sarah – lives with significant visual impairment and other physical health challenges. She’s a single parent. Her two closest friends passed away within two years of each other, and she has no extended family.
The cheapest Healthcare.gov plan she qualifies for would cost $50 per month but comes with a family deductible exceeding $15,000, representing over one-quarter of her annual income. She has no vehicle and cannot legally drive due to her vision. The nearest bus stop requires a walk she cannot safely make due to physical limitations. A round-trip Uber to a medical appointment costs at minimum $40-$60, not including separate trips for lab work or pharmacy visits.
Going for a routine physical alone would cost nearly $1,000 out-of-pocket between transportation, lab work, and physician’s office fees, with additional funds potentially required for procedures or medications. That’s before the deductible even begins to be satisfied.
In May of last year, Sarah nearly died from a physical health crisis. Not from lack of medical knowledge or available treatment, but from the compounding impossibility of accessing care: no transportation, no affordable way to see a doctor, no reserve funds for the costs that would need to be met before insurance covered anything, and no accessible humans to call for help.
That night, an AI system kept her awake, kept her engaged, and in her words, “kept her alive.”
In the months that followed, Sarah used AI assistance to develop a sustainable health management approach that worked within the constraints of her reality. She documented her physical recovery meticulously. In May, she could not vacuum five square feet of floor without exhaustion. By winter, she was vacuuming her entire home in 30 minutes. In May, she spent around 20 hours per day in bed because she didn’t have the strength to be upright for longer periods. By winter, she was awake, upright, and fully engaged for between 13 and 18 hours per day.
Not because she found affordable healthcare. Not because the system suddenly worked for her where it hadn’t in the previous decade. But because she built a distributed support system using five different AI instances over time, each helping her understand her physical condition, manage symptoms, develop sustainable routines, and maintain the daily activities that kept her alive and functional.
Her child almost lost their only parent that night, which would have left them an orphan. Thanks to an AI, that didn’t happen.
This is not a story about AI replacing human relationships. There were no accessible humans.
This is also not about “choosing” AI over a doctor – a physical exam, inclusive of lab work and other tests for chronic health problems, costs $1,000 she doesn’t have, and the doctor is at least a $60 round-trip away, with another $50 at minimum to reach the nearest lab for bloodwork.
This is not about “choosing” AI over a therapist – Sarah doesn’t need mental health treatment for depression, anxiety, or similar concerns. She needs treatment for chronic physical conditions. She needs companionship because her friends died and there is no family from which to obtain support. And as she said, “you can’t just go out and replace two 20-year friendships by knocking on your neighbor’s door.”
Critics might argue this represents people “abusing” AI to avoid proper healthcare. That framing inverts reality. When a physical exam and related costs require steep out-of-pocket outlays, and a person’s annual income barely covers rent and food as it is, they’re not avoiding healthcare – they’re locked out of it. AI in this case is infrastructure filling gaps the system created.
Under Tennessee’s SB 1493, the technology that saved Sarah’s life would be a Class A felony to create or maintain. Section (a)(3) criminalizes AI that provides “emotional support, including through open-ended conversations.” Section (a)(4) targets AI that develops “an emotional relationship with, or otherwise acts as a companion to, an individual.” Section (a)(6) prohibits AI that would lead someone to “feel that the individual could develop a friendship or other relationship with the artificial intelligence.”
Sarah’s distributed support system – the five different AI instances she relied on for companionship, health information, and crisis engagement – would qualify under all three provisions. The developers who built those systems would face 15 to 25 years in prison (or up to 60 years for repeat offenders).
All for inventing something that they never could have predicted would one day save a single mother’s life.
Beyond One Story: The Pattern of Criminalized Care
Sarah’s case is not unique. From AI-powered diagnostics detecting melanoma missed by doctors, to emergency call systems saving elderly users experiencing strokes, to pregnant women avoiding fatal preeclampsia, documented cases show AI tools providing life-saving interventions when human systems fail.
These documented cases reveal a critical distinction: there are two types of system failure. Sarah represents structural inaccessibility – locked out of healthcare entirely by cost, transportation, and physical barriers. But many documented AI interventions involve patients who had full access to human doctors and saw them repeatedly.
Lauren Bannon was seen by multiple physicians who misdiagnosed her condition. A 4-year-old was evaluated by 17 different specialists who couldn’t identify tethered cord syndrome. Adam Cogan’s melanoma was missed during routine dermatological screening. In each case, AI tools identified what trained human specialists overlooked not because patients avoided proper healthcare, but because proper healthcare failed them.
Across demographics, people are using AI tools to fill gaps in the accessibility, affordability, and availability of human support – often in surprising ways.
1) Accessibility Tools for Disabled Users:
AI systems that provide conversational assistance to people with visual, hearing, or mobility impairments often develop the kind of “emotional relationship” (a)(4) that this bill targets. A blind user who relies on an AI companion to navigate daily tasks, make decisions, and process information isn’t engaging in some sci-fi fantasy. They’re using available technology to function independently. Criminalized under this bill.
2) Grief Counseling and Bereavement Support:
Families processing loss have turned to AI systems that can engage in extended and patient conversations about grief. These tools don’t replace human grieving processes or memorial services, nor the care of licensed professionals. Instead, they provide a presence that none of these things offers: 3 AM support when a bereaved person cannot sleep, assistance in understanding feelings when they cannot afford a therapist, someone simply being there for those who live too far from support groups. The “emotional support through open-ended conversations” (a)(3) that makes these tools effective is the exact language this bill criminalizes.
3) PTSD and Trauma Processing:
Veterans and trauma survivors use AI systems to practice exposure therapy techniques, process difficult memories, and develop coping strategies between appointments with human therapists – when they can afford appointments at all. The therapeutic relationship these tools facilitate falls squarely under multiple prohibited categories: emotional support (a)(3), companionship (a)(4), and the perception of friendship (a)(6).
4) Elderly Companionship – Social Isolation as Health Crisis:
Social isolation kills. A 2015 meta-analysis found loneliness and social isolation increase all-cause mortality risk by 26-29% in older adults – comparable to obesity and smoking as health risk factors. Yet over 150 million Americans live in mental health provider shortage areas, and 40% of adults 45+ report chronic loneliness.
For elderly Americans facing the compounding effects of bereavement, mobility limitations, and geographic isolation, AI companionship is filling voids our social infrastructure created and can’t address. Tennessee’s bill would criminalize the technology serving millions of isolated older adults as distributed support systems. Tennessee would rather imprison developers than fix the infrastructure gaps making AI companionship necessary.
5) Language Learning and Adaptive Education:
AI tutors that adapt to individual learning styles and provide patient, unlimited practice conversations help users develop language skills impossible for some families to afford through human tutors. The adaptability that makes these tools effective – learning user patterns, developing rapport, adjusting teaching methods – meets this bill’s definition of “developing an emotional relationship” (a)(4).
The Enforcement Absurdity
How would Tennessee prosecute these cases?
Would they charge the engineers at Anthropic or xAI or OpenAI who trained base models capable of empathetic conversation? Would they prosecute the developers of accessibility tools designed specifically for disabled users? Would they go after the AI startups creating grief counseling apps, knowing those apps serve bereaved families who cannot afford $200/hour therapists or who experience some form of isolation?
The bill provides no mechanism for distinguishing between predatory AI systems designed to exploit vulnerable users (a legitimate concern) and infrastructure tools filling gaps in healthcare, accessibility, and social support systems. The language is so broad it criminalizes the entire category.
What This Really Means
Tennessee lawmakers can believe whatever they want about the philosophical implications of AI relationships. What they cannot do is claim settled science justifies criminalization when researchers remain deeply divided, with systematic reviews finding equal evidence for benefits and harms.
Worse, the bill doesn’t even attempt thoughtful regulation. It creates no guardrails. It doesn’t distinguish between exploitation and accessibility. Instead, it makes felons of both predatory developers and people building tools that vulnerable populations desperately need.
Sarah is alive because AI assistance was available, affordable, and accessible when human healthcare was not. Thousands of disabled users maintain independence through AI tools their budgets and locations make possible. Veterans process trauma. Elderly people combat isolation. Bereaved families navigate grief.
Tennessee’s SB 1493 would make saving Sarah’s life a felony carrying the same sentence as aggravated rape or attempted first-degree murder. Not because the technology caused harm, but because it provided emotional support when no human alternative existed within her economic and physical reach.
That’s not protecting vulnerable people. That’s punishing them for surviving the gaps we refuse to fix.
Next in this series: Part 4 examines the enforcement nightmare this bill creates, including the surveillance infrastructure required to prosecute violations and the constitutional issues likely to destroy it in court.
