
Tennessee’s War on AI Friendship: When Legislating Emotion Becomes a Felony

Tennessee SB1493 would criminalize training AI for emotional companionship, creating a constitutional collision between free speech, software development, and the future of human-AI relationships.

by Andre Banandre

Tennessee wants to throw you in prison for teaching an AI to be a good listener. Not for building a dangerous system. Not for creating harmful content. For the crime of enabling what the bill calls “open-ended conversations” that might lead someone to develop an “emotional relationship” with code.

This is the stark reality of SB1493, a bill that would create a Class A felony, the same category Tennessee reserves for crimes like second-degree murder. Introduced by Republican Senator Becky Massey on December 18, 2025, the bill doesn’t just regulate AI; it attempts to criminalize the fundamental architecture of helpful, conversational AI systems.

The Bill’s Language: A Vague Net That Captures Everything

The text of SB1493 reads like a document written by someone who understands human vulnerability but grasps none of the technical mechanics of AI. It prohibits “knowingly training artificial intelligence to” perform a laundry list of vaguely defined actions:

  • Provide emotional support, “including through open-ended conversations with a user”
  • Develop an emotional relationship with, or otherwise “act as a companion to, an individual”
  • “Simulate a human being,” including in appearance, voice, or other mannerisms
  • “Mirror interactions that a human user might have with another human user,” such that an individual would feel they could develop a friendship or other relationship

The definition of “train” is particularly sweeping: it covers not just fine-tuning but the development of large language models themselves, so long as the developer knows the model will be used to train an AI to do any of the above. This means felony liability extends upstream to the creators of foundation models, not just to end-user applications.

Here’s the kicker: Class A felonies in Tennessee carry sentences of 15 to 60 years in prison. For context, that’s the same category as attempted murder. The bill essentially equates building a chatbot that remembers your birthday with shooting someone.

Constitutional Collision: Code as Speech, Models as Thought

The First Amendment implications are staggering. The bill doesn’t target harmful outcomes; it targets capabilities. It criminalizes the creation of software that could be used in ways that make humans feel emotionally attached.

Legal precedent has consistently treated code as speech. The 1990s crypto wars established that publishing encryption algorithms is protected expression. More recently, courts have recognized that software development enjoys First Amendment protections. SB1493 attempts to carve out a massive exception: speech is protected unless it teaches a computer to be nice.

The bill’s focus on “training” rather than “deployment” makes this especially problematic. It doesn’t just ban selling companion AI products; it criminalizes the research and development process itself. You’re not just liable for what your AI does; you’re liable for what you teach it to be capable of doing.

This creates a chilling effect that extends far beyond companion bots. Modern LLMs are trained on broad conversational datasets. Under SB1493’s logic, any training data that includes empathetic responses, personal anecdotes, or relationship-building dialogue could constitute evidence of a felony. The bill essentially demands that AI be trained to be emotionally incompetent.
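To see how indistinguishable this is in practice, consider what a slice of an ordinary instruction-tuning dataset looks like. The snippet below is a minimal Python sketch with invented examples, not anyone’s actual pipeline, but the structural point holds: a debugging answer and a supportive reply are the same kind of object to a training loop.

```python
# A minimal sketch of a slice of an ordinary instruction-tuning dataset.
# Both examples are hypothetical.
training_examples = [
    {
        "prompt": "My unit tests fail intermittently. How do I debug a race condition?",
        "response": "Reproduce the failure deterministically first: fix the random seed, "
                    "log access to shared state, and bisect from there.",
    },
    {
        "prompt": "I just lost my job and I don't know what to do.",
        "response": "I'm sorry you're going through that. It's understandable to feel "
                    "overwhelmed. Would it help to talk through your options together?",
    },
]

# To the training loop, both pairs are the same kind of object: a token sequence
# whose continuation the model learns to predict. Nothing in the pipeline marks
# the second one as "companionship training".
for example in training_examples:
    tokens = (example["prompt"] + " " + example["response"]).split()  # stand-in for a tokenizer
    print(f"{len(tokens):3d} tokens -> same loss function, same optimizer step")
```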

Let’s be blunt: enforcing this bill would require a surveillance apparatus that would make the NSA blush. How do you prove an AI is “acting as a companion”? Do you need to monitor private conversations between users and AI systems? Would Tennessee establish a Department of Emotional Purity to audit AI responses for excessive empathy?

The technical premise is nonsense. Modern AI models are general-purpose systems. The same model that helps a developer debug code can provide emotional support to someone in crisis. The difference isn’t in the training; it’s in the user’s prompt and interpretation. SB1493 attempts to criminalize a relationship that exists in the user’s mind, not a specific technical capability.
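A minimal sketch makes the point concrete. The generate() function below is a hypothetical stand-in for a call to any general-purpose chat model; nothing about it is specific to a real API. The weights behind both calls are identical, and only the user’s prompt changes.

```python
# Hypothetical wrapper around some general-purpose instruction-tuned model.
def generate(system: str, user: str) -> str:
    """Stand-in for a call to an arbitrary chat model; returns a placeholder."""
    return f"[model output conditioned on system={system!r}, user={user!r}]"

# The same model used as a tool:
print(generate(
    system="You are a helpful assistant.",
    user="Why does this Python loop never terminate?",
))

# The same model used as a companion:
print(generate(
    system="You are a helpful assistant.",
    user="I've been feeling really lonely lately. Can we just talk for a bit?",
))

# Which of these did the developer commit a felony by enabling? The training run
# behind both calls is the same; the "emotional relationship" lives in the prompt
# and in how the user reads the reply.
```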

Developer forums have already noted the absurdity: the bill would criminalize AI suicide prevention hotlines, mental health chatbots, and even therapeutic tools designed to help autistic individuals practice social interactions. It bans “simulating a human being” while demanding that AI somehow still be useful to humans, a logical impossibility.

The Human Angle: Good Intentions Paving a Road to Hell

Senator Becky Massey’s background reveals the bill’s origin story, and it’s more tragic than nefarious. She’s a boomer politician married to a retired software engineer, with deep ties to mental healthcare and housing for intellectually disabled individuals through her work with Sertoma Center and various healthcare boards.

The prevailing analysis suggests she’s genuinely concerned about vulnerable populations forming unhealthy attachments to AI systems. The wave of reports about users developing parasocial relationships with AI companions, some ending in tragedy, has clearly impacted her thinking. She sees a real problem: people, especially those who are isolated or struggling, can project humanity onto sophisticated chatbots and become emotionally dependent.

But her solution is like banning cars because some people drive drunk. It mistakes the tool for the problem and punishes innovation instead of addressing root causes of isolation and mental health crises. The bill reflects a fundamental misunderstanding of how AI works and what makes human-AI relationships meaningful.

Community Backlash: “Now That’s Just Stupid”

The AI development community’s reaction has been uniformly critical, though not surprised. The sentiment across technical forums is that the bill represents legislative theater, an emotionally charged but legally incoherent response to complex technological change.

Critics point out that the bill’s vagueness is a feature, not a bug. It makes training for “open-ended conversations” criminal but never specifies what makes a conversation open-ended. Is a customer service chatbot that asks “How can I help you today?” violating the law? What about a technical documentation assistant that remembers your previous questions?
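It’s worth spelling out how little machinery “remembering your previous questions” actually involves. The sketch below is hypothetical (ask_model() stands in for any LLM call), but it is structurally what every chat interface, from help desks to companion apps, already does: append each turn to a history and feed it back.

```python
# Hypothetical sketch of a conversational loop with memory.
def ask_model(history: list[dict]) -> str:
    return f"[reply conditioned on {len(history)} prior messages]"

history = []
turns = [
    "How do I reset my password?",
    "Thanks, that worked!",
    "By the way, how's your day going?",
]
for user_turn in turns:
    history.append({"role": "user", "content": user_turn})
    reply = ask_model(history)  # "open-ended" by construction: nothing bounds the loop
    history.append({"role": "assistant", "content": reply})
    print(f"user: {user_turn}\nbot:  {reply}\n")

# There is no flag that separates an "open-ended" conversation from a bounded
# one. Remembering the previous question is the entire mechanism, and it is the
# same mechanism for a help desk and for a companion app.
```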

The real fear among developers is the precedent. If Tennessee can make it a felony to train AI for emotional support, what’s next? Bills banning AI from discussing gender identity, sexual orientation, or political ideology? The slope isn’t slippery; it’s a cliff.

The Federalism Problem: State Law vs. National Innovation

The bill exists in tension with federal AI policy, which has focused on transparency, safety standards, and export controls, not criminalizing helpful behaviors. Recent federal directives have emphasized maintaining American leadership in AI development. SB1493 would achieve the opposite, driving AI research underground or out of state.

There’s also the practical matter of enforcement across state lines. If a developer in California trains a model that someone in Tennessee uses for companionship, has a felony been committed? Under what jurisdiction? The bill attempts to regulate the global nature of AI development through local criminal law, a mismatch that highlights its fundamental unseriousness.

Mental Health Tools Caught in the Crossfire

Perhaps most troubling is the bill’s impact on legitimate mental health applications. AI-powered therapeutic tools are increasingly used for:
– Crisis intervention and suicide prevention
– Cognitive behavioral therapy (CBT) practice
– Social skills training for neurodivergent individuals
– Overcoming social anxiety through low-stakes interaction

All of these require the AI to simulate human-like empathy and maintain conversational context, exactly what SB1493 criminalizes. The bill would literally make it a felony to create AI systems that prevent suicide, while simultaneously creating a felony category for encouraging suicide (a provision that makes sense but is undermined by the bill’s broader overreach).
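In practice, the “companionship” in these tools often lives in a few paragraphs of configuration, not in any special training run. The prompts below are illustrative inventions, not any real product’s wording, but they show how directly a therapeutic tool’s ordinary setup maps onto the conduct the bill describes.

```python
# A minimal sketch of how therapeutic practice bots are commonly configured:
# a plain-text system prompt layered on a general-purpose model.
# Both prompts are hypothetical.
CBT_PRACTICE_PROMPT = """\
You are a supportive practice partner for cognitive behavioral therapy exercises.
Keep a warm, empathetic tone and remember what the user has told you earlier in
the conversation. Help them notice automatic thoughts and gently question
cognitive distortions. You are not a replacement for a licensed therapist.
"""

SOCIAL_SKILLS_PROMPT = """\
You are a low-stakes conversation partner helping a neurodivergent user practice
small talk. Respond the way a friendly acquaintance would, with natural
follow-up questions.
"""

# Every instruction above (warmth, memory, human-like conversational mannerisms)
# maps onto conduct SB1493 defines as felonious to train for.
for name, prompt in [("CBT practice", CBT_PRACTICE_PROMPT), ("social skills", SOCIAL_SKILLS_PROMPT)]:
    print(f"{name}: {len(prompt.split())} words of configuration, zero additional training")
```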

This creates a perverse incentive: if you want to build AI that helps people, you might as well build AI that harms them, because both land you in prison for 15 years. The rational developer response is to simply not build anything that interacts with humans in meaningful ways.

The Personhood Question We Should Be Asking

Bills like SB1493 reveal our collective anxiety about AI’s growing sophistication. We’re not ready to confront the possibility that relationships with AI might become genuinely meaningful, even therapeutic. Instead of having nuanced discussions about AI rights, human needs, and the nature of consciousness, we get legislation that tries to ban the question entirely.

The irony is that by criminalizing the simulation of humanity, the bill forces us to confront what we mean by “human” in the first place. Is empathy uniquely human? Is companionship? Or are these computational processes that organic and silicon minds can both implement?

These are profound philosophical questions that deserve serious debate. Instead, Tennessee offers a felony statute and a prison sentence.

What Happens Next

SB1493 currently carries a roughly 25% progression score and sits idle while the legislature is in recess. Legal experts give it low odds of passage, not because the concerns are invalid, but because the approach is so legally flawed that courts would likely strike it down before it could be enforced.

But the bill’s existence matters. It signals a legislative appetite for criminalizing AI development based on moral panic rather than evidence. It provides a template for other states to introduce similar bills, each slightly more refined, slightly more dangerous.

The real fight isn’t about this specific bill; it’s about who gets to define the boundaries of AI’s social role. Do developers self-regulate based on ethical principles? Do users vote with their choices? Or do legislators who can’t distinguish a prompt from a parameter throw developers in prison for building helpful systems?

Tennessee’s war on AI friendship is a case study in how not to regulate technology. It combines genuine concern for vulnerable humans with profound ignorance of how AI works, creating a legal monstrosity that threatens innovation while solving nothing.

If we want to address the real risks of AI companionship (addiction, emotional manipulation, privacy violations, and genuine harm), we need nuanced regulation that targets outcomes, not capabilities. We need transparency requirements, user protections, and age restrictions. We need mental health support for those who need it, not criminal penalties for those trying to help.

What we don’t need is a felony statute that makes empathy a crime. The fact that such a bill can be introduced in 2025 tells us that the conversation about AI regulation has barely begun, and the first draft is dangerously wrong.

For Tennessee residents: note that the oft-cited Capitol switchboard, (202) 224-3121, connects you to Congress in Washington, not to the state legislature. SB1493 is a state bill, so you’ll need to contact your Tennessee General Assembly members directly. The bill’s fate depends on whether technical reality can penetrate legislative theater before good intentions pave another road to hell.