What If AI Chatbots Are Saving Lives?

Adam Omary, Jennifer Huddleston:

AI in Health Care: A Policy Framework for Innovation, Liability, and Patient Autonomy—Part 8

The Senate Judiciary Committee advanced Senator Josh Hawley’s Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. The bill would require every American to verify their age before using a generative AI chatbot and would bar anyone under eighteen from using a “companion” chatbot at all. In the room during the markup were the parents of children who died by suicide after conversations with AI products. Their grief is unimaginable, and their motives are beyond reproach. But such a policy, however well intentioned, might quietly cost lives rather than save them.

The strongest claim animating this bill is the belief that restricting minors’ access to AI chatbots will prevent suicide. On the available evidence, that claim is closer to a hypothesis than a finding—and a hypothesis that runs against several decades of data on how young people die. 

According to the Centers for Disease Control and Prevention, the American suicide rate began climbing around the year 2000, before ChatGPT, smartphones, or social media existed. It accelerated through the 2010s and then, contrary to the popular narrative, plateaued and modestly declined after 2018, even as generative AI moved from research labs into the pockets of nearly every teenager in the country. If chatbots were a meaningful driver of adolescent suicide, the two curves should have moved together. They have not. Indeed, suicide rates among young Americans remain the lowest of any age group.
