Rep. Kim Banta, R-Fort Mitchell, discusses her bill to limit AI use in mental health therapy on Feb. 18, 2026. (Kentucky Lantern photo by Sarah Ladd)

This story mentions suicide. If you or someone you know is contemplating suicide, call or text the suicide prevention lifeline at 988. 

Proposed legislation that cleared a legislative committee Wednesday would limit how Kentucky mental health practitioners use artificial intelligence. 

Rep. Kim Banta, R-Fort Mitchell, said she wants Kentucky to have a law in place to prevent chatbots from encouraging people to end their lives, which has already happened in other states.

Stateline reported in January that several states are moving to regulate AI as a result, including California and Pennsylvania. Illinois and Nevada have already banned the use of AI in behavioral health, while New York and Utah require chatbots to tell users that they are not human, according to Stateline. 

Under House Bill 455, licensed mental health practitioners in Kentucky would be barred from using artificial intelligence “to assist in providing supplementary support in therapy or psychotherapy services where the client’s therapeutic session is recorded or transcribed” unless explicit, informed consent is given. 

“I just want a human to interact with other humans when we are dealing with mental illness,” Banta told the House Licensing, Occupations, & Administrative Regulations Committee. 

Under her bill, which unanimously passed the committee, mental health providers would also not be allowed to use AI to:  

  • Make independent therapeutic decisions
  • Directly interact with clients in any form of therapeutic communication
  • Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional
  • Detect emotions or mental state 

Eric Russ, the executive director of the Kentucky Psychological Association, said the organization generally supports the bill but wants to see some edits. As written, he said, the bill would ban several therapeutic tools that are helpful in mental health treatment, including homework tools used between sessions and programs that flag certain risk factors in a client’s language. He’d like to see those parts removed.

“One of the useful cases that we’ve seen are some tools that generate AI transcripts of therapy sessions and then flag potential risk issues,” Russ told the Lantern. “This is particularly useful in training settings where student clinicians might be working with patients. A supervisor might then get a flag that, ‘hey, there was some discussion of violence or suicidality. You should take a look at this.’ It enhances the communication between students and supervisors, and the language in the bill would eliminate the use of that sort of tool.”  

None of these tools is used independently of clinician review, he said. Russ said he is not aware of any chatbot-driven suicides in Kentucky, but “I do think we need a bill” that restricts potential problems and permits innovative tools. 

“I’m hoping that with potentially some amendments, that this would be something that we could support. There are some really good pieces in here,” Russ said. “There is a real need here to put some guardrails around how folks are using these tools.” 

HB 455 can now go to the House floor for a vote. 

This story originally appeared at kentuckylantern.com.