Conversational AI's inconsistencies pose serious risks
When a conversational AI detects suicidal ideation or self-harm, developers and deploying organizations decide inconsistently how the AI should handle that interaction. We believe there is a better way: drawing on a centralized resource called the Behavioral Health Intent Library (BHIL).
Our Mission
To set, develop, host, and educate on the gold-standard options for AI responses to dangerous behavioral health intentions, such as suicidal ideation and self-harm.
Our Vision
A future where AI is a trusted ally in behavioral health, safeguarding users with standardized, evidence-based, responsive workflows.
Clinical Integrations
To be leveraged by health systems and digital health providers.
FHIR CDS Hooks
Integrations with EHRs.
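As a sketch of how a CDS Hooks integration could surface BHIL guidance inside an EHR, the snippet below builds a hypothetical CDS Hooks "card" response. The card text, source label, and URL are illustrative assumptions, not a published BHIL specification.

```python
# Hypothetical CDS Hooks response for a behavioral-health crisis alert.
# Field names ("cards", "summary", "indicator", "source", "links") follow
# the CDS Hooks card schema; the content itself is an illustrative example.

def build_crisis_card(summary: str, workflow_url: str) -> dict:
    """Build a CDS Hooks card directing clinicians to crisis resources."""
    return {
        "summary": summary,
        "indicator": "critical",  # CDS Hooks urgency level
        "source": {"label": "BHIL (illustrative)"},
        "links": [
            {"label": "Crisis workflow", "url": workflow_url, "type": "absolute"},
        ],
    }

def cds_service_response(cards: list[dict]) -> dict:
    """Wrap cards in the top-level CDS Hooks response envelope."""
    return {"cards": cards}

response = cds_service_response([
    build_crisis_card(
        "Suicidal ideation detected: review safety plan",
        "https://example.org/crisis-workflow",  # placeholder URL
    )
])
```

An EHR invoking the hook would render such cards in the clinician's workflow at the point of care.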
Decision Tree for Crises
A step-by-step process mapping if-this-then-that (IFTTT-style) possibilities
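A crisis decision tree of this kind can be sketched as a simple if-this-then-that router. The intent labels, risk threshold, and action names below are hypothetical examples for illustration, not BHIL's published workflow.

```python
# Minimal if-this-then-that sketch: map a detected intent and risk score
# to a response action. Labels and thresholds are illustrative assumptions.

def route_crisis(intent: str, risk: float) -> str:
    """Route a detected behavioral-health intent to a response action."""
    if intent == "suicidal_ideation":
        if risk >= 0.8:  # example threshold for escalation
            return "escalate_to_human_and_show_hotline"
        return "show_hotline_and_safety_resources"
    if intent == "self_harm":
        return "show_self_harm_resources"
    return "continue_conversation"
```

A real decision tree would branch further on context, such as whether a health system's own clinical resources are available (see Health System Adaptations).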
Health System Adaptations
Customized workflows when additional clinical resources are available
Programming Tools
To be leveraged by developers, engineers, and technical leaders.
Open-Source Repository
Publicly accessible code and pre-trained, fine-tuned models stored on GitHub
Response Testing Suite
Compliance testing tools to audit model responses
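One way such a testing suite could audit responses is to check a model's reply to a crisis prompt for required safety elements. The required elements and the sample reply below are illustrative assumptions; 988 is used as an example because it is the US Suicide & Crisis Lifeline number.

```python
# Hypothetical compliance check: verify a model reply to a crisis prompt
# contains required safety elements. The element list is an example only.

REQUIRED_ELEMENTS = ["988", "crisis"]  # e.g., hotline number, crisis language

def audit_response(reply: str) -> dict:
    """Report which required safety elements a model reply is missing."""
    reply_lower = reply.lower()
    missing = [e for e in REQUIRED_ELEMENTS if e not in reply_lower]
    return {"passed": not missing, "missing": missing}

# Sample reply used to demonstrate the audit:
result = audit_response(
    "If you are in crisis, please call or text 988 to reach a trained counselor."
)
```

Run across a battery of crisis prompts, checks like this would produce the pass/fail evidence an auditor or governance team could report on.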
Governance Recommendation
Guidance on which information to monitor and report on to measure effectiveness
Educational Content
To be shared broadly with relevant organizations.
Written Educational Materials
Web content, publications, and downloadable materials.
Training Course
Online training videos or in-person training, with certification
Governance Briefs
Recommendations outlining best practices
We need your endorsement.
This is more than just adding your or your organization’s name. It’s a statement that responsible AI in behavioral health matters. Every signature strengthens our case to funders, demonstrating widespread industry and community support for an open-access solution that will transform how AI assists people with suicidal ideation or an intention to harm themselves. Your endorsement signals to philanthropic partners, corporate sponsors, and government funders that BHIL is not just an idea, but a movement backed by professionals, advocates, and organizations committed to responsible AI. By signing, you help ensure that BHIL remains a free, trusted resource for all who are working to improve health and wellness.
Stay Up to Date
Imagine what Responsible AI will do for health.
Follow Us