Plan Before Taking the Plunge
Artificial Intelligence (AI) is rapidly transforming healthcare, from automating administrative tasks to aiding in diagnostics, predicting patient outcomes, and even personalizing treatment plans. While the potential benefits are enormous, AI also introduces significant risks if not properly managed. For healthcare organizations, “move fast and break things” is not an option. Patient safety, data privacy, and regulatory compliance demand that AI adoption be deliberate, well-planned, and safeguarded by strong guardrails. It’s extremely important that DSOs keep GRC (Governance, Risk and Compliance) in mind before and after implementing any AI technology. Adherence to GRC helps align business objectives with security initiatives while addressing potential threats and meeting regulatory obligations.
What Are Some of the Essential Guardrails Before Implementing AI?
1. Protecting Patient Safety
AI systems can make errors, especially if trained on biased or incomplete data. In a clinical setting, those errors could lead to incorrect diagnoses, inappropriate treatments, or delayed care. Guardrails such as human oversight, accuracy testing, and clear escalation procedures help ensure AI augments clinical decisions rather than replacing critical human judgment.
2. Safeguarding Privacy and Compliance
Healthcare organizations are bound by HIPAA and other privacy regulations. AI models often require large amounts of data, and without strict controls there’s a risk of exposing identifiable patient information. Guardrails should include strong data governance policies, encryption standards, and vendor due diligence to ensure compliance at every stage. Several of Black Talon’s DSO clients have begun to task us with performing “Risk Assessments” against any AI technology they’re considering before pulling the trigger. The percentage of companies that have failed these assessments is astounding and alarming.
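To make the idea of a data governance guardrail concrete, here is a minimal, purely illustrative Python sketch of one such control: redacting a few common identifiers from free-text notes before they are ever shared with a third-party AI service. The patterns, function names, and sample note are assumptions for demonstration only; real HIPAA de-identification covers far more identifiers and requires formal review.

```python
import re

# Hypothetical patterns for a few common identifiers (SSNs, phone numbers,
# dates of birth). A real de-identification program addresses all 18 HIPAA
# Safe Harbor identifiers or relies on expert determination.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before export."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Example free-text note (entirely fictional data)
note = "Pt DOB 04/12/1987, SSN 123-45-6789, cell 206-555-0134, reports jaw pain."
print(redact_phi(note))
# -> "Pt DOB [DOB REDACTED], SSN [SSN REDACTED], cell [PHONE REDACTED], reports jaw pain."
```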
3. Preserving Trust
Trust is central in healthcare. If patients believe AI is making unchecked decisions about their health, or if they hear about AI-related errors, confidence in the organization can erode quickly. As the general public becomes better educated about AI technology, some patients will worry that their ePHI is being used to train AI models and shared outside of your organization. Guardrails, combined with transparency about how AI is used, protect patient confidence.
4. Reducing Legal and Financial Risks
AI errors can lead to malpractice claims, regulatory penalties, or costly operational disruptions. Establishing guardrails up front can mitigate these risks by ensuring that AI systems are validated, monitored, and used within safe boundaries.
How to Safely Implement AI in Healthcare
Implementing AI safely requires more than simply buying technology. It’s a multi-phase process involving leadership commitment, stakeholder engagement, and continuous oversight.
Step 1: Define the Purpose and Scope
Before integrating AI, clearly identify the problem it’s meant to solve. Is it for clinical decision support, patient scheduling, or revenue cycle management (RCM)? Narrowing the scope makes it easier to assess risks and measure outcomes.
Step 2: Conduct a Risk Assessment
Engage the services of a cybersecurity company to perform a thorough risk analysis before deployment. Consider data privacy, algorithmic bias, cybersecurity vulnerabilities, and potential harm to patients. Involve compliance, legal, IT, and clinical leaders to ensure all perspectives are covered.
Step 3: Establish a Governance Framework
Create an AI governance committee to oversee adoption, ensure ethical standards, and approve new AI use cases. This committee should include clinicians, compliance officers, IT staff, and third-party cybersecurity professionals to provide a balanced perspective.
Step 4: Select Vendors Carefully
Not all AI vendors have the same security, compliance, or ethical safeguards. Conduct due diligence, including reviewing their data handling and storage policies, model training processes, and track record in healthcare environments.
Step 5: Ensure Data Quality and Integrity
The accuracy of AI outputs depends on the quality of inputs. Think “Garbage In / Garbage Out”. Use clean, representative, and up-to-date datasets. Implement processes to continually audit data for accuracy, completeness, and bias.
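As an illustration of what a continual data audit might look like in practice, the short Python sketch below checks a hypothetical patient-record export for missing fields, stale records, and skewed representation. The field names, thresholds, and sample records are invented for the example and are not a prescribed audit standard.

```python
from datetime import date

# Hypothetical records pulled from a practice management export (fictional data)
records = [
    {"age": 34, "sex": "F", "last_visit": date(2025, 6, 2), "diagnosis_code": "K02.9"},
    {"age": None, "sex": "M", "last_visit": date(2023, 1, 15), "diagnosis_code": "K05.1"},
    {"age": 51, "sex": "F", "last_visit": date(2025, 7, 20), "diagnosis_code": None},
]

REQUIRED = ("age", "sex", "last_visit", "diagnosis_code")
STALE_AFTER_DAYS = 365  # assumed freshness threshold

def audit(rows):
    """Flag incomplete records, stale records, and crude representation skew."""
    incomplete = sum(1 for r in rows if any(r.get(f) is None for f in REQUIRED))
    stale = sum(1 for r in rows if (date.today() - r["last_visit"]).days > STALE_AFTER_DAYS)
    by_sex = {}
    for r in rows:
        by_sex[r["sex"]] = by_sex.get(r["sex"], 0) + 1
    return {
        "incomplete_pct": 100 * incomplete / len(rows),
        "stale_pct": 100 * stale / len(rows),
        "representation": by_sex,
    }

print(audit(records))
```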
Step 6: Build in Human Oversight
AI should support, not replace, human decision-making. Ensure there’s a clear process for human review of AI recommendations, especially for high-risk clinical decisions.
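One simple way to think about that review process is as a routing rule: anything high-risk or low-confidence goes to a clinician before it is acted on. The Python sketch below is a hypothetical illustration of such a gate; the confidence threshold and field names are assumptions, not a recommended clinical policy.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    patient_id: str
    suggestion: str
    confidence: float   # model-reported confidence, 0.0 to 1.0
    high_risk: bool     # e.g., affects treatment rather than scheduling

CONFIDENCE_FLOOR = 0.90  # assumed threshold for automatic handling

def route(rec: AIRecommendation) -> str:
    """Decide whether a recommendation may proceed or needs clinician sign-off."""
    if rec.high_risk or rec.confidence < CONFIDENCE_FLOOR:
        return "QUEUE_FOR_CLINICIAN_REVIEW"
    return "PROCEED_WITH_LOGGING"

# High-risk clinical decisions always go to a human reviewer.
print(route(AIRecommendation("PT-1001", "extraction of #30", 0.97, high_risk=True)))
```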
Step 7: Train Staff and Communicate with Patients
Provide training so clinicians and staff understand how the AI works, its limitations, and how to question its output. Training users on how to write effective prompts for whatever AI technology you implement is critical. Finally, be transparent with patients about how AI is being used in their care.
Step 8: Monitor, Audit, and Improve Continuously
AI is not “set it and forget it.” Establish monitoring systems to track performance, accuracy, and unintended consequences over time. Have someone regularly audit the system for bias, malicious or corrupted data injection, compliance, and alignment with clinical best practices.
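As a rough illustration of ongoing monitoring, the Python sketch below tracks rolling agreement between AI recommendations and clinicians’ final decisions and raises an alert when agreement drops below a threshold. The window size, threshold, and example values are assumptions for demonstration, not a recommended configuration.

```python
from collections import deque

class AgreementMonitor:
    """Rolling agreement between AI outputs and final clinician decisions."""

    def __init__(self, window: int = 200, alert_below: float = 0.90):
        self.outcomes = deque(maxlen=window)  # keep only the most recent cases
        self.alert_below = alert_below

    def record(self, ai_output: str, clinician_decision: str) -> None:
        self.outcomes.append(ai_output == clinician_decision)

    def check(self) -> None:
        if not self.outcomes:
            return
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate < self.alert_below:
            print(f"ALERT: agreement dropped to {rate:.0%}; trigger a full audit.")

monitor = AgreementMonitor()
monitor.record("approve_claim", "deny_claim")
monitor.record("approve_claim", "approve_claim")
monitor.check()  # agreement of 50% is below 90%, so an alert is printed
```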
The Bottom Line
AI in healthcare holds transformative potential, but only if it is implemented and managed properly. Reach out to the security and AI engineers at Black Talon Security to find out more about how to safely introduce this amazing technology into your DSO.
Recent notable healthcare cyber incidents:
On July 21, 2025, Washington-based Dr. Michael Bilikas and Associates, doing business as 32 Pearls, reported a data breach to the U.S. Department of Health and Human Services. 32 Pearls, a Seattle and Tacoma dental practice offering family, cosmetic, and implant dentistry, discovered on May 22, 2025, that malicious software had encrypted files on its systems. An investigation, conducted with the help of cybersecurity experts, revealed that unauthorized access occurred between May 19 and May 22, 2025.
The data breach potentially exposed files containing individuals’ full names, addresses, driver’s license numbers, Social Security numbers, and medical information. The 32 Pearls data breach reportedly impacted 23,517 individuals.
Heartland Dental is facing a class-action lawsuit over its alleged use of artificial intelligence to review patient phone calls. RingCentral’s AI system, which can transcribe, summarize, and evaluate conversations, was allegedly given access by Heartland to monitor and process calls as they occurred.
Filed on July 3, the action seeks to represent all individuals in the United States who either placed or received a call involving Heartland Dental or one of its supported offices that was routed through RingCentral’s system. The plaintiffs are demanding a jury trial.
According to the complaint, the case is being led by plaintiff Megan Lisota, who claims her conversations with a Heartland Dental-affiliated practice were analyzed by a RingCentral AI tool during the past two years without her permission. The suit contends this activity violates the Federal Wiretap Act.
The law firm of Federman & Sherwood has initiated an investigation into a significant data breach reported by West Texas Oral Facial Surgery, a Lubbock-based healthcare provider. The breach, publicly disclosed to the Texas Attorney General’s Office on August 4, 2025, has impacted approximately 9,887 Texas residents.
West Texas Oral Facial Surgery confirmed that consumer notifications have been issued via U.S. Mail, publication in print media, and postings on the company’s website or a dedicated notice page.
Dental Cyber Watch is sponsored by Black Talon Security, the recognized cybersecurity leader in the dental/DSO industry and a proud partner of Group Dentistry Now. With deep roots within the dental and dental specialty segments, Black Talon understands the unique needs that DSOs and dental groups have when it comes to securing patient and other sensitive data from hackers. Black Talon’s mission is to protect all businesses from the devastating effects caused by cyberattacks—and that begins with a robust cyber risk mitigation strategy. To evaluate your group’s current security posture visit www.blacktalonsecurity.com.