Embracing AI in Regulated Industries Without Compromising Meaningful Access

Episode 273 April 04, 2025 00:19:08
Localization Today


Hosted By

Eddie Arrieta

Show Notes

By Ryan Foley

Although AI can enhance language services in regulated industries such as healthcare, legal services, government, and education, human expertise remains essential to ensure accuracy, safety, and compliance in critical situations.


Episode Transcript

[00:00:00] Embracing AI in regulated industries without compromising meaningful access, by Ryan Foley.

Organizations in regulated fields (healthcare, legal services, government, and education) are required to provide robust language support for individuals with limited English proficiency (LEP) as well as for the deaf and hard of hearing community. Artificial intelligence has emerged as a potential game changer, promising efficiency and scalability.

[00:00:33] Yet questions remain.

[00:00:37] How can organizations harness AI's potential without compromising compliance or meaningful access for communities that rely on accurate, dependable language support? Consider the following successful example: a first-generation Hmong speaker steps up to an AI-powered kiosk at a Minnesota Department of Public Safety facility. Rolled out in a 2023 pilot, the kiosk delivers prompts in Hmong, enabling the user to handle basic driver's license renewals and reducing service wait times.

[00:01:10] A notice that more complex inquiries can be facilitated by qualified interpreters assures the user that important communication is not limited to AI and guarantees meaningful access to the full range of department services.

[00:01:24] In contrast, the 2024 study "Careless Whisper: Speech-to-Text Hallucination Harms" examined a top-tier AI speech-to-text solution, currently used by more than 50,000 clinicians to transcribe patient consultations, and found that roughly 1% of audio transcriptions contained entire hallucinated phrases or sentences.

[00:01:47] While 1% may sound minimal, the study concluded that 38% of these errors carried a risk of causing harmful misunderstandings. In a hospital, legal proceeding, or government office, such inaccuracies could have a severe downstream impact.
From these contrasting examples arises one clear principle: any AI deployment in regulated industries must be validated, transparent, and accountable. AI can both bolster and hinder meaningful access, something we will explore in the following sections. By examining real-world examples, it becomes clear that pairing AI innovations with skilled human validation and oversight emerges as the guiding principle to ensure that accuracy, safety, and compliance remain front and center.

AI's Expanding Footprint in Language Access

Whether through machine translation (MT) widgets on government websites, automated interpretation tools in court self-help centers, or multilingual chatbots at school district front desks, AI is transforming the way organizations communicate with diverse communities.

[00:03:03] By and large, these systems excel in low-risk scenarios, quickly handling frequently asked question (FAQ)-style queries, generating routine appointment reminders, or captioning well-rehearsed online presentations (see Table 1). Yet an advertised 95% accuracy may be wildly misleading if the metric is vague and unstandardized. A misinterpretation in a court summons or hospital discharge instruction can have an outsized impact on individuals' rights, health, and well-being. Regulated industries don't just want AI; they want AI that can uphold or even enhance the standards of meaningful access.

[00:03:48] Why Ethics and Compliance Are Non-Negotiable

In the United States, Title VI of the Civil Rights Act and other mandates prohibit discrimination based on national origin, effectively requiring that recipients of federal funds provide language assistance.

[00:04:06] Section 1557 of the Affordable Care Act reinforces similar obligations in healthcare, specifying that key documents must be validated by qualified human professionals.
The Americans with Disabilities Act (ADA) stipulates effective communication for individuals who are deaf or hard of hearing, and privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) impose strict data-handling requirements. In short, while AI is welcome to accelerate or supplement certain services, compliance in regulated industries means that human oversight remains integral, and this is not likely to change anytime soon, even under a new U.S. administration.

[00:04:53] In a January 2025 webinar, David Hunt, J.D., senior director for health equity at BCT Partners, pointed out, "Now the question that's on everybody's mind: will the new ACA Section 1557 rules survive under the new administration?" Hunt argued that Section 1557 would likely remain the law of the land because language access obligations have become deeply ingrained as a best practice for preventing errors and protecting patient safety. These obligations transcend political changes, underscoring their durability.

The Foundation of SAFE AI

The Stakeholders Advocating for Fair and Ethical AI (SAFE AI) Task Force has emerged as a leader in developing a practical and ethical framework for implementing AI in interpreting across diverse industries. In July 2024, the task force published its landmark document, SAFE AI Task Force Guidance on AI and Interpreting Services, the culmination of a comprehensive global survey, two extensive studies, and a year of engagement with a wide range of stakeholders. The document outlines ethical principles and provides actionable examples designed to ensure that AI technologies enhance, rather than undermine, the quality, accountability, safety, and transparency standards in language interpretation. These four principles align with the needs of regulated industries seeking to integrate AI while upholding meaningful access.

[00:06:38] While designed for interpreting, they apply broadly to translation and other language solutions.
[00:06:45] End-user autonomy: AI must be deployed with opt-in consent wherever possible. End users have the right to know when they're interacting with AI and the right to request a qualified human interpreter or translator instead.

Evidence of improving safety and well-being: AI tools should align with existing legal and ethical frameworks for interpreting. If the tools cannot meet these standards, especially for critical interactions, they should be restricted or come with prominent disclaimers.

Transparency of technological quality and implementation: Organizations should practice clear disclosure about AI usage, data handling, and known limitations. [00:07:30] This entails disclaimers in user interfaces, policies on pilot testing, and staff training on error escalation.

Accountability: AI vendors and buyers share responsibility for the outcomes of AI-driven services. [00:07:47] Buyers must validate accuracy, define use-case limitations, and train staff to avoid misuse. If an AI tool fails, accountability should be traceable and well defined.

Real-World Case Studies

Considering overarching SAFE AI principles, real-world examples offer valuable insight into how AI can serve or undermine meaningful access. [00:08:15] The following case studies illustrate both the advantages and potential pitfalls of AI across key sectors, highlighting where human oversight remains paramount.

[00:08:27] Speed versus Risk in Patient Care

A major children's hospital in Seattle, Washington, is launching a pilot that uses AI to translate English-language clinical documents, such as discharge papers, into Spanish, Somali, Vietnamese, and Simplified Chinese. The output is then reviewed by qualified human translators. Previously, patients had to wait several days to receive translated discharge instructions in the mail.
Additionally, a report in Healthcare Brew explains that the hospital built AI language translation tech into its own data system so third parties aren't given access to sensitive patient information.

[00:09:11] This human-in-the-loop approach improves turnaround times without jeopardizing safety. Any scenario involving emotional or high-stakes communication (for example, trauma cases, complex diagnosis discussions, and mental health episodes) places a premium on empathy, cultural context, and domain knowledge.

[00:09:36] That's why the hospital relied solely on qualified interpreters in those situations. In addition, Section 1557 requires that vital medical documents be validated by qualified translators, reiterating that disclaimers about AI accuracy are insufficient for compliance. To address the issue of transparency as AI becomes more pervasive, thought leaders in industry standards advocate for a labeling framework to clarify how thoroughly a document or interpretation has been vetted by professionals.

[00:10:08] According to Dr. Alan Melby, an emeritus professor of linguistics and active participant in translation standards development, translation output falls into two broad categories. Professionally verified translation quality (PVTQ) indicates that text has undergone review by a qualified human translator; it reassures readers the document meets specific accuracy (more specifically, fluency and correspondence) benchmarks. Unverified translation (UVT) indicates that text underwent machine or human translation that was not verified by a professional. These labels are under consideration for inclusion in the next revision of ASTM's translation standard F2575-23. Regulated industries stand to benefit by using clear labels to signal whether a text is fully trustworthy for critical decision-making. For hospital discharge instructions or court documents, a PVTQ label can mitigate risk and demonstrate compliance.
Legal Services: Ensuring Due Process

Court self-help centers in several jurisdictions have implemented AI-assisted tablets to handle low-stakes scenarios, such as answering simple questions like "Which form do I need to file for a name change?" or "Where should I submit my traffic ticket payment?" In early trials, these tablets were helpful to users speaking languages such as Vietnamese or Somali, languages for which staff often have little to no proficiency. When legal stakes are high, for instance in custody disputes, felony cases, or eviction proceedings, the allowable margin of error shrinks to nearly zero. Even a minor misinterpretation can alter someone's legal standing, infringe on due process, or result in missed deadlines, according to legal expert Christina Lapp, who reviewed live AI-assisted translations at legal aid centers and shared her findings with the Stanford Legal Design Lab in February 2025.

[00:12:19] "Staff often think they've captured the user's full story. In reality, we found so-called parallel conversations where the translation missed a detail like 'I never received the notice,' and now the advice is off target. That leaves the litigant confused and jeopardizes their case." Lapp emphasizes that while AI tools might appear to improve initial access by bridging language gaps, they can also mask critical inaccuracies: people are left thinking they'd gotten the help they needed when they were actually misunderstanding vital legal steps. The potential harm is huge. For any case beyond basic informational inquiries, or cases in which a user describes complex personal circumstances, qualified interpreters remain non-negotiable. Multiple court systems, including those in California, have stated that AI tools cannot replace certified court interpreters for in-session proceedings.
Government Services: Streamlined Access with Safeguards

In some agencies, AI kiosks or web chatbots guide LEP residents through straightforward interactions such as license renewals or tax queries. This is the case for the Minnesota Department of Public Safety mentioned at the outset of this article. [00:13:42] The department uses AI-powered kiosks and virtual assistants to enhance accessibility for speakers of its most in-demand languages through written interactions, serving 67% of Minnesota's non-English-speaking population while retaining human interpreters for more complex spoken interactions.

[00:14:01] While these solutions can reduce wait times for individuals with LEP, SAFE AI Task Force guidance recommends including disclaimers and fallback options for more complex or legally binding inquiries. By clarifying that a human representative or interpreter is available, these disclaimers reinforce trust and mitigate the risk of misunderstandings. When disputes arise, for example contesting a tax assessment, agencies revert to trained human interpreters. Public trust hinges on clarity: an unresolved misunderstanding about financial obligations can have legal ramifications, making pure AI usage inappropriate in higher-risk situations.

[00:14:47] Building Organizational Frameworks for AI Integration

These scenarios underscore a simple truth: when AI is used judiciously, it can enhance efficiency and widen access. [00:15:00] But without rigorous evaluation, oversight, and ethical guardrails, the risks to compliance and public trust can escalate quickly. To navigate this delicate balance, organizations need clear, practical strategies, and that's where frameworks for AI integration come into play.

Risk assessments: Categorize scenarios by level of risk. Low-risk content, like routine FAQs, can be handled by AI with minimal oversight, whereas high-risk engagements demand trained human involvement.

Human validation for medium-risk tasks (e.g., discharge summaries, parent notifications about moderate school concerns): Deploy AI to save time, but require a bilingual professional to verify accuracy before release.

[00:15:55] Clear escalation paths: Train staff to recognize triggers, any sign of complexity, urgency, or emotional distress, to pivot from AI to qualified interpreters or translators.

Transparent labeling: If an end user is reading an AI-generated translation, disclaimers should indicate that it is unverified or machine-only. Organizations might tag fully human-verified text as PVTQ, following emerging industry standards.

Accountability and continuous improvement: Institute processes to measure and record AI errors, gather user feedback, and retrain models. If a solution consistently fails in certain domains or languages, revert to human professionals until the technology improves.

Elevating Access through Collaborative AI and Human Expertise

In regulated industries, any lapse in communication quality carries profound implications for safety, compliance, and public trust. AI alone cannot shoulder these responsibilities. [00:17:07] It can, however, bolster an organization's capabilities when paired with qualified human oversight and guided by strong ethical principles. To stay on the right side of Title VI, Section 1557, and other mandates, not to mention fulfilling a moral duty to the communities they serve, organizations should establish clear thresholds for when AI is appropriate versus when trained professionals are essential; ensure continuous validation of AI outputs by certified interpreters and translators; and maintain transparent disclosure so that end users are never in doubt about how information is being translated or interpreted. As language services continue to evolve, AI will be able to handle more tasks and serve more language communities. [00:18:01] Yet no matter how advanced the technology becomes, trust and accountability hinge on human expertise.
[00:18:09] Qualified human experts serve as the essential counterbalance, verifying critical documents and conversations and stepping in whenever complexity or risk escalates.

[00:18:21] Together, AI and human expertise can elevate the standard of care and compliance, ensuring that innovation does not eclipse safety or quality. By weaving rigorous human validation into every stage of the AI workflow, regulated industries in the United States stand poised to deliver safer, more inclusive communication services for all.

This article was written by Ryan Foley, director of communications at Masterward and an advocate for ethical AI integration in language services, contributing to the SAFE AI Task Force with expertise in regulated industry compliance. Originally published in Multilingual Magazine, Issue 238, March 2025.
