[00:00:03] Speaker A: Hello and welcome to Localization Today. This is the show where we talk all things language. Today we have an amazing group of professionals who are going to be talking to us about the AI Interpreting Solutions Evaluation Toolkit, or rather the AI Interpreting Solutions Evaluation Toolkit, Part A: Organization, Implementation, and Management. My name is Eddie Arrieta. I'm the CEO here at Multilingual Media, and I'm very honored to be here today with the four professionals who are going to introduce themselves next.
So with us, Katherine Allen, Molly Glass, Jeff Shaul, and Ludmila Golovine. Welcome.
[00:00:52] Speaker B: It's great to be here.
[00:00:54] Speaker C: Thank you for having us, Eddie.
[00:00:56] Speaker A: Wonderful. And I will now give the opportunity to each of them to introduce themselves. So thank you so much for being here. And Katherine, if we can start with you, we will then do Molly, Jeff, and finally but not least, Mila.
[00:01:12] Speaker B: Thank you very much and thank you for this opportunity to be part of this interview.
So I've been in the field for a long time, and I think probably some of you recognize me. But what's most relevant for this moment in time is that I am a founding member of the SAFE AI Task Force and also a member of the committee that created this AI Solutions Evaluation Toolkit. And I currently hold a position as the Director of Language Industry Learning with Boostlingo, which is a big on-demand remote interpreting software company.
And I bring in a lot of subject matter expertise in my role there. So I've been a longtime observer of the field, especially around technology and how it impacts interpreting and I'll leave it there.
[00:02:10] Speaker D: Hello everyone.
Thank you for inviting me to be part of this conversation.
[00:02:15] Speaker E: Molly.
[00:02:15] Speaker D: I'm Molly Glass and I'm from Virginia. I've been deaf since about the age of two and over the past three years I've been working as a deaf interpreter for an IT company based in New Zealand.
They develop sign language avatars.
In that capacity, my primary work has been in translation.
Over the past year, I've also been involved in the Coalition for Sign Language Equity in Technology, or COSET, where my colleagues and I partner to advocate for the deaf community for more equitable access to sign language and AI interpreting.
Thank you for having me.
[00:03:01] Speaker E: Hello everyone. Thank you very much for having us. Indeed, it's my very great pleasure to be with you.
My name is Jeff Shaul.
I hail from the state of Ohio and I serve as CTO for a company by the name of GoSign AI.
This is a sign language data gathering effort, because the data for deaf people and AI is still very much in its infancy. There's not that much out there relative to the vast amount that's available for hearing people. So we're trying to level the playing field there, and we have created a language game in order to do so. I also serve as part of the COSET group, the Coalition for Sign Language Equity in Technology, with my colleague Molly. So I'm very happy to be here to offer the deaf perspective on this topic, and I'm very excited about the new technology that is in front of us.
[00:03:56] Speaker C: Hello, my name is Mila Golovine, and alongside Katherine Allen, I am a co-founder of the SAFE AI Task Force, which came together in 2023 to see how the interpreting industry can best adapt and implement AI safely and responsibly. I'm also the CEO of MasterWord, and I'm very excited to see how the industry is adapting to the new tools that are out there as we go through yet another technological revolution.
I've been in the industry for over 30 years at this point.
This is a very exciting new development.
[00:04:46] Speaker A: Thank you, everyone, for taking the time to let us get to know you a little better.
One of the things I have to say is that, Mila, you are right. Adaptation is happening in the industry, and specific to this webinar, the listeners, and those who are perhaps reading and also watching, should know that we are moving at the speed of ideas. We came up with this idea a week ago, and here we are with an amazing group of professionals making it happen.
And I'm very glad, I'm very proud that we are able to have this conversation that I'm sure is going to resonate and impact many around the world. And to begin with, of course I will need to ask the most basic question out of this conversation.
So give us the headline. When was it launched? How can we access it? And, going more in depth, what problem does this AI Interpreting Solutions Evaluation Toolkit, Part A: Organization, Implementation, and Management, solve for language access?
[00:05:58] Speaker B: And I think I'm going to jump in here and start the conversation. We launched it earlier this month. It's accessible on both the SAFE AI Task Force website and the COSET website. I think we have a single link that goes to it, and I'm assuming we'll have that in the article or put it below somewhere on this recording.
But more importantly, what problem does this solve? I think Mila started to get at it. We are faced with a world-transforming technology that is going to roll out and disrupt every field. For interpreting, you have AI interpreting solutions being rolled out now, and legislation, official guardrails, and research are going to lag far behind their adoption. We are talking about machine interpreting solutions that actually function as an interpreter, or that in some way mediate real-time communication through an AI tool. What SAFE AI in general does is try to be a neutral industry voice that gathers together many experts and stakeholders and puts out guidance for people to follow. And the toolkit itself is a very, very practical document: if you are now faced with a mandate from your workplace or your company to purchase one of these AI solutions, this is a very practical and in-depth toolkit that will allow you to evaluate that tool, to consider its risk, and to consider whether you have the right guardrails in place for how you want to implement it.
[00:07:55] Speaker D: I'd also like to note that we understand the challenge when making decisions about AI interpreting.
Most of us aren't experts in sign language access needs, or perhaps we don't know French or German or a given sign language.
So it's a struggle to confidently evaluate the quality of the service. This toolkit provides clear guidelines and criteria that will help you know what to expect and how to distinguish high-quality from ineffective interpreting services.
It gives you the information you need to make confident and informed decisions.
[00:08:32] Speaker C: Allow me to add, Eddie: Multilingual is the very first interview we have done regarding the toolkit.
[00:08:38] Speaker E: Yes,
[00:08:42] Speaker A: and we are extremely happy to have the privilege to be the first.
We hope not the only, but the first. And we will always have an open forum and an open stage here for your story. SAFE AI has been part of the coverage that we consistently do here at Multilingual magazine, and we'll continue doing that. What you are describing is very practical. I love the checklist format, and we'll get into more detail as we continue the conversation. But as you describe Part A as practical, risk-informed, and research-based, not anti-AI but pro-evidence, what does responsible adoption look like in real organizations?
[00:09:33] Speaker B: Well, in terms of the principles we would hope you would follow, we have four key ones. One would be that it's safe, in the sense that any AI solution adopted is not going to harm the people who are using that tool. We would hope that it would expand access responsibly but not create harm to the user, because it perhaps has too many errors for the use case. It should be accountable, meaning it's clear who's accountable for this adoption: what are the potential risks you're looking at? What are the potential impacts? Again, we don't have legislation, we don't have clear guidelines at a government level, so we need to create them ourselves in our industry. We would want it to be fair, to promote autonomy for all of the primary communicators, and to make sure that biases are taken account of, for example.
And we would want it to be ethical in the sense that it would be transparent and safe and you would know that it was being used and you would have the option to opt out or roll over to a human interpreter at any given time.
So that would be the broader answer to that question. What does it look like in real organizations? I think we'll get to that when we talk more about the checklists. But really what it means is that you are going to have to be accountable for evaluating that on a case-by-case, organization-by-organization basis. And that is what this toolkit hopefully will empower you to do.
[00:11:16] Speaker D: Katherine just made some important points, and I'd like to emphasize two of them: accountability and transparency.
We would like to see AI interpreting technology be accountable in that it's built around the understanding that errors happen and to have mechanisms in place to repair those errors in real time.
In addition, we want platforms to be transparent: to collect statistics on their error rates and other factors, and then communicate those stats back to all users.
[00:11:47] Speaker A: Thank you for that. And I just want to take one second to thank our listeners, our viewers, and also our readers. If you are listening to this conversation or watching it, you should also know that there is a written summary of this conversation
on the Multilingual website. We want to make sure that you get the information about the toolkit in as many formats as possible so that you can see the great information and the great work that the SAFE AI team has been doing.
And now, to get a little perspective before we dig into each of the elements of the checklist: we have five elements there. Organizational readiness, setting-specific guidance, risk factor assessment, vendor assessment, and RFP guidance.
Could you walk us through this initial set at a glance very quickly for those that are going to just listen to it a little bit now and then we'll dig deeper later. We might also take this as a snippet to share it on social media for those that want to get a quick glance at it.
[00:12:55] Speaker B: All right, well, I'll start this one as well and just say that, yes, there are five checklists. This is meant to be an extremely practical tool where you can literally go through a whole bunch of elements and determine whether your organization is ready or not, and what gaps you might need to fill. So the first one is: is your organization ready to adopt an AI tool? Do you have the IT necessary? Do you have the people in place? It runs through all of that. Checklist two is related to that: it provides sample checklists for the specific settings that AI interpreting is now being sold into, for healthcare, legal, education, and business environments. The third checklist is really the heart and soul. It tries to give each organization a way to assess its own risk environment and framework, because every setting is going to be unique, and because we don't have the research that tells us how to do this, we want to help people be empowered to do it themselves. The fourth checklist is a vendor assessment checklist, which goes through more of the technical evaluation of the tool itself that's being sold. Those four are really for the organizations considering buying an AI solution. And the fifth checklist is really targeting those entities, like enterprise healthcare, that are writing requests for proposals. We're trying to give them the right questions to ask; we're trying to educate the RFP writers on how to realistically seek AI tools, because we've all experienced RFPs in our world where the people writing them don't really understand the industry.
[00:14:53] Speaker D: Katherine has just very clearly outlined a list of important criteria discussed in the toolkit for evaluating AI interpreting solutions, and all of those guidelines are very useful for people developing RFPs. I think the fifth item on the checklist in particular is critical, because it focuses on testing interpreting solutions with primary communicators before adopting them. It's really an important assurance for them to know that the interpreting solution their communication depends on has been thoroughly vetted before implementation.
[00:15:32] Speaker E: To which I would add, this is Jeff speaking, that the checklists are not intended to be comprehensive, but they are a starting point for getting underway and having those critical conversations to help identify things that might be unknown unknowns that you wouldn't have otherwise considered. So that is really the purpose and point of it.
[00:15:54] Speaker A: And that's a wonderful point you make, Jeff, because from the very inception of SAFE AI as an initiative, it has always given me the impression that these are starting points, conversations that come to life, which allow us to then do things like what we're doing today: dig deeper, bring more professionals and more thoughts into the conversation, and move the conversation forward.
And if you would allow me, of course, I'd love to get into detail about what each of these checklists entails. So if we can get started with the organizational readiness checklist: you list eight domains.
Strategy, governance, infrastructure, privacy, training, QA, budget, and rollout. Which two are most often overlooked, and what are the consequences? I say two, but I could have said three or four. Or are they all overlooked?
[00:17:02] Speaker B: Oh, I mean, for me, one way to think about this is: at the beginning of the computer age, I'm old enough to remember working in workplaces where there was no such thing as an IT specialist. Computers came in, people didn't know what to do with them, and organizations weren't ready for them. Eventually we all figured out pretty quickly that you had to have an IT specialist, and that IT specialist was then responsible for threading technology throughout the entire organization, because IT impacts everything. So I would say there's a big overlook there. These aren't plug-and-play tools where you sign on a dotted line with a company and suddenly you have magic AI interpreting. It's going to impact your entire service provision, especially if you are already providing interpreting. And I think budget is the other one. It's not just the cost on the dotted line, where the AI solution is telling you, hey, look, I can save you this much per minute or per hour. There's a big back-end organizational cost to rolling out new technologies and to changing service structures. So those are probably the ones I would call out.
[00:18:20] Speaker A: Thank you, Katherine. And of course, on the risk factor assessment, I'd love to know how teams should score a use case and decide where to escalate to qualified human interpreters. What's a good borderline example that teaches us the method?
[00:18:48] Speaker E: Certainly, this is Jeff speaking. I can respond to that, Eddie. I think the traditional thinking about assessment is along the lines of a linear process from lower to higher risk, which I think is flawed thinking.
It's better to break it down into three different dimensions of risk assessment in this context. First, you have to consider the basic technological specs: is the tech going to do what it's intended to do, full stop?
Secondly, the individual needs of the primary communicator in that conversation: who is the most important person in the room, in the conversation, in the exchange, and what are their individualized needs? We need to consider them as we may not have heretofore done. Thirdly, the dimension of environmental factors: are we in a healthcare setting, in education, or in any number of other domains where the risk landscape changes? These three dimensions, I think, are the most critical ones to take extra care in looking at. And depending on where the risk lies, you may have to apply different kinds of mitigation strategies. For example, at a lower level of risk, maybe you could use a simpler, abbreviated strategy like consent.
Then as you move up and as you escalate for greater risk, maybe you could fold in a human in the loop interpreter.
Maybe you could use a human interpreter as the sole solution and dispense with AI entirely. So mitigation strategies have to be involved. I think there are two good use cases we can illustrate this with. For example, customer service: if you call in to a customer service line, typically it's very scripted, very rote, very predictable in its decision tree, and you could certainly make a really good use case for AI tools, especially at the very beginning of that interaction, when you are navigating a phone tree. Later on down the line, if you have a unique situation that you need to talk about, or a pattern of them comes up, then you need to apply some critical thinking to the special needs of the primary communicators in those conversations.
Another example of a use case might be in a restaurant.
It might seem like no great shakes, not that high-risk, but it could be if you consider the individual needs of the primary communicators. What if this individual has, say, a peanut allergy?
So if the tool makes an inappropriate or incorrect translation, it could have a life-threatening result.
So in these two contexts, we need to think about those three dimensions that I laid out.
The basic tech specs, the individualized needs of the primary communicator, and the specific setting where that conversation is taking place.
[00:22:07] Speaker A: Thank you, Jeff, for giving us that context and perspective. I'm pretty sure our audience is going to appreciate the way you have explained it. And one of the checklists I really liked as well is the vendor assessment one. When I saw the different elements, I thought: this is not only useful for our current conversation, it's probably really useful for many other sectors of our industry.
You flag ten evaluation categories, and if you're looking into this, you probably appreciate it, because we're talking about usability, accessibility, technical fitness, security, privacy, ethics, customization, support, escalation, compliance, cost, and stability.
I already see how much thought and care have been put into the creation of this checklist, and we're only halfway. And the question is: where do buyers most often get wowed by demos but miss critical gaps?
[00:23:23] Speaker E: This is Jeff speaking. Oh my goodness. You have said it very well: you need to take great care that the dog and pony show is not hiding the worst of a given tool, and it's critical that buyers be proactive in unpacking it and asking tough questions.
In my considered opinion, and maybe other people would differ, the QA and testing process for the full range of primary communicators tends to be an afterthought, slapdash. And we've seen that many times in the deaf community, in the deaf world, as we call it: in general, any given tech will ignore the population until a few months after implementation in the real world.
And deaf people as primary communicators are up against tech that did not consider their needs or how to handle their communications strategies.
A simple example would be a parking lot where you have the arm that comes down. If it's voice controlled or voice activated, I can't access the instructions on that kiosk.
So the affordance there for deaf people is missing if you can't access the spoken robot. Also, if you are selling a solution with a given affordance, you have to empower people who don't use spoken language: deaf people, people with limited English proficiency, people with special needs, people who neither see nor hear and who acquire and use language through their hands, tactilely, through touch. So you need to make sure to give consideration to all of these contingencies. I think that's often an afterthought. That's my considered opinion, and I'm very happy to entertain differing opinions as we look toward vendor solutions that best meet the needs of everyone.
[00:25:37] Speaker B: And Eddie, I would add two more areas to that, if I can. One would be the myth that we deal with in language services all the time: that a word-to-word or meaning-to-meaning crossover, say between something written in English and then written in Spanish, is all that the communication is. That misses all the other aspects that make communication possible: tone, expression, culture, context, and complexity. People miss that; the tools promise it, but they don't deliver. The other main thing people are really unaware of is that it depends on the language pair. You may have ten languages you need a solution for, and you might have an 80% solution in one, a 30% solution in another, and somewhere in between for the remaining ones. So I would add that as well.
[00:26:39] Speaker A: Thank you, Jeff and Katherine. This main idea definitely echoes in my brain right now: all of the elements that need to be considered in this conversation. So I'm letting our audience know that we will continue this conversation; we should probably have an episode for each one of these checklist elements to really dig deeper. And of course, there is a strong emphasis on quality, safety, and equity; this is a serious conversation. Where do you see the biggest risks today, and what are the most promising opportunities for teams that follow Part A of this toolkit?
[00:27:32] Speaker C: I think, Eddie, I'll take this. I think the biggest risk is not understanding the risk.
And risk is actually both individual and situational. To give you an example: for somebody ordering a cookie at a coffee shop, it may be considered very low risk to use an AI interpreter, an MT interpreter, or Google Translate. But if I have severe allergies, any type of mistranslation can impact me deeply.
It's situational, right? Is it a blue shoe versus a green shoe, where if I get the wrong color shoe, I'm okay? Or is it a blue wire versus a green wire in a nuclear reactor, where maybe I want two or three engineers to confirm which wire to cut before we cut it?
Also, for example, some of the technology tools may cut the background noise. Let's say we were using AI interpreting simultaneously for Spanish, French, and German right now; we'd be doing great, we're on a podcast. But what if it's a 911 call and the background noise is violence, and that's really critical information for the team that's going to walk into the scene?
So risk is both individual and situational.
Then there's another component of trauma and the way trauma affects communication on top of cultural differences, dialects, regional differences.
Trauma may cause a person to regress in their conversation, and sometimes actually regress completely in their language comprehension and in the way they express things.
So that's another risk that is, again, situation dependent, and of course there are literacy levels in each language. So I think the biggest risk today is not understanding all the risks and thinking, hey, this is a one-size-fits-all solution.
And that's exactly what Part A does. It gives you a very comprehensive, very organized way of looking at your organizational needs and your situational potential risks, and of creating or implementing a solution, or a number of solutions, that will best fit and support you.
[00:30:09] Speaker A: You put it in a really great way, Mila. The toolkit is there to support professionals. If we could look into it as well: how does Part A help compliance leaders align innovation with obligations tied to civil rights and accessibility, without freezing progress?
[00:30:35] Speaker D: We have what's called the ADA, the Americans with Disabilities Act, which encompasses provisions relating to communication access.
This is still established law, and it mandates effective communication access.
The arbiter of effectiveness must be the primary communicators, which means the procurement office or person making the purchase cannot make a decision on what will be the best fit for another individual.
The person using the solution must agree to it.
So evaluating the interpreting solutions will help the entity understand what will be the best fit, as well as ensure they remain in compliance with all applicable legal requirements, whether that's the ADA or, in the education realm, IDEA or FERPA. Critically, organizations will want to be in compliance.
[00:31:37] Speaker B: And I guess I'll follow up on that for the spoken language side, for those who are relying upon language access requirements and compliance obligations based on Title VI of the Civil Rights Act. We all know that it's quite a turbulent time for policy at the federal level in the United States. And I think the most important thing to realize is that while the current administration has really cut down a lot of the compliance enforcement at the federal agency level, the laws are unchanged, and the state laws are unchanged. So we really do still have compliance obligations in healthcare, in legal, in education, and in other places where people are receiving federal funding. What I hope this Part A helps people do is contract with services that take into consideration the risk around the highest-risk areas of communication, and bring in these AI solutions where they will expand access, and do so in a way that is transparent. That means people know AI is being used and are able to opt in and out as they need, so that you can show that kind of accountability, and so that you have the ability to roll over to human interpreting when a simple conversation suddenly goes quite complex.
So I think that I'd say that. And then the idea is that part C will really dig into this topic.
[00:33:20] Speaker C: Now I want to add to this. Compliance aside, there's a human component.
So if you're making a decision for your organization, let's switch hats for a moment. If it's your child on the other side trying to do something, if it's you yourself, if it's your loved one, do you want your loved one to be navigating the situation with no understanding?
And how do you feel when you are on permanent hold? I'm sure most of us have had the experience with a credit card company where you can never get to a human, and you're trying to address a situation that's not exactly straightforward. Or what if it is a situation that really requires care? It may be a hospital, but it may be a customer service issue, maybe even at a department store. It doesn't have to be courts, legal, or government; it can be any situation.
So from the human perspective, how would you like to be treated in the same situation? I think if we switch those hats, then language access stops being a compliance issue; it becomes a human issue.
[00:34:40] Speaker A: And isn't it always better to talk about humans than to talk about machines? Don't get me wrong, I have nothing against machines. I think machines are some of the greatest tools, and they have brought a lot of development to places like my home nation, Colombia, where today we're able to connect via the Internet, which would have been impossible 20 to 30 years ago.
But we have to be careful. And I understand why the toolkit has been created and why we're thinking about it today as a community. And if you allow me, many buyers are encountering bold claims and uneven results.
How can part A help separate marketing from measurable performance before contracts are even signed?
[00:35:35] Speaker C: Well, it's an excellent question.
And first, Part A asks your organization to look and assess: is your organization ready? For example, if you're a hospital system, you may have interference from certain types of equipment. Or if you are a plant, you may have certain noise or types of interference that can break the Internet connection or the ability to use the tools.
What are the setting-specific requirements that you have? So Part A helps you prepare the organization, and then puts all the vendors on the same kind of playing field, regardless of the claims. And we have seen some claims that are really bold.
And the good news is that technology will continue to evolve and it will continue to improve.
We have lived in a world that has been defined by a shortage of human interpreters. In many, many situations, an interpreter was not provided at all. Today, because of the technology, we can potentially have an interpreter in every situation. You can be traveling, hold your phone up, and understand the road sign in Japanese while you're on the road.
So it allows you to put all of the claims on an even playing field and then evaluate them. How does each tool handle, for example, languages of limited diffusion or indigenous languages or certain domain expertise?
[00:37:17] Speaker D: Right.
[00:37:18] Speaker C: So Part A is really critical to being able to implement a solution and to compare the solutions that are in front of you.
[00:37:30] Speaker A: Now, for working interpreters, spoken and signed, what changes when an organization starts using the toolkit?
[00:37:42] Speaker B: I'll hop in, but I hope Molly will follow in or Jeff for the signed part of it.
I think, hopefully, this is what would happen. We're headed into a world where we are going to have a hybrid continuum of interpreting solutions, going from a fully AI interpreting tool all the way to human-only, with hybrid mixed cases in between. Thinking about a customer service continuum can help people imagine this: when you get on the website, you get the chatbot and a few canned answers; eventually you may need to escalate to a phone call; you get through the customer service tree; and finally you reach a human being. These companies have thought through their continuum of customer service from fully automated to fully human, and that is where interpreting is headed. So my hope would be that this toolkit helps organizations get very clear on how they're handling that continuum, and that the interpreters working with them are very clear on where they come in on that continuum.
[00:38:49] Speaker E: Right.
[00:38:50] Speaker B: And this is a big change for interpreting. This has not happened for interpreting before, where we're sharing the space with automation the way translation has for many, many years. So I think that's the biggest change, and ideally organizations would have that continuum mapped out rather than a very chaotic, well, we're dropping our human interpreting, and oh no, now the AI needs repair and we have to go back and get the humans back. Hopefully it will help ease that process and have the industry adopt a smoother process for the implementation of these tools.
[00:39:26] Speaker E: This is Jeff speaking, to which I would add we can make mention of language pairings and things being sometimes easier than not. Let me expand on that. Often people consider the language pairing in a given interaction, maybe English and Spanish as an example. We want to make good and sure that we add the consideration of not only languages, but modality.
And I mean by that, are we in a visual language that requires the use of your eyes? Are we in an auditory or spoken language that requires the use of your ears? Are we in a tactile language that requires the use of hand on hand contact?
So often this is, again, an afterthought. People make a lot of assumptions about language pairings and writing systems that really are based on spoken languages, not understanding that other modalities exist for people to receive information. So we want to make good and sure that the modalities, as they differ across language pairings, are supported. For example, Spanish will have three or four different sign language dialects in a given situation. You've got Mexican Sign Language as a national sign language.
Spain itself has different signed languages in different regions there. So we want to give some due consideration to the kinds of pairings that can happen.
[00:40:55] Speaker D: Briefly, for those languages where the databases are extensive, AI systems can handle basic, simpler translations.
But most of these systems are not sufficiently matured to have robust medical or healthcare or legal vocabulary databases.
And this lack of information can lead to serious harms.
[00:41:24] Speaker A: And thank you for your perspectives, because it allows me, and absolutely the audience, to realize that we are not having a conversation of us against artificial intelligence, or us against tools that have been created by humans. Which gets me to our next question: where does artificial intelligence meaningfully assist, and where should humans stay firmly in the lead?
[00:41:55] Speaker C: Again, there's not a straightforward answer to this complicated question.
One would expect to say artificial intelligence can absolutely assist in situations where no interpreting was provided before.
Humans should be there in the very complex situations.
Yes, that's absolutely correct. I'm going to give an English-to-English example. "I'm a big apple eater" can be interpreted as "I love eating apples," "I'm a big person who eats apples," or "I only eat big apples."
Now take that to a complex situation and suddenly a lot of additional information is required that only a human can pick up based on nonverbal cues, context, situational awareness, all kinds of knowledge.
Of course, in those situations, humans are absolutely needed. But let's reverse that situation and talk about the speakers of languages of limited diffusion, or the other 40% of the world who don't have access to education in a language they understand and just don't have access to any information.
So if you are one of the roughly 7 million people who speak a Mayan language, and you try to google anything online in your language, there is nothing available.
The same may be the reality for about 65 million Fulani speakers, and then of course other, much smaller languages could be brought up as examples.
For these languages, building small language models and enabling any type of AI is actually a window to the world. And again, in the case of a Mayan language, because of the lack of access to education in one's own language, a person cannot learn a new language and a new subject at the same time. We're talking about literacy rates where overall only about 1 or 2% of the population graduates past the sixth grade. There are not a lot of translators and interpreters who can suddenly go and translate the entire wealth of the World Wide Web.
So in these cases, even limited AI, even limited access to technology, can give that kid a chance to access information, maybe in Germany, maybe in Africa, maybe in Japan, and learn and study and innovate. So in that case, it's almost a reverse opportunity that gives a whole country, a whole large group of speakers, a chance. That's why I see that AI can meaningfully assist in a tremendous variety of situations, but at the same time, the human role remains critically important as well.
Guys, feel free to add.
[00:45:00] Speaker D: For translating content using AI technology, success can be realized when the content is very basic and predictable, such as at an airport or a train station where delays or gate changes may be announced.
There's little language variability in those announcements, just gate numbers or locations.
And this content is predictable.
We strongly recommend what we're referring to as a hybrid approach, which means that the AI system takes the first pass at translating the content and then a human interpreter or translator reviews the output to remediate the final result so that it better meets a particular audience's needs.
That's the middle way, using an AI solution, but also benefiting from human knowledge.
[00:46:06] Speaker A: And that is a wonderful thought. I am very glad to see that in the industry we have gone past the whole denial phase and the cynicism that sometimes comes with change and that we are in a place where we can actually talk about these topics openly and actually find a path forward in the conversation.
Now, I have to say that the first time I saw the Toolkit, Part A, I was very curious, because I said: Part A? I'm assuming then there is at least a Part B. But I have heard before this conversation that we have a Part B and a Part C. So what should our audience expect from the second document in the Toolkit series coming out later this year, Part B, Technical Specifications?
[00:47:04] Speaker E: Eddie, this is Jeff speaking. I'd be happy to respond to that.
As I mentioned when I introduced myself, I am a full-stack engineer, and so I have a lot of excitement about Part B, because it applies heavily to my area of expertise and that of my colleagues.
And before I delve too deeply into it, let me explain what it is not. The exclusion criteria recognize that AI can look like a black box: companies aren't sure, you know, what something like font size customization can look like, they don't want to micromanage the interface too much, and a lot of the information is proprietary.
So it depends on what the inputs and outputs will be. Take the analogy of a vehicle: you have an engine in a car, and the driver doesn't need to know what's going on under the hood.
They don't need to know how it's working, they just need to drive it.
So what is more important in that context maybe is that the driver is well aware of the different levers that they have access to, or dials and knobs on the dashboard that can influence the performance of the vehicle.
For example, maybe you want to hop in and out of a complaints procedure. Maybe you want to get to a human in the loop, or a human-only solution.
So the AI interface itself is kind of like the vehicle in this context. Other considerations to address in Part B, and maybe the vehicle modifications are getting too far afield here, would be the laws of the road: for example, maybe there's a detour available if there's an outage in one given spot.
So you need contingency strategizing and planning to circumnavigate these inevitabilities, and you can address them in this way in Part B. For me, I'm very excited to see this coming down the pike, because it allows me the opportunity as a developer to build in and against certain things that might or might not happen, so that we have a gold standard. It also affords me the opportunity to build overlays, build upon the existing knowledge of best practices we already have, share among ourselves, and identify the most critical components, and maybe the not-so-critical ones.
So that we can have kind of a North Star facing posture.
[00:50:03] Speaker A: Thank you, Jeff. And I have to say that I'm not going to put you on the spot to ask for a date for when Part B is coming out, but I can certainly tell you that we will have a follow-up episode on this one.
[00:50:17] Speaker E: The fall, Eddie. It'll happen in the fall.
[00:50:20] Speaker A: Thank you so much.
We have a hint in there, and we look forward to that interview on Part B. There is also a Part C of the Toolkit series: Legalities and Practical Considerations.
How will you translate the legal environment into day to day policy and procurement guidance?
[00:50:51] Speaker D: Well, with a disclaimer that we are not lawyers, we want it to be known that we consulted with both hearing and deaf attorneys to better understand compliance within the current legal framework of language access rights.
We do have many language access laws currently.
As mentioned before, the ADA and the IDEA, as well as others.
The current difficulty is people being confused about knowing which laws are enforceable due to recent executive orders and the current climate where laws may seem to be in conflict.
Our hope is that Part C of this document will help provide clarity regarding laws that are still very much on the books, and how to be in compliance with them if you choose to use an AI interpreting solution. As well, the document will elucidate the risk exposure you may have vis-à-vis legal compliance.
It should give you the confidence you need related to those issues.
[00:52:02] Speaker B: I was just going to add in, for those people in the audience familiar with how compliance has worked, I think a really helpful way to consider this, and what we'll try to do with Part C, is this: we know what access means, we know what meaningful access means, we know what equal access means as they apply to the various laws that govern language access. So you don't have to have federal or state laws that tell you how an AI tool is compliant. You can instead ask: does this tool provide equal access across language pairs? Does it provide meaningful access? And so on. I just think an easy way for people to think about it right now is that the laws are still the laws. We've been with these laws for a long time, and we know what compliance looks like for interpreted communication. We just need to figure out how we apply that same rubric to an AI interpreting tool, and that's hopefully what we'll do in Part C.
Sorry about that.
[00:53:08] Speaker C: And given the fact that a lot of the MultiLingual audience, or clients who may be listening to MultiLingual, don't necessarily fall under the strict compliance requirements of the federal government, or possibly even healthcare organizations, I want to bring in again the human component.
Do you want it to be AI telling you that you have a cancer diagnosis, or would you rather have a human interpreter? Or what about an end-of-life situation?
Let's say somebody is passing and a priest comes over to your family living room. In that case, do you want that interpreted by AI, or do you want a human sitting there? There are many other situations we could bring up where the human consideration, I think, goes above compliance and laws. Beyond anything else, we are first human, and that's where it becomes important.
[00:54:21] Speaker A: Thank you, Mila. And of course, one of the thoughts that someone might have on the toolkit is that it might be so overwhelming that it could paralyze those looking at the list. If a mid-sized organization wants to start next week, what are the first three steps you would recommend when using Part A?
[00:54:50] Speaker B: Do you want to jump in, Molly, or do you want to follow me?
[00:54:56] Speaker D: There are two sides to this equation: the organizational evaluation and the primary communicators.
For organizations, we suggest they use the checklist to gauge readiness for AI solutions.
For primary communicators, we recommend looking at the risk factors in a framework of possible harm, which we are shorthanding as the three Ls.
Life, Liberty and livelihood.
In other words, work itself.
Oh, sorry, my cat has joined the podcast.
So the three Ls must be considered in terms of possible harm to the communicators.
For example, in a medical situation such as Mila talked about with the cancer diagnosis, or a police interview where one's liberty might be curtailed as a result of a wrong answer being given or a misunderstanding occurring.
There are many situations which could have dire consequences.
[00:56:11] Speaker B: And I would just say, really practically, if I'm Katherine Allen working in a school district and we are looking at adopting an AI tool to meet certain kinds of needs:
One, the checklists are in order. I think they're in a good order to start working through.
And the other thing is, look through the questions and maybe circle the ones that are relevant to your organization. There are a lot of bullet points under each of the checklist tables, and I would probably go through and figure out the ones that are relevant to us, and then start putting together the working group that will systematically go through and start to figure out: okay, what is it we're missing, and what is it that we need to answer if we're going to adopt this tool? And always within the framework of what Molly just said: we want to keep the primary communicators' best interests at heart.
[00:57:06] Speaker A: Thank you.
[00:57:07] Speaker C: I want to add one more point. I think the tool is specifically going to help smaller and mid-sized organizations, because super large organizations normally have the resources to hire consultants to do all this type of research and evaluation.
So I see the tool specifically helping the organizations that may not have those vast funds. And just like Katherine said, look at what questions apply to you, and this is going to make it very simple and very helpful.
[00:57:43] Speaker A: Thank you, everyone, for your thoughts. Molly, I have to say that I've had a cat once in my life, but I never had the pleasure of being interrupted in a webinar by my cat, so I look forward to a future where I too get a beautiful cat walking in front of everyone. And we are coming to the end of our conversation, but I can't leave without asking: how can others get involved? How can we be part of the movement? How can we bring in the talent necessary to understand the conversation, and also bring in new perspectives?
[00:58:28] Speaker B: You want me to? I'll jump in, because I have a couple of answers. One, if you want to be involved in SAFE AI, or know when the next toolkits are coming out, you can go to the website and sign up, and we'll keep you informed. But for me, getting involved in this is much broader. We are at one of the sea-change, disruptive moments for our profession. So, you know, you can publicize the toolkit, and you can make sure you yourself are getting AI trained.
We need an AI-trained interpreting workforce. We need people who know how to advocate and lead and inform.
So for me, getting involved is just stepping up to the moment, because our industry is not going to look the same five years from now, and we need as much proactive action as we can to have that ride be not as bumpy, I guess is what I'd say. So for me it's a broad "get involved" wherever you are best situated. Do what you can in your place.
[00:59:32] Speaker E: If I may. This is Jeff speaking. Eddie, I'll respond to that.
I think I have two thoughts.
Firstly, let me give a shout-out to SAFE AI and to CoSET, the Coalition for Sign Language Equity in Technology.
So this could be people who use language visually, auditorily, or tactilely. We encourage folks to sign up at coset.org.
You could also access it through the SAFE AI website as well, which would be fine.
My second thought is circling back to what was said earlier about language pairs.
Let's expand our conception from languages to modalities: language-and-modality pairings, so people can receive and express language in the language and in the manner in which they prefer.
[01:00:44] Speaker D: We do suggest that everyone download the document and share it widely because everyone can benefit from knowing this information, whether they're interpreters, translators, language agencies or others.
We all need this information.
[01:01:02] Speaker C: And adding on to what everybody else said, we can also get involved by spreading the word and letting the customers, the end customers, know, the organizations that may be implementing it: a transportation authority, a pharmaceutical company, a retail organization, any and all who are within your reach.
Please spread the news, spread the word. Let's all get involved. We're interested in hearing success stories, how the toolkit helped with implementation and how the toolkit helped organizations. Or we're interested in hearing feedback on how the toolkit can be additionally improved. Maybe we're missing a section or two.
So we would love to be very engaged with the audience of multilingual and with the audience in general.
[01:02:02] Speaker A: I have to say that this is the type of conversations that I love. I know Camilla Sabogal, who is in the back end of this conversation, is already thinking about what are some of the different podcast channels where we can feature some of the stories and ideas that we are getting out of this conversation.
I, for example, did not know that CoSET existed, and I believe there is a great opportunity there. I have some amazing, well, I'm saying amazing, maybe they are not that amazing, but I've got some ideas in my mind right now of things that we could do together.
I cannot leave, of course, without once again giving everyone the opportunity to share any final thoughts. Any final things you believe we should say before we go, any invitations, or any thoughts crossing your minds right now?
[01:03:01] Speaker E: If I may. This is Jeff. Allow me to thank the Access Team for providing interpretation for this and facilitating our conversation. I appreciate their work very much. So thank you very much to Ann and to Cheet.
[01:03:12] Speaker C: I think my final thought would be any new change and any new technology is scary.
We've all been introduced to computers at one point. We've all been introduced to telephones, to mobile phones. AI is probably the most disruptive technology, but just like any technology in the history of humankind, it's going to be adopted, and before we know it, it will be integrated.
And I think it's neither 100% AI nor 100% human. It's a continuum and it's collaboration together.
[01:03:50] Speaker B: I think what I'd like to end on is just to celebrate the collaboration between SAFE AI and CoSET.
This began kind of early on with the beginning of the task force and I think represents maybe one of the most robust collaborations between spoken language and the deaf and hard of hearing communities and sign languages that we have in this industry.
And I think it's unbelievably valuable. I just want to thank those present for the incredible collaboration, because it's really been enriching, and I think it makes this tool so much more relevant.
[01:04:35] Speaker E: APPLAUSE and second Absolutely.
[01:04:40] Speaker A: Thank you so very much, everyone, for joining us today. I'm sure this is the first of many conversations we will have, and I am very sure that the way in which we cover these and future stories is going to change significantly at MultiLingual as a result of this conversation. I am very happy to say that, as our first time including ASL interpretation, it's been an impactful success, and I'm sure Camilla in the back end is going to be really happy to hear that as well. My name is Eddie Arrieta. I'm the CEO here at MultiLingual Media, and today we had an amazing conversation with Katherine Allen, Molly Glass, Jeff Shaul, and Lyudmila Golovine about the AI Interpreting Solutions Evaluation Toolkit, Part A: Organization, Implementation, and Management.
Please stay tuned for the fall, when Part B, Technical Specifications, will come out. And when Part C, Legalities and Practical Considerations, comes out, we will have the teams of SAFE AI and CoSET back on MultiLingual Media, and hopefully also MultiLingual Magazine, to cover these and all the stories coming out. Please stay tuned for future collaborations, and thank you so much for listening. Until the next time. Goodbye.