The Real Danger of Good Enough in AI Dubbing

February 11, 2026 00:58:10
Localization Today

Hosted By

Eddie Arrieta

Show Notes

Is AI dubbing ready for prime time without human guidance? In this episode, Taras Malkovych, founder of Tapas Localization, joins us to discuss the realities of integrating AI into complex audiovisual workflows. He argues that while AI has made massive strides, it remains a tool that needs to be "led by the hand," specifically to capture the nuances of human emotion, pauses, and natural imperfections that machines often miss.

This conversation moves beyond the hype to practical production strategies, examining the shift from subtitling to game localization and ethical AI dubbing. Taras warns against the industry settling for lowered quality thresholds and explains why his team focuses on workflows where linguists act as decision-makers who sculpt raw data.


Episode Transcript

[00:00:03] Speaker A: Hello and welcome to Localization Today, where we explore how language, technology and community converge to unlock ideas for everyone everywhere. I'm Eddie Arrieta, CEO at MultiLingual Media. Today's episode takes us into the realities of scaling audiovisual localization in an AI-driven world, beyond the hype into what actually works in production. We'll explore how subtitling, game localization and dubbing are evolving, and what it really takes to integrate AI without losing editorial responsibility. Our guest is Taras Malkovych, founder of Tapas Localization. Taras built Tapas around complex multilingual workflows, starting with subtitling and expanding into game localization and human-led AI dubbing. His approach is grounded in real production environments where quality, voice and trust matter. Rather than chasing full automation, Tapas is experimenting with how AI can support human teams at scale. Taras, welcome and thank you for joining us.

[00:01:17] Speaker B: Thank you so much for having me. Hi.

[00:01:19] Speaker A: Of course, of course. Really glad to have you and your experience with us. Those listening probably don't have any idea of the type of projects that you have participated in. So how about we start there? For those that might not know the experience and the work of an actual subtitler like yourself, how can you describe what you do to those that are listening? And if you can, tell us about some of those amazing projects you've worked on.

[00:01:48] Speaker B: Absolutely. So yeah, it's true that I started as an individual subtitler. I started as long ago as 2010, so I've been working with subtitles for quite a while. And my first subtitling experience as an individual subtitler, long before building Tapas Localization, was at some Ukrainian film festivals, some of the actual best ones. And I still remember that experience as something magical.
It was now 15 years ago. Wow, okay, that's quite a while. So, yeah, at the time I was exploring ways to make subtitles feel very natural, because there was this superstition, right, circulating about subtitles being a pale copy of whatever action is on the screen, and you had to kind of liven them up, or sprinkle them with color, so that they don't just record a shortened version of whatever is happening but actually add to the movie. The problem was that Ukraine has always been a dubbing country. So for subtitling, I actually didn't have a base to start with, or experts in that field in Ukraine to learn from. I had to kind of trailblaze that path. And yeah, so we started at the Ukrainian film festival called Molodist. There was this semi-documentary, right, Exit Through the Gift Shop. I still remember that, because at the time, not only did we have to translate the subtitles and do the spotting, do the timing, but also actually come into the screening and click through the subtitles manually. The audience would sit and watch the movie and we would click every subtitle to be projected, so it was kind of a little bit weird. And I still remember it because I decided to experiment. I decided to put all these human, you know, blips, bloopers, stoppers into the subtitles. Subtitles are usually kind of a cleaned-out version of human speech, right? Or shortened as well. And what I did was keep all these ums, all these, you know, pauses and natural breaths. Especially in that movie, because the main character was kind of struggling to speak English, because English wasn't his native language. Yeah.
So I kind of put all of that into the subtitles, and Ukrainians had never really seen that, I think, because, you know, dubbing is dubbing, and subtitling is very formalized there. And I do remember that I was clicking through that in the cinema, and the audience was taking it really, really well. And so, you know, nowadays, to fast-forward, we are actually looking at those same things in AI dubbing. We at Tapas Localization, the company that I head right now, are looking at these pauses, at whatever makes human speech human. Because every human wants to sound perfect when they speak, right? And no human would like their speech to be filled with all these hmms, all these ohs, too many pauses. But actually that is what makes human speech human, and that's where we can say that AI in dubbing doesn't really work well right now. It doesn't really capture those peculiarities. So, yeah, I started with that, and then fast forward to 2017. Between those years I was subtitling at some festivals, some kind of individual errands. But then I remember Per Knockler, the founder and CEO of Plint at the time, actually reached out to me himself. For me as a young subtitler, that was a wow moment. They were looking for Ukrainian linguists, and so I joined. And I did participate in quite a bunch of interesting projects at the time. I remember we first met after I posted, after I bragged about Murder Mystery 2, which I subtitled. I actually subtitled Murder Mystery 1 as well. And now it's out there, so I officially have confirmed permission to mention it. Yeah.
Because the problem is that, you know, the NDAs have always been eating at me as an individual subtitler. For six years, I think, I was kind of silent as to what projects I was participating in. And one thing is to be silent up to the moment when the film is out, so half a year, maybe, at max, but six years is a little bit much. And you start asking, okay, what am I doing this for, what has happened, what is going on? Right. But I really am grateful to Plint for granting such high-profile projects to me. An even better demonstration of the trust that they were placing in me was when I got to translate David Letterman's meeting with our president Volodymyr Zelensky. So, quite a bunch of high-profile stuff, which in its own way inspired me to create Tapas Localization. Of course, I did sign off on the cleaner version of our updated website right now, but in the very first version of the website there was this thing where I told the story of the name Tapas. When I was abroad for the first time, I didn't know a lot of facts about foreign cuisines, right? And I saw tapas, and because the Latin P looks exactly like the Ukrainian Cyrillic R, my name, Taras, written in Cyrillic looks kind of like tapas. So I did use that in the naming, of course. But then I figured out that a tapa can be a cover, you know, because the first tapas, as you most probably know, were used to cover sherry glasses from the flies that were trying to enter the glass. And so we at Tapas Localization, from the very first days of our production, are trying to protect the translation, the localization.
From all the mistakes, from all the mispronunciations, from all the cultural misalignments. Just like a tapa, a piece of bread or whatever, would protect that sherry glass from the flies who want to spoil it. Yeah.

[00:11:00] Speaker A: Yeah, that's really good, Taras. Thank you so much. And I like the play with tapas. Everyone in Latin America would understand, and some of us would also think of the tapas in Spain, the little plates where you put food on top. But yes, the tapas are protecting the companies from those flies. If we can just go back a little bit in time and think about those projects. It's very interesting when you mention having multiple years where you cannot talk about the thing that you're an expert on. It's like, how am I supposed to get more work if I cannot brag about the most important things that I'm doing? It almost feels counterintuitive for the career. But I presume, of course, there were great projects behind what you do, and that's why that was the case. Could you tell us a little bit about those NDAs, and how that helps you get ready for other projects, or how much of that really helps you in the professional decision making that you have to do? I'm sure many are wondering and trying to navigate that as well.

[00:12:18] Speaker B: Well, all my life, from day one that I started actually working, has been filled with NDAs, because my prehistoric times, you know, my pre-subtitle times, were at a publishing house. I was heading the foreign rights department of a popular Ukrainian publishing house. And NDAs were something pretty much daily there, with book rights and so on, because Ukraine, especially at the time, was unfortunately infamous for intellectual property piracy.
And, you know, a lot of foreign clients didn't really want to trust us with their content, their materials. That goes for books, that goes for movies, dubbing, subtitling, publishing, everywhere where intellectual property was involved. Ukrainians have had to really fight to get hold of as much material as possible to be able to work with it. So NDAs started there. Then, of course, came the subtitling projects. I was always too cautious with NDAs, so I was always doing everything by the book. Even though I would read and understand that, okay, this clause allows me to disclose something, I would still think, oh, maybe I should wait, maybe I should not, just to be on the safe side. So definitely some clauses did stop me from oversharing, or sharing about my work at all, for about six years, before there was a workflow where we could actually confirm permission to mention the work that we've done in CVs, if we wanted to get more projects, or anywhere else. And then the NDA thing organically passed to Tapas Localization, where now we also have to take care of the projects, take care of the intellectual property that we've been trusted with. So all of our project managers, all of our linguists, even our designers sign a lot of NDAs. So yeah, the NDA is a central element of anything intellectual property related, especially when you're dealing with a country like Ukraine, the country where I am from.

[00:15:26] Speaker A: Yeah, that sounds really interesting.
And of course your evolution from a subtitler as a professional really informed the way in which Tapas operates. What can you tell us about how that transition happened? How did you go from being this individual subtitler to building your company?

[00:15:50] Speaker B: Yeah, so of course, well, I really did enjoy working as an individual subtitler, but I also enjoyed working in the publishing business as a communicator, basically. So I thought, why not combine the two skills that I have, translation and communication, being kind of the agent between different production teams and clients and translators, and the multilingual teams as well. Because as a linguist and as a Ukrainian patriot, the number one thing for me has always been the Ukrainian language, my own language. But then I thought, and I think mentally it started from my Fulbright experience, because Fulbright is a type of scholarship where you do meet with all nations of the world, and my year-long Fulbright experience was so multinational, so international, that I really dived into that context and it never properly released me. So all this time I was carrying the idea of how to expand, how I as an individual subtitler could expand into different languages. And to expand into those languages, you actually have to master those languages. I do speak English, German and so on, but speaking is one thing, and working as an individual subtitler is another thing. So I gathered a team from my Fulbright colleagues, from linguists that have been Fulbrighters as well.
And unfortunately, at the starting point of it all, it was kind of very slow, to the point where most of the team kind of quietly moved on, because I was still doing the publishing stuff, I was still doing a lot of other stuff, and also individual subtitling. So the team quietly moved on, and for a little while I was kind of left alone to build all this. But we built and built. We started with 11 languages first, and we started to be subcontractors of bigger LSP companies, all these kinds of strategies to come into the big world of LSPs. But then, yeah, subtitling went really well for us. We've been doing quite a bunch of interesting and kind of crucial things. For example, when the war started in our country in 2022, a couple of months after the start of the invasion, we collaborated with a newly created Ukrainian war-related media outlet. They were producing short videos to fight Russian propaganda around the world, and me as a Ukrainian, I was definitely interested in that. So we jumped all in, and they would give us a bunch of five-minute-long videos and we would subtitle them into Chinese, into Arabic, into Hindi, a lot of languages that could be used where we felt that people really needed to know more about Ukraine, to know more about what is happening. So we kind of started big with that. All 11 languages that we were working with at the time were put to full throttle on that project. So we were really multilingual on that. Then we were working with a bunch of other clients, where we extended to 20 languages.
And then the AI period started, and we did feel how the world of subtitling, the world of audiovisual translation on the client side, is pretty receptive to AI. We knew the pushback, especially the pushback of the translator and linguist community. But we did feel that in audiovisual translation, the clients, the content owners and intellectual property owners, are open to adopting AI workflows, like AI-assisted subtitling and translation and everything. And so, for quite a lot of reasons, subtitling had become too small, too tight a niche for us to rely on alone. So we started looking for ways to expand our horizons. And last year, in 2025, we decided to enter the game localization world. And that's where we found, to my big surprise, when I was talking to bigger and smaller game studios, that quite a bunch of them were saying, you know what, in our official localization policy, we have a no-AI clause. So compared with film people, compared with audiovisual translation, I did feel that the gaming world, the gaming studios, were much more involved and much more appreciative of human translation work than the subtitling people, the film studios. So games were something that restored our belief in the importance of human subtitling, of human work, of human workflows and things like that. And we are still pursuing that direction. And last year as well, we've been quietly preparing the third pillar of Tapas Localization, the one we are now focused on: AI dubbing. That was a little bit unexpected for us at first. That was November 2024, at one of the conferences, where I saw the results, the workings that AI dubbing solutions could offer at the time. And I was a little bit amazed.
That was quite far beyond what I had thought AI dubbing could become and could work with. And so when I left that conference, I released this statement where I called AI in dubbing, and AI in localization, an astonishingly brilliant toddler. AI at the time was really making great progress, but still it behaved like a toddler, so it had to be led by the hand by quite a bunch of adults. So that's why I definitely decided that it was worth exploring the AI way. But we didn't want it to just be something that some people call push the button and pray, right? It's not like we wanted to go fully automated or something. So we started exploring how humans can interact with AI, when and how deeply they can be involved in AI workflows, how deeply they can edit or redact the AI outputs. And that's where we met our current partners, a company called Voiseed. They are really ethics driven, so their approach to ethical AI really aligns with our company values, and we were really excited to push forward with a partnership with them. So right now our statement is that with Voiseed and with our team of specially trained AI dubbing editors, we've launched AI dubbing in 15 languages, right? And with our team of AI dubbing editors, we have really introduced some adults into that room with the toddler that AI can metaphorically be. And so we could actually be, I would not say the first, but maybe one of the first in the world who could have pushed for, or launched, a human-led AI workflow in dubbing. So not just push the button: we would use AI just for generating the dubbing output, and then we would use the platform with our partners for our AI speech editors to edit the AI outputs, and then QA and all this stuff.
So that was basically very exciting, because with our partners we've been working for a year to enhance that offer. We've been giving them feedback, we've been testing it in real-time production workflows. We've run endless iterations of tests to actually notice where AI works and where it doesn't work. One of the moments where AI doesn't work is, as I said, the human kind of bloopers, where they stop, where they say ah, where they say oh, all these pauses and laughter and human emotions beyond just words. AI at that point was not so great at grasping them, was not always grasping them well, not always giving good output on that. So we've worked on that, we've given Voiseed quite a lot of feedback on it, and we've been working on ways to eliminate that. And I cannot say how. I would definitely not disclose the procedure, disclose the functionality. But I must tell you that right now we can bring the AI output to a level that is far less robotic, quite close to natural speech, where all the pauses, all the hmms, all the ums would definitely be covered in all the languages that we are working with. That is 15, and that will be expanded to 32 soon. Yeah. So basically that's what our three pillars are.

[00:29:48] Speaker A: Thank you for sharing that, especially when you made the comment about working with companies that still believe that humans are somewhat needed. And this is a very interesting time, because we interview a lot of companies, and some companies feel a bit uneasy, some feel a bit nervous, some feel like they don't know how to really tackle artificial intelligence. How did your team navigate that conversation internally, versus what you share with the clients and what's actually tangible and practical rather than fluff? Right.
Which is the difficult thing to do.

[00:30:28] Speaker B: Yeah. So I think where we started is terminology. There is this ubiquitous term that everybody uses right now: human in the loop. And it is very vivid that the loop is not quite the place where humans in localization want to end up after years and years of successful, great and very hard work on their side. And if you look at the abbreviation of human in the loop, HITL, and I was actually dwelling on this with some figures in the localization industry, and they agreed with my approach, it's not something that we really want to use in localization, because localization is where people are sensitive to language and sensitive to all the details that language can introduce. So we communicated, internally and also externally, that in our AI dubbing workflow we do not want people to be in the loop. We would create AI dubbing workflows that would be human led. Human led, because we use AI just to generate the output, and then our editors come in and redact that. So AI is just one step of a ladder, right, that allows us to see a little bit over that wall. But what elevates us up that ladder is actually people. So people are still somewhat needed, not only in audiovisual translation, not only in gaming, but also in AI dubbing. Because AI is just a tool. AI is not a decision-maker in our workflow.

[00:33:17] Speaker A: Taras, one of the other questions that I had, of course, is related to these transitions. As you're bringing in new pillars through Tapas, you have the reactions from the clients. So how have the clients been receiving your pillars? How have you been receiving their feedback?
And how is that interaction with your current projects?

[00:33:41] Speaker B: You see, the fear is there, of course, the fear and the worry is there from the client side as well. One of the fears or worries of the clients is the number of back and forths, because they keep comparing AI dubbing with traditional dubbing workflows. And of course traditional dubbing workflows are full of those back and forths on multiple levels, from finding or establishing a recording studio, and all the post-production and pre-production, to finding good dubbing actors and whatnot. So the amount of back and forth is what, not exactly scaring the clients away, of course, but we've been receiving a lot of questions like: your AI dubbing, how is it going to be? How much of a burden will it be on our team, on our project managers, from your side? Can you guarantee, or can you tell us, that you will take care of the content, you will dub it, you will QA it, you will return it to us, and we would just say once, okay, it's great, or just return something for rework? So the back and forth is definitely the number one thing that is making the clients pause and say, okay, let's observe, let's pilot, and let's see how it works in real production. You might think that number one would be the AI itself, but probably not, right? Because this is a new workflow, and I mean newish, right? A newish workflow. And not everyone, especially on the client side, especially content owners, always knows, or wants to know, the end-to-end localization process.
Because this is something that they, you know, outsource: their intellectual property. They want to make sure that we take care of the content, and they want to be able to trust their content to us so much that the back and forth, the number of questions in the process from their side, will be minimal. And yeah, so of course we have just launched AI dubbing, so we are at the stage of pilots and testing with the clients, with the actual intellectual property of the clients. Not internal anymore, but still not full throttle. We're just testing the waters. But so far the feedback has been great. We have demos in quite a bunch of languages, and we show the demos, and the clients remain happy, remain satisfied with the quality of the demos, and they start trusting us with their content. Because, as I said, we've been working towards making our AI output as natural as possible, working with these natural human pauses and emotions and articulation. For example, you know, there is this trend of changing the video, the lip movements, in every language. We do not do that right now, and not because we cannot. It's just, maybe I need to fight with that in my mind a little bit. But that's just personal, because I believe that it's still kind of a little bit comical. Maybe it's a matter of time, maybe it's a matter of latency, maybe it's a matter of not really smooth ways of being able to project that with the current technology. But I do feel like, okay, when we are dubbing a comedic type of content, that's okay.
When we are dubbing something serious, something conflict related, or something very sensitive, or in the healthcare industry, I do think that these lip movements might create an unwanted comical effect. This is, I don't know, experimental, because a lot of people and a lot of companies, a lot of industries, already work with that, and they kind of love the result. But whenever I see it, I think, maybe I will rethink it in quite a while, but right now we are not doing this. The lip sync that we are doing is the translation, the human translation, of course: a lip-sync-ready translation made with the lip movements in mind, and the retiming. That's it. We are not redacting the video, because of the actors as well. We're not talking about Hollywood actors, because we're still not, of course, AI dubbing the big blockbusters, or even features. But I do not believe that any people who see themselves in those videos would be entirely happy with the redaction of their lips. That trend is just beyond my understanding.

[00:40:21] Speaker A: And we will see, with this whole shift left, where we are dubbing and subtitling things that we didn't used to dub or subtitle, what happens. I know a friend who is a chef who is always watching South Korean soap operas with the worst dubbing I've ever heard. It's very, very monotonous, and it sounds just like robots are talking. But he says, hey, that's enough for me. He just cares about the plot, I guess. So it depends on the users.

[00:40:56] Speaker B: Yeah, actually, what you've just said reminded me of the very worry that I've been struggling with in terms of AI adoption, and in terms of, you know, the role of AI per se: the good enough thing.
So I would believe, you know, along the lines of this concept of beauty being in the eye of the beholder, that AI and different automated solutions, fully automated solutions, I'm not talking about human-led editing, would gradually lower the expectation threshold of an audience: of content owners as well, of viewers, and of game players, for content either dubbed or even created across the industries. I do believe that with AI, with the easy and quick ways to do stuff, with time, people would just say, okay, this is good enough. This has been dubbed well enough. This has been subtitled well enough. This game has been created well enough. And oh my God, this book has been written well enough by AI, right? All this good enough stuff. We still care about quality, and the real threat of AI might not be what a lot of people see, the elimination of jobs, people doing nothing, stuck without their skills to apply. The lowering of the quality threshold is the real threat that we might face. There are great game creators, there are great writers, there are great movie creators. And so if in five years an audience of some country, even a selected audience, would say, okay, you don't need to sweat it, right, we are okay with whatever AI created, so your talent is not needed anymore, because hey, this is good enough. Right? That is the threat that I don't want to happen.

[00:43:58] Speaker A: Yeah, because that would be the good enough that's terrible for everyone. Terrible enough.
We were talking about it from the perspective of having specific languages dominate the world. Across the whole localization and globalization industry, the reality is that English is the language that dominates the structuring and professionalism of our industry. It's very interesting to start considering what other languages are coming on board as AI allows us to, and how multilingual and multicultural the companies that are actually leading those transformations are.

[00:44:44] Speaker B: Oh, yeah, yeah. With all the languages. So, as I told you, we've launched our first 15 languages for AI dubbing, but unfortunately, right now we don't have my native language, Ukrainian, among them. And what we are working towards with Voiseed, with our partners, is, we've actually agreed on the rolling out of Ukrainian. So I kind of paved my way into becoming the principal output QC reviewer of the Ukrainian language that we will be rolling out. I would be controlling, I would be assessing the AI language model learning and the quality of the output in Ukrainian. So I would definitely make sure that the Ukrainian AI dubbing that we can provide would be as natural as possible. And of course, you know, how multilingual. Yeah. So the good thing in our model is that all the generated voices that we can use are trained on data that is kind of similar, so that every voice can be as efficient in every language as in every other language. The voices that we use are truly multilingual, and there are no instances where this voice in this language is underperforming. Although, of course, as we all know, in LLMs English is the overarching language.
And even from whatever ChatGPT responds, we can always tell that ChatGPT speaking Ukrainian has a surprisingly English grammar structure, a surprisingly English choreography of language. Right? So we definitely have to push on, we definitely have to champion that training, to feed the AI models with as much high-quality data as possible, so that the results in Ukrainian, or Spanish, or Afrikaans sound grammatically and structurally the way people in those territories actually speak. Not English constructions, like Danglish or Spanglish or whatever; we wouldn't want our AI dubbing to parasitize on that. So yeah, the multilingualism of it all is definitely crucial to us, and we will be working toward refining every language structurally. [00:48:23] Speaker A: Taras, thank you so much for sharing that perspective, coming, of course, from you. One thing I'd like us to cover before we go, and we are coming to the end of our conversation, is the future of work. You've been a subtitler, and others out there have been doing translation, interpretation, subtitling, and then they see AI and say: okay, this is it. And then, along the way, you start finding practical ways to use your expertise and to bring in your value. Taras, what have you seen as the new route for the future of work, for those who love languages, who are linguists, who care about words? [00:49:05] Speaker B: You see, as I wrote a while ago, as managing director of Tapas Localization I never sign off on any new workflow or any new solution without deeply diving into it myself.
So I definitely studied the workflows and the peculiarities, all the ins and outs of the process that AI speech editors, AI dubbing editors, or AI dubbing directors face. And I would say that, of course, it's not the same as translation, not the same as working with translation memory systems or CAT tools, but there is a lot of language learning and language enhancement within the AI dubbing workflow. So for whoever loves languages as much as I do, and wants to work with languages as much as I do, there are plenty of things to discover. Because, of course, the initial AI dubbing output might be bad; it will be bad in some situations. So the editors come in and say: aha, okay, there's plenty of material to work on here, the intonation, the foundations. The platforms, the AI tech, now allow humans to dive deeply into the AI output. So there will definitely be linguists, and I can understand everyone's stance on it, where people simply say: okay, AI output is something I don't want to work on, because it wasn't produced by humans in the first place. Right? But AI is here to stay. And I want to circle back a little bit, because recently we have had a lot of wonderful conversations with Japanese film studios and Japanese game studios as part of our outreach. And one interesting hot take that I took away, as someone who is crazy about languages, is that they have the word "ai," you know, which means love. So that is a very controversial thing to put to the linguists, to those who still push back against AI. Me, I'm not an advertiser of AI, not a 100% fan of AI. But if we perceive AI as something that can be loved, in the sense that it allows us to work with it: it's like wood for a carpenter, like material for a sculptor. Those materials don't talk, but they are malleable. So we want AI to be malleable. We want to work with AI, to be able to structure it, to dive very deeply into the AI output and redact it, change the emotion, edit it, change the pace, change the phonemization, add the pauses, add the umms, all the little imperfections that natural speech is full of, and make the output, after our involvement, as natural, or as little robotic, as possible. So the malleability of AI would probably be something that I, as a linguist and as a researcher, would be rooting for. And that also involves the tech companies who are working with AI and who want people like linguists to be willing, well, at least not opposed, to working with AI. If linguists see the malleability, if they see the amount of work that has to be done with AI, this is language work. Okay, it's not translation per se, but it's a lot of language work. And I do believe a lot of people can find their place in it: if they don't shun AI as a term, if they see "ai" as the Japanese word for love, I do believe they can be quite engaged with it. [00:54:34] Speaker A: You know, one of the things that I believe is going to become a competitive advantage for companies going around the world is understanding the need for that malleability, that ability to really get into the different locales with grace, you could say. And that requires a very high technical ability, or human ability. [00:55:06] Speaker B: Right.
[00:55:06] Speaker A: And, as they say, for it to be really advanced, it would have to be indistinguishable from magic. It seems we are a few years away from that. Taras, this has of course been an amazing conversation; I hope we can have it in person at some point. Before we go, and we're coming to the end of our conversation, are there any messages you think we should not leave without sharing with our audience? [00:55:32] Speaker B: Yeah. What I would just say is, you know: love the language. Don't let "good enough" become your new standard. And love exploring whatever technologies and advancements are being semi-imposed on people. I of course wish for less aggression in this technological advancement, less imposition of it on people, especially on people who are not yet ready to grasp it. But as I said, if a person is open to grasping it, I think there are endless possibilities. And please don't just shy away from whatever technologies or new workflows are emerging, because new workflows have always been there, far beyond localization. New workflows have emerged in, I don't know, classrooms and everywhere else. That's just how humans are: they don't want to be stuck in the same workflow forever; they are emergent. So just love the language as you have loved it, don't take "good enough" for an answer, and please be ready to explore whatever comes your way. [00:57:01] Speaker A: Thank you very much, Taras. And of course, for those listening, we're talking to Taras Malkovych, founder of Tapas Localization. Taras, thank you so much for joining us today. [00:57:19] Speaker B: Thank you so much, Eddie. Great pleasure. [00:57:25] Speaker A: All right, everyone. Thank you so much for listening to Localization Today.
A big thank you again to Taras Malkovych for grounding the conversation about AI in the realities of production, in what actually happens when you put technology to work on big issues. And of course, he took that conversation where it needed to be. Catch new episodes of Localization Today on Spotify, Apple Podcasts, and YouTube. Subscribe, rate, and share so others can find these conversations. I'm Eddie Arrieta with Multilingual Media. Thanks for joining, and see you next time. Goodbye.
