Episode Transcript
[00:00:01] Speaker A: It's very clear. What we see is that tools like automated post-editing especially will essentially reduce the need for human effort per project, which is the prime reason why some segments of the industry are afraid that their revenues will be jeopardized. Then again, when we did the Nimdzi 100 study, the vast majority of the respondents actually responded that their outlook for 2024 is better than it was for 2023, regardless of large language models.
[00:00:34] Speaker B: The following is our conversation with Laszlo Varga, senior research consultant at Nimdzi Insights. We talk about their most recent research tool, the Language Technology Radar, and all things AI. Enjoy.
Hello Laszlo, how are you doing today?
[00:00:56] Speaker A: Very good, Addy. Thank you very much for having me. It's a beautiful day here in the Czech Republic, and I'm looking forward to speaking with you about a whole bunch of topics. Of course, it's AI.
[00:01:07] Speaker B: Please Laszlo, tell us more about you.
[00:01:11] Speaker A: Yeah, I'm Hungarian and I live in the Czech Republic, so I live a bilingual life, and I have really enjoyed working in the language industry for the past, I don't know, close to 15 years by now.
I started my career in the supply chain, and I was directing the supply chain at Moravia, back at the time a Czech company. In fact, that's why I moved to Brno, where I'm still located.
And I worked with Moravia for six to eight years, somewhere around that time, and participated in various activities related to technology, process improvement, and strategic initiatives on performance management, and got very acquainted with a bunch of different roles at the company, from C-level to engineers and translators, and with different LSPs. I then worked for Janus Worldwide, and I've been with Nimdzi for the last two years and a bit.
And actually I have to say, although Moravia was a great experience in terms of the variety of work and people and individuals, it doesn't even come close to working with Nimdzi. It is absolutely amazing. The range of clients that we work with, and the team, is phenomenal. And of course, everybody knows that technology has been disrupting the industry, and technology also became part of the focus of Nimdzi's research, content creation, insight generation, and consulting projects, and I'm heavily involved in those.
And so ever since the birth of, for example, ChatGPT, I'm one of quite a few people at Nimdzi who look into what's happening in that space and also bring our experience and expertise to our clients.
[00:03:07] Speaker B: Fantastic. Why don't we dig a little deeper into that work and the research that you've been doing? Earlier, I was mentioning the Language Technology Atlas, which has evolved into the Language Technology Radar. Is that how it went? What can you tell us about that?
[00:03:25] Speaker A: Well, what we figured is that the Atlas is a great tool. It is one of the most viewed and used assets on Nimdzi's website.
Very importantly, though, we recognized that a static, annual revision of the Atlas, even with multiple updates throughout the year, is not necessarily the right way of continuing, given the unbelievable speed at which language technologies are evolving, as well as the companies, the players, acting in this field.
So we decided to move from a static view to a more dynamic one. Another big change, as Julia mentioned, is that we're actually inviting language technology players to create their own company and product profiles that they can keep updating themselves in the future. We, of course, created these profiles, and we very often have discussions and demo sessions with many of these toolmakers. We would love to invite anybody who's listening as well to come and drop a note to us: hey, I have a wonderful language technology that fits into one of the myriad categories that we have identified, and we'll list it on our radar.
But the goal is to have a comprehensive overview of the language technologies relevant to the language industry. To put it this way, it is in some sense future-proof: it kind of keeps updating itself, with our supervision and monitoring.
And of course, we believe in the dissemination of knowledge in the language industry. So we provide this information back to our customers and our users, not just to our Nimdzi members, who have access to our internal experts. Anybody who's interested in language technologies can come to the Nimdzi website and view, use, play with, and compare these language technologies. And for custom consulting projects, for specific tool selection for a given use case, whether you're a buyer or a language service provider, we're very happy to help you with those challenges as well.
[00:05:42] Speaker B: That's a really great tool that you're putting out there for companies to use. And yes, I've seen that with most consulting firms, not only in our industry but in most industries, knowledge is not easily or readily available for everyone else. I understand why some companies do it, but I really like your approach. If we are all sharing knowledge, it's likely that we'll all move forward faster, and that's better for everyone.
[00:06:10] Speaker A: You're taking the words out of my mouth. Basically, it's the same principle as with the Nimdzi 100, which is our annual report and language service provider ranking publication. It's another one of our flagship reports, and while there are other options out there for similar reports, ours is free of charge for anybody. We disclose this so that industry actors, whether they be language service providers, language technology providers, language service or technology buyers, or even investors or translators, can gain information that can be useful for the future growth and prosperity of the industry.
[00:06:54] Speaker B: Thank you for sharing that. And talking about the Language Technology Radar: it's got some categories, it's got some ways in which you can understand it. Can you tell us a little bit about how it works and how someone can make sense of it?
[00:07:11] Speaker A: I'm also looking forward to how our new publication is finally going to turn out, because it's not out yet. In our previous publications, the categories were fairly strict, to put it this way. We were very strictly separating translation management systems, audiovisual translation systems, interpreting systems, quality management systems, and all of those.
The main categories are varied, there are even subcategories, and as it seems, we need to keep adding new ones.
That's of course also due to the advent of large language models that are contributing to the language technology space.
And these categories very often speak to language technology actors already: not just as an overview, but you're also able to dig into a specific subcategory and find 120 different providers in there with all their various attributes.
Then again, if you're an outsider to the language industry, at first it may just seem overwhelming, which is fair enough. There are a great many tools out there in the market, in all the various categories. There's a proliferation of platforms. In general, we are observing what is called a paradox of choice: there are way too many similar-looking, or sounding, or smelling options out there. If somebody is looking for language technology for their technology stack, whether you're running a language program, a localization program on the buyer side, or you're a language service provider looking to update your translation business management system, there are just way too many options that you can actually look at and choose from. But we are providing all this information because we find it important that you have an immediate view of all the options out there, and you can narrow it down further to your specific need, either on your own or with our help.
[00:09:09] Speaker B: So how are AI and large language models included in your radar?
[00:09:15] Speaker A: Oh, so let's separate those very quickly. Yes, AI is included in the radar, we just don't call it AI. Machine translation is, of course, one of the prominent categories in the atlas. And let's just face it: since 2017 and the birth of neural machine translation at Google, machine translation is AI.
And not just that: speech recognition, text-to-speech, and similar technologies are all machine learning, and thereby AI, technologies. The new wave of AI tools, large language models, is relatively new. Relatively, I say, because they come from the same underlying foundational technology as modern machine translation systems. Of course, they are engineered, configured, trained, and deployed differently.
And one of the great differences between large language models and almost any other tool that you will find in the atlas, sorry, I should say radar, is that large language models are, by their inherent nature, general-purpose machines.
So they're really hard to fit into a specific category that we have in the atlas, because, well, yeah, actually they can do machine translation.
By the way, if you have a multimodal system, it can also do speech-to-text or text-to-speech, depending. Look at GPT-4o from OpenAI.
It's called omni because it's supposed to be multimodal.
That signifies that it can do a whole lot of things. Of course, what's happening underneath the hood is that the large language model component is connected with other components through some kind of common architecture, and so it can do all those multimodal things at the same time. Large language models on their own also deserve a category. They're arguably a fundamentally new way of using language technologies, not just because they are general-purpose, but because they can actually be fine-tuned to be useful for a specific purpose.
And there have been many of those since, let's say, the release of GPT-4, which was right about a year ago, a little bit more, March, I think, of 2023. The technology reached the maturity level where basically everybody started playing with it. And in the industry, both technology-enabled language service providers and language technology providers are all trying to figure out how it fits into their workflows, their technology stacks, their offerings. The same is happening on the buyer side, because many of these large language models are released either in open-source mode, or rather open-model mode, there's a distinction that I'm not sure we need to get into at this point, or they are very easily accessible. Just think ChatGPT. They are basically democratizing access to language technologies.
And buyers also recognize this.
They can, with the use of large language models through the cloud, build various language related applications.
They have this opportunity to do that. They don't necessarily have the capability as such, because dealing with language and language data is very different from traditional data engineering.
We in the industry understand and recognize this. It's sometimes really hard to sell, or to get the buy-in of the chief executive officers on the buyer side. But the CEOs also recognize that large language models are not only language models. In other words, they not only can do multilingual work, say, translation, competing essentially with the by now traditional neural machine translation systems, but they also can do a whole lot more. They can help you create an email or a presentation, or summarize a document. They also do productivity kinds of work, which is typically not included in the language industry as such.
And so, of course, for a CEO to see, hey, I can increase productivity, that's a big deal.
And so the C-level on the buyer side is looking at large language models, generative AI if you will, as this new wave of fundamentally new tools that can help with a great many different things. And actually, the language problem that we can help solve is just one of them.
So language teams on the buyer side are in between: well, we've already been using AI in our language work, at least machine translation, maybe even text-to-speech or speech-to-text.
But now there's this new AI wave. What do I do with it? How do I talk to my boss and my boss's boss?
And so we created the separate category for large language models because we believe that these technologies have a fundamentally different nature of interaction. As you can see, everybody who has been handling language work is somewhat familiar with the basics. But bringing it to the wider public, who have a business interest in language and multilingual communication, for them it's something brand new. There was a CEO of one tech company, forgive me, or remind me to look it up, who said: oh, I was just told that there's this ChatGPT thing and it can do translation, but I was also told that we're already using AI for translation, machine translation. I didn't know that. That's amazing.
The fact that the language industry has been using AI for a number of years is a big advantage to us. But large language models can do more than any other AI tool that we have used before, more in the sense of more different things.
And so creating categories for large language models, even in our radar, has been a challenge. We had lots of discussions around how to best map them out. You can look at them on the foundational level: there are open models, there are closed, proprietary models; how big they are, parameter size in technical terms; how many languages they may support, and all of that. But there's also a productization aspect: what they are being used for. So there may be large language models that are fine-tuned for translation or specifically created to support a specific set of languages. And there are large language models that are actually created for writing computer code, helping software developers; those will not make it into the Language Technology Atlas or the Radar, because that per se is not language technology.
We are aware of it. And when we in our team, including myself, research and read about language technologies, especially large language models, we face this influx of information and data that is sometimes really hard, or at least takes very careful thinking, to understand, to interpret, to box into the boxes that we previously thought we understood. These large language models are challenging the way we categorize language technologies as such. Does that make sense so far?
[00:16:40] Speaker B: Yeah, it does make sense. So I assume, as more large language models start that process of adaptation, or as technologies adapt and you start seeing all these niche or specialized applications, more categories are going to emerge; maybe you'll have a radar focused only on this, and it makes sense. It's just going to branch out. Are there any that are not happening right now that are very obvious to you? I'm just trying to think where you see opportunity as well. You see large language models, you see some applications; are there any categories where you're like, I don't see it right now, but it's going to become its own thing? Perhaps you have a group of large language models right now that could be split into two or three or even more. I don't know.
[00:17:29] Speaker A: Okay. Yeah, I think, number one, at this point in time, there are very few large language models that are specifically geared for translation or translation-related work.
I can name a specific one: for example, TowerLLM from Unbabel. It has been created, and is being updated, so that it supports a specific set of languages for translation, or for quality estimation, even automated post-editing, and different tasks within the translation work itself. The expectation is that there will be more models created for a similar purpose. We can also look at DeepL, who announced: hey, we also have a large language model that we use for translation.
And Lilt also uses essentially smaller large language models for translation work. Mind you, the large language models that are specifically created or fine-tuned for translation are much, much smaller, more targeted creations than ChatGPT, which, because it's multipurpose, has been created to do almost anything; in fact, so much of anything that some of its capabilities, the so-called emergent capabilities, are still being investigated.
But it can allegedly do so many things, to varying degrees of quality, that it has to be really big.
And it is. It's impossible to run on a home computer, or even a small server. These models require massive data centers to run. That's true for ChatGPT, that's true for Google's flagship models, or Anthropic's Claude.
The models that have been specifically created for translation-related work are smaller models, precisely because they have a specific purpose. And for that purpose, you can create a much smaller, more efficient, more targeted system than the big one. You may lose some of the other capabilities, but if you don't need them, why have them? It's not exactly like cutting out a piece of software; it's not like you're just dropping a feature and therefore some of the code disappears from your program. It's a very different kind of tool as such.
But the truth is, what we expect to see is that instead of using these very large models, again, Claude or ChatGPT or Gemini from Google, for lots of different purposes, it is very much conceivable that in the next year or so, smaller models will emerge: created, fine-tuned, pre-trained, or additionally trained on open-source or open-model large language models, smaller, more efficient, and targeted for a specific purpose or purposes.
So we expect that there will be a plethora of applications of large language models that we probably haven't thought of yet, and the industry is still grappling with which use cases they are most useful in. We know of a few that large language models apparently do a pretty good job with, at least in the language realm, in the language industry. That would be quality estimation, or even automatic quality assessment, or automated post-editing. They require very specific engineering, not just prompt engineering but additional engineering and orchestration, maybe with some additional natural language processing work, to make them actually work.
But there are use cases where they're really useful. They can also be used for terminology extraction, for paraphrasing, for style changes in the translation.
There are various use cases, and looking ahead, there are two key scenarios. One of them is that the new models, say whenever GPT-5 comes out, or Claude 4, will be able, with some level of fine-tuning perhaps, to do all these tasks in one single machine.
Perhaps, though, it will be smaller, targeted machines that will do all those use cases, all those targeted tasks, and perform better than the large models, which are more generalist: not necessarily the best at anything, but able to do a greater variety of tasks.
So that's something that we're looking forward to seeing: how it will evolve. Will it really be the largest ones, into which apparently billions of dollars are being poured for creation, training, and even deployment?
Or will it be the smaller ones? Maybe the large language models that exist in the market now, be it Mistral or Llama models or similar ones, can be retrained, fine-tuned for specific tasks, and that will be a much more efficient workflow than using the very large models, which are expensive to create, to maintain, and to run.
And then next year, or two years from now, the Atlas, sorry, the Radar, may again look a little bit different and may have new categories. Maybe there will be specific applications for automated post-editing that you can just pick off the shelf and buy separately as a standalone piece. Currently, automated post-editing is not really available as a separate product; it's built into, say, translation management systems. So automated post-editing tools could be a category. Currently it sits simply in the translation management system category, because that's where the work is performed. Perhaps later down the road there will be separate products called automated post-editing tools, depending on how the industry evolves with these capabilities.
[00:23:24] Speaker B: That is really good, thank you. And of course, you've seen many use cases in the industry and outside the industry for AI, in particular for large language models. What are the most attractive use cases for you as a professional?
[00:23:38] Speaker A: You mean in my personal work, or from a language industry perspective?
[00:23:43] Speaker B: In the universe of all questions you can think about.
[00:23:45] Speaker A: Okay, let me start from the language industry's perspective. Just as at the introduction of neural machine translation there were all these fears that MT would take over the industry and there would be no more need for translators, the same fear exists with large language models. We don't agree with it. What we do see is that large language models are being built into existing workflows which require humans at the end of the pipeline, especially for mission-critical content. Basically, large language models and AI are productivity-enhancing tools. We will of course say: oh, but you can do machine translation with them too, in various ways and formats. Sure, but then it's basically doing automated translation, just like neural machine translation can. It's a productivity tool that helps a translator translate faster via post-editing instead of translating from scratch. On the other hand, there is also a whole bunch of use cases which don't relate to translation work as such, but rather to natural language processing. There's a great variety of them, and personally that's where I use these tools, be it for meeting summarization, or for generating ideas when I write content. I even use tools like Perplexity or You.com and their, say, research functions to ask questions and try to get more natural-sounding answers than doing tens of Google searches and trying to piece things together from there. I'm not saying that in my work they are 100% working.
They're not.
Actually, when I use AI tools, I use multiple of them, even to compare the results. I will go to one, ask a question, then go to a second one and ask the same question, maybe a little bit differently, and I will compare the results and try to get the best of the responses.
When it comes to summarization, I actually conducted a little experiment last year. You can do summarization of documents, even of reports, but then the challenge becomes: is it worth checking the output of a large language model, or should you create the summary yourself? It sounds very strange at first, because large language models are supposed to be really good at creating text, and the new models especially are allegedly really good at finding pieces of information within, say, a PDF or a document.
But if you have a hundred-page report and you need to summarize it in two pages, how far can you go in trusting that the model didn't omit any critical pieces of information, or that it hasn't added information that wasn't in the hundred-page report in the first place? Which ultimately comes down to: have you already read the hundred-page report?
Because at this point in time, I can tell you I do not fully trust large language models to complement my work. I need to check and double-check, and there are instances where I literally think: this is not going to be worth it, I'm not even going to start. And sometimes I feel in a more experimental mood, and I go out and play with three different tools and see what comes out of it. But unless you already know what a good outcome looks like, it will be really hard for you to vet the output of the language model. There's this phenomenon called hallucination, and it doesn't seem to be going anywhere, even after more than a year and a half of public development of large language models; they have existed for a number of years already, they just never came to the public's attention. Capturing information and transforming it into, say, a summary of a document is something they seemingly can do, but you can only trust it as far as your own expertise. So if I already know what's in the hundred-page report, I can tell if the summary is good enough, and it will save me time. If I haven't read the hundred-page report and I get the two-page summary, I'd better remain skeptical, and I need to go back and at least understand what's in the document to say: has it structured it well, and did it do the job?
The expectation, of course, is that better, larger, and more performant models are being created. Perhaps the next generation, and the one after that, will perform even better. And so there's a continuous need to experiment with and test these models, to play with them and see: is it getting anywhere, is it going to help me with this kind of task this month and onwards? Or maybe I will wait another three or six months for the next generation of tools and try again. That's my work life and how I work with large language models.
[00:28:32] Speaker B: Great, great. And I assume, of course, that as new solutions pop up, you're going to be using them, and other professionals are going to be using them. Do you have any intuitions on how our industry will change within the next few years because of this dynamic that you're mentioning right now?
[00:28:53] Speaker A: Yes, it's very clear. What we see is that tools like automated post-editing especially will essentially reduce the need for human input, for human effort, per project, which is the prime reason why the industry, at least some segments of the industry, is afraid that their revenues will be jeopardized. Then again, when we did the Nimdzi 100 study, the vast majority of the respondents actually responded that their outlook for 2024 is better than it was for 2023. Regardless of large language models, or maybe even specifically because of them, the outlook is better. So even though there is fear that revenues per project may drop, there is the good news that by reducing the barrier for localization, by lowering per-project costs, there may be more projects that can actually go through the human-in-the-loop variant of the localization workflows.
[00:29:55] Speaker B: Thank you. And the hype is not over, right? Or are you feeling the hype is over? I don't think the hype is over, but it does sound, from your perspective, like there is some consolidation happening around the use cases, around the technology.
There is a level of consolidation.
I want to ask this last question. The hype is not over, new use cases are happening. Are there any large language model related news or updates that have been interesting to you, that strike you as something that keeps the hype going, at least? Because when the hype is dying, then something happens and there's this new use case that gets everyone excited again, or something like that.
[00:30:39] Speaker A: I would like to go back to that question of the hype going away for a moment. It seems that globally there are many voices challenging the hype, and on the Gartner curve you would say we're already entering the trough of disillusionment. But in the language industry, that's not exactly happening. Quite the contrary: language technology providers are very often very bullish about their large language model implementations.
Of course, many of the providers say: yes, it's going to eat into our revenues, but the margins will remain stable, and you can win new clients. The truth is that no matter what happens with all the artificial general intelligence, AGI, talk outside of the industry, these tools already exist and they are being put to use. And what we see is that last year, around the same time, there were what we would call vanilla integrations, especially with OpenAI's large language models.
Now there is a level of productization, and productization means that there is a serious attempt at finding a product-market fit by bundling large language models into technology stacks. And that's exactly what's happening. One of the key things that I was very happy to learn at the European Commission conference was from Silo AI, a Finnish AI company. They started developing a Finnish-and-English large language model, then they developed a second version for Nordic languages, and they were on the path to create large language models that would support all official European languages, which is great news for me, as someone who lives in Europe. And Silo AI said: well, actually, it's not that they're looking to solve the language problem as such; their opportunity with large language models is to build AI into software products, and there are many challenges with that as well, especially in Europe.
But the reality is that this is exactly what's happening in the industry as well. AI is being built into TMSes, AI is being built into quality management systems, AI is being built into dubbing platforms, and all of that.
And one of the great pieces of news that I heard was that there are attempts to support more and more languages equally and equitably. And that's fantastic, knowing that many of the large language models are either very heavily English-biased, or they are closed, proprietary systems and you don't really know; there's no clear indication of how well those languages are supported. So even though a large language model maker may say: oh, we support 50 languages. Yes, but how well? It's similar to somebody saying: I have a speech recognition tool that supports 100 languages. Yes, but if only ten of them are at reasonably good quality, and the rest are actually much worse than anybody else's, then how many languages does it really support?
And so language equality and equity, I think, is a really important question when it comes to large language models. Not just because they can be used in machine translation, but because they are really useful companions in everyday productivity work, and being able to support people in and outside of the industry in their own language, through a natural language interface, I think is really important. I'm looking forward to investments in that space. And actually, we will have in our radar language-specific large language models that were created specifically for this reason: to create a level of language equality and equity in specific regions of the world.
[00:34:18] Speaker B: That is fantastic. And we've had different organizations reach out to us because they are working with different companies to try to create large language models for endangered languages, which is fantastic.
I don't have an ultra-pessimistic point of view, but in the worst-case scenario where languages disappear, you could still talk in and learn those languages from a large language model, which, you know, to the best of its ability, would keep that language alive at that moment. I know, that's a whole conversation.
[00:34:52] Speaker A: That's... I'm not sure if I fully agree with that.
[00:34:57] Speaker B: Well, I could foresee a future where there are Maori who don't speak Maori, but they would be able to learn their Maori ways and language from a...
[00:35:08] Speaker A: They will learn AI Maori, not Maori.
[00:35:12] Speaker B: Sure. It's better than no Maori.
[00:35:15] Speaker A: It's better than no Maori, in all honesty. For example, there was an article that said, okay, with AI image generators you can embed watermarks in the images, so you can tell if an image was AI-generated or not. With text, that's not really possible, especially because text is relatively easy to edit. But some researchers, I don't know who did the research, found that there are specific words that became much more frequent in online publications, and especially in research publications, on arXiv, Google Scholar and the like.
Some words became much more frequent than they were in the previous years, or ever, for that matter.
So words like "delve", to delve into something. That's a word that previously was not used much; nowadays it's in almost every single research paper.
And here comes the challenge with it: it doesn't necessarily mean that AI generated the report that has the word "delve" in it. It's just that if you're a researcher and you read a lot of reports, you will begin to see the word "delve" very often, and your brain will say, yeah, that's the word I'm going to use. Essentially, large language models, the way they have been trained, change the way we use language.
It's not just that they learn from us; we essentially learn from them. And as AI machine-translated content as well as AI-generated content is flooding the Internet, it's inevitable that at some point the machine's learned language and human language will interact; it's going to be a dynamic, fluid interaction. And so, saying that you can preserve languages with large language models because they can recreate them — yes, but to what extent?
[00:37:15] Speaker B: Someone learning to speak it will start relearning certain things. So yeah, I think we're on the same page on that one. And I do think that it's already shaping our culture, how we structure emails. In the ways I've used AI, I've noticed that sometimes you're really scattered in thought, and AI has a great way to structure things and make sure that you're using categories when needed, and then it emphasizes things that you maybe are mentioning but don't really want to mention. You just learn to use it, and now you are more aware of those things because of artificial intelligence. So I'm really glad to be able to get your perspective today. Laszlo, are there any final thoughts you want to share with us before we go? It is the 8th of August, 2024.
[00:38:07] Speaker A: No, I talked way too much.
[00:38:11] Speaker B: Fantastic.
[00:38:11] Speaker A: I digressed heavily from the radar, but this is such a passionate topic, not just because of the language industry, but because everybody has their language and their culture and their views and their opinions, including myself.
And I feel that I'm being influenced, and I don't know if it's good or bad. All the more interested I am in these technologies. And so researching them, reading about them, understanding them better, talking to stakeholders, talking to users, talking to creators, is helping me better grasp what these models are capable of, what they could be capable of in the future, what they're not to be used for, what to stay away from. It gives me a more comprehensive picture that I can use in my everyday life and my professional life, and that I can also use to help our clients, with or without the radar.
[00:39:03] Speaker B: Great. So we wish you all the best in your research, and that you find out many more things to share with us. We hope to see you here in a few months to see what's been happening with artificial intelligence and large language models, what the... what's the word I'm looking for... what the reception for the language technology radar has been, and what changes you'll be making for this new wave of information you're sharing with everyone. Thank you so much, Laszlo.
[00:39:36] Speaker A: Eddie, it was a pleasure. Cheers.
[00:39:43] Speaker B: And this was our conversation with Laszlo Varga, senior research consultant at Nimsi Insights. My name is Eddie Arrieta, CEO at MultiLingual. Thanks for listening.