Episode Transcript
[00:00:03] Speaker A: Today, we've been training machine translation on what? Just on "this one is the translation of this one." You don't teach kids that way. You teach kids by saying, "you should not do this." And we add that information, because when a translator is working, we provide a suggestion and they correct it. So we have a wrong translation and we have a good translation. And you know something? Models can learn faster by discovering what they shouldn't do.
[00:00:33] Speaker B: Hello and welcome to Localization Today. My name is Eddie Arrieta, CEO at Multilingual Magazine. Today I have the honor to talk to Marco Trombetti, CEO and co founder at Translated. We've had conversations before, so I'm not going to dig deeper into his profile or anything like that. We're going to be talking about Lara.
Marco, you had the launch yesterday. How was it? You had a lot of people present at the event.
What was the atmosphere like for those that were present?
[00:01:09] Speaker A: The atmosphere was great. You know, Translated doesn't do a lot of product events, launch events, so for us it was an important milestone. We worked a lot on the preparation. We invited people to this beautiful theater in Rome, and we had many people also connected live. We wanted to give an update about what's happening at Translated and also the vision that we have. So I'm very happy with the number of people who came and wanted to share that moment with us.
[00:01:42] Speaker B: And I think it was a fantastic way to showcase the innovations and the vision that you and the team have. So I hope to see it happen again in the future, and to be invited as well. I think everyone is thinking the same, so thanks for doing that. And let's get into it. Let's talk a little bit about the background and the vision for Lara.
How does Lara fit into Translated's vision? And of course, we'll have to look into how you are looking to advance global communication. How does it fit there?
[00:02:17] Speaker A: You know?
Well, I'd say that Lara is an integral part of Translated's vision in general.
Translated was founded in 1999, and immediately, with Isabelle, my wife and co-founder, we started working on how to use AI to improve translators' lives: make them more productive, with better quality, and so on. So it was always about this symbiosis between human and machine. And honestly, it took us seven years to make it work. We started in '99, and 2006 was the first time ever that we were able to give something useful to a translator, something that was helping translators rather than slowing them down, so that editing and correcting the output of the AI was actually faster than doing the translation from scratch. It took us seven years, from '99 to 2006. Once we discovered that this works, we started working on how to optimize the AI for human-computer interaction, which is a very unique field, because all these years everyone had been focusing on how to make machine translation better in general, something that will give you a quite good result. Our problem instead was: how can I make the interaction between human and machine better? We always thought that was the more interesting problem. First, for the nature of the problem, because humans are involved rather than replaced. And second because, you know, translation is a $65 billion market and machine translation is only 500 million. So if I can make translators 20 or 30% faster, and because the unit economics of translation is the word, and by consequence the cost per hour, maybe we can create $20 billion of value by making people 30% faster. It looked like an even bigger opportunity. So we worked many years in that direction. And at one point we had collected so much feedback about where the machine was wrong and how to improve it that the model kept learning and learning. Every time a translator corrects an error of our AI, the AI improves and gives a better suggestion back to the translator.
The translators have to focus on the details more and more. So we make translators grow as the machine intelligence grows. At one point this system became better than any generic system, and so we said, okay, we should release it. And this became ModernMT. We started releasing ModernMT not to end users but really just to big enterprises. Many of you listening are probably using ModernMT and simply don't know it, because it's embedded into a big organization that uses it as the engine behind the scenes.
And over time we realized that something very, very weird had happened. Since the beginning of machine learning in machine translation, so statistical machine translation, we had always been using two things: parallel data and monolingual data. Both were very important. With neural machine translation, the models were so good that with parallel data alone you were already getting a better result.
People forgot about the importance of monolingual data. So we said, you know something, maybe we should solve this problem. There was also the fact that we were training machine translation just on sentences and not on document-level context; maybe there is a problem there. And these machine translation systems are not able to reason: you cannot provide instructions, and they cannot explain their choices, why the model translated A instead of B. Was there a rationale behind it, a good reason, or was it just an error? We had those questions as translators, because we wanted to know if we could trust the machine at all. So we started building all these things. With the reasoning capability, the document-level context, and the fluency that you have today in language models, we said: look, we can actually create a new architecture that combines the best of large language models with the best of machine translation research and try to solve these problems. That's what we launched yesterday. Lara is a new, very big, super reliable model that combines this architecture. It can explain its choices: it translates, and it can explain to you why it did A or B.
[00:07:13] Speaker B: Thank you. Before we delve into some of those intricacies, some background for those that are listening: at the launch yesterday, and in the conversations yesterday, some significant partnerships were mentioned.
How significant were the partnerships with Nvidia and Cineca in developing Lara, and what advantages do they bring to the project?
[00:07:38] Speaker A: Both are very important. One was important for what we did already, and the other is important for what we're going to do next. Translated has always been investing in infrastructure to train models.
But our budget for training was about half a million. Half a million, believe me, can allow you to train decently large models, and once or twice per year you can make an upgrade of the model, et cetera.
But OpenAI and a few other companies started working on something called scaling laws. They started asking themselves: if I make a transformer, the architecture behind machine translation and also behind language models, if I make the model bigger, will the quality increase? If I provide more data, more quality data, to the model, will the quality increase? Really, the question is: how big should I make the model, and how much data should I provide, so that the model will do what I want it to do? OpenAI started doing these scaling laws, and the most famous is the Kaplan law. By doing that, they got the courage to say: look, it looks promising. Maybe if I train GPT-4, a very large model, and I spend $100 million to do it, it looks like it will do extraordinary stuff. Nobody before had ever tried that. With the scaling laws, you simply get the courage to say, hey, it looks like it may work, and then you throw in an investment which is 10 or 100 times bigger than what you did in the past. The same thing happened to us. We started working on scaling laws, and we realized that as we make the model a little bigger, here is the speed at which it improves, for the same amount of data. So we knew there was an opportunity to create a larger translation system that was better. On the other side, it was many, many millions of dollars of investment for a model. And how long would it last? Maybe a year, a year and a half, before you need to retrain because it's obsolete. Now, Nvidia has always been a partner of ours. We work with Nvidia, helping them with machine translation, speech recognition, text-to-speech, and also Megatron, their language model. So we work deeply together from an engineering standpoint, but also an operational standpoint for machine learning operations.
And at one point, strangely enough, in 2022, Nvidia had a little financial problem. The stock went down, et cetera, and they wanted to cancel our contract. They called me and said, Marco, I'm super sorry, we need to cancel the contract. And I realized that they really didn't want to; it was a temporary financial problem. So I said, look, if it's only a financial problem, pay us in GPUs instead of cash.
And so we received this large amount of infrastructure, 10 times more than what we were using the previous year. It's crazy what we could do with that. That gave us the courage, together with the scaling laws, to say, you know something, I think this is our opportunity, we should try it. And we did. We trained the model, and it came out that the predictions of the scaling laws were right, and Lara was definitely a better model. So the partnership with Nvidia is a partnership where we work together on some of the most advanced models, and they provided us the compute infrastructure. Otherwise, if we had gone to buy it at market price, that could have been like $6 million, for example. So it's probably not an investment we would have made just for a research experiment on a single model.
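The scaling-law reasoning described in this turn can be sketched in a few lines. The coefficients below are the published Chinchilla fits from Hoffmann et al. (2022), used purely as an illustration; Translated's own fit for translation models is not public, and the 20-tokens-per-parameter rule is a general LLM heuristic, not their number.

```python
import math

# Chinchilla-style parametric loss: L(N, D) = E + A/N^alpha + B/D^beta
# where N = parameters, D = training tokens. Coefficients are the
# published Chinchilla fits; a lab would re-fit them on its own runs.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    # Predicted loss for a given model size and data budget.
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling model size at fixed data lowers predicted loss:
small = loss(1e9, 100e9)
large = loss(2e9, 100e9)
assert large < small

# The compute-optimal rule of thumb is ~20 tokens per parameter,
# which is how "how good does it need to be" becomes a budget.
def optimal_tokens(n_params):
    return 20 * n_params

print(f"1B model: train on ~{optimal_tokens(1e9)/1e9:.0f}B tokens")
```

The point of such a curve is exactly what Marco describes: it turns "will a 10x bigger model be worth the money?" into a number you can budget against before committing the investment.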
[00:11:52] Speaker B: Excellent. And of course there is a whole element of training. Getting more into the technology and innovation conversation: can you tell us a bit more about how trust attention and any other proprietary techniques contribute to Lara's ability to match the accuracy of top-tier professional translators?
[00:12:17] Speaker A: Yes. I just realized that you mentioned two partners and I only talked about one so far. So, how we trained Lara, the current model that was shipped yesterday. By the way, I'm not sure everybody understood: you can actually use it now on lara.translated.com, it's there for free. It's available not just to large enterprises; we made it available to anyone.
In the past, Nvidia helped us, and they will still help us in the future. But now we took these scaling laws a step forward, and we asked ourselves: what is the amount of data, and how big does the model need to be, so that we deliver something on par with the top 1% of the best professional translators in the world? Can we make a machine that makes one error per thousand words?
How big does the model need to be? If you follow the scaling laws: first, is it possible at all, and then how big does it need to be? And the scaling laws told us 20 million GPU hours, which is a crazy amount of money. If you buy at the prices out there, it's many, many tens of millions; in some cases, you know, even with a lot of discounts, it could be up to $80 million. So it's a very big amount of money. But it was the first time we got a number for singularity, a number that says: here is how big this needs to be so that we can actually reach singularity. Not artificial general intelligence, just the verticalization of it into translation. So we said, you know, I think we should work on it. And now we are on it. What came out is that Cineca, which operates the fifth-largest supercomputer in the world, has an incredible program; other companies in our industry have also used Cineca for training models. We were able to make a deal with Cineca so that we're going to release, as open weights, open source, completely free, the very big Italian-English model that reaches singularity.
And in exchange for that, they basically gave us almost half of the training hours required to reach the goal. So we announced yesterday that we also started the training of this incredible model that, given the predictions, may reach language singularity. Reaching AGI looks like something that will take many, many years and probably a trillion dollars. We're saying that if we verticalize this into this specific domain, with the unique data that we have, which I think is the secret sauce, together with trust attention and everything connected to that, then we can reach it. So how important is the deal with Cineca? Basically, we're getting a super nice grant of GPU hours of infrastructure, and in exchange we're giving back to the community the model, open weights, with a license very similar to Llama's from Meta: very, very open, also for commercial use, available to anyone. We thought this was important given the impact of language and the nature of Translated. We started by open-sourcing MateCat, then ModernMT; we feel we want to give back to the community, even if in the past some large corporations have abused open source a little bit. We think we should not judge open source just by the worst uses of the technology we create; we should also see the users that are using it well. So we decided to do it, and we're super happy that Cineca is helping us do that. To connect to the trust attention you were mentioning: trust attention is just one of the many innovations that we introduced into the model. We actually released trust attention even with ModernMT this year, and we now use all these improvements in Lara too. The idea is that not all training data are the same. If you teach a kid, you know, there are certain things the kid will learn very quickly.
There is noise, and they should not learn from it. And maybe you tell them: this is something interesting, you should learn it, because it's important, it's a good source of information. But there was no way for us to do that in practical terms with machine translation. If you throw data at a model, it learns in the same way from every piece of data. Some people started weighting data differently; that's what Llama does, for example: they weight Wikipedia more than Common Crawl, which is basically a copy of the web. But we wanted to do something more sophisticated, designed especially for translation. We know the performance of the translators that generated the data; we know how many levels of review that data has been through. So the system can learn, while it's training, how to weight the data using an attention mechanism. That is a nice solution, but beyond this one there are many little things. Basically, we took the state-of-the-art LLM and started developing verticalized solutions for the translation space, where we have very specific data points that you don't have when you train an LLM, because you don't have that level of feedback and richness in the training data that labs use.
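The trust-attention idea, weighting each training example by how much its metadata says it can be trusted, can be sketched like this. The field names, the linear trust score, and the softmax weighting are illustrative assumptions, not Translated's actual mechanism (which, per the description above, learns the weighting during training):

```python
import math

# Each training segment carries metadata that web-scraped corpora lack:
# the measured quality of the translator who produced it and how many
# rounds of review it passed. (Field names are illustrative only.)
segments = [
    {"loss": 2.1, "translator_score": 0.95, "review_levels": 2},
    {"loss": 1.8, "translator_score": 0.60, "review_levels": 0},
    {"loss": 2.5, "translator_score": 0.88, "review_levels": 1},
]

def trust_logit(seg):
    # A learned model would produce this score; here it is a fixed
    # linear stand-in so the weighting mechanism stays visible.
    return 2.0 * seg["translator_score"] + 0.5 * seg["review_levels"]

def trust_weighted_loss(batch):
    # Softmax over trust logits: an attention-style normalization that
    # lets well-reviewed, high-performer data dominate the gradient.
    logits = [trust_logit(s) for s in batch]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * s["loss"] for w, s in zip(weights, batch))

print(round(trust_weighted_loss(segments), 3))
```

Contrast this with Llama-style corpus weighting, which fixes one weight per data source; here every individual segment gets its own weight from its own provenance metadata.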
[00:18:29] Speaker B: You have tons of data, and you've mentioned it in this conversation, but that's kind of the secret sauce, right? Translated's vast historical data set. What's going to be your approach to sustaining and sourcing this unique, high-quality data? Because this is also what's going to inform trust attention, which I assume is dynamic, right? It's going to continue changing over time.
[00:18:53] Speaker A: I'm not going to tell you what we're doing now, for one reason. But I'll tell you what we did.
So the collection of the data that we're using today to train Lara was designed 15 years ago, in 2010. Already in 2010 we knew that it was wrong to train machine translation just with sentences; there was no context. We needed to collect document-level information, not just sentences like a translation memory. A translation memory is a set of sentences, and that was the wrong way of collecting information. So in 2010 we started collecting document-level information. And we started collecting not only the positive examples, because until today we've been training machine translation on what? Just on "this one is the translation of this one." You don't teach kids that way. You teach kids by saying, "you should not do this." And we add that information, because when a translator is working, we provide a suggestion and they correct it. So we have a wrong translation and we have a good translation. And you know something? Models can learn faster by discovering what they shouldn't do. So 15 years ago we started collecting the negative examples: we collected every bad example we provided to translators. Crazy, because if you think about it, you're collecting your own errors. It looked very, very wrong to some people when we started doing it. So: context, wrong translations, and then we needed reasoning data. When a reviewer and a translator do not agree on the translation of something, they start a debate, and in the end that debate gets resolved.
While they discuss, they need to articulate why they think their point is valid. And by doing that, they are explaining the reasoning behind the translation.
And that data is extremely important for training the models that we're creating now.
This is a technique called chain of thought, and it's also what was released by OpenAI recently with o1. A very similar approach. So these were the secret sauces that we started collecting 15 years ago. Then, five years ago, we started collecting some new data.
And I'm not going to tell you what it is until we use it, in 10 years.
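The negative-example data Marco describes, a rejected machine suggestion paired with the translator's correction, lends itself to a preference-style training objective. This sketch uses a Bradley-Terry logistic loss on a score margin, a standard technique (as in reward modeling and DPO) and not necessarily the objective Translated uses; the record fields and hard-coded scores are stand-ins for a real model's log-probabilities:

```python
import math

# One training record pairs the machine's rejected suggestion with the
# translator's correction, plus (ideally) the reviewer debate explaining
# why. Field names and values are illustrative.
record = {
    "source": "Il gatto e' sul tavolo.",
    "rejected": "The cat is on the board.",
    "chosen": "The cat is on the table.",
    "rationale": "'tavolo' is a table; 'board' mistranslates the object.",
}

def score(translation):
    # Stand-in for a model's log-probability of a candidate translation;
    # a real system would run the MT model here.
    return {"The cat is on the table.": -1.2,
            "The cat is on the board.": -3.4}[translation]

def preference_loss(rec):
    # Logistic (Bradley-Terry) loss on the score margin: it shrinks as
    # the model ranks the human-approved translation above its own
    # mistake, so the model learns from what it should NOT do,
    # not just from imitation of positive examples.
    margin = score(rec["chosen"]) - score(rec["rejected"])
    return -math.log(1 / (1 + math.exp(-margin)))

print(round(preference_loss(record), 4))
```

Here the margin is already positive, so the loss is small; gradient descent on this quantity is what pushes the good translation above the bad one.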
[00:21:26] Speaker B: That's really good to know. That's really good to know that you've said it here on Localization Today. So in 10 years we'll have another interview and we'll talk about that secret sauce. But of course there is some competition in the market, right?
And there is a race, but it's also an open competition. How do you plan to maintain your competitive edge with players like OpenAI, DeepL, and other major tech firms also developing in this space? And I'm so glad that you're competing there.
[00:22:04] Speaker A: First, I think that at one point there was too little competition. When Google released Google Translate and the statistical model in 2005, they probably killed the research, basically, because the tool was there, free for everyone. There was no incentive for any commercial company, or anyone, to do research or create apps for translation.
So for many years there was no competition. And then competition started again. I have to give big applause to DeepL.
What they have done is prove that even if someone, Google, is offering something for free, if you make something designed for people that is slightly better, and at that time they were slightly better than Google, people are going to pay for it. It's not the end of the free Internet: if you create something that people like, they will contribute and pay for it. So they demonstrated that there is a market there, and they had enormous success in that direction. The fact that DeepL did it created a more interesting competition for everyone, because other people said, you know something, it's not true that Google being free kills the market. There is still an opportunity, and that was a strong signal. So with our product, we're going not only to the enterprise; now we released it also for consumers and SMBs.
We think that if people have more options, I think we will accelerate the race to create the universal translator. And I love so much the future. I love so much the idea of solving the language problem that I love the fact that we have many people that are going with passion in the same direction. I think it's more exciting for me. And the market is getting bigger every single day. So it's just very nice.
[00:24:18] Speaker B: Yeah, and you spoke about this, you spoke about, you know, 1 million consumer users, and there is a growing number of automated translation users. Can you talk about the strategies you foresee to win these 1 million consumer users?
[00:24:35] Speaker A: So, not 1 million. Machine translation, let's say the current generation, the new one... you know, I don't even think we should call it machine translation. We're calling it translation AI, because there is more than a single translation model in there. It's an experience now: the critique model that explains the translation to you, the options, the instructions, the context. It's becoming something different, way more sophisticated. There are 3 billion people today using translation tools. It is by far the biggest application of artificial intelligence that we have. You know, ChatGPT is 200 million users, so we're talking about something 15 times bigger in terms of usage. And sometimes we forget that the very first link on the Google homepage was called Language Tools; there was nothing else other than the search button and the Google Translate link. It has always been a significant part of the Internet, and today I think "translate" is still keyword number five.
A few brands, Google, YouTube, et cetera, then there is weather, then there is translate. So, just after weather, it's the most searched keyword. It's very powerful, and it's a very big market we want to solve. The users are there: 3 billion. We need to understand what their use cases are, and we have to deliver solutions to their problems. We started with the enterprise world because they had a need and because, for us, it was the best way to start, since we're already selling them human localization services. But for SMBs, you know, there is an incredibly big market of users that today cannot afford a professional translation, for speed reasons, time, or budget. If we don't provide the tools, they're not even trying to interact with international customers. And the more they interact with international customers, the more they will need better localized, professionally translated websites. So machine translation actually goes very, very well together with our localization services, because the more we help people to be connected globally, the more we also need high-quality, professionally reviewed content. One is a communication tool, and the other is a way to be understood and to spread your products, services, and values. One requires humans; the other can be done with machines. So that's the strategy: we want to give it to everyone that wants to interact globally, and they will use the service. They can use it for free, or they can pay a small subscription fee. And we hope that these people will also create great international businesses that will help everyone.
[00:27:46] Speaker B: Thank you for sharing that. And of course there are some broader implications to the conversation. There's the whole conversation about human connection, and Translated has been very vocal about humans and how humans have a direct impact on how technologies are built. How do you see Lara affecting professional translator roles in the industry? Will their expertise remain central, or is the model moving towards full automation?
[00:28:20] Speaker A: It's a tough question.
The simple answer I have is that I foresee an incredibly interesting market for both professional translators and machine translation. And I think that the market for professional translation will actually expand a lot.
But it may look a little different from what we have been experiencing in the past. Why will the market grow? It will grow because translation is a market with extraordinarily big latent demand. Latent demand means that if you make the product, let's say, free or almost free, instant, and super high quality, people will actually translate more. This is not true for all markets. If you think about customer support, and you have a problem with your telco operator, the fact that the interaction with the telco operator is faster doesn't make me willing to call the telco operator more. It's a problem; I have the problem, I just want to solve it. I don't want more problems. But if I give you a tool to communicate globally, almost free, instant, perfect quality, you will not translate your website into just two or three languages. You would like to do it in 200 languages, every single language you can. The portion of the Internet that is translated today is minimal, and if we create that tool, the amount of content that will be translated is incredible. I don't see why, on social networks, your content is produced only in your language.
The content that you produce should be available to anyone, in every language. So I think we could have a factor of 10,000, 100,000, or maybe a million times more translations than we do today. This will create a much bigger market. But this is AI translation; we haven't even counted the human part yet. You know what happens when people communicate globally: they will need high-quality translation to be done, and that will be two, three, ten times bigger than what we do today. If we succeed in creating the universal translator, we will have a market for translation services that is ten times bigger. That's what I deeply believe. But in that market, we still have some problems to face if we want to make people happy.
And I can list the things in my brain that are waking me up at night. One is that we not only need to create job opportunities for translators, the market; we want to create a job that they like. And this is not obvious, because what's happening with AI is that we are automating everything.
And then, basically, translators, linguists, are becoming just problem fixers. They only intervene if there is a problem. And in your life, if all that gets thrown at you every single day is problem after problem, you don't have the upside. You don't have the sense of purpose, the satisfaction of creating and crafting an incredible product, because instead of translating a book, at one point you will just receive the 20 sentences where the machine had a problem. And that is not...
It's not giving you satisfaction. We have to understand that the future of work is a future where we put humans in the center, creating jobs that give satisfaction to people, not just efficiency. That's one point that is breaking my head, trying to understand how we design that future. I don't have the solution yet, but I'm working, Eddie, to find one. The second thing I'm worried about is leaving people behind. What happened at Translated in the past five to ten years is that some translators adopted every single piece of innovation, and some people said, you know, Marco, I love my job as it was in the past, I don't really want to get into this new modality of work. And you know what happened? If the average translator was making $40,000 a year working from home, roughly on average, some of them started making $100,000 or $120,000: the ones that adopted AI. And the ones that did not adopt went to zero, because they were too slow, and the quality was not great because they were not getting the support from the machine. So people that use AI are actually killing the business of people that don't use it. I think that's a big problem, because we're leaving people behind, because we're not spending enough time explaining the opportunity to people. That's why I like this interview: you allow me to explain why I think there is a great opportunity in the future. And the last point about translators is that text translation is just the tip of the iceberg.
I know text translation is changing very quickly, and I know that translators are under a lot of pressure to be faster, lower cost, higher quality, all the time, on text. But I don't think they realize that there is a market of audiovisual subtitles and dubbing where today they have only a 1% market share of the work.
Linguists are not involved in dubbing at all, really. There are some special kinds of translators doing that work, but they are not the professional translators that we in the translation industry work with every single day. So one thing that we presented yesterday was Matedub, a tool that allows linguists to dub content directly on their own, without the need for a studio, without the need to do voice actor casting to find the perfect voice for each original speaker, without the need for complex mixing, because the AI rebuilds the soundtrack, the music, the effects in the background. They can actually enter another market on their own. And this is where we want to help translators, trying to say: hey, there is a bigger opportunity, there is a market where you can make the difference, because you can deliver a dubbing solution in hours instead of a month, and you can do it in a way more efficient way, with perfect voice match, meaning the voices sound exactly like the original actors. Nobody else can do that, and the symbiosis between human and machine is working great in that element. We want to help translators capture opportunities that today are outside the translation space. So I see a lot of change, and I see a lot of opportunity for those that embrace the change. That's why we wrote "We Believe in Humans" on the side of the boat: believing in humans is about believing in people that are willing to accept the change, that take on challenges a little bigger than what they think is possible for them. We want to celebrate those people, and we want to help the people that are moving, because they give an example to all the others: hey, this was possible. And I think that if some have success in doing that, many others will follow.
And yes, I think that's the vision about translators. I hope it gives some hope to some people.
[00:36:33] Speaker B: I hope it does too.
Because there needs to be an evolution of the craft, and you see it in so many other industries. And there is this technical accuracy that machines are helping us cover in language. I can tell you from experience: I'm from a small city in Colombia called Sincelejo. If we were to do reverse translation, I doubt a machine could actually translate how people speak locally, because if you were to write it down, the level of nuance in the pronunciation would make it really difficult. Certain words would have to be spelled differently just to make sure there is this level of cultural accuracy. So I have a question on cultural sensitivity and how Lara incorporates cultural relevance into translations. I personally believe that the future of linguists and translators will have to be related to cultural relevance, and that moving into different locales will have to include cultural relevance in a much deeper and more sophisticated way. So how does Lara incorporate this?
[00:37:43] Speaker A: So Lara starts incorporating some of these nuances because of the way it's trained: by using monolingual data, almost 10 trillion words, it starts analyzing what culture is, what the world talks about. That is different from a standard language model. Llama 3, for example, an incredible tool, is trained 95% on English data and 5% on everyone else. So the model can learn intelligence and reasoning from the English language. The problem is that it's not learning the cultures. You can generalize reasoning, so when I'm using Llama in Italian, or in Spanish from Colombia, I can still get the reasoning, because it learned the reasoning from English, but it does not have the culture.
And so what we did with Lara, basically, was pretrain it on way more diversified data, to try to bring more cultural sensitivity into the original model, the baseline model. Now, the problem is that no language, no culture has produced as much culture, as much data, as the US. So this is not a race that is starting now, equal for everyone. Some countries have had the opportunity to produce way more data, and for that reason they're also influencing the culture behind it. And so, exactly like we do with gender bias, we have to think about how we can revert that by creating models that balance the cultures correctly and are able to translate while preserving those cultures. I think the technology is potentially there, but the amount of work to make it work, in terms of organizing the data, testing it, et cetera, is a big effort. Now, I hope we can include some of this in the next model too. But overall, for everyone, this will be a long-term effort and will probably need some years. But if we succeed, we can preserve not only languages, as you said; we can preserve cultures. And that's very good.
[00:40:22] Speaker B: Excellent. Thank you so much for sharing. And of course, we're coming to the end of our conversation, unfortunately. I hope we'll do the next one in Rome.
But for 2025, new models are expected from various tech giants. What do you think is going to set Lara apart in the coming wave of translation AI advancements?
[00:40:47] Speaker A: So let me say the obvious.
I don't think that any model from any company will get worse. Okay, so everyone will make a step ahead, and some steps ahead will be very visible and some will be less visible. In certain areas we're reaching very good results, so making an improvement is very hard. And in translation there is a lot, a lot of work that we can do. In the second half of 2025 we hope to release Lara Grande, the new model, which is 10 times bigger than the Lara we just shipped. And that should achieve language singularity, at least in the conversational task, the easiest task in translation, not generalized to everything, and, by the way, just for text translation. Then, in the next years, we have to solve audio and video, because remember, you cannot dub a movie if you don't see it. If the voice actor doesn't see the original movie, they cannot dub it at high quality. So this means attacking the top-tier, top 10 languages next year. And I think this is what you will see, at least from us.
Then we will have to move this also to audio and video, to capture the full expressiveness of human beings. So there is a lot of work. And then we will have to make this available for every single language. Okay, so I think we have another 10 years of work in front of us, but we'll make progress. And I hope to see general language models with more reasoning capabilities. I think they may become slower at reasoning, but they can provide better responses. So it may take maybe 10 seconds, 20 seconds to answer, but they can work on more complex tasks; you can delegate more to the model. Becoming more and more agentic, I think, is one of these ways of increasing reasoning capability: different models interact to try to solve a problem that is too complex for a single model. This is going to happen. So that's it on machine translation; as for language models, I think reasoning will be big next year.
[00:43:32] Speaker B: Great to hear. And of course we can infer some of the answers to my next question. You've talked about what excites you about AI in the language industry. And you've also mentioned some of the things that keep you up at night when it relates to translators and humans in our industry. Tell us about that. Tell us about what excites you about the future of AI in the language industry and what's keeping you up at night as well.
[00:43:56] Speaker A: Well, I think that what excites me is that we are in a transition point and so things are happening very quickly, which is very exciting.
And so, because you feel that if you do things right, you can have a positive impact on the future. And actually, we're shaping our own future, you know. And that's the exciting part. Okay. And the scary part, what keeps me up at night, is the same thing.
Things are happening too quickly. And I feel that if I don't wake up early every single morning to work hard at it, I'm missing out. So the fear of missing out is the bad part.
Making sure that we design a future for people, for humans, that humans will really enjoy, one that gives us a great sense of purpose in what we do. I think this is the other element that keeps me up at night. But the fact that it keeps me up means that, in a certain way, we're working on it. And I feel that it's not just us; many people around me feel the same, so I think it's a shared concern. Yes, some people are evil, but in general I see around us more people who want to solve these problems in the right way, for good. And so I'm quite confident that in the end everything will go well.
[00:45:30] Speaker B: And I think you're right. So what advice would you give us young innovators and entrepreneurs who are looking to make our mark in AI and language technology? What would be your advice to us?
[00:45:46] Speaker A: Well, first, let me say the obvious one: use it.
Because the future is for the doers, the people that do stuff. So the first thing is that we should not just talk about artificial intelligence; we should try to use it as users. Okay. And once you are a user, something magic happens. You start understanding its limits. And as soon as you start perceiving those limits, there will be an urge inside you to try to fix those problems. And once you feel that urge, well, I think the next thing is to go out and build that solution, for the benefit of all the others.
[00:46:31] Speaker B: All right, Marco, thank you so much for your time. Do you have any final thoughts for everyone listening?
[00:46:39] Speaker A: Enjoy the future.
[00:46:42] Speaker B: All right. This was Marco Trombetti, CEO and co-founder of Translated, talking about Lara, artificial intelligence, and the future of our industry and many others. My name is Eddie Arrieta, CEO of Multilingual Magazine. Thanks for listening.