Episode Transcript
[00:00:07] Speaker B: Hello, welcome to Localization Today, this time once again with the wonderful Marco Trombetti, CEO and co-founder of Translated. We're here to talk about AI agents, we're going to talk about language, and we're going to talk about the impact that this and other technologies have on our industry. My name is Eddie Arrieta, CEO of MultiLingual Media, publisher of MultiLingual magazine. Marco, welcome.
[00:00:37] Speaker A: Eddie, thank you so much for having me.
[00:00:43] Speaker B: And it's wonderful to have you back. We have been catching up a little bit before the recording, which has been great. It's always great. And hopefully we'll see one another at one of these events that our wonderful industry has.
Of course. Thank you, Camila, for making sure the production of this runs smoothly.
And Marco, let's get to it. For those that are watching, let me set up what we're talking about. We are talking about the latest release from Translated, which you can check on their LinkedIn profile. It's a very, very exciting announcement, and we're having many of these happening. The race is on, and we love it.
They are releasing support for 32 languages, and Marco is going to tell us all about this. It takes Lara from 60 million speakers worldwide to 2.8 billion native speakers. That's a huge jump. And of course we're going to talk about the Team Plan, the API, and the new AI agent from Translated. You can read about that in MultiLingual magazine as well. Without any further ado, Marco, tell us what this entire conversation is about, and welcome again.
[00:02:17] Speaker A: Thank you for the opportunity, Eddie. So first let me say we're very happy to have this new release of Lara. For those that do not know what Lara is: it's our next-generation machine translation technology. Basically, it combines large language models with translation models, so you get the fluency and instruction-following flexibility of a large language model together with the low hallucination rate of a translation model. And we kind of need both.
Before, we had to choose whether we wanted to be precise and accurate or whether we wanted to be fluent. Lara combines these two in a new large-language-model-based architecture that allows all of this. We shipped Lara for the first time on November 4th at a nice event in Rome, we had one update in February, and now we're talking about the release that we did about a week or two ago, in April. At a high level, what we did was release 32 languages, number one.
So we went from 10 supported languages to 32. For those that have worked with us on ModernMT, you know ModernMT supports 201 languages, so 32 may still not look like a big number compared to what we can do.
But the reality is that this is a totally new model, and we're training it to do very special stuff. These 32 languages do things that were impossible before. First, they all support adaptation.
So you have 32 languages with full adaptation capability. Adaptation means you have a few examples, you have a translation memory, you upload it, and the model is adapted; no need for training the model. If you find an error in production using machine translation, you add a sentence and you fix the model in real time, with no need for retraining. So for the top 32 languages we support adaptation, and we support context, meaning you can provide context to the model and it will be used to do the translation. Imagine managing a conversation between two users.
You can provide the name of one of the speakers, or the previous conversation, and the model can infer the gender, so it knows whether to use the proper adjective forms in the target language. That was a very common problem.
And the last thing that Lara does in these 32 languages is handle instructions.
To give you an example: if you wanted to do things like search engine optimization, before, you had to take a machine translation model, do the machine translation, then add editors, a copywriter, and a search engine optimization expert to say, please change these translations so that I'm using these precise keywords. Today with Lara, you do the machine translation and you provide an instruction to Lara: these are three terms I want to make sure are there. And Lara actually does it, without the multiple processing steps you would otherwise need combining multiple AIs; everything is centralized and integrated in the translation model. So you have the flexibility of a language model, but at the price of a translation model. And it returns results in 500 milliseconds, half a second, the speed that is needed for translation, not the 20 or 30 seconds of a language model. So that's the first thing, the 32 languages. And then we have something more.
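The single-call flow Marco describes, with adaptation examples, context, and instructions bundled into one translation request instead of chained post-processing steps, can be sketched as follows. This is an illustrative sketch only: the field names (`adapt_to`, `context`, `instructions`) are hypothetical, not Lara's actual API schema.

```python
# Illustrative sketch: field names are hypothetical, not the real Lara API.

def build_translation_request(text, source, target,
                              tm_examples=None, context=None,
                              instructions=None):
    """Assemble one request carrying adaptation examples, conversational
    context, and free-form instructions together, instead of chaining
    several post-editing steps with separate tools."""
    payload = {"q": text, "source": source, "target": target}
    if tm_examples:      # translation-memory pairs for real-time adaptation
        payload["adapt_to"] = tm_examples
    if context:          # e.g. speaker names or previous turns, for gender
        payload["context"] = context
    if instructions:     # e.g. SEO keywords that must appear in the output
        payload["instructions"] = instructions
    return payload

req = build_translation_request(
    "Our new shoes are in stock.", "en", "it",
    tm_examples=[("in stock", "disponibile")],
    context="Speaker: Maria (female)",
    instructions="Make sure the keyword 'scarpe' appears.",
)
```

The point of the design is that one round trip replaces the old pipeline of MT plus editor plus SEO expert.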
[00:06:26] Speaker B: And there's a bit more. Last time we were talking here about Lara; in this conversation today, we're also talking about the Team Plan, an API, and an AI agent. I confessed to Marco before we started recording that there is a lot of confusion in the industry about what an AI agent is.
And I myself sometimes struggle to understand it at a very high level. So I'll let Marco engage in the conversation with us, for those that are afraid of saying publicly on LinkedIn that they don't completely understand what this is at a high level, what it is, and how to build something like it.
[00:07:11] Speaker A: Okay, to connect the points: you mentioned three things, the team plan, the API, and agents. So let me take them in order, starting with the team plan. With this release, we're adding team functionality to Lara. It means that if you're an enterprise, you have one billing account and you can manage users, adding and removing them. But most importantly, you can have centralized terminology, style, and adaptation. The owner of the account creates it, the model can be adapted to that specific customer's needs, and you can make sure that your entire team of employees is using the proper terminology while translating. So it's not only about not going to a third-party website that is not safe; it's also about controlling and making sure every employee benefits from the adapted model. And if you think about localization teams: in the industry we have been doing a lot of human translation, and that's where we really do our work.
Many machine translation applications in large enterprises are still managed by another department, the tech department, and so the localization team loses control. Now, with Lara teams, imagine your localization department can give Lara to all the employees inside the company, so that you propagate the great work you're doing in your organization and make sure they have access to the latest technology. What it means is that every single professional translation you're doing in your localization department is instantly improving the model that your employees then use for internal communication, understanding, and everything they need to do autonomously with machine translation. Before, they were going outside to great technologies like Google Translate or DeepL. Now, with the team plan, you can give the same tool, the same access to the same powerful technology that you're using for professional translation, to any employee inside the company as a communication tool. That gives visibility to localization people and also improves the experience for everyone. That's the team plan. Okay, then the other thing you mentioned is the API.
And yes, we shipped a new release of the API.
The Lara API is a machine translation API, a super-high-performance API, and it's not designed just for localization professionals. A typical localization-focused API is high cost and slow, because the human workflow is slow, so performance was never the focus. The Lara API and the model are optimized both for processing large amounts of user-generated content, for speed and cost, and for easy integration into localization workflows.
So with this new release, the API is not only easy to integrate into a localization workflow, because it's designed for professionals, but it's also high performance for very large amounts of user-generated content. It has a very low, incredibly competitive cost and super-low latency, to the point that no large language model can compete. We're talking about 400 to 500 milliseconds of average latency, and, for the engineers following us, a P99 latency of about 700 milliseconds. So it's something you can use for real-time applications and also integrate into the localization process. And you will find adaptation, context, and instructions directly in the API.
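For listeners less familiar with the P99 figure Marco quotes: it is the latency below which 99% of requests complete. A minimal sketch of how you might verify such a number from your own measured samples (the latency values here are made up for illustration):

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: the ceil(p/100 * n)-th smallest sample."""
    ranked = sorted(samples_ms)
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]

# Hypothetical measured request latencies in milliseconds.
latencies = [420, 450, 480, 500, 510, 530, 560, 600, 650, 700]

avg = sum(latencies) / len(latencies)   # mean latency
p99 = percentile(latencies, 99)         # tail latency (P99)
```

The mean tells you the typical request; the P99 tells you what the slowest 1% of your users experience, which is the number that matters for real-time applications.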
And then to your point, Eddie: the big confusion that we have today, but also the biggest opportunity, is AI agents. Okay, so before I tell you what we did in Lara, I think you want to know a little bit about the framework of what's happening with agents, right?
[00:11:35] Speaker B: Absolutely.
[00:11:36] Speaker A: Okay, so you know, agent is basically a term to indicate an artificial intelligence that is able to interact with other intelligences.
So why is this becoming exciting now? The concept is obviously exciting, because everybody is building something, and then we can have all these AIs, all these agents, interacting to create things that are more complex than a single transaction. So why did it become cool now? Because for the last two years we've been talking about it, but there was no protocol. There was no way for AI agents to talk to each other, and nobody was agreeing on a language. So yes, I built my agent, but there is no marketplace for agents, there is no place where I can find other agents, and most importantly, there was no way for one AI agent to talk to another. And then something magic happened, very unexpected, and I think this is why we got the hype. Anthropic, the company behind Claude, was working on a protocol, MCP, something originally proprietary to Anthropic. They released this protocol as open source, some people started using it, it got traction, and everybody was excited. And by the way, MCP was not designed in a very broad way, not super well designed, I have to say. It was designed to do just one thing: I'm using Claude from Anthropic, Anthropic's equivalent of ChatGPT, and I can interact with something else that will give context to my prompt.
Okay, for example, take translation. I want to translate something; well, there is an agent that goes and takes some information from the website I'm translating and provides that as context to do a better translation.
So that was the idea, just to add context. They released the protocol, and people didn't really want to do context; they started building tools. So Cursor, an IDE, something for developers to write code, said: I want to use an MCP agent so that when I write code I can also push it directly to Amazon. So they created an agent that was managing the AWS servers.
So basically: I write code, boom, I run those agents that execute the code, and my application is now live and people can see it.
So it was a nice project, and people started using it in a different way than was expected. And then the super incredible thing happened: OpenAI, out of the blue, announced that they would support MCP. OpenAI, the competitor of Anthropic, said: I will support the Anthropic protocol.
And the same day, Satya Nadella from Microsoft said, oh, we at Microsoft are going to support MCP too.
And Google said, yeah, yeah, we're going to support MCP too. But then the week after, they released A2A, their own protocol, trying to get some traction there as well. But at that point it was already big enough: the three major players in AI had decided to have a single protocol for these AI agents to talk to each other. So now, let's say what is potentially possible. I can tell a machine: please go check the best destination out there where I can go on vacation for a long weekend, no more than four hours away. Book me a flight and book me a nice hotel in that destination. I don't even want to know.
Surprise me. I don't want to know what you're booking for me, but please go and do it. So potentially you have an LLM that does a query and can get some nice ideas about traveling, then interacts with an agent that books flights, and then there is another agent, from Expedia maybe, or Skyscanner, or Booking, that gets you a hotel, or Airbnb if you want an Airbnb. And so now, with a single command, you can get stuff done.
So from just managing content, organizing content, doing search, et cetera, we're getting into the execution of tasks in the digital world.
They cannot go out, they don't have robots, so they cannot really build stuff for us in the physical world yet, but we can have something. So it was super exciting. Now, the problem is this: that is all the idea. What's the reality?
There is no app store for agents yet. So where do I find these agents? There are some projects, et cetera, but no one has yet delivered great visibility for agents, and there is a very small, limited number of agents.
And by the way, the MCP protocol is not truly designed in a very good way, because it started with another application in mind. So people are trying to hack it and get things done, but it's still in a very early phase. So here's what we did.
We said, you know something, this is promising. It's not the big change yet, but we want to experiment, and we want to give our customers the opportunity to experiment with agents to see what can happen. So we created an MCP agent that basically exposes Lara as a translation capability. If you are in Claude, you activate the Lara agent. Very often in localization, someone comes with a spreadsheet with a bunch of strings and asks, hey, can you put the translation in every single column, in tier-one languages, the top 10? They don't even tell you which languages. What we normally have to do in translation is take these strings, send them to many translators, and then someone has to copy and paste everything to create the final spreadsheet. Instead, you go to Lara and say: Lara, please take this translation memory, adapt yourself to it, because this is what I want; then take this Google Sheet and translate it, adding our top tier-one languages as columns. And Lara will simply take the file, populate it, and do the work for you. All the copy and paste, all the project management work that you had to do, boom, happens automatically. And that's just the start. You could say, hey Lara, please do some quality assurance and tell me if you find big critical issues.
And then please go and do a human review of it.
And then at the end you say, you know something, convert that into JSON so the engineers can actually use it, and I would like it in this JSON format, which is the format my engineers use. And Lara will do it for you. So this Lara MCP agent is basically an automation of localization project management, at a truly early stage. But we wanted to experiment with that, just in case this explodes and we have agents everywhere.
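The mechanism behind the workflow Marco describes, a server exposing named "tools" that a host like Claude can discover and call, can be sketched with a toy dispatcher. This is a simplified illustration of the MCP idea, not the real MCP wire format; the `translate` tool here is a stand-in lookup table, not an actual Lara call.

```python
# Toy sketch of the MCP idea: a registry of named tools plus a host-side
# dispatcher. Shapes are simplified; not the real MCP protocol.

TOOLS = {}

def tool(name):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("translate")
def translate(text, target):
    # Stand-in for a real translation backend; a tiny lookup table.
    fake_mt = {("Hello", "it"): "Ciao"}
    return fake_mt.get((text, target), text)

def dispatch(request):
    """Host side: route a {'tool': ..., 'arguments': ...} request
    to the registered tool and wrap its result."""
    fn = TOOLS[request["tool"]]
    return {"result": fn(**request["arguments"])}

reply = dispatch({"tool": "translate",
                  "arguments": {"text": "Hello", "target": "it"}})
```

In a real MCP setup the registry lives in a separate server process and the host discovers the tool list over the protocol, but the call-by-name-with-structured-arguments pattern is the same.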
I think this release of Lara gives localization professionals and teams a couple of great advantages. First, you can get visibility in your company: Lara now has the team plan, so you can expand and give a global communication tool to all the employees working in your organization, and that gives visibility to localization teams.
And then you can also start to experiment with AI agents, to see not only how to do translation but also how to automate some localization project management.
[00:20:19] Speaker B: It's definitely an exciting time, and surprising to hear that there is a limited number of agents. That probably speaks to the difficulty of it, and the bottlenecks are probably still similar. So for you, what are the bottlenecks that you're seeing in this entire conversation?
Is it privacy concerns in the way these protocols work? Is it the sophistication of the protocols? Is it the quality of the agents and what they are able to execute, because an agent will only be as powerful as the tools it has access to? It seems like there are bottlenecks in many different directions. What do you see?
[00:21:00] Speaker A: Well, I see many little structural problems. But also, when I say there are not many agents: there are hundreds of millions of great websites with services, and there are only a few thousand great, high-quality agents. So when I say there are few, I'm not saying there are three; I mean that relative to the web there are almost none, even though they're growing every single day.
But many of them are very simple prototypes; they don't really bring the true functionality of the service behind them. So we need more agents for every service if we actually want to do the tasks of our digital life for real, because in reality we use many, many different things. So this is one limitation. The second big limitation that I see is that everyone, Anthropic, Google, everyone, is trying to bring capacity to their own platform. Agents should be symmetric: Anthropic talks to Lara and Lara talks to Anthropic. But the first release is just that you need to use Claude from Anthropic to invoke Lara. They haven't created their own MCP agent, so I cannot use Claude from inside Lara to do many other things that are needed. They haven't opened up. So even the company that created the protocol has not exposed its own services through an MCP agent. Clearly there is some strategy in trying to gain market.
Everyone wants everything on their own platform. We're not in the phase of the Internet where everything is free and everyone is collaborating like one big, great family; in this round of the Internet there is still a lot of optimization and strategy, and that stops people from building these agents. Many people I know don't want to support just Anthropic. They would like to say: look, if this becomes an open standard, then I'll probably build an agent that everybody can use.
[00:23:14] Speaker B: It does feel like some of that conservatism is needed, just because of the speed.
So you really have to make sure that there is financial focus, otherwise there'll be a huge dilution in many of these markets. But as we've seen, between our last conversation and this one, we couldn't really have predicted what was going to happen. We've been happily surprised.
Marco, we should have another conversation in the future.
I'm sure we will, probably a longer one if we can.
But for today, is there anything you want to share, say with our listeners, with your partners and those in the industry that are listening?
[00:24:05] Speaker A: Well, first I would like to say thank you for inviting me, number one. I'm a big supporter of MultiLingual, and I love how you're helping the entire community get together and also stay updated on the latest trends.
And to the people that followed us today, I would like to say: look, a new release of Lara is coming very soon, and we would love to hear feedback from users. So feel free to contact us. We want to engage with as many people as possible in the localization industry, because I think Lara is expanding outside the localization industry as well, and I want to make sure that the voice of the people from the localization industry is well heard, so that we propagate our professional, nice way of working and conquer the external market together.
[00:25:01] Speaker B: And with that we're coming to an end of our conversation. Marco, thank you so much for talking to us today.
[00:25:07] Speaker A: Thanks to you, Eddie.
[00:25:09] Speaker B: This has been our conversation with Marco Trombetti, CEO at Translated, and myself, Eddie Arrieta, CEO at Multilingual Media.
A very, very insightful conversation. We know more about AI agents, we know more about protocols, and we know more about the politics and business, let's say, of the entire development of technology in our industry and the world. We are very lucky to be part of this technological revolution. I think the industry has a front-row seat, and I think we have to make use of that. Marco has been an amazing guest today, and we hope to have him back very soon. Later, Marco. Ciao. Bye-bye.