With that, yes, let's get started. I did want to mention as well: if anyone has any questions while Oscar is presenting, don't hesitate to ask them in the chat. There's a Q&A function you can use. I'll be actively monitoring it, making sure everyone is able to see it, answering questions as they come in, and I may ask some of them directly to Oscar and interrupt him. There are a few sections in the agenda, so we'll make sure those questions get addressed at the right time. And of course there's going to be a Q&A section at the end for any remaining questions that might not have been addressed directly. Oscar, if you can click next, I'll talk about the... Yeah. So, we have the fun disclaimer. We're a public company; we don't have time to read it all now, but it's the typical "don't make financial decisions based on what you see here."

In terms of the agenda: we're going to go through CRGA in action. Oscar is going to show us what it looks like in a live environment, give us an idea of how it works, a bit of the architecture, how it can be used, and how to set it up. After that, we'll talk about how much value it adds: we have customers that are live with this already, so what are the metrics we're able to pull out of it, and what business value does it actually bring them? We'll also cover results from our beta program, so what came out of the customers who were using it before December. Then we'll see a bit of the roadmap; I know a lot of people are excited about the roadmap, so that's something to look forward to. And finally, before the last Q&A, we have "how much does it cost?", because I know that's top of mind for a lot of people: what does the license look like from the Coveo side? And then the last Q&A session, where we can answer any remaining questions you may have. So on that note, I'll pass the mic to Oscar. I'll be in the chat if you need me.

Thank you, Alex. And yeah, feel free to interrupt me after each section so we can address questions if there are any. So, welcome everybody. We had some enablement sessions before, so you might have seen our product in action, but we also want to be mindful of people who are just getting some awareness of CRGA and of generative answering capabilities in general. So let's jump in and see what it looks like in action. We have this feature enabled on our docs as well as our partner website, and the way you'd see it is through queries: as you do searches to get information, you see the documents and the answers being displayed in front of you. Here, for a "how to" or "what is" question about one of our machine learning models, we're pulling from three different documents to generate the answer and give you the right context without having to search through all those documents. That's the basics of what it does. It looks really simple.
There's a lot more complexity under the hood. We can also ask questions about differences, how-tos, or troubleshooting. Everything that exists within our documentation will be scanned and pulled in as potential material to answer a question. So that's the very basics. Now, there are a couple of things on the component here that I'll point out. You've seen the citations; those show which snippets or chunks of the documents we used to create the answer, so you can verify the links and the accuracy of what's being generated. You have the ability to reformulate: you could ask for a bullet list, because that's more your style of reading, or you might want a concise view, like a very short summary, so it's really easy to get the answer in the format you want. We have feedback options to say an answer wasn't great, and you can add some detail; all of that ends up in a report in the admin console. You can copy the answer and paste it, maybe into an email. And if the generated answer bothers you, say when you really know what you're looking for, you can hide it and turn it off for the following queries, so you open it only when you need it. So that's the component we've built.

Now, going back to the deck, I want to explain why we built this product in the context of ChatGPT, LLMs, and GenAI, and where Coveo stands, from a high-level perspective but also from a technical perspective, so you have the right tools to position the product with your own customers or colleagues. At the very core, we really believe we're shifting from a search-only experience to an AI experience where everything is going to blur together: whether it's a search box, generative answering, or a chatbot, all of this is going to blend into that one input box, or intent box, provided to the customer. That's our deep belief. And we think that regardless of your question, that system is your point of entry, and it should be able to answer you, whether with links, personalized documents, answers, or even follow-up or clarification questions. This is the entry door to knowledge, and we have to own it. What's interesting in the Coveo position is that we already unify the knowledge, and we can distribute it in various forms, shapes, and flavors. So we really think generative adds another layer of distribution and engagement with content, all of it powered by AI.

Now, if we look a little bit under the hood, this is how Coveo works at a high level. We unify the content at the bottom: we use secured connectors and bring everything into the index. We already apply some AI, whether it's personalization, behavioral AI, or ranking, and we provide you results for a query. That's what Coveo has been doing for a while: being an expert in search. What we've seen pop up on the market last year, with LLMs and vector search capabilities, is people creating POCs: taking a knowledge source, embedding it into a vector database, and using an LLM on top of that system to create answers.
There are several problems we see when people do this, and some of our customers have tried it. You get two different search boxes: one with your unified knowledge, and that new shiny box powered by the LLM. You also duplicate a lot of infrastructure and content, because you have to maintain two systems, and the security and manageability of that second system is a lot lower than what Coveo, or your unified search, can provide. But more importantly, you get two different sets of facts, because the two systems aren't using the same source of truth. It creates frustration for the user, it creates inaccuracy for the users, and we think that's a big no-no.

At a high level, what we've built adds the capability of doing retrieval and search and passing the relevant context on to the LLM. We still rely on the index and our secured connectors, and we still rely on the AI that exists in Coveo, the re-ranking and the personalization, to provide the LLM the right context to generate an answer. We don't use the free-standing reasoning, the "intelligence," of the LLM, because we don't think it's grounded in anything; we want the search to ground that reasoning, and we want to pass the right context to the LLM to produce answers. Combining the two into one system, a search system that also leverages LLM capabilities, instead of running two systems, means you keep all the content and its freshness in the same place, you keep all the security layers that matter at the enterprise level, you keep the administration tools and the analytics, and you know it's optimized for cost and scale, because we support very, very large indexes at high scale. And the results you get are relevant and accurate, because they're grounded in the right elements.

To place this approach in the landscape of LLM engineering: we optimize for context. That's the idea behind using search, and you might have heard about retrieval augmented generation, which is one way to use the LLM. It's complementary to the other way of doing it, fine-tuning, where you optimize the LLM itself. For now, we're investing and betting a lot on this approach: basic prompt engineering plus the contextualization of chunks, in a retrieval augmented generation approach. We really optimize for the right context, for the right user, at the right time, so the LLM can generate an answer. We will explore fine-tuning this year, but we think there's a lot to harvest from just getting the retrieval approach right.

So what did we build? An out-of-the-box feature that works in English, supports HTML documents and PDFs, and, as I've shown you, has citations and reformulation. It scales up to about one million documents, so you can load it with a lot of knowledge, and that's key for enterprises. We have guarantees on SLOs, and it's available in HIPAA environments as well. There's a ton of tooling for integrating it into existing search interfaces, whether it's the simple Atomic or Quantic components or the builder integrations, which I'll show later: no-code solutions so that within a matter of clicks you get the model ready and integrated into your pages or websites. And it has all the admin and configuration options, so you can debug it.
You can see the analytics and check what's going on. Alex, any questions before I... I don't think we have any questions right now on this specific topic, but I'm sure we'll get some by the end of the presentation. Again, for the people who have joined since we started: if you have any questions, don't hesitate to ask them in the Q&A. I'll make sure to interrupt Oscar for them. Please.

Alright. So, going a little bit more technical: under the hood, what happens when we have this feature ready? There are three key moments. One: you have to build the model, so we take the content and create embeddings with the LLM. Then at query time, when a query comes in, there's the retrieval phase and the generative phase. And within retrieval there are a few elements: we embed the query and do two stages of retrieval. I'll go into the details, so buckle up.

The first thing we do is build the model. You go into your index and scope documents that you know are factually relevant and will offer solutions to common questions. From that scope of documents, Coveo will parse and chunk those text documents into vectors, vectors with many dimensions, and store them in a vector space. So we're building a vector representation of the content in a latent space. That is the knowledge the model knows and has available to answer questions.

Once that's done and a query comes in, it goes through the current Coveo pipeline. We do the query understanding and apply the various machine learning models you have in the pipeline, whether it's re-ranking or even manual rules and boosting; all the rules that exist in that pipeline are applied. We also look at what the query means semantically and how it relates to the documents we know, so we can do a hybrid search: at this stage there's a lexical search, and a semantic encoder does a semantic search on top of it, and all of this is passed on as information for the index to do the final re-ranking. With that, you get a list of results on your search interface, but we're not quite there yet: you still don't have an answer. So what happens, in parallel with the index returning those ranked results, is that it passes the best documents to the Relevance Generative Answering model, which looks at all those documents a second time, compares them to the query that was provided, and finds and picks the right chunks of text within those relevant documents. That's why we call it a second-stage retrieval: we match not only the best documents but the best chunks against the query. We take those chunks, that context, and feed them into the prompt that's provided to the OpenAI large language model. That's the step in between here: we pass on the best context, send it to OpenAI, and OpenAI generates the answer that appears, streamed to the search interface.
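To make that two-stage retrieval flow concrete, here is a minimal TypeScript sketch. This is not Coveo's implementation: `embed`, `lexicalSearch`, and `callLlm` are hypothetical stand-ins for the semantic encoder, the index pipeline, and the hosted LLM, and the chunk selection is simplified to plain cosine similarity.

```typescript
// Minimal sketch of the two-stage retrieval flow described above.
// NOT Coveo's implementation: these three are hypothetical stand-ins.
declare function embed(text: string): Promise<number[]>;
declare function lexicalSearch(query: string): Promise<string[]>; // ranked doc IDs
declare function callLlm(prompt: string): Promise<string>;

type Chunk = { docId: string; text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Build time already parsed documents into `chunks` with stored vectors.
async function generateAnswer(query: string, chunks: Chunk[]): Promise<string> {
  // Stage 1: the regular pipeline ranks whole documents (lexical + rules).
  const topDocIds = new Set(await lexicalSearch(query));
  const queryVector = await embed(query);

  // Stage 2: within the best documents, pick the chunks closest to the query.
  const bestChunks = chunks
    .filter((c) => topDocIds.has(c.docId))
    .map((c) => ({ chunk: c, score: cosine(queryVector, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 5);

  // Ground the LLM: the prompt carries only retrieved context, not open reasoning.
  const context = bestChunks.map((b) => b.chunk.text).join("\n---\n");
  return callLlm(`Answer using ONLY this context:\n${context}\n\nQuestion: ${query}`);
}
```

The design point this illustrates is the grounding Oscar describes: the LLM only ever sees retrieved chunks, so the quality of the answer is bounded by the quality of retrieval, not by the model's own recall.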
With that, we've got different layers of security at various places, whether it's at the ingestion of the documents or when we call OpenAI: we have HTTPS/TLS endpoints to make sure anything that leaves Coveo is highly secure. Azure OpenAI has a zero data retention policy, so none of the data is stored at OpenAI; it's just in transit, and we get the answer back. That's touching on security at a high level, but we could go a lot deeper: secured content retrieval, grounding the context, the ability to review the answers for visibility, and that zero retention policy, all to mitigate risks and apply best-in-class practices for governance. So again, Alex, before I move on to the next page, checking in with you.

Yes, there are actually a few questions that came in. Andrew asked: your AI feature is currently English-focused, so when are you expanding to more robust multilingual support? I believe you'll answer this a little later in the roadmap section. Yep, we're working on multilingual right now, so I expect to see the first languages coming in the next calendar quarter, and throughout the year we'll add more languages. It's very much a priority for us, so I'll answer it right now. Perfect.

We also have another question: should we expect a longer load time and an asynchronous load for RGA answers compared to standard search results, and what does that look like? I think we can show a live example here. Yeah, definitely. The answer is streamed. We built two separate flows so that the results appear really fast; we don't want an empty page with nothing in there, so the results load as fast as before. That was really important for us. Then the answer appears from the stream we get back from OpenAI. You can see it streaming here; I'm in concise mode, so this one is pretty short. And sometimes, when we're unsure, it doesn't answer, and that's totally normal: if we don't have the confidence, we'd rather not answer anything than provide an answer that's inaccurate. As far as streaming goes, it can take one or two seconds for the full answer to load. We limit answers to about five hundred words, so at that length it can take a few more seconds. That's about all I have to say on that question. The one thing to highlight is that RGA is not preventing the results from loading: the results are still shown at the normal speed you'd expect from Coveo, and RGA is loaded asynchronously on top. I hope this answers your question.

And then we have a question from Vikas: do the ranked results only use the standard keyword-based search, or do we use something else? Yeah, that's the stage where the ranked results take the semantic similarity of the documents into account as well; it's applied as a boost. I'll show you later where you can see the impact of the semantic scoring on the ranked results; we have this visible in the console. So yes, semantic is part of the equation to get the ranked results. Perfect, thank you very much, Oscar.
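Following up on that semantic-boost answer, here is a minimal sketch of what blending a semantic similarity score into a lexical ranking as a weighted boost can look like. It's purely illustrative; the actual weighting inside Coveo's ranking isn't detailed in this session, and `semanticWeight` stands in for the configurable weight Oscar mentions later.

```typescript
// Hybrid ranking sketch: semantic similarity applied as a boost on top of
// lexical scoring, with a configurable weight. Illustrative only.
type RankedResult = { docId: string; lexicalScore: number; semanticSimilarity: number };

function hybridRank(results: RankedResult[], semanticWeight = 0.3): RankedResult[] {
  return [...results].sort((a, b) => {
    const scoreA = a.lexicalScore + semanticWeight * a.semanticSimilarity;
    const scoreB = b.lexicalScore + semanticWeight * b.semanticSimilarity;
    return scoreB - scoreA; // descending: best hybrid score first
  });
}

// Raising semanticWeight lets semantically similar documents climb the list;
// lowering it keeps the ranking closer to pure keyword relevance.
```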
And I think that's it for the questions we have for now, but more can come in as we go deeper. Cool. So, where can it be used? We wanted to make it available everywhere, and it's currently available for all Coveo products, whether it's service, commerce, website, or workplace. We built it with a focus on service and support organizations. It works within an in-product experience to help users avoid issues, in any community, knowledge base, or support portal to increase self-service success and case deflection, but it also works for agents answering cases, whether web cases or phone cases, so it helps with agent resolution and saves them time. A couple of examples: you've seen our docs, which is like a support portal, and we also have it in our community and in an IPX, so we use it within our own product. When you look at the top of the screen and go for the help button, if you have a question, we can answer it right there, without leaving the product interface. We also have it in the agent insight panel, for agents answering cases, as you see on the right. And we have the Atomic and Quantic components available if you want to play with it and integrate it in other places as well.

But beyond where it can be used and plugged in, what's really interesting is that we have tools that help you get set up really quickly. We've seen that you can go from content selection to model building to deployment within ninety minutes, because creating a model is automated through a very simple UI flow: you select the content that exists in Coveo and scope it for the model, then you see stats about the content you've selected. The average time we've seen for a model build is about sixty minutes, so it's really fast, and we've optimized for it. Then once you have the model, you can use our builders, whether for hosted search pages or for IPX; we have no-code builders for interfaces, so you can just check a box and the generative component is added automatically to your page. From there you save and deploy, you control everything from within Coveo, and your page is ready within ninety minutes. It's really a few clicks to get it going. The same checkbox exists for IPX, so almost any interface we have through the Coveo builders has the checkbox to enable generative answering if you have a model.

We also have supporting tools and features, like built-in reports: once you're in production, you just say which pipeline has the model and which search interface, and you get all the key data points in there. For Vikas: we've added the semantic scoring into the relevance inspector, so for any query you can inspect and understand how much semantic boosted that query. And you have the ability to configure how much you want it: you can set the weight of semantic to be higher or lower in that model as well. Any questions? Yes, actually, we have two questions. The first one is about the order of going live.
Let's say you have a brand-new project: would you recommend going live with RGA after you've already gone through a phase one with search, or going live with both at the same time? I would say it depends on what you see when you look at the documents and the search at that client. I won't hide that the more hygiene and the better knowledge management in general a customer has, the higher the RGA impact will be. If you're looking at a customer whose documents are all over the place, with no clear structure, a lot of depth in the knowledge base, and no best practices such as KCS, you probably want to do search first, making sure search works well before going into CRGA, because CRGA taps into the top results, into what's retrieved. If your search isn't good quality, or at least decent, your answers will not look good. So I'd say double-check; but because it's so easy to set up a model, nothing prevents you from spinning up a test page with the component, like I showed you, and seeing what it looks like for yourself. It really depends on the content and the search hygiene of that customer. If I may add to that from an architect's perspective: it also depends on what the customer is expecting. If they bought Coveo for the sake of RGA, they might want it in the first go-live. But if they bought Coveo and then bought RGA on top, I think it's fair to do a phase one with your typical Coveo search, let the machine learning models learn a little, then go live with RGA as a phase two project. Again, this is something to discuss on a project-per-project basis, and I'd be happy to help if you need help with that specifically.

And the last question we had: you showed us what the UI looks like; how much flexibility do we have to customize what we see in the UI? Let me bring this back and make sure we have a query where we had an answer. You can style this component pretty easily, and we've added some additional fields and flexibility. What you see in the citations, even the citation titles, comes from indexed data and metadata, and we know not all customers have the same field names, so it's configurable which information is displayed as title, hyperlink, et cetera. You also have the option to decide which reformulation or rephrase option goes first. You can change the copy of those if they don't apply to you, say if you'd rather have another type of feedback. You'll be able to customize the titles, anything in here, all the text like the placeholder text, and you'll also be able to control, though this is more of a back-end thing, the number of documents the model considers. That's probably getting a little too technical now, but as far as the UI goes, you've got plenty of options and a lot of defaults in the component itself.
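As a rough illustration of what that no-code checkbox effectively does, here is a small sketch of dropping the generative component into an Atomic search page. The tag names follow Coveo Atomic conventions, but the attribute shown is a hypothetical placeholder for the citation-field mapping Oscar describes; check the component reference in the Coveo docs for the real option names.

```typescript
// Sketch: what the no-code checkbox effectively adds to an Atomic search page.
// Tag names follow Coveo Atomic conventions; the attribute is HYPOTHETICAL.
const searchInterface = document.querySelector("atomic-search-interface");

const generatedAnswer = document.createElement("atomic-generated-answer");
// Hypothetical knob: which metadata field feeds the citation titles/links.
generatedAnswer.setAttribute("citation-title-field", "title");

searchInterface?.prepend(generatedAnswer);
```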
Right, and we've documented all the options of the components in our docs, so that could be something to look into. Thank you very much, Oscar. And I think that's it for the questions we have right now.

Alright. So that's great, we built the feature, but how does it work out? How do we position this to customers? Because they want to see the "why"; they want to make sure they get something out of it. I'll start by explaining what we've seen in the beta results and walk my way toward the money, the business cases. Something really important to consider with this feature is the accuracy, or rather the inaccuracy rate: when we output an answer, is it accurate? Because that's really what matters to our customers. They would rather have no answer than a wrong answer; wrong answers create cases, create friction with their customers, and that has costs, whether it's reputation or support cost. So that's really bad, and the idea is to minimize the inaccuracy rate. Through our beta program last fall, we had over twenty customers testing, and the results vary, because it's really hard to compare implementations: they don't use the same content, they don't have the same queries, the same pipelines or hygiene setup, and some customers are more complex than others. So the numbers I'm giving you here are an average of things that are not equal by nature; take them with a grain of salt. But we've seen a low inaccuracy rate, at times as low as three or four percent, which is really good. That would be the primary metric to consider, because that's what it means to deflect a case or solve someone's problem with an answer.

Now, once you have that accuracy rate in mind, you want to understand: okay, I'm able to give a very good answer most of the time; how does it impact my bottom line? That's when you look at the answer rate. Right now, our answer rate in production, across all types of queries, from one-keyword queries to very long questions, is about forty percent. That means we don't answer on all queries, which is totally fine because we don't want to mislead people, but that forty percent is the multiplicator of your accuracy rate and your ability to give good answers. One of our objectives is to increase that answer rate, so we multiply the benefits of having very good accuracy. People usually tend to ask how often we answer, because "ChatGPT answers me all the time." Well, yes, ChatGPT answers every single question, but with a lot of inaccuracy; there are plenty of studies out there putting ChatGPT's inaccuracy rate around fifty to seventy percent, sometimes higher depending on the type of question. So it looks good, but there's a lot of misinformation, of wrong information, in the responses, and we don't want to be like that. We would rather not answer; when we answer, we know it's right, and then we work on the multiplicator effect of the answer rate and the volume. So that's a bit of what we found.
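That "multiplicator" framing lends itself to quick back-of-the-envelope math. Here is a sketch using the figures quoted in this session (roughly forty percent answer rate, three to four percent inaccuracy, and the roughly twenty percent drop in case openings among users who saw an answer, covered in the Xero example below); the traffic volume is a hypothetical placeholder.

```typescript
// Back-of-the-envelope math for the answer-rate "multiplicator" above.
// Rates are the session's quoted figures; traffic is hypothetical.
const answerRate = 0.4;             // share of queries that get a generated answer
const accuracy = 1 - 0.04;          // 4% inaccuracy -> 96% accurate
const deflectionWhenAnswered = 0.2; // observed among users who saw an answer

const monthlySearches = 100_000;    // hypothetical volume
const goodAnswers = monthlySearches * answerRate * accuracy;                  // 38,400
const casesAvoided = monthlySearches * answerRate * deflectionWhenAnswered;   // 8,000

// Raising the answer rate to 60% at the same accuracy scales savings linearly:
const casesAvoidedAt60 = monthlySearches * 0.6 * deflectionWhenAnswered;      // 12,000
console.log({ goodAnswers, casesAvoided, casesAvoidedAt60 });
```

This is why increasing answer coverage while holding accuracy is the lever called out on the roadmap: the deflection benefit scales roughly linearly with the answer rate.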
Before I go on, am I seeing anything in the chat, Alex? Yes, there is a question about how to troubleshoot RGA results: there's the data inspector we can look at, but apart from that, can we troubleshoot which documents have been chunked and what chunks came from them? How can we really figure this out? We have a recipe for that where we look at the chunks: the scoring, which ones were sent to OpenAI, a visualization of those chunks, even looking at what's inside each document, just to see what the chunking looks like and which elements of the document were considered. It's all internal right now, and we're working to expose it to customers, because after going live and seeing the results, that's the next question they ask. Our team is working on this as we speak: we're building the APIs so they can be used, and then we'll build the UI on top. Troubleshooting and continuous improvement once you're set up are very top of mind for most of our customers, so it's in progress. To piggyback on that question: in the meantime, if you're a Coveo partner and have any questions while troubleshooting with GenAI, please reach out to me specifically; I'll be your point of contact at Coveo to help you troubleshoot. If you're a Coveo customer, reach out to your customer success manager; in fact, if you're live with GenAI or playing with it, you're likely already in touch with someone heavily involved in the project. So reach out to us and we'll make sure we can help you while these tools are being developed externally.

Alright. Now, the business strategy framework we're using at Coveo, and were using even before CRGA, looks a little like this. There's one slide element that didn't render, but the first two boxes are self-service, and the last one is the agent use case, or assisted service. Reading from left to right gives you the level of granularity. These are the events we're tracking: case submissions, case deflection, number of searches, average handle time. What do they mean for the value drivers? They translate into self-service success, customer satisfaction score, agent proficiency, all the way to the business outcomes, the business metrics that leaders care about. So we can draw a parallel: when we see fewer cases being submitted, fewer clicks on that submit button, which also means improved case deflection, it means decreasing the cost for that organization to support their customers. And the same with agents: the faster they solve cases because they see an answer, the more savings and the less cost for their organization. We have a business value team that can drill down into that framework, from the KPIs we track on our component to the business outcomes. The ones in blue or red for self-service are the ones we've looked at with a customer like Xero, and because they've been live for almost six months now, I'll do a quick demo. They're an accounting company out of New Zealand, but they operate across the world.
They have about three and a half to four million subscribers, so it's pretty large, and they have a support site powered by Coveo. Query suggestions give me recommendations of what to click on, and you can see their generative answering at play here. They've been live with this for about six months, but within the first few weeks of their implementation they already saw an increase in deflected cases: when they ran the A/B test on the pipeline that had generative answering, people who saw those answers were about twenty percent less likely to open a case. They also noticed that the time spent on the search page and the number of searches drastically went down, meaning the overall troubleshooting experience was improved: people were spending less time and doing fewer searches when they were presented with an answer. So really good metrics. We're seeing the same type of case deflection with other customers I can't name just yet; our marketing team will share when all the legalities are done, but we see the same numbers with a lot of other customers, which makes it really exciting for us to say to the market that this has a lot of value and that the ROI for the feature is high. We've got a full case study on this. One thing to mention is that some of the results they're getting also come from following a lot of best practices, in knowledge and in search, so it was really easy for them to add CRGA and reap the rewards. That's something to consider: they were in a really great position to benefit from CRGA. Question, Alex, or keep going?

Yes, there's a question, and it's more of a forward-looking, roadmap question. Do you have any plans or recommendations for leveraging generative answering in a virtual agent context via the API: something like a chatbot where you could ask a question and get an answer back from Coveo, through the API instead of an Atomic component? Is that in the plans, is it possible, and do we recommend it? It would be possible to get the answer from the stream by calling the search API plus the stream today; we don't have plans to make another API, something very specific and really simple to use for that exact use case, just yet. Although we are hearing some of those questions, especially as we get into conversational: how much do I need the chatbot if all my documents, all my knowledge, and the ability to ask and answer a question stay within Coveo? Those are some of the questions we're weighing within Coveo product management. So you could still do it. It's probably not as easy as you'd like, because you have to manage the search API response, et cetera. I understand it could be simpler; you can do it today, and it's possible we'll invest more to make it even simpler. I hope this answers your question.
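To give a feel for that "search API plus the stream" route, here is a rough sketch of a bot backend. The URL, payload shape, and the stream-discovery helper are all hypothetical placeholders, not the actual Coveo contract; consult the Search API documentation for the real one.

```typescript
// Rough sketch of a bot backend reusing search plus a streamed answer.
// Endpoint, payload, and helper below are HYPOTHETICAL placeholders.
async function askBot(question: string, token: string): Promise<string> {
  const searchResponse = await fetch("https://search.example.com/rest/search/v2", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ q: question }), // ranked results come back here
  });
  const results = await searchResponse.json();

  // The generated answer arrives separately as a stream, which is why the
  // results render immediately and the answer fills in over a second or two.
  const stream = await fetch(answerStreamUrlFrom(results)); // hypothetical helper
  const reader = stream.body!.getReader();
  const decoder = new TextDecoder();
  let answer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    answer += decoder.decode(value, { stream: true }); // append tokens as they come
  }
  return answer;
}

// Hypothetical: extracts the answer-stream endpoint from the search response.
declare function answerStreamUrlFrom(searchResults: unknown): string;
```

This also shows why Alex's week-plus estimate below is realistic: the integrator owns the search call, the stream handling, and, not shown here, the analytics events.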
Maybe to help on that: Oscar talked earlier about a ninety-minute go-live kind of project. If you were trying to do this through the API, it can turn into more than a week relatively easily, just because you're having to call the API, make the analytics calls, and handle the streamed answer yourself. Definitely possible, but not currently a drag-and-drop kind of project; something we can look at for sure. Were there other questions? Nope, that was the only one.

Cool. So now, I stole a couple of slides from our business value team, just to see how this translates for other organizations. We don't need to look at all the numbers, but imagine we have an existing or a new customer: the way we'd look at it is, we need to understand your support metrics, whether agent metrics or traffic on your community, and work from those. They picked an enterprise-size client, a high-tech support organization, very much the average of what we see in the market. It's a fictional example, but it looks like some of the customers we see, and like the market average. I only kept the conservative estimates, because they have very positive scenarios as well. On the conservative side, if your Coveo CRGA investment was half a million dollars across three years, you'd get almost a four-x return on value, based on numbers like the twenty percent case deflection we mentioned, and even more conservative than that, they used ten percent, with conservative numbers for the agents as well. That's a lot of savings over three years, applying the recipe I was showing earlier. They did the same for a new customer that doesn't even have Coveo, that doesn't have unified search and AI search in general, and there, for the same investment, the rewards are even higher. Our business value team can share those frameworks or help you with the mechanics and estimations, but we do see a lot of cost savings, even with a conservative approach. I didn't want to bore you to death with those numbers.

We've got another fifteen minutes, so I wanted to touch on how you position this, how you position Coveo, what the story is, the elevator pitch for exploring GenAI with Coveo. First, we come from an AI search background that's very much at the center of the retrieval augmented generation trend. You can do retrieval augmented generation, but if your search is not good, your answers are not going to be good, and it's going to be more detrimental than beneficial for your customers. We've been doing search and retrieval for ten-plus years, unifying our clients' content, and that gives us a very strong advantage to get into the generative answering field. We also have two very focused use cases, one in support and one in commerce, so the solutions we design are catered to those two lines of business, but we also have a platform supporting all of this.
So you'll be able to tap into your vectors and your content agnostically from any line of business. And we have all the legal, privacy, security, and compliance pieces, which are really important and held to a really high bar for us. We've seen a lot of deals where this mattered a lot, and competitors were disqualified or not selected just for the lack of security and not having the same level of compliance. That's something we keep working on, but we're positioning ourselves as the GenAI champion of the future, because not only do we have the retrieval, we also have the freshness of the content, the personalization algorithms, and the ability to scale and configure all of this really easily, on top of providing cost efficiency at scale. For customers building and maintaining these types of systems, the cost might not look so high now, but as you have to maintain, update, and increase the scale of those systems, that's where Coveo becomes a really interesting play, because we manage all that complexity and make sure it stays within budget. And lastly, we're always innovating: we run beta programs and POCs on the new stuff all the time, we work closely with customers on innovation, and we have a lot of cross-functional teams that support the customer through implementation and optimization after that. So that's a bit of the wrap-up. Any questions, or do I keep going to the roadmap? Go to the roadmap; I think a lot of people are excited about it.

Cool. We could probably do a deeper session on the roadmap, but here are the key themes for this year. We want to improve the experience, and you can see that both lines of business have different interests, so there will be features for both. There are rich answers, where we have links and images and code blocks in the answer, so it's more than just a wall of text. There's conversational search, which I'll show in a second, and multilingual, which we touched on. That's the experience; that's what's going to be the most tangible and the most visible in the changes we release. As far as relevance goes, relevance is key for Coveo: there will be more work invested in the semantic encoders, making sure they're highly relevant and highly scalable. We're also going to improve the answer coverage, that forty percent answer rate, the multiplicator of the accuracy; we want it to go to fifty, sixty, seventy percent, because we know that Xero, for instance, has already seen twenty percent case reduction with only four answers presented out of ten searches. If we gave them six or seven answers out of ten, their case submissions would probably go even lower. That's why we're very incentivized to do this. Commerce is also investing in generative answers based on the catalog: through the catalog, the product descriptions, and the product categories, they want the ability to answer customers with products as well.
And, since we're now combining a lot of signals in a pipeline, whether lexical, semantic, behavioral, or even just manual rules, we're going to improve the ability to have an AI system that knows how to rank your pipeline for the best outcomes. That's also very important for both lines of business. Then there's scalability and configurability, which is more the platform and the mechanics: content refresh, incremental content refresh, scaling vector search further, and the ability to support external LLM use cases so you have access to chunks; maybe that also covers the chatbot API we were discussing. And a more unified question-answering experience from the admin perspective.

With that, I've got a couple of examples I'll also show a bit live. Rich answers mean it's just super clean when you look at it. Suggested questions, what's at the bottom here, are another way in: generating queries or questions within the same topic that will lead to an answer, so people get their answers faster and don't have to fiddle with the query so much. And the conversational aspect, where you can do multi-turn within a specific topic or question. I wanted to show you this quickly, because we're testing it with customers as we speak. It's a search interface, and I want to install Coveo for Salesforce, so I get an answer with all the steps I need to take. Great. You can see the same citations, but now I also have a follow-up box and some suggestions: how do I integrate this, what are the features. Let's say I go with my own question: how do I install this integration? Now it gives me all the details, based on some of the Salesforce material, on getting this ready. And I want to know if there's a built-in integration, so I'm not referencing Coveo or Salesforce or anything anymore; I'm just going with the flow, making sure the LLM understands me and helps me dig further. It says yes, there is, and explains a little how it works. And my use case was really about the agent insight panel, so I can ask: does that integration also include the agent insight panel? It says yes, even for Salesforce we have the hosted insight panel or the Quantic one, and how to get the Quantic insight panel. I could have kept going, into the administration console and so on, for a little while. I also wanted to show that it has better formatting, which is some of the rich formatting we discussed.

With that, yes, the important part: pricing. You can use it on all Coveo products, all Coveo SKUs. It's consumption-based pricing: an annual fee of ninety-five thousand US dollars, and it includes two hundred GQPM, the generative query calls, and a hundred thousand indexed items. And you can add to that entitlement: if you want to use two hundred thousand documents, double it, and you get twice the amount of GQPM as well. Reach out to Alex and to our sales team if you want to know more about that. Absolutely. One quick question I have regarding that: when you say a hundred k items, is that a hundred k items consumed by the RGA model, or in total in my index?
That's scoped for the model. So let's say your knowledge base is eighty thousand documents: that fits within it. We don't look at how many times we've refreshed it; we give you up to a hundred thousand documents to feed to your model as its knowledge base. Alright, perfect, thank you. We had other questions in the chat that I answered directly. There was a question about data hygiene, and I said it's extremely important with RGA; I know you said it before, but I just want to highlight it. When you're using RGA, your data has to be clean. You can't have your headers and footers in there; you need to make sure the data is in a nice format, because it matters more for RGA than for the other machine learning or AI models we have. Go ahead. Yeah, sorry, just to recap the ones I already answered: another was whether the semantic encoder is available for all data sources, and I said it's currently only available with a GenAI license. And the last one was about how we calculate the ranking score and whether we use semantic in it; I said yes, we use a hybrid model where semantic is used alongside Coveo's other ranking rules, so they work hand in hand.

I think we have a follow-up question: does the pricing include the Azure OpenAI model, or do we have to provide our own? I believe we include it; there's no bring-your-own-key. It's included; that's why we have that twenty-thousand-GQPM entitlement, it's bundled in, so your consumption of answers and calls to OpenAI is included. You don't have to bring your own key. We are discussing it, though; we've had a few requests from customers wanting to swap in their own model, for either security or commercial reasons, so it's being discussed. I'd say that right now the prices we're getting with OpenAI are fairly interesting and advantageous, because we're combining and pooling all the requests from various clients together, acting as one reseller. I won't say no customer is larger than us, but we're pooling all those GQPMs and requests together, so we get a really interesting price point.

And I see we only have a minute left. I think this covers the questions we have; if you have more, you can reach out to us, and specifically to me, especially for RGA, since I'm the main point of contact. If I didn't get to your question, please let me know. There's a Slack channel that I really enjoy using, so you can ask technical questions there and I'll make sure to answer. I think you have one slide remaining for me, Oscar? Yeah. So if you're a partner and you need a partner organization enabled with RGA, let me know; my email is there, but you probably know how to reach me. And finally, a quick reminder: we have an ongoing promotion specifically for partners, running until May fifteenth, so for the next two weeks. If you log an opportunity in our partner community, partners.coveo.com, you get a nice gift alongside it. Just wanted to remind everyone this is still going on. Everyone likes some Coveo swag, so I'm sure you'll enjoy it.
Well, I'm not even sure it's Coveo swag, but it's a nice gift you're going to get if you log an opportunity there, so make sure to keep that in mind. And I think on that note, that's it for the webinar. Thank you very much, everyone, for joining in, and thank you again to Oscar for giving us this nice presentation. "Coveo swag is included"; thank you to those on the line for confirming. If you have any follow-up questions, you know how to reach out to us, through the Slack community or the Connect community, or through partners.coveo.com; we'll make sure to get to you and answer any questions you have there. I hope you have a good rest of the day, and have a nice day.

Coveo Partner Enablement: GenAI Deep Dive

Join us as our Senior Product Manager, Oscar Péré, provides an in-depth view of Coveo Relevance Generative Answering. From product features to pricing structure and additional resources, he’ll cover it all. Additionally, our Partner Solution Architect, Alexandre Moreau, will be available to address any technical questions that may arise.

As an exclusive benefit to Coveo Partners, we’re offering early access to a test organization following the session. This is a chance to familiarize yourself with the new offering ahead of its release. (In the meantime, you can test it on docs.coveo.com and partners.coveo.com!)

Expect to walk away being fully equipped and ready to harness the potential of this cutting-edge solution with your customers!
