Flux. A podcast by Movify
Ep. 28 | Inside BNP Paribas Fortis: AI strategy, agentic automation, and the race for responsible scale
In this episode, we sit down with Kristof Meganck, CDO at BNP Paribas Fortis in Belgium.
We explore how to turn AI from buzzword to business value with BNP Paribas Fortis’ CDO, moving from NLP chatbots to LLM-powered assistants, from RPA to agentic automation, and from siloed experiments to group-scale services. Along the way, we talk leadership under pressure, data governance, and why guardrails matter more than speed.
• EQ–IQ leadership shaped by crisis and delivery
• Personal influences and the change mindset
• AI strategy pillars: client experience, efficiency, augmentation
• Rebuilding SAMI with LLMs for voice and text
• Guardrails to prevent hallucinations and bias
• LLM hub, model selection, and sustainability
• End-to-end process redesign beyond RPA
• KYC, AML, fraud, IDP, and speech-to-text minutes
• YARA for all and vertical tools for roles
• Data governance, BCBS 239, GDPR, and Collibra
• Responsible AI, testing, and safety practices
• Local innovation vs group “as-a-service” components
• Mistral partnership, sovereignty, and agents
• Five-year outlook: internet of agents and paradigm shift
• Scaling impact across tribes and measurable P&L value
If you're interested in knowing more about Movify, don't hesitate to visit our website at movify.com
Stay tuned for the next episode and don't forget to follow us on LinkedIn and Instagram.
Hi everyone and welcome to Flux, a podcast by Movify. I'm Isabel, marketing manager and partner here at Movify.
Louis:And I'm Louis Cornet, the founder and CEO of Movify. Today we welcome Kristof Meganck. Kristof, you're the chief data officer of BNP Paribas Fortis. We worked together in 2014: you were then the head of the Project Management Center of Excellence at BNP, and I was one of the project managers in your team, busy at the time with the Easy Banking mobile applications. I'm glad to have you with us today, Kristof.
Kristof:Thank you very much. Thank you for having me.
Gaël:You're listening to Flux, a podcast about design and development. Our crafters will take you on a journey to discover more about digital innovation.
Louis:Kristof, you have led teams for nearly 20 years in banking by now. How do you describe yourself as a leader?
Kristof:I would say more as a coach, combining, in fact, the soft and the hard skills as much as possible. It's always the good balance between IQ and EQ, I would say, that makes a good leader, according to me.
Louis:Anything specific from your past that molded you as the leader you are today?
Kristof:Yeah, and there I refer back to the time when I worked at Sabena, at Sabena Technics, where we had a very difficult period. As you know, the national carrier Sabena went bankrupt in 2001, and so we went into receivership with Christian Van Buggenhout. I was working for Sabena Technics under receivership, and at that time it was very, very hard, because we had to monitor really every euro; the treasury was followed on a daily basis. When you're in such a situation, together with a very strong union base, it shapes you as a manager. It really makes you think about efficiency, about how you can get to the next week, and how you make sure that the people working with you get paid. So that's something that really marked me and that made me the kind of leader I am today: combining, as I said, this emotional part, the EQ part of coaching, with the delivery part, the IQ part.
Louis:Interesting. And what is the ambition that drives you today?
Kristof:For me, it's mainly working together and bringing people to a higher level. It really gives me a lot of satisfaction when I see people growing. I have worked with a lot of talents, with a lot of younger people, and it gives really a lot of satisfaction when you see them take up new positions in a higher hierarchical function. So that's what drives me on the one hand. On the other hand, as I said, I'm an engineer by background, so I really go for delivery. I want to make it as tangible as possible. I want to see, when you take actions, the results coming back in the P&L. That gives me a lot of satisfaction: that when you take actions, you really see the result in the financial P&L of the company. That's really what drives me.
Isabelle:You've got a really interesting background in aviation, data and AI, and you seem to be a very results-focused person. I'm really interested in knowing a bit more about your personal background. Who were your main influences in life? Any podcast or book, maybe, or a particular person that has had an impact on your life?
Kristof:Yeah, when it comes down to a book: when I was at Vlerick, doing an MBA, one book that still helps me, I would say, and that I come back to a lot, is the book titled Who Moved My Cheese. It's about a couple of mice: they have a daily habit, and suddenly things change in the labyrinth where they live. It's a metaphor, in fact, for the daily change we are confronted with. Every one of us, in our professional as well as our personal lives, constantly faces change, and the pace of the change we have to deal with is now even higher than before. So this book is very good, and I recommend it to the audience; it's a very small book to read, and it applies more and more, I would say, to our daily lives. That's for a book. For a person, maybe a bit cliché, but Elon Musk. I know there's a lot of fuss about him, but I envy his creativity, I would say, because as one single person he is active, and not only active but at the forefront, in all those different sectors: cars, space, AI, etc. It's really unbelievable. So I envy his creativity, to be very honest.
Isabelle:Just his creativity then. Do you have any hobbies or something that you do apart from data and AI?
Kristof:I would say, for the little spare time that I have, the children obviously already take some of that time; we have two children, a boy of 17 and a girl of 19. With the time that is left over, I try to do some sports. Classical sports: running and padel. Those are the sports of the moment, indeed.
Louis:So there's Isa for the running and myself for the padel, so we can sort that out after the recording. We have a few questions for you about AI, as it falls within your current scope as chief data officer of BNP Paribas Fortis. And we have asked one of your peers to ask you the first one. Here's a question from Barak Chizi. Barak is the chief data and analytics officer of KBC. Let's listen to his question.
Barak:Hello. My question is: how is AI supporting the realization of the bank's strategy? Thank you.
Kristof:Thank you for that question, Barak, first of all. It's a very good one, I must admit, and I suppose you deal with the same kind of challenges as we do at BNP Paribas Fortis. So we have, of course, an AI strategy that is there to support the bank's strategy of BNP Paribas Fortis and the group's strategy, and let me guide you a bit through what we have set up. It's based on three pillars, as a matter of fact. The first pillar focuses on the client experience. To be very specific, we want to close the gap with you, KBC, and the Kate application. We are now working on a virtual assistant for clients. When you talk about AI, you talk about personalization and proactivity. So what we will come up with is a conversational chatbot assistant for our clients, making it much more personalized than is the case today, and also making sure that we can include proactivity in that conversation, giving specific messages to a client: for example, if he is going to New York on holiday, we proactively send him messages that matter to him. Those are the kinds of things you have to imagine. The important part is that it will be both text and voice, so you will be able to interact with the virtual assistant by voice, and more and more of the transactions that you now do through the Easy Banking app will be handled by that assistant. So that's the first pillar, client experience.
Isabelle:Is this the chatbot you're referring to, SAMI?
Kristof:Yes, indeed, the SAMI chatbot, Isabelle, although the second version is not yet in production. The first one is in production, but that's not a real conversational chatbot. It's based on NLP, natural language processing and natural language understanding, which is more the traditional side of AI. So now we will move to the generative AI side with LLMs, and that will allow us to be much more conversational and to get a much better understanding of the intention behind the customer's question. That's the evolution we will strive for.
Louis:Okay. And are you satisfied with the current quality?
Kristof:No. To be very honest, the current quality is not that good, and that's why we are obviously urged to come up with this second version, which is a much more natural version based on large language models and which will allow us to go much further and offer much better quality to our customers. The downside is that you don't get a lot of chances. When you put something like that on the market, clients try it out, and when it's not satisfactory you have a risk of churn, a risk of losing those clients. So it has to be good. When we bring this to the market, when we release this new assistant to our customers, it has to be spot on.
Louis:And this assistant, for our non-technical users, can they imagine it as a ChatGPT within the app or the web banking?
Kristof:Indeed. You have to imagine a kind of interface such as ChatGPT or your Google search bar, where you type whatever you need: please block my debit card, for example, or can you transfer 200 euros to that beneficiary. You type it in and then you start a conversation with the assistant. So that's the objective: that the user experience is similar to what he or she already has within ChatGPT or Google.
Louis:So mostly a new way of interacting with your channels, with your app, for example, through voice, through text in a chat way. Will it also offer new features, new services that the users don't have today?
Kristof:At first, and to be very transparent and honest, it will be the traditional transactions, mainly linked to the daily banking environment: cards, transfers. It will not immediately be the credits or the investments; those will come at a later stage. So will there be new features as such? No, but the whole experience will be new for our customers. That way we want to avoid that they have to call our Easy Banking Center and queue to get an advisor on the line, because they can be helped by our virtual assistant themselves.
Louis:And what is guiding such a project? Is it to improve the experience of your clients, or is it to reduce the cost of client service, or both?
Kristof:Both. To be very blunt, both, first and foremost. That's why it's the first pillar of our AI strategy: client experience, to improve that experience, because we all know those calls where you have to dial and listen to press one for this, press three for that. That's a burden. So it's really about the experience, delivering the quality that is expected from the number one bank in the country. And let's be honest, together with that there's also efficiency, because if you have an assistant that can replace a number of employees, that counts too. I remember, and because the question came from Barak: Johan Thijs mentioned during the presentation of the first half-year results that Kate is replacing the equivalent of 300 FTE. So at their side too, they are striving for both, I suppose, client experience as well as cost efficiency.
Louis:So, in the process of upgrading SAMI with next-generation gen AI and LLMs, what have you learned so far?
Kristof:Well, the thing is that you cannot just throw in an LLM as such. It might seem very easy, but with it comes a lot of responsibility, in the sense that when you have an assistant facing a client, certainly from a bank's point of view, it has to be watertight for 100%, not for 80% but for 100%, meaning that this LLM cannot hallucinate, that there cannot be any bias, and that the answers are spot on. So this is really difficult: making sure that these guardrails, as we call them, are in place around the models. The model is one thing, but we have all seen examples of companies that put such assistants into production a bit prematurely and had quite some brand image problems with them. So before we really put this into production, it has to be tested not once, not twice, but a hundred times, to make sure that all these guardrails are in place. That's for me the most important hurdle: making sure that when you go into production, it's watertight for 100%.
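For technically minded listeners, a minimal sketch of what such guardrails around an assistant can look like is shown below. The function names, the blocked-topic list and the crude grounding check are illustrative assumptions, not BNP Paribas Fortis' actual implementation.

```python
# Minimal sketch of pre-release guardrails around an LLM assistant.
# All names (guarded_reply, BLOCKED_TOPICS, etc.) are illustrative, not a real API.

BLOCKED_TOPICS = {"investment advice", "tax advice"}   # out of scope for the assistant
KNOWN_PRODUCTS = {"debit card", "credit card", "transfer", "easy banking app"}

def grounded(answer, sources):
    """Crude grounding check: every sentence must overlap with retrieved source text."""
    joined = " ".join(sources).lower()
    return all(any(word in joined for word in s.lower().split()[:5])
               for s in answer.split(".") if s.strip())

def guarded_reply(question, answer, sources):
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "I cannot help with that topic; let me connect you to an advisor."
    if not grounded(answer, sources):
        # Refuse rather than risk a hallucinated answer reaching the client.
        return "I am not sure about that; let me connect you to an advisor."
    return answer
```

In practice such checks would be one layer among many (bias tests, red-teaming, human review), but the pattern of validating every answer before it reaches the client is the point of the sketch.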
Louis:All right. And what's the model you have chosen for the client chatbot?
Kristof:In fact, we have a library, we call it an LLM hub, where we have different models. The one we currently balance towards is Llama from Meta, but as you know, we also have a partnership with Mistral, the French startup. So we switch between those models depending on the matter we have to tackle. But I repeat, we have a library of models, other ones as well, because you do not always need the very large models. Fortunately, more and more producers offer a range of models, from the very large ones to medium to smaller ones. From a sustainability point of view, that's very important too: these models come in a range of sizes because not everything requires the huge models.
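One way to picture such an LLM hub is as a thin routing layer that picks the smallest model likely to be good enough for a task. The sketch below is a simplified illustration; the model identifiers and the route_model heuristic are assumptions, not the bank's actual hub.

```python
# Illustrative sketch of an "LLM hub" routing layer.
# Model identifiers and the complexity heuristic are assumptions for the example.

MODEL_CATALOGUE = {
    "small":  "llama-3-8b-instruct",    # cheap, low energy, simple intents
    "medium": "mistral-small",          # balanced quality and cost
    "large":  "llama-3-70b-instruct",   # complex reasoning, used sparingly
}

def route_model(task, needs_reasoning, max_tokens):
    """Pick the smallest model that is likely good enough for the task."""
    if needs_reasoning or max_tokens > 2000:
        return MODEL_CATALOGUE["large"]
    if task in {"summarize", "classify_intent"}:
        return MODEL_CATALOGUE["small"]
    return MODEL_CATALOGUE["medium"]

print(route_model("classify_intent", needs_reasoning=False, max_tokens=200))
# -> llama-3-8b-instruct
```

Routing to smaller models where possible is also what makes the sustainability argument concrete: fewer GPU cycles per answered question.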
Louis:We interrupted you while you were answering Barak's question, and you only mentioned the first pillar, which was client experience. What are the other pillars, then?
Kristof:Thank you, Louis. I'll come back to the other two pillars indeed. So, first pillar: client experience, which we elaborated on. The second pillar is efficiency. As you know, a bank has quite a lot of products, meaning a lot of processes. You all know the birth, I would say, of RPA, robotic process automation, and the different ways and tools to automate processes, Lean, Six Sigma and so on, all those kinds of tools with which you can improve and automate processes. Of course, the arrival of AI and gen AI is a given, a present I would say, and it adds another dimension, another phase, to this automation track. So this is our second pillar: reviewing processes. The big challenge is that when you take a process from A to Z, you do not pick out a small part of that process and apply AI to it, be it traditional AI, generative AI or soon agentic AI, which I'll come back to later. You take the whole process from A to Z and review it with the mindset of: as a customer in 2030, let's say, how do I want, as a customer, or how do I want, as a bank, the customer to experience that process, making use of all these new technologies, traditional AI, gen AI, agentic? That's the big challenge, and that's why this pillar is a cornerstone: generating efficiency is easily said, but it's not easily done, because it's the core of your bank that is being handled there. You also need the people who are very familiar with those processes and know them very well from A to Z.
Louis:Yeah, I like it that you really take the bigger picture. I think that makes it possible to really serve the business, or actually the user, your customer, starting from a problem instead of coming in with a technology and a solution. Exactly. You really start from the business.
Kristof:Exactly. I can give some examples if you want. I have already talked about KYC, know your customer. One of the things we have to do as a bank, next to the onboarding, is a recertification, depending on the risk of the client, yearly or bi-yearly. During that recertification, the relationship manager has to update the transactional profile, the TP, of that customer, to check that what the client is doing in reality corresponds to what he has been saying. Today this is completely manual work, and it takes quite a lot of time from our relationship managers. Now, with AI, you can perfectly automate all those things. You can see: did he make payments abroad, does he have a safe, does he use a lot of cash, is he active in trade, and so on. All these activities you can now centralize and, based on AI, put forward, and so gain time for the relationship manager. Another example is speech-to-text, where we use it to create the minutes of a session between a client and a call center agent. Today, for every session, the call center agent has to write some minutes, an end-of-session note as we call it. As of now we can do this with AI, also using large language models. How does it work? First there is the transcription from voice to text, done with Whisper, for those listeners who are familiar with those technologies. Mind you, with the dialects in Belgium that's not that easy; the hit rate is not always 100%. So that's the first step: you turn the voice into text, and then you use an LLM to make a summary of that text. Obviously you present it to the relationship manager, and instead of him or her writing a summary of that conversation, he or she only has to acknowledge it or adapt the text. That also generates quite some efficiency, quite some time savings. Those are the kinds of examples of what I mean by the second pillar, efficiency creation.
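As a rough sketch of the speech-to-text pipeline described here, assuming the open-source openai-whisper package for transcription and a placeholder summarize_with_llm function standing in for whatever model the LLM hub exposes:

```python
# Sketch of a call-minutes pipeline: Whisper transcription plus LLM summary.
# summarize_with_llm is a placeholder for the bank's internal LLM hub, not a real API.
import whisper

def transcribe_call(audio_path):
    """Turn the recorded conversation into raw text (dialects lower the hit rate)."""
    model = whisper.load_model("medium")           # larger models cope better with dialects
    result = model.transcribe(audio_path, language="nl")
    return result["text"]

def summarize_with_llm(transcript):
    """Placeholder: in reality this calls an internal LLM with a summarization prompt."""
    raise NotImplementedError("wire this to your LLM hub")

def draft_minutes(audio_path):
    draft = summarize_with_llm(transcribe_call(audio_path))
    return draft   # presented to the agent, who acknowledges or adapts it
```

The human acknowledgement step at the end is what keeps the agent accountable for the minutes, which matches the "review, don't replace" approach described above.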
Louis:Yeah, indeed, AI proves particularly effective at automating repetitive, low-value tasks. According to you, what is the biggest fish to catch here? Is it KYC, AML, fraud, something else?
Kristof:For me, there's not one particular one. The fields of application are diverse, both on the know-your-customer side and on financial security in its entirety, AML as well. AML, up until now, was most of the time rule-based, but thanks to AI you can recognize patterns, you have graphs, you have other tools popping up that allow you to detect scenarios much better than before with the rule-based approach. So I was mentioning KYC and AML; fraud is another one. Very important for banks, obviously, because there is an upcoming liability shift, so it's a very important domain to invest in. There too, without AI we cannot follow the fraudsters, I would say, so we have to make sure that we are always one step ahead of them. But there's not one single domain, not one single process, that deserves to be singled out from an AI point of view, so I would not mention one single process here. And then there's IDP, which stands for intelligent document processing. Once you have a document in a process, it's already worth reviewing with AI how you tackle it: summarizing, transcription, all the actions you do with documents you can now also do on the basis of AI.
Louis:What about RPA, robotic process automation? That's a kind of efficiency that every bank put in place years ago, way before gen AI was even popular. What's your take on this? Would you upgrade every automation that was already done the traditional RPA way to gen AI, or would you really start from scratch on new cases that are now possible, treating unstructured data?
Kristof:Yeah, it's a good question, Louis. Maybe I have to tackle something first, which is the new wave coming up called agentic AI. We started with predictive AI, traditional AI, based on probability, let's say. Then, as of 2022 and the launch of OpenAI's ChatGPT, we got to know generative AI, and now comes the new wave with agentic AI, which is in fact LLMs with a couple of hands, if I can express it like that. Meaning that it will not only give you the information you are asking for, it will also be able to perform actions and to interact with other software applications. And why do I say that? Because, linking that to RPA, robotic process automation will become agentic process automation, APA according to me, or intelligent process automation. So yes, RPAs are good, because they indicate where there is an opportunity to look into. But referring to my previous answer, it's not as if you have to redo the 200 RPAs that we have on our side and upgrade them. No, it is a matter of reviewing the process in its entirety, because these RPAs often only look at a part of a process and not at the entire process. That's why it's complex, but it's the way to go: review an entire process from A to Z and look, with these new agentic tools, at how you can upgrade not only the existing RPA part of that process but the whole process in its entirety. So it can be that you kill the RPAs and put in place one, two, three agents that take care of that process from now on. The big difference, as opposed to RPAs, is that RPAs can't reason, they don't have a memory, and they don't have the kind of autonomy you can give to agents. Those are the characteristics of agents: today you can give them autonomy, high, medium or low; they can reason, they learn by doing, and they have a memory, short- and long-term, because when they exchange with other agents, they know what they have said and what they have received as a response, so they can build on it later. So that's how I see the evolution of RPA.
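To make the idea of an LLM with hands and memory concrete, here is a minimal agent-loop sketch; the toy tools, the choose_action placeholder and the memory structure are illustrative assumptions rather than a description of any BNP Paribas Fortis system.

```python
# Minimal sketch of an agentic loop: an LLM-driven planner with tools and memory.
# choose_action is a placeholder for an LLM call; the tools are toy stand-ins.
from dataclasses import dataclass, field

def lookup_client(client_id):
    return f"profile of {client_id}"                 # toy tool

def send_letter(client_id, text):
    return f"letter queued for {client_id}"          # toy tool

TOOLS = {"lookup_client": lookup_client, "send_letter": send_letter}

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)       # what the agent has done so far

    def choose_action(self):
        """Placeholder: a real agent would let an LLM pick the next tool from goal + memory."""
        if not self.memory:
            return "lookup_client", {"client_id": "C123"}
        if len(self.memory) == 1:
            return "send_letter", {"client_id": "C123", "text": "recertification reminder"}
        return None                                   # goal considered reached

    def run(self, max_steps=5):
        for _ in range(max_steps):
            step = self.choose_action()
            if step is None:
                break
            name, args = step
            observation = TOOLS[name](**args)
            self.memory.append((name, observation))   # the agent remembers its steps
        return self.memory

print(Agent(goal="recertify client C123").run())
```

The loop is deliberately bounded (max_steps) and its tool set is explicit, which is one simple way to express the low, medium or high autonomy levels mentioned above.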
Louis:All right, very interesting. And how far are you at BNP already in the implementation of agentic AI?
Kristof:At the beginning. To be very transparent on that, Louis: at the beginning, given that agentic AI is rather recent. When I was at the Gartner Summit back in May of this year, I think, agentic was all over the place. During the two or three days it was agents, agentic, everywhere. As I said, is it a hype? The famous Gartner quadrant, or the Forrester Waves... For me, it's not a hype, it's here to stay. And the speed at which this is going is comparable to gen AI, even a bit higher. Gen AI had a very exponential uptake; agentic is even steeper, I would say. I'm convinced it's here to stay. Where are we at Fortis? At the beginning. We are discovering, exploring, experimenting with agentic AI, but the potential in there, certainly for banks, because banks have quite some heavy processes, is huge.
Louis:How large is the team that is really about research, taking care of innovation and making sure that you catch that wave?
Kristof:Good question. So, all in all, our AI team, composed of data scientists, analysts and ML engineers, is about 90 FTE. The unfortunate part is that few people are really busy with innovation, and by that I mean people who have a window on the world and are capable of grasping what is coming. Yes, we have agentic now, but we have too little capacity to really make this a moonshot at this stage. So on that side we have quite a good, sizable team, but still with the focus on predictive and generative AI, because we have quite some expectations there as well. We have quite some initiatives running, and you can only use an FTE once, as you know. So it's balancing between, on the one hand, delivering what we promised to the bank, because there's a link, of course, with the business and the P&L, and on the other hand making sure that we are able to include this agentic wave in our BAU, if I may call it like that.
Louis:Yeah. 90 people in your AI team, that's really a large team, and compared to other big players in AI in Belgium, that's certainly in the top three. Would you consider it an indicator of maturity, or is that not a good indicator?
Kristof:Yeah, as I said, it's quite a sizable team, 90 people, and for me it's an indication of the maturity level we have today. We have grown quite significantly over the last few years, not in a linear way, not exponentially, but somewhere in between, which also indicates the attention that is given to AI by the bank. Within the BNP Paribas Group, by the way, we are one of the most important AI offices as well, compared to the French or the Italian colleagues, and with quite some ambition. We are also, I think, among the top two or three companies that can say that all 10,000 employees are equipped with an internal virtual assistant. It's called YARA, which is a kind of ChatGPT but on the internal side, on our on-premises infrastructure, and you obviously know why. All of those colleagues are equipped with it, use it daily, and are trained on the application. So that's why I say that, on the size of the team, the challenges of the team, the ambition of the team, as well as the results produced by the team today, I think we are top two, top three, certainly in the banking sector and maybe beyond.
Louis:Are we touching the third pillar of your strategy here?
Kristof:Indeed, indeed, Louis. That's indeed the third pillar, which we call augmenting our employees, or accompanying our employees, because it's very, very important that all of our employees are guided on AI. The thing is that with the arrival of AI, and certainly gen AI and ChatGPT, the risk of having very different levels of knowledge is high: on the one hand, people who are very familiar with AI and play with it, both on the professional and the private side, and on the other hand, people who are, I would say, AI-averse and say AI is not for me. This can create problems in the future for the people who say they do not want to work with AI, because you will have people who embraced AI and thereby have a higher productivity, compared to people who will be lagging behind, and productivity-wise the difference will be so big that it becomes dangerous for them in the medium or long run.
Isabelle:They probably change careers. Yeah, probably.
Kristof:But the thing is, Isabelle, that whether you're in the banking sector, in media, in marketing or in other sectors, I'm afraid AI will be applied everywhere. So for me it's not an option whether you embrace AI or not: you have to embrace AI and make sure you can deal with it. That's why we attach so much importance to this third pillar of augmenting our colleagues by equipping them with YARA. But not only giving them YARA, because giving it to them is one thing: you also have to train them and constantly make them aware of what is possible with AI. That's also why we have what we call an advanced prompting team in place, which goes to see our colleagues on the floor, exchanges with them and asks: what are the kinds of things you do daily, can you explain your activities a bit? These advanced prompt engineers, people from our teams, sit next to them and say: okay, let's make an advanced prompt together that can help you do things in a different way and save you time compared to today. So that's how we, next to the classical trainings and e-learnings that we obviously offer, want people to develop. Another example is our software developers. As a bank, we have quite some software developers in house, the Java and the Python ones, etc. There too, between developers with and developers without AI there will be a huge difference, and more so each day. So again, all developers should embrace AI, whether they like it or not; it's here to stay. That's why this third pillar is really fundamental, to make sure that within a company like ours all our employees can thrive on AI.
Isabelle:Do you have a team of people who motivate your developers and designers to progress towards higher goals in AI? And how do you do that?
Kristof:Yeah, that's a very good question. In fact, it's a collaboration, because, just to situate it, our data office sits next to the IT department; we're not part of the IT department. So it's a collaboration between our teams and the IT teams where the software developers reside. We put the AI-for-developers application at their disposal, and then it's up to us, together with people from the tribes, to make sure that they get acquainted with it and use it daily. Are there KPIs currently in place? Not yet. But the thing is, and this is a bit top-down, that IT at group level, so for all entities, expects that if you have such tools at your disposal, you can reduce the number of software developers by some percentage. So that's a bit the KPI.
Louis:What I think is also interesting in employee augmentation is that you have many flavors of YARA at BNP. There's the YARA for everyone, an internal ChatGPT; there's the YARA for developers that you mentioned, I believe the name is YARA Code. Correct. But there are many YARAs for bankers as well. Can you maybe explain the other flavors?
Kristof:Yeah, that's a good question, Louis, and it's very important to mention, because next to the foundational YARA for all, as you call it, we have what we call the verticals. The verticals refer to taking a specific population, be it HR, compliance or the architects, for which we then start developing their own YARA. For architects, for example: today we have a document which is called an OPAT, an Opus architecture document, which takes quite some time from these architects. Well, we will put in place a YARA for architects, if I can call it like that, which will help those colleagues deal with these OPATs in the future, based on generative AI. There we use what is called RAG, retrieval-augmented generation, for those kinds of populations. We also have it, as you said, for the investment colleagues, where we allow them, when they communicate with clients, to personalize again. With the P of personalized comes the flavor of: is my client economically savvy, or is he far away from everything that has to do with economics, so that the adviser can adapt the way in which he explains the investments to that client. So all these kinds of possibilities are really taken into account when setting up these YARA verticals, and as I said, almost every activity domain, ranging from HR to compliance and legal, can benefit from such a YARA vertical.
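For listeners less familiar with retrieval-augmented generation, the pattern behind these vertical assistants can be sketched roughly as follows; the embed placeholder, the toy document store and the prompt are assumptions for illustration, not the actual YARA implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed() is a placeholder; a real system would call an embedding model and an LLM.
import numpy as np

DOCUMENTS = [
    "An OPAT describes the target architecture of an application.",
    "Architecture documents must list data flows and security zones.",
    "Expense policies are handled by HR, not by the architecture board.",
]

def embed(text):
    """Placeholder embedding: deterministic random vector, stands in for a real model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(16)

DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def retrieve(question, k=2):
    q = embed(question)
    scores = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    return [DOCUMENTS[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question):
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # in reality the prompt is sent to the LLM hub; grounding it in retrieved
    # internal documents is what makes a "vertical" assistant domain-specific
```

The key idea is that the LLM never answers from its general training data alone: the vertical's own documents are retrieved first and the model is instructed to stay within them.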
Louis:Same question as for SAMI: which model do you use behind it? Is it the same library, the hub with Llama and the others, or is it a different setup?
Kristof:No, it's the same hub that we use, and just to give some background: when we started managing LLMs, we decided to create this LLM hub based on open source, because you can download the Llama models from Meta. That's how we started off. It's composed of a library of models, and there again we look at the case at hand to match the best model, not always the biggest one in size; sometimes small models. So it's the same approach as for SAMI.
Louis:Okay. We covered the three pillars: client experience, efficiency, employee augmentation. Do we have now the full perspective on your AI strategy or are there other elements that we should have in mind?
Kristof:Yes, indeed. So I described the three pillars of our strategy, but there is indeed, I would not call it a fourth pillar, but a foundational layer, I would say: no good AI without good data. And what do I mean by good data? Data of good quality. I must say that at the Fortis side we had the luck, being part of the financial sector, of having put a data governance team in place back in 2016 already. Within Fortis we cover all domains: we have data quality analysts, we have data managers taking care of the issues we have with our data, and we use a tool called Collibra to manage our data in a professional manner. So that's an advantage we have had for quite a while now, read ten years: a proper data governance in place, a data CoE, a center of expertise for data governance, which on a daily basis is handling data quality, data privacy issues, etc. So all this is in place. Next to that, also foundational, is what we call responsible AI. I already talked about it: you cannot take the risk of putting these things on the market, internally or, let's even say, externally, without having these guardrails, without having an entity in place that takes care of hallucinations, of ethics, of bias in everything that you're doing. And also the sustainability part, of course; I cannot forget that one, because wherever we give a presentation on AI we always get the question: what about the energy consumption of AI and the GPUs, which consume an awful lot? We should not forget that it's not only the upper part of the house, the most visible part; it's the foundational part that allows you to go in that direction. That's why, within Fortis, we also give it a lot of attention, both on the governance side and on the responsible AI side.
Isabelle:So data quality is of the utmost importance. Would you say that's the biggest hurdle, or would it be the technology, or the culture also?
Kristof:Certainly not the technology; that's not a hurdle. It's there, and it's there to be applied. The hurdles I see are indeed data, the availability and quality of data, and luckily, as I said, on our side that has been professionally managed for quite a while. On the other hand, the biggest hurdle for me is culture as well. When you look at the implementation of AI models and AI use cases, it often goes with change management; you cannot forget change management, it's of the utmost importance. So for me, it's both data and culture, not technology.
Isabelle:Now we have a question from one of your peers called Thomas Hering. Is that how you say it? Correct. He's the chief data officer at ING. And this is his question to you.
Thomas:Hi, Kristof. This is Thomas Hering, CDO at ING Belgium and global head of data delivery and platforms at ING Group. I have the following question for you regarding AI, or rather gen AI. It's this: in a large organization like BNP, with a local entity and a strong group head office, how do you get organized to get, first of all, the aspired business and cost benefits, and secondly, the reusable gen AI components for scaled deployment across the organization? I would be interested in your views.
Kristof:Thank you, Thomas, for this very interesting question, which I assume is also a bit linked to your own situation between the local entity and a head office in the Netherlands. I would split the question into two parts. The first, at least as I understood it, is the financial part: how do we deal with the ambition from a financial point of view and the link with the group? There's one figure that is published to the investors and the markets: 750 million euros of value creation. The group has expressed the ambition that at group level, BNP Paribas, AI and gen AI, but mainly gen AI, should bring forward a recurring value creation of 750 million euros. That's the figure that has been shared and published with investor relations and the market. Now, that 750 million euros of value creation consists of, I would say, three parts. You have what we call the top part, the NBI, net banking income. You have the NCS part, net cost savings, which goes without saying, and then you have the CA, the cost avoidance part, which is the trickiest one.
Louis:Sorry, can you maybe explain? NCS, net cost saving?
Kristof:Yeah, net cost saving, as opposed to gross cost saving, is what AI brings in the P&L on the cost side: a reduction of costs. The difference between gross and net is the run cost you have from putting the project or initiative in place, as well as the depreciation of the investment you made. So that's the second one, and cost saving is obvious, I would say. The third one is CA, cost avoidance, which is the trickiest one, because cost avoidance is, I would say, a bit Mickey Mouse money, if I can call it like that. If you have an initiative that generates a cost avoidance of 1 million euros, it's not as such visible or tangible in your P&L. I'll give you an example: the usage of YARA by 10,000 employees. If every employee saves, say, 10 minutes a day, multiplied by 10,000 FTE, that gives you a number that is not linked to your P&L impact. You will not say that because of that you will reduce your costs by an X amount, because all those 10,000 employees are still in your company and you still have to pay them at the end of the month.
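To put a rough number on this example, here is a back-of-the-envelope calculation; the working days per year and annual hours per FTE are assumed inputs for illustration only.

```python
# Back-of-the-envelope sizing of the "10 minutes a day for 10,000 employees" example.
# Working days and annual FTE hours are assumptions for illustration only.
minutes_saved_per_day = 10
employees = 10_000
working_days_per_year = 220        # assumption
hours_per_fte_per_year = 1_600     # assumption

hours_saved = minutes_saved_per_day * employees * working_days_per_year / 60
fte_equivalent = hours_saved / hours_per_fte_per_year

print(f"{hours_saved:,.0f} hours saved per year, about {fte_equivalent:.0f} FTE-equivalents")
# roughly 366,667 hours, about 229 FTE-equivalents of capacity, yet no euro leaves the
# cost base unless the freed-up time is reallocated or headcount is actually reduced
```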
Louis:But they've been more efficient and they've done more at the end of the day.
Kristof:And that's exactly the tricky thing: how do you translate the CA afterwards? When you have a cost avoidance, an efficiency, a time gain, what do you do with it? You have three options. Option one: nothing, you just take it. Option two: you reduce people; in a team of twenty people you take out one, for example, if that's the ratio. Option three: you try to find new activities; with the freed-up time you try to be creative, just like Elon Musk, and come up with new initiatives that can generate new revenue, for example. So that's what I mean by CA, cost avoidance. Those three components together generate the 750 million euros, and obviously at the Fortis side we contribute to that 750 million for a certain amount, which I will not quote. So that's the first part, and this is followed up by the group in what is called the watchtower; it's very meticulously followed up by the group. Then the second part of Thomas's question is very interesting, because we struggled with that quite a lot. Let me explain. When generative AI came into play at the end of 2022, the group was of course not ready the next day with components to offer. That's why we decided locally, in Belgium, at BNP Paribas Fortis, to build our LLM hub ourselves, our speech-to-text ourselves, our RAG, retrieval-augmented generation, ourselves, because you cannot sit back, relax and do nothing. You cannot say: I will wait for a year or eighteen months until the group comes up with these components and then I'll start making initiatives. So we were forced, as other entities were, by the way, to proactively start working on the creation of these components, these modules, ourselves. The downside is that at a certain moment in time, and we are there now, the group, and within our group it's the IT department, also comes up with these components that can be shared, mutualized (that's the nice word, mutualization) among the different entities. Now the time has come where they say: for this we have developed our LLM as a service, meaning they have a component that can be used by the other entities, the French, the Italians, the Luxembourgish, the Belgians. So now the time has come for us to migrate our local LLM hub to the LLM as a service, because of efficiency gains, of course.
Louis:And that makes sense, I guess, from a global perspective.
Kristof:Yeah, that makes sense. If I were at the group, I would do the same thing, because you develop one module, be it LLM as a service, speech-to-text as a service, RAG as a service, everything as a service, you develop it once and you make sure that all entities use it. Also from a GPU perspective: you know that underneath those models, graphics processing units are running, GPUs, which are quite expensive and consume quite a lot of energy. It goes without saying that this battery of GPUs has to be used in an efficient way. That's why you do not let every entity do it on its own, but you have an LLM as a service, which is optimized that way. So coming back to Thomas's question: we were indeed struggling with that. We were proactive and did it ourselves locally, and now the time has come to migrate to the group modules, which can be seen as a kind of marketplace. They offer all those bricks, all these modules, through a marketplace, and we will migrate our use cases that were until now based on the local LLM hub to the group's LLM as a service and the other components. But it's a kind of balancing act on a tightrope: what do I do, do I wait, at what pace do I go forward, do I still do this, knowing that within three months the group will come out with that? That's the difficult equilibrium you have to find, and I'm sure that at Thomas's side, at ING, they have the same kind of challenges: what do you do at the local level, and where do you capitalize on what the group is offering?
Louis:So if I understand well, the Belgian teams were really pioneering and advanced compared to the rest of the group, but I guess the French, the HQ, made quite some progress on their side as well. Absolutely. They initiated the partnership with Mistral back then, right? Did this partnership come through the group?
Kristof:Yes, the partnership with Mistral was concluded in July 2024. You know, Mistral is a rather young, French-based company, and it's interesting for us because, as you say, Louis, the partnership is between the BNP Paribas Group and Mistral, but obviously we can benefit from it as well, and we do, on a daily basis. Let me explain. First of all, you all know that due to the geopolitical tensions and evolutions lately, companies, and certainly banks, tend to pay more attention to the sovereignty aspect; sovereignty is becoming quite important, and that's also why the BNP Paribas Group took the decision to partner with a French-based company, Mistral. So that's the first thing: to make sure that from that point of view there is risk management, because otherwise we always depend on the other side of the ocean. Secondly, we have access to Mistral's resources and we can influence their roadmap, meaning that if we have particular needs, use cases we share with them, they take our needs into account and eventually work on them. The good thing is also that, as opposed to Meta, which today has a lot of Llama models but does not yet have a platform that also allows the creation of agents, Mistral has La Plateforme, the commercial name, which not only contains models ranging from large to medium to small but also allows us to create agents. So that's another benefit. The last benefit I would like to quote is the sustainability part: they pay a lot of attention to everything linked to the sustainable and responsible side. They come up with reports and they guide us on how we can also benefit from them being very much aware of sustainability. So it's a partnership with Mistral at group level, but every entity, and obviously the Belgian one as well, benefits quite a lot from it.
Louis:That's really nice. You said you can influence their roadmap, but can you really? I mean, they also partner with Belfius, and we've seen so many announcements since VivaTech in June, so many partners. Is it still the case that you can influence their roadmap?
Kristof:Very good question, Louis. Can we influence the roadmap? To a certain extent; we have to be very honest and transparent about that. But at least there is an open communication line, we can exchange with them about our needs, and at least they listen. We now have an initiative running with them on KYC, by the way. The most important part is that we have a seat at the table. To what extent it is then taken up, and with what priority, as opposed to a Belfius or another company asking for things, that is, I would say, a battle, to make sure that you're ahead, that you're listened to, and that you're put on the roadmap.
Isabelle:What are your key reasons for partnering with Mistral?
Kristof:Well, as I said, from the group's point of view it's foremost the sovereignty aspect: making sure that a French-based bank partners with a French-based company, which seems to be one of the major players in Europe compared to the Asian or American ones. That's, I would say, the number one reason, the main reason. Next, BNP Paribas knows very well that AI is pivotal for the years to come, so they want to make sure that there too they have access to the latest evolutions on the market. There again comes the question of whether they have an equity stake or not; I'm not sure.
Louis:When you say you collaborate with Mistral on specific topics such as KYC, how should we imagine that? Are they training their model for specific KYC use cases, or...?
Kristof:The thing is that it's not as such on the model; the collaboration I was talking about is on agents. As I said, they have an application called La Plateforme, literally "the platform", which allows you to create your agents. As I said previously, we are at the very early stages of making use of agents, of agentic AI, and we now have different tracks in place. One of those tracks is a collaboration with Mistral, where they guide us on using the platform to create agents, and here it's about an onboarding package for corporate banking customers, which is quite administrative, quite laborious for the corporate clients. Thanks to putting this agent in place, there again, it's always the same reasoning: reducing administration, reducing the burden for the client. This is one of the exploration tracks I mentioned when I said we're exploring and experimenting; it's one of the experiments we're doing with Mistral. But it's not on the LLM itself, because we do not have the capabilities, either from a skills point of view or from an infrastructure point of view, to engage in the creation of LLMs; that's fully on Mistral's side. So you have to see it really as them accompanying us, for example with the usage of the platform and the creation of a use case based on agents.
Louis:So it's you, it's BNP, building an agent, and Mistral is there to guide, to coach, to show you the way on how to make it happen, to make it efficient. Exactly.
Isabelle:So now if we take a look into the future and we look at five to ten years ahead, what trends will define banking and data in your opinion?
Kristof:So I will limit it to the five-year horizon. There, as I said, I've already explained a bit about agents, and it will be what I would call a paradigm shift. Let me explain. Banks have always been used to dealing with humans, with clients: initially physically, more and more digitally. But what about dealing, as a bank, with machines, with agents? This is completely different. Take the example of an agent that has been given a mandate by a client to look for a good credit, professional or personal, it doesn't matter, and the agent is screening the market. Suppose you're not accessible for that agent: you're out of business. Today, when we do marketing, we try to connect with clients; tomorrow we'll have to connect with machines, to make sure that the machine knows us. Another example: a small or medium-sized company. For them, today, it's quite a burden, all those invoices, all that administration, and the link with, for example, the Exact software for accounting. In the future, imagine that as a small or medium-sized company you have an agent and you say: screen my mailbox or my SharePoints or whatever and take out the invoices. This agent comes up with those invoices and then connects, through an agent-to-agent protocol, with the agent of your accountant, and that agent takes the invoices and puts them into Exact or whatever other application, in an automated way. So the world will be different because of these agents. We will have an internet of agents, because agents will communicate with other agents, and all this evolution, that's what I call the paradigm shift, impacts the bank in a very severe way. Because, as I said, banks are used to dealing with humans, digitally or physically; they will have to adapt to these machines, and on different dimensions: security, technology, marketing, fraud. That is, for me, the main challenge for banks, and for all of us, in the next five years, both in our professional and in our private lives. Also in private life you will be equipped with agents; Visa and Mastercard are also in a battle to provide these payment services. The example I often quote is when you want to book a hotel or a trip: you will have your agent and you will just say to it, I want to go to that country, I want to visit that kind of hotel, and I give you the mandate for spending and booking those hotels, but you cannot go above, let's say, 2,000 euros. And the agent will, in an autonomous way, start to deal with other agents: it will contact the agents of the hotels, of the travel agencies, of the airlines, and it will start booking. So that's how the future will look, according to me, with an impact on both professional and private lives. And for banks, a very, very major challenge ahead. That's a bit how I would describe the five years ahead.
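As a toy illustration of such a spending mandate, here is a minimal sketch of an agent that compares quotes from other (stubbed) agents while respecting a cap; every name and price in it is invented for the example.

```python
# Toy sketch of a spend-mandated booking agent talking to other agents.
# The hotel "agents" are stubs; names and quotes are invented for illustration.
HOTEL_AGENTS = {
    "Hotel Alpha": lambda nights: 180 * nights,    # each stub returns a price quote
    "Hotel Beta":  lambda nights: 240 * nights,
    "Hotel Gamma": lambda nights: 450 * nights,
}

def book_trip(nights, mandate_eur):
    quotes = {name: quote(nights) for name, quote in HOTEL_AGENTS.items()}
    affordable = {n: p for n, p in quotes.items() if p <= mandate_eur}
    if not affordable:
        return "No offer fits the mandate; escalate to the human principal."
    best = min(affordable, key=affordable.get)
    return f"Booked {best} for {affordable[best]} EUR (mandate {mandate_eur} EUR)."

print(book_trip(nights=5, mandate_eur=2000))
# -> Booked Hotel Alpha for 900 EUR (mandate 2000 EUR).
```

The hard spending cap and the escalation path back to the human principal are the simplest expression of the autonomy levels and liability questions discussed in the rest of the episode.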
Isabelle:Exciting yet quite frightening, I find. Yeah, indeed, indeed, indeed.
Kristof:Because we just talked about responsible AI, but these agents go even one step further, because as a physical person or a legal entity you give a mandate to a machine that operates autonomously. Of course, you can decide on the level of autonomy, ranging from low to high. But suppose that's the kind of world we'll live in five years from now, and then, linking it back to the bank and its complex processes: suppose you have agents as part of your team. The agents will think, will reason, will learn, will communicate with other agents. There's a McKinsey study from this year quoting that with agents you can multiply productivity times 20, one agent being worth 15 to 20 FTE. So if you have a process, say a complaints-management process, you could imagine that you have multiple agents and a human operator in the loop who only supervises the agents. That's where I see banks going within five years.
Louis:That means more time for us to enjoy the sun on the beach, no? Exactly, exactly, Louis.
Kristof:That's the good thing. That's the good thing.
Louis:Where do you want to bring your teams in the next five years?
Kristof:Good question, a difficult one. As I said, we are now 90-plus FTE and still growing, with a lot of expectations from senior management and the executive committee, of course. The thing is that today AI is there to support the business, and that's the right way to go, as the business still has to indicate where AI can help them. It will become pivotal for the 2030 plan, and I just explained the arrival of agents. Our main challenge is that the tribes start to incorporate the building blocks we have into their own processes. To make this oil stain spread, you have to be able to convince the tribes throughout the bank so that it actually gets used. That's where I would like to see us heading, again with all the new challenges coming up, agents and so on, because it will be very, very interesting to be on the front row of these evolutions. So that's for me the way to go.
Isabelle:To close off this super interesting podcast gently, we have a rapid-fire set of five questions. Are you ready?
Kristof:Fire.
Isabelle:Agentic AI, hype or real value?
Kristof:Real value.
Isabelle:What's the biggest misconception about data in banking, in your opinion?
Kristof:According to me, the biggest misconception about AI and data in banking is that people think we are not at the forefront. I think a lot of people would be very surprised at the technological skills and the level we currently have with regard to AI and data.
Isabelle:If you had to choose speed of adoption versus perfect governance, which matters most?
Kristof:Perfect governance, because it's the foundational part. Without that, you will not go far.
Isabelle:One book or mentor that shapes your leadership.
Kristof:Well, the book that I quoted. Yeah.
Isabelle:Exactly, which was Who Moved My Cheese. Okay. And the last one: if not banking, would we see you in aviation again?
Kristof:Absolutely. And I've always said it: aeronautics is a very particular sector. It's one big family, if I may call it like that. It's the most tangible product you can imagine, an airplane, and it's a fantastic sector to work in. The only thing is, as opposed to banks, they do not generate that much cash flow.
Isabelle:Okay, great. Thank you very much for your input. And uh, it's been a pleasure to have you on our podcast.
Kristof:Likewise, thank you very much for the invitation once again. And I hope the audience will take something with them.
Louis:Yeah, I'm sure they will. I really would like to thank you, Kristof. I think it was a captivating, fascinating discussion. I was really captivated; honestly, the world could have crashed around us and I would not have noticed. So thank you for sharing your insights, really, really interesting. I hope our listeners will enjoy it as much as I did.
Kristof:My pleasure, Louis. Thank you for the invitation once again.
Isabelle:If you're interested in knowing more about Movify, don't hesitate to visit our website at movify.com. Stay tuned for the next episode and don't forget to follow us on LinkedIn and Instagram.
Gael:See you on the next one.