Episode 39
Data for Subscriptions Podcast

AI-Powered Usage Data Management

Guest

Gaurav Dixit

Head of Data & AI Products at DigitalRoute


Episode description

In this episode of the Data for Subscriptions podcast, we discuss how AI-powered usage data management leverages predictive and generative AI capabilities to eliminate inefficiencies in reactive, manual quote-to-cash processes. More specifically, we deep dive into two use cases of AI: Anomaly Detection, which continuously monitors usage data for unusual patterns that could indicate issues such as revenue leakage, overcharging, system faults, and fraud; and Usage Forecasting, which is crucial for predicting future demand with precision, enabling businesses to optimize resources, adjust pricing strategies, and forecast revenues accurately.

Highlights

Why AI in Usage Data Management 

Gaurav emphasizes the evolution of usage data management over the past two decades and the continuous aim to optimize this field through technological advancements, notably AI. He articulates that the principal advantage of incorporating AI into usage data management lies in its proactive capabilities. AI doesn’t just automate processes but also ensures the cleanliness and reliability of data throughout the system. This predictive quality of AI tremendously reduces the probability of errors and their consequent ripple effects downstream, ultimately protecting firms from potential losses in revenues or reputational damage. The idea is to not only catch discrepancies as they occur but to preemptively predict and mitigate them before they escalate. Gaurav underscores that AI’s capability to enhance operational efficiency and its strategic integration into usage data management is a pivotal advancement for any data-driven organization. 

What is Anomaly Detection and What Challenge Does it Fix 

One of the first specific use cases of AI in usage data management discussed is anomaly detection. Gaurav elaborates on this concept through a practical example involving electric vehicle (EV) charging stations. Anomaly detection in this context is used to monitor and analyze the data collected from usage at various charging stations. The AI system is trained to recognize patterns and immediately detect deviations from them, such as unexpected surges or drops in usage or uncharacteristic charging times. The challenge this fixes is twofold: it helps identify potential operational failures or fraudulent activities quickly, thereby minimizing financial impact, and it helps maintain consistent service quality and customer satisfaction. By proactively addressing these anomalies, companies can ensure efficient and uninterrupted service delivery. This capability is particularly vital in industries where even minor discrepancies can lead to significant direct costs or customer trust issues.

What is Usage Forecasting and What Challenge Does it Fix 

The second key application of AI discussed is usage forecasting. Gaurav explains that forecasting involves using historical usage data to predict future consumption patterns. This aspect of AI is crucial for any company that deals with fluctuating demand and needs to manage resources efficiently. In a practical sense, usage forecasting allows companies to optimize inventory, adjust pricing models, or scale operations according to predicted demand levels. The challenge it addresses is the risk of resource misallocation—having too much or too little supply—which can be costly and logistically problematic. Effective forecasting enables companies to alleviate these risks by adjusting their strategic planning based on reliable predictions of future usage trends, thus ensuring that they remain responsive and competitive in dynamic markets. This is particularly impactful in the energy sector, where predicting usage peaks and troughs directly influences operational and financial efficiency. 

 

Transcript

AI-powered Usage Data Management 

Behdad 

Hello, and welcome to the Data for Subscriptions podcast, where we learn and explore how to run better subscription businesses. I’m your host, Behdad Banian, and today I have the pleasure of welcoming Gaurav Dixit to the show. Welcome, Gaurav.

Gaurav
Thank you.

Behdad
So Gaurav, you are Head of Data & AI Products at DigitalRoute, and we're going to talk about two exciting use cases that you've been working on over the last couple of months. But before we do that, why don't we start with you telling us a little bit more about yourself, so everybody gets to know your background and a bit more about what you're doing here at DigitalRoute.

Gaurav
Absolutely. Thank you. Thank you for having me here.
So my name is Gaurav Dixit, originally from India.
Now I have a Swedish passport, and I keep hopping between India and Sweden, with my wife and an eight-year-old kid. Professionally, I've always been excited by inflection points in industries and companies, and I've managed to place myself at the right place at the right time for those inflection points. I saw cloud adoption in the telecom industry in 2011-12, when OpenStack was happening, and I was part of building that business, product-wise and commercially. I saw AI/ML adoption happening at industrial scale in 2017-18, in big and small companies, and was part of building those businesses. And I see that DigitalRoute, and the space DigitalRoute is in, which is usage data management, is also ripe for that kind of inflection point, because the prerequisite for anything AI is data, high-quality data. And that's what DigitalRoute has been doing for 20-plus years: high-quality, granular data. The by-product of what the core product does is what everyone needs for AI. So it makes this a very interesting place and time, in terms of technology adoption, to be here trying to build data and AI capabilities on top of that for customers and partners.

Behdad
Now I'm really looking forward to going into those use cases, because what I appreciate about them is that AI is such a hype term, and every other company out there is just blasting AI on top of whatever they are doing or offering. It's not always very clear what it means, what it does, and what the tangible benefits are. And this is the part I look forward to you untangling for us. But just as a starting point, most people who listen to this podcast by now probably know what we refer to when we talk about usage data, but let's give a common default explanation so we're on the same page. Usage data is everything that tracks the usage of products or services, in predetermined units of measurement; you could look at it as an event log. Beyond the technical terms, maybe the simplest way to put it is: if you look at your mobile phone bill, every line item that says how many minutes and seconds you've been talking, how much data has been consumed and where it's been consumed, whether you've been travelling or roaming, all of that is the data we refer to. And maybe a more here-and-now example: for everybody who has an electric vehicle at home and is charging that car, that metered electricity is the equivalent, right? So when you talk about the fact that the starting point of AI is robust, quality data, that's what we're talking about when it comes to usage data. OK, cool. So why don't we get to the topic of AI. Usage data management is what DigitalRoute has been doing for the last 20 years. What is the big deal with infusing AI capabilities into the product now?

Gaurav
Yeah, very good question. And I would say we're not trying to infuse AI capabilities for the sake of it. Taking a step back: what are the problems we see in this space that cannot be solved the way we are doing things today, and what technology is available to address those challenges, for customers, for ourselves, for our partners? Everything starts with a business problem. There are certain business problems in this space; can they be solved using AI? That's the question. When we brainstorm these problems, for a lot of them we don't need AI; we can apply analytics, we can apply simple dashboarding. But there are certain problems which need the capabilities of AI and machine learning to be solved. So it's not a hammer looking for a nail; it's a real problem with real business value, and that's the application we want to build and solve using AI and machine learning. In this process we have identified 20-plus use cases with our customers, with our partners, and by dogfooding ourselves, and two of them are the ones we're going to talk about, but there are tons more. Usage data, as you say, is a very rich and important source of signals, because whatever is being consumed is a leading indicator for a lot of things that are going to happen, like user behaviour, which can predict churn, upsell, cross-sell, fraud. And anything going wrong in usage has a downstream effect on whatever comes after, because that usage is going to be billed and charged by someone, and the different partners who provide that service are going to do settlements between them. Anything that goes wrong there has a downstream effect. If those things are only detected when the bill is getting generated, then the invoices are delayed, you put a lot of manual effort into fixing them, and, most importantly, it leaves an unhappy customer. If you can detect those things in real time from the early indicators in usage, fix them, and nip the problem in the bud, that's the best thing that can happen. That's why this sphere of business problems in the usage data management space is so well suited for applying AI and machine learning: to do things proactively, automatically, with enough lead time to fix issues before invoices are generated.

Behdad
Yeah. And I think the specific point we want to speak about is the proactive part, because you could argue that a lot of the value you spoke about, automating business workflows, making sure you have clean data, making sure you don't get the negative consequences downstream, is what usage data management software is trying to do and has been doing for the last 20 years. The proactivity part is what you probably want to over-index on, and this is what the enhanced capabilities through AI can do. And the second aspect is that it's simply even more efficient. So should we jump into the first use case then, with anomaly detection?

Gaurav
So a good way to bring this to life would be with an example like the one you touched on at the beginning of the chat: an electric vehicle charger. You and I own an electric vehicle, and we go and charge it at a charging station owned and run by a company that has charging points there. Some of them are owned by that company and some by partners. We do the session, and then the system records the usage, consolidates it, sends me a bill, and does the partner settlement. That's the typical flow for this kind of consumption. A normal session is all very good, right? But that's not how life is. Charging points misbehave, data gets corrupted while being collected, aggregated, corrected and sent into the usage data management tools. Things happen, and that leads to a lot of issues in terms of delays, as I said, or wrong invoices, and a very big problem in the industry, which is revenue leakage.
On average, I would say something like 3 to 5% of revenue is lost. Just imagine the pain our customers feel when usage actually happened but was never charged for. All of this comes back to one question: can we predict these problems beforehand, the proactive angle? If this is an EV charging company and usage is happening in real time, can I proactively predict whether this is normal usage or fishy usage? Fishy, or anomalous, in data science language. Is it within the bounds, or is something odd here?
That oddity is the symptom: OK, there's something strange. It might be fraudulent behaviour, or the meter is misbehaving, or something else. That's the disease which needs to be fixed. But you treat the symptom as soon as you see it, rather than waiting. So if a session happens and that session is fishy, it is flagged as and when it happens, say on the 7th of October, rather than waiting for the 31st, the end of the month, for the bill to be generated, seeing what went wrong, and then manually fixing it using the tools in the usage data management suite. Can you, as the usage happens, proactively say this is anomalous, please look into it? You get lead time to solve it, manual effort is reduced, and a lot of things can be done automatically.
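For readers who want to picture what flagging a "fishy" session looks like in code, here is a minimal sketch using scikit-learn's IsolationForest on made-up charging-session features. The column names, values and contamination setting are illustrative assumptions, not DigitalRoute's actual implementation.

# Minimal sketch: flag anomalous EV charging sessions as they arrive.
# Feature names and data below are illustrative, not a vendor schema.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Historical sessions used to learn what "normal" looks like
history = pd.DataFrame({
    "kwh_delivered": [22.1, 18.4, 30.2, 25.0, 19.8, 27.3],
    "duration_min":  [41,   35,   58,   47,   38,   52],
    "price_per_kwh": [0.42, 0.42, 0.40, 0.42, 0.42, 0.41],
})

model = IsolationForest(contamination=0.02, random_state=42)
model.fit(history)

def screen_session(session: dict) -> bool:
    """Return True if the incoming session looks anomalous."""
    row = pd.DataFrame([session])[history.columns]
    return model.predict(row)[0] == -1  # -1 means outlier in scikit-learn

# A session with an implausible energy/duration ratio is flagged right away,
# instead of surfacing as a billing dispute at month end.
suspect = {"kwh_delivered": 240.0, "duration_min": 12, "price_per_kwh": 0.42}
if screen_session(suspect):
    print("Anomalous session: route to review before rating and billing")

The point of the sketch is the timing Gaurav describes: the check runs per session as usage is mediated, not once the invoice run starts.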

Behdad
So perhaps you can talk about one of the customers where we've deployed this, and, excluding the name of the customer, tell us the details of the scenario: when it comes to anomaly detection, what was the situation you dealt with, and furthermore, what was the improvement? But let's take it step by step.

 

Gaurav
Absolutely. So again, as I said, it's not that we build something, throw it at the wall and see what sticks. This is a real problem, and we co-innovated with the customer, sitting with them: are you facing this problem? Yes, we are. So we co-created the solution with them, solving their problem, and then embedded it into our offerings. The scenario is pretty similar: the customer is in the utility space, doing hot water and electricity consumption in a big country.
There were issues like this creeping up even when using all the power of usage data management. They could still only see that things had happened and then use the tool to try to fix them: throw more people at it, take our help, call partners, figure out what was going wrong, and so on and so forth.
So we started with them: let's see if we can solve this together using these techniques. If it works, very good; otherwise we learn something. OK, give us access to the past few months of data so we can look at it and try to develop techniques that predict these things before they happen. We did that together with them and constantly validated the business value: every time an issue occurred, if these capabilities had been in use, we could have predicted it beforehand. We could see it on the 15th of the month rather than waiting for the 31st and then having five people spend twelve hours a day on manual effort, with an average delay of nine days a year in issuing invoices, driving up days sales outstanding and potential revenue leakage.
Despite all that, because it's such a manual effort, bad-quality data still creeps in, so they were losing approximately, I would say, 3 to 5% to revenue leakage and other items. We worked with them, validated the offering, and deployed it for them to test and use, and we concluded that for a typical company with around 100 million in revenue, the net saving is in the range of 1.5 to 2 million a year, just by doing this.
So it kind of pays for itself even before it gets started. But the biggest factor is the assurance, because the revenue is flowing through this, right? You're assured that the invoices going out to customers go out at the right time and are as correct as you can possibly make them. So that's the use case, and we saw tangible value coming out of it.
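As a rough back-of-the-envelope check on the figures quoted here (3 to 5% leakage on roughly 100 million in revenue, with a net saving of 1.5 to 2 million a year), the arithmetic could look like this; the recovery-rate assumption is ours, purely for illustration.

# Back-of-the-envelope version of the savings figures quoted above.
revenue = 100_000_000                                   # ~100M annual revenue
low_leak, high_leak = 0.03 * revenue, 0.05 * revenue    # 3-5% leaking -> 3M-5M
print(f"Estimated leakage: ${low_leak:,.0f} - ${high_leak:,.0f}")

# Assumption (ours, for illustration): proactive detection recovers roughly
# half of the low-end leakage and about 40% of the high end, which lands in
# the quoted 1.5M-2M net-saving range.
savings_low, savings_high = 0.5 * low_leak, 0.4 * high_leak
print(f"Illustrative net annual saving: ${savings_low:,.0f} - ${savings_high:,.0f}")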

Behdad
Approximately how long did it take from, let's say, initiation of the co-creation to value realization? And by value realization I mean, for example, the number of days we were able to reduce in days outstanding and billing issues, and consequently the savings they had of about 1.5 to 2 million.

Gaurav
Absolutely. A big factor to consider is that the prerequisite for starting anything like this is access to the data, because you need to experiment with the data and see which models suit best for detecting anomalies. Do I use an SVM model? Do I use an isolation forest model? Then you rate them on different scores and say this one suits better; they all do things in a different way but solve the same problem. So it's a typical data science problem: you need access to the data, you need to prepare the data, then you start the model training, then you validate, and then you deploy and test. Now, it's not simply that the more data, the merrier; there are things to consider, like seasonality in the data. There are certain parts of the year; if it's in Sweden, for example, we can already see the weather turning now, and in October the heating will start, right? So if you only have data from, say, July to October for electricity or hot water usage, October will pop up as all anomalies, because it's too high. But if you extend it over a longer period, so that your model has seen enough Octobers, you level out that seasonality, because that is normal October behaviour, not anomalies. That's why a certain range of data needs to be there for the models to be accurate. We started with three months of data, because you have to start somewhere, in an iterative approach, with the understanding that model performance will improve as it sees more data, up to a point of diminishing returns. So start with three months and keep improving; at around 18 months it has seen the seasonalities, it understands what's happening, and we've seen the model performance improve. Now we only retrain it maybe every six months, when needed. If you discount that data collection part, those first three months of seeing things come up, then this sort of model development in a quick iterative approach, with validation and so on, typically takes about eight to ten weeks, because there is some experimentation which needs to happen. And then comes the bigger part, the actual deployment, because deployment is not only technology to be planned. Technology is the easy part. The bigger part is how we integrate into the business flow; technology is useless if it's not used, right? That's where the human angle of change management comes in, because that's where 70-80% of the effort goes. "I am used to doing it this way, I look at some dashboards, I look at my Excel, and now you're telling me to do it another way." That change management takes effort. You have to keep at it, have the right stakeholders, have the right champions at the customer to drive adoption. That is a slow and steady process, an asymptotic curve which will eventually get there, but it starts with slow adoption. And I've seen it myself personally: if you add value, the adoption comes, of course.
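To make the model "bake-off" Gaurav mentions concrete, here is a small sketch that compares an isolation forest and a one-class SVM against a handful of known past incidents and scores them. The synthetic data, features and scoring choice (F1 against labelled incidents) are assumptions for illustration only.

# Sketch of comparing candidate detectors on historical usage and scoring
# them against known past incidents. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_normal = rng.normal(loc=20.0, scale=3.0, size=(500, 2))   # typical usage
X_bad = rng.normal(loc=60.0, scale=5.0, size=(10, 2))       # known incidents
X_all = np.vstack([X_normal, X_bad])
y_true = np.array([0] * 500 + [1] * 10)                     # 1 = anomaly

candidates = {
    "isolation_forest": IsolationForest(contamination=0.02, random_state=0),
    "one_class_svm": OneClassSVM(nu=0.02, gamma="scale"),
}

for name, model in candidates.items():
    pred = model.fit(X_normal).predict(X_all)   # fit on normal history, score all
    y_pred = (pred == -1).astype(int)           # -1 means outlier in both APIs
    print(name, "F1 vs. known incidents:", round(f1_score(y_true, y_pred), 3))

In practice the same comparison would be re-run as the training window grows (three months, six months, eighteen months) so that seasonal peaks such as October heating stop showing up as false positives.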

Behdad
So if I play that back to you, because you said a lot of important things: the actual value realization when it comes to anomaly detection is that it can now be done proactively, at the instant it occurs.
That's a really important point, because you could argue we have been able to do anomaly detection for quite a while thanks to workflow automation and making sure the data is robust and of good quality. But the big thing is that it now happens at the instant, in real time or near real time depending on the service. Then, on value realization: deploying such an anomaly detection use case takes somewhere around two months, give or take. On top of that there is, of course, the implementation and a bit of cultural adaptation, or adoption. But the other point you made is the starting point of all of this, which I guess is a little bit of a catch-22: you need quality, structured data to feed the algorithm. In the case of this customer they already had that, which allowed us to start with a pretty modest range of data, three months, knowing it's a bit too short because of the seasonality aspect, and then work in stepwise iterations of adding more and more data. And at around 18 months, you say, you start to see something like diminishing returns, because by then the data set is so robust.

Gaurav
And then, OK, you will never reach 100%; the model will never be 100% accurate, and frankly it would be scary if that happened. So you have to draw a line in terms of time to value and make a judgement call. If it's reaching 80%, which is what we saw, 80% of the predictions coming out validated on the test sets with good confidence, it's good enough to start using. Then you keep giving feedback, keep labeling: this one was right, this one was wrong. We retrain the model, tune the parameters, and so on and so forth.
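The feedback-and-retrain loop described here can be sketched very simply: reviewers label each flagged anomaly as right or wrong, and retraining is triggered when recent precision drifts below the roughly 80% bar mentioned above. The window size, minimum label count and threshold are illustrative assumptions.

# Sketch of the label-and-retrain loop: reviewers confirm or reject flags,
# and retraining is scheduled when recent precision drops. Values illustrative.
from collections import deque

RECENT_WINDOW = 200        # how many labelled flags to look back on
RETRAIN_BELOW = 0.80       # the ~80% precision bar discussed above

feedback = deque(maxlen=RECENT_WINDOW)   # True = flag confirmed correct

def record_feedback(was_correct: bool) -> None:
    feedback.append(was_correct)

def needs_retraining() -> bool:
    if len(feedback) < 50:               # wait until enough labels exist
        return False
    precision = sum(feedback) / len(feedback)
    return precision < RETRAIN_BELOW

# Example: reviewers start rejecting too many flags, so retraining is flagged.
for label in [True] * 60 + [False] * 40:
    record_feedback(label)
if needs_retraining():
    print("Recent precision below 80% - retrain / re-tune the detector")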

Behdad
Now, unfortunately, we don't show the system here during the podcast, but for everybody who's listening, the really interesting part is when you actually see it. When the pop-ups come, it shows you in real time that here you have an issue. In the case of this customer, we're basically collecting data from multiple meters and sensors, seeing it in real time and being able to act on it. And even better, as you said, there is a set of automatic actions that get deployed. This is where the term AI becomes very tangible: concrete, understandable steps that help the business get better.

Gaurav
One more thing I would like to add: until now we have only spoken about the predictive side of AI, where we are predicting things. What we've also done, and again this is not adopting for the sake of adopting and checking a box, which you could call generative AI washing; that was never the intent and we never did that. But at a certain point we started getting questions like: can you also help explain it a bit? We are democratizing this usage data management space; there's some anomaly happening, but I want an explanation of why, in natural language. And when that happens, can you also automatically create a ticket with those assets, with that explanation, and so on? That's when we realized this is a fitting problem for applying generative AI. So we combined the predictive AI side with generative AI. Once the prediction has happened, without getting into the data science details, explain in natural language to the person looking at the pop-up why this is an anomaly, why the model says it is anomalous: this is the normal usage, this is what has been happening, and with all these parameters considered, this is an issue. So you understand it in natural language. We combined the predictive side with the generative side to enhance the value even further.
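A minimal sketch of wiring the predictive and generative pieces together might look like the following: once a record is flagged, its context is assembled into a plain-language explanation request and attached to an automatically created ticket. Here `llm_complete` and `create_ticket` are hypothetical stand-ins for whatever LLM client and ticketing integration are actually used; nothing below is a specific product API.

# Sketch: turn a flagged anomaly into a natural-language explanation and a ticket.
# `llm_complete` and `create_ticket` are hypothetical callables, not real APIs.
from typing import Callable

def build_explanation_prompt(session: dict, baseline: dict) -> str:
    return (
        "A usage record was flagged as anomalous.\n"
        f"Flagged session: {session}\n"
        f"Typical values for this charge point: {baseline}\n"
        "In two or three sentences, explain in plain language why this looks "
        "abnormal and what a billing analyst should check first."
    )

def explain_and_ticket(session: dict, baseline: dict,
                       llm_complete: Callable[[str], str],
                       create_ticket: Callable[[str, str], None]) -> None:
    explanation = llm_complete(build_explanation_prompt(session, baseline))
    # Attach the natural-language explanation to an automatically created ticket.
    create_ticket("Anomalous usage detected", explanation)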

 

Behdad

And this is extremely important, because oftentimes you want the business owners to be empowered so they can actually act on it. One of the problems historically has been that this kind of information is quite complex, and you typically end up sitting with an engineer or an IT architect of sorts to explain what it is and what you're supposed to do with it. So anybody who is a product owner or a business owner has been reliant on that. What you just said is that it is instant, both in informing you and in allowing you to take further action, which is extremely important. With that as a bridge, let's go to the other use case, because it's all about forecasting, and I think that's the Holy Grail. If there's one thing, when it comes to everything related to usage data or recurring billing and whatnot, that everybody seems to be saying, it's: ah, but you know, the forecasting element is a challenge. So I think this one in particular will be interesting for people to understand. Again, as we did with the previous one, let's explain what the problem is and why we set out to solve it.

Gaurav
Absolutely. So, as you said, forecasting has so many use cases, or I would say child use cases; it can be applied in so many places. And it's not that forecasting isn't being done today; people do forecast their revenues and other things. But a critical element, and this is the sweet spot of usage data management and where we at DigitalRoute come into play, is that we see granular, unaggregated data at the source, when it is produced, earlier than anyone else downstream. Otherwise, we aggregate it into a billing record, which is meant for billing, right? But that's where signals get lost in the aggregation process, and also in terms of time, because that happens at a later point. So you're sitting on such granular, raw usage data ahead of everything else. There was a constant challenge coming from customers in our discussions: we want more granular access and more granular usage forecasting. And we said, OK, that's exactly what we sit on. So based on the historical data, we trained the models, and again we tried different sorts of algorithms, as I mentioned before, in a time-series fashion of forecasting, from Prophet to others, to see which works best. And we realised accurate predictions can be made using this data. That's the genesis of it, and the implications are multi-fold. If you want to do revenue forecasting, revenue is nothing but usage multiplied by whatever rating has to be applied, P times Q. So if you're sitting on a forecast this month, you can do very clear FP&A, financial planning, understanding what the forecasted usage is. At the same time, we are doing real-time usage collection and monitoring as well. So here is the actual, and here is the forecast based on the past X months or years that your planning is built on; whenever a deviation happens beyond a certain threshold, it is flagged to the right stakeholders in the finance department or the customer success department: OK, there is something happening. Either that degradation has to be fixed, or, if usage is going up in a certain way, that is a potential upsell opportunity; has there been an expansion where we can sell something more? Those sorts of triggers are captured via this forecasting signal. Another very important use case, which actually came from the partners when we discussed it with them, is that they require very rich information for pricing experimentation. This is the forecasted usage; can I marry it with pricing experimentation engines to see that this flavour of pricing will lead to this sort of revenue? Other use cases are renewals coming up or expansions for customers: what discount level can I give this customer for the next two-year renewal so that the LTV justifies it? And the best source for that LTV calculation is the usage they have done over the past three years.
That is a Holy Grail for sales folks: offers the customer cannot refuse, where the company also doesn't lose money.
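Since Prophet is mentioned as one of the time-series approaches tried, here is a minimal sketch of usage forecasting and the "revenue = price times quantity" step. The daily usage series and the flat rate are made up for illustration, and the choice of a 90-day horizon is an assumption.

# Minimal usage-forecasting sketch: fit a time-series model on historical
# daily usage, forecast ahead, and turn forecast quantity into forecast
# revenue (P x Q). Data and rate are illustrative.
import pandas as pd
from prophet import Prophet   # assumes the `prophet` package is installed

history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=540, freq="D"),
    "y":  [100 + (i % 365) * 0.1 for i in range(540)],   # toy daily usage in kWh
})

model = Prophet(yearly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=90)           # forecast 90 days ahead
forecast = model.predict(future)[["ds", "yhat"]].tail(90).copy()

PRICE_PER_KWH = 0.42                                        # illustrative flat rate
forecast["revenue_forecast"] = forecast["yhat"] * PRICE_PER_KWH
print(forecast.head())

Comparing the actual usage coming in each day against `yhat`, and alerting when the gap exceeds a threshold, is the deviation-flagging step described above.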

 

Behdad
By LTV you mean long-term value?

Gaurav
Yeah, long term value.

Behdad
I'm going to go back to another thing you mentioned when you started to explain the forecasting use case: you said that signals get lost. Can you explain, so everybody understands, what you mean by that? What are those signals, and why do they get lost?

Gaurav
So, for example, in the process of converting raw usage data into billing data, we do a lot, because the purpose is to end up with something that can be multiplied by a number so a bill is generated. That means you aggregate the usage along different dimensions. Going back to your telecom usage, there are different things being measured, but ultimately what comes out is one golden record, as you could call it, or multiple golden records. In that process a lot of aggregation happens, a lot of normalization happens, different techniques which are typical usage data management techniques. A good example would be to think of small Lego pieces: there are small Lego pieces which, if you combine them all, make an Iron Man, the Lego Marvel figure. But now there's only Iron Man, and you can only do certain things with Iron Man. If I get access to those Lego pieces, I can do much more. I can create a Hulk.
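To make the "Lego pieces" point concrete, here is a tiny illustration of how raw usage events collapse into a billing record: the aggregate is exactly what invoicing needs, but the per-event detail that anomaly detection and forecasting feed on disappears. Column names and values are invented.

# Illustration: raw usage events vs. the aggregated "golden record" for billing.
import pandas as pd

raw_events = pd.DataFrame({
    "account":      ["A42"] * 4,
    "timestamp":    pd.to_datetime(["2024-10-01 08:10", "2024-10-03 19:02",
                                    "2024-10-12 07:55", "2024-10-28 23:40"]),
    "kwh":          [21.5, 19.8, 2.1, 88.0],
    "charge_point": ["CP-7", "CP-7", "CP-9", "CP-9"],
})

# The billing view: one aggregated line per account and month.
billing_record = (raw_events
                  .assign(month=raw_events["timestamp"].dt.to_period("M"))
                  .groupby(["account", "month"], as_index=False)["kwh"].sum())
print(billing_record)   # total kWh only - the odd 88.0 kWh night session is invisible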

Behdad
You can derive more insights from it, exactly. Because I think the point you want to make is that so far we've been using aggregated, structured, quality data for billing, and for that you basically massage it so you get zero errors, meaning zero revenue leakage, for the purpose of billing you and me for our mobile subscription, for example. But your point, going back to how you were explaining forecasting, is that there's so much insight sitting in that usage data. So if we go back to one of the scenarios you mentioned: when we see consumption suddenly ramping up, that's one of those signals you're referring to.

 

Gaurav
Or going down, which is a potential churn, or there are a lot of support issues lying there unsolved and the customer is not able to use the product. These are all triggers for customer success and sales to take action, to either improve the bottom line or increase the top line.

Behdad
Yeah, but if we take it back to the core of the use case, forecasting: what you're saying is that forecasting is something businesses have been doing for a long while; there are different methods, but let's just say it's been done. The common denominator between this use case and anomaly detection is actually the same: with the amount of structured, quality data you have access to, you can act on it immediately. So if you do the forecasting based on the data you've had for a period, let's say 12 months, the difference with this use case is that the moment there is any signal, you are able to act on it.

 

Gaurav
Exactly. Close the loop, as it's called in data science language, right? Predictions are worth nothing if you cannot close the loop. You need enough lead time, and the predictions need to be made far enough in advance, so you can close the loop. I mean, if someone is going to churn and you tell me at the exact moment they're churning, OK, what do I do? I cannot go back with a promotional offer then. But if the usage indicates there is a decline, and the customer has been declining in usage for the past three billing periods, and maybe this is the last straw they're holding on by, you can still act on that. That's one example; there would be others. The point is that to close the loop you need enough lead time. That's why being proactive in this space is so important.
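The "declining for the past three billing periods" trigger can be sketched in a few lines: raise a churn-risk signal early enough for customer success to act. The function name, the three-period rule and the sample figures are illustrative assumptions, not a product rule.

# Sketch of an early churn-risk trigger based on declining usage.
def declining_for(usage_by_period: list[float], periods: int = 3) -> bool:
    """True if usage fell in each of the last `periods` period-over-period steps."""
    if len(usage_by_period) < periods + 1:
        return False
    recent = usage_by_period[-(periods + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

account_usage = [410.0, 395.0, 370.0, 320.0, 250.0]   # last five billing periods
if declining_for(account_usage):
    print("Churn-risk signal: flag to customer success before renewal")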

 

Behdad
Yeah, because a lot of businesses have now tried, or are on the verge of trying, to go into some form of usage-based pricing model. And one of the factors holding businesses back, everybody can do their own research on this, is that you can't really trust the forecasting when you move to usage-based, consumption-based pricing. For any business that has tried it: well, we have an idea of what it ought to be, but then it can turn out very different, and suddenly, when you've done all your planning and resource planning accordingly, you can't trust it. That's one of the barriers holding businesses back, and it's still a bit of a catch-22, because we know that from a customer standpoint there's enormous value in being able to pay for what you consume. So I think what you are solving with this use case is extremely important, because it addresses the core of the barrier that prevents companies from really deploying this with confidence, I'd say.

 

Gaurav
Absolutely, absolutely. It's the right mix of being fair, because I pay for what I use, but at the same time having visibility into what is coming, from the service provider perspective.

Behdad
So again, here, if I were to ask you about time to value on something like this, I would assume you'd say the scenario is very similar to the previous use case with anomaly detection, because the backbone is the same, right?

Gaurav
So the example we discussed was the first case, the first time we developed this capability. If I have to go to another customer with the same capability, that part is done. With anomalies in usage, we know the models are in place, the data pipelines are in place, we know what data to expect. We go to a customer, and it will probably require some sort of data transformation effort, which is not huge, because ultimately what you want is: what entity was consumed, by whom, when, how much, plus some other details and the historical data. For any usage system, for anyone doing any sort of usage, that data is there. So some modification of which columns to pick might be needed, but it's a small effort to do it again, and also to do more use cases with the same customer. For example, with this utility company we're talking about, now that the plumbing is in place I can go and do churn, I can do upsell, I can do fraud prediction, I can do customer segmentation based on usage. All those things are possible because the plumbing, the initial setup that was needed, is in place.
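The "what was consumed, by whom, when, how much" minimum described here can be pictured as a small canonical record that new sources are mapped into, so the existing models and pipelines can be reused with only a column mapping. The field names and the mapping helper below are illustrative, not a published schema.

# Sketch of a canonical usage record plus a small column-mapping helper.
# Field names are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageRecord:
    account_id: str        # who consumed
    product: str           # what was consumed (e.g. "ev_charging", "hot_water")
    quantity: float        # how much, in the product's unit
    unit: str              # e.g. "kWh", "m3"
    occurred_at: datetime  # when

def from_source_row(row: dict, column_map: dict) -> UsageRecord:
    """Map a source-specific row onto the canonical shape via a small column map."""
    return UsageRecord(
        account_id=str(row[column_map["account_id"]]),
        product=str(row[column_map["product"]]),
        quantity=float(row[column_map["quantity"]]),
        unit=str(row[column_map["unit"]]),
        occurred_at=row[column_map["occurred_at"]],
    )

# Example: a utility export with different column names needs only the mapping.
mapping = {"account_id": "cust_no", "product": "service", "quantity": "reading",
           "unit": "uom", "occurred_at": "read_time"}
record = from_source_row(
    {"cust_no": "A42", "service": "hot_water", "reading": 3.2,
     "uom": "m3", "read_time": datetime(2024, 10, 7, 8, 30)},
    mapping,
)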

 

Behdad
It opens up a huge palette of additional value that can be tapped into through all of these kinds of scenarios and use cases. That's really good. These two use cases are exciting to me because they're very concrete, and also because we're seeing them in action right now with customers; they're really helping them take steps forward in areas they've had on the radar for a while but have found difficult. So, just as a final question as we wrap up these two use cases and deployments: you've already mentioned a couple of challenges we've faced beyond the technology, deployment and co-creation, not necessarily challenges, just work that needs to get done, and you mentioned adoption. Are there any other challenges or obstacles we haven't touched on that would be worth highlighting, things that are good to keep in mind?

Gaurav
That's a good one. Technology is one, but technology is not really a challenge; you have to do it, and that's what we do. A slightly bigger challenge is the cultural aspect, the change management side of it: doing things in a different way. Equally important is integration into existing workflows and business flows. If someone is used to solving a problem in a certain way, going on Slack and posting something, you cannot just tell them there is now something else they should go and ask instead. That takes a lot of effort, it will take time, and otherwise the adoption will be low. Another one is not a challenge but an important factor, and I would say that's where things start. I keep harping on it because it's so important: put a business value on whatever you're doing. Otherwise, because of the focus on this area, a lot of things get approved from a budget perspective but never go beyond being innovation; they live and die in an innovation garage. For them to be adopted, the ROI has to be proven, which means you start every problem with: this is the problem we are solving, and you align with the right stakeholders on the business value it will bring. Then the adoption happens, because they approve it and it gets adopted into business as usual.

Behdad
Yeah, I'll just add one aspect from my side, reflecting on these cases: a lot of companies are rushing right now because they don't want to miss the boat on using AI to improve their business. The advice I would send along is to make sure, as early as you can, that you set yourself up to get structured, quality data, because essentially everything we've spoken about today could be summarized as: you can tap into this literally tomorrow, assuming you have structured, quality data. I think that is maybe not looked at and stressed enough, because as a business, if you're going to tap into these use cases and you don't have that, you have a pretty long runway of a costly project, a little bit scary and filled with risk, to get there. So make sure that is in place, make sure you have the right tooling and software, so that you can get on the AI bandwagon as soon as possible.

Gaurav
Yeah, that's hygiene, right? AI-ready data, as close to the source as it is, is the hygiene factor for getting started with this.

Behdad
Excellent. All right, Gaurav, thank you very much for the discussion. I've really enjoyed it and appreciate your time, and thank you to everybody listening to this conversation. For everybody who is curious to learn more, because there's a lot more depth than what we shared here today with Gaurav: drop a comment saying "AI" in the comments field if you're following the LinkedIn activity, reach us via LinkedIn with a message, or simply go to digitalroute.com. Either way, if you're curious to learn more, reach out and we'll make sure to offer you much more content and insight than we were able to during this podcast. Gaurav, thank you again.

Gaurav
Thank you.