Artificial Intelligence for payments: the most crucial requirement
In the context of payments, advances in technology are having a big impact in the Artificial Intelligence (AI) space. However, there's a danger that the industry rushes into declaring which algorithm, technique or technology is having the most impact on the risks AI is designed to combat.
The sad reality is that there isn't really anything new in AI and payments; what is new is the combination of algorithms. Think of it like a recipe: the ingredients are being tweaked, and it does taste a little better each time.
Yes, the use of cloud and GPU technology makes a difference. Previously, the sheer bulk of processing could not be financially justified; now, thanks to cloud technologies, this avenue is open to all.
Giving AI good data
However, none of this works without one simple thing: good data. The issue is that the source of all good data is still, at this stage at least, human!
If you are using AI and a fraudulent transaction is detected, further interaction and clarification with a human will be required to confirm it is 100% fraudulent. That takes time and creates a 'thinking time' delay before models can adapt.
Most of the risk-based AI in payments involves supervised classification problems, e.g. is this a fraud or not a fraud? The supervised part means the algorithm is provided with a set of known states, i.e. fraud or not fraud, and can then determine which classification a new transaction belongs to. So, the key to accurate models is accurate data. However, there is another dimension: the timeliness of that data.
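To make the supervised-classification idea concrete, here is a minimal sketch in Python: a tiny logistic-regression model trained on transactions already labelled fraud (1) or not fraud (0), which then classifies new, unseen transactions. The features, data and thresholds are hypothetical illustrations, not a production fraud model.

```python
import math

# Hypothetical labelled training data:
# (normalised_amount, foreign_merchant_flag) -> label (1 = fraud, 0 = not fraud)
labelled = [
    ((0.10, 0.0), 0), ((0.20, 0.0), 0), ((0.15, 0.0), 0),
    ((0.90, 1.0), 1), ((0.80, 1.0), 1), ((0.95, 1.0), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a tiny logistic-regression model with plain gradient descent.
w = [0.0, 0.0]
b = 0.0
for _ in range(2000):
    for (x1, x2), y in labelled:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= 0.1 * err * x1
        w[1] -= 0.1 * err * x2
        b -= 0.1 * err

def classify(x1, x2):
    """Assign a new transaction to one of the known states."""
    return "fraud" if sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5 else "not fraud"

print(classify(0.85, 1.0))  # resembles the fraudulent training examples
print(classify(0.12, 0.0))  # resembles the legitimate training examples
```

The point of the sketch is the dependency, not the algorithm: the model is only as good as the labelled examples it was given, which is exactly where data accuracy and timeliness come in.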
A question I'm often asked is: can AI models adapt in real time? The answer is yes, they can, if they have the data in real time; but the time it takes to confirm fraud can be weeks.
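The "yes, if the data arrives" caveat can be sketched as an online learner: a model whose weights update the instant a confirmed label arrives. The update itself is cheap; the bottleneck is the human confirmation that may lag by weeks. The class name, features and learning rate below are illustrative assumptions.

```python
import math

class OnlineFraudModel:
    """A logistic model updated one confirmed transaction at a time."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        """Probability-like fraud score for a transaction's features."""
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        """One gradient step, run as soon as a human confirms the label."""
        err = self.score(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineFraudModel(n_features=2)

# Confirmed labels trickle in (in reality, possibly weeks after the
# transaction happened); each one immediately adapts the model.
for x, y in [((0.9, 1.0), 1), ((0.1, 0.0), 0)] * 50:
    model.update(x, y)

print(model.score((0.9, 1.0)))  # high: resembles confirmed fraud
print(model.score((0.1, 0.0)))  # low: resembles confirmed legitimate activity
```

Note that nothing in the code waits: real-time adaptation is mechanically trivial. The delay lives entirely upstream, in how long the `update` calls take to arrive.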
Fraud patterns come in two types. Firstly, there are tried and tested frauds, in which patterns repeat over long periods of time and are, therefore, relatively easy to detect. Secondly, there are innovative frauds, where something new is tried. (And let's be honest, there is always a grudging respect when the bad guys try something that hasn't been thought of before.) This is the data that needs to be given to models quickly to prevent losses from innovative fraud.
So, what this means is that the most time-consuming aspect of the process for AI is getting accurate data for new occurrences so the models can be adjusted and retrained. To achieve that, there has to be a feedback loop: when a human has decided there is an issue, that data can be flagged so it can be classified by a model. Typically, the two slowest parts of that process are getting the humans to respond and getting the new data into the models.
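The feedback loop described above can be sketched as a simple labelling queue: analysts confirm cases (the slow, human part), and confirmed labels are drained in batches into the modelling or remodelling stage. The function names, record fields and batch trigger are illustrative assumptions, not a specific product's API.

```python
from collections import deque
from datetime import datetime, timezone

confirmed_labels = deque()  # filled as human analysts confirm cases

def analyst_confirms(txn_id, is_fraud):
    """The human decision point: typically the slowest link in the loop."""
    confirmed_labels.append({
        "txn_id": txn_id,
        "label": "fraud" if is_fraud else "not_fraud",
        "confirmed_at": datetime.now(timezone.utc),
    })

def drain_for_retraining(batch_size=100):
    """Hand confirmed labels to the modelling stage; returns the batch."""
    batch = []
    while confirmed_labels and len(batch) < batch_size:
        batch.append(confirmed_labels.popleft())
    return batch

analyst_confirms("txn-001", True)
analyst_confirms("txn-002", False)
batch = drain_for_retraining()
print(len(batch))  # labelled examples now ready for remodelling
```

Measuring the gap between a transaction's timestamp and its `confirmed_at` is one simple way to quantify how good your feedback loop actually is.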
How good is your feedback loop? How quickly can it reach the modelling or remodelling stage? How long does that take?
This is the part of the chain with the greatest overall impact on system effectiveness, and arguably it matters more than the algorithms themselves.