
Mar 12, 2021

Hello and welcome to the new episode of the Risk Management Show brought to you by Global Risk Community.

This is your host Boris Agranovich and our guest today is Terisa Roberts, Director and Global Solution Lead for Risk Modeling and Decisioning at SAS.

In this interview we discussed the following questions:

How did COVID impact risk models?

Based on your work with regulators and firms around the world, what are some of the kinds of AI/ML applications that are delivering tangible benefit?

With new technologies come new risks. What are the main challenges that firms are facing in the use of newer advanced technologies in risk management?

Are existing model governance frameworks sufficient for AI/ML?

and more...

___ More detailed description below ___

Terisa explains in a few sentences what her team at SAS has been up to these days: SAS has always been a leader in providing world-class analytical solutions for a wide range of business applications, including risk management, fraud detection and customer due diligence. What is exciting for her at the moment is that they have started to make their solutions available on SAS Viya, which is a cloud-native platform. That opens up a whole lot of new opportunities to innovate.

But the COVID-19 pandemic has massively disrupted our lives. How did it impact risk models? Terisa believes there is a pre-COVID saying that "essentially, all models are wrong, but some are useful", which can now be transformed into "essentially, all models are wrong, and some are useless". And yes, most turned out to be useless. That is simply because the model inputs moved outside the calibration range of these models. The models had never seen the narrative of a pandemic play out. We saw lockdown measures in many countries that had an impact on unemployment, on movement restrictions, et cetera. We as individuals have been massively disrupted, but so have the business models of companies.
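The "outside the calibration range" point can be made concrete with a minimal sketch. This is not from the interview: the variable names (unemployment_rate, gdp_growth) and the figures are purely illustrative assumptions about pre-pandemic calibration data versus pandemic-era inputs.

```python
import pandas as pd

def out_of_range_report(train: pd.DataFrame, current: pd.DataFrame) -> pd.DataFrame:
    """Share of current observations that fall outside the range seen at calibration."""
    rows = []
    for col in train.columns:
        lo, hi = train[col].min(), train[col].max()
        outside = ((current[col] < lo) | (current[col] > hi)).mean()
        rows.append({"variable": col, "train_min": lo, "train_max": hi,
                     "share_outside_range": outside})
    return pd.DataFrame(rows)

# Illustrative pre-pandemic calibration data vs. pandemic-era inputs (made-up numbers)
train = pd.DataFrame({"unemployment_rate": [3.5, 4.0, 4.5, 5.0],
                      "gdp_growth": [1.5, 2.0, 2.5, 3.0]})
current = pd.DataFrame({"unemployment_rate": [4.2, 9.8, 14.7],
                        "gdp_growth": [0.5, -5.0, -9.1]})
print(out_of_range_report(train, current))
```

A report like this simply flags that the model is being asked to extrapolate; it does not by itself say how badly the model's predictions will degrade.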

So we have seen disruption at a global scale. What we noticed on the risk management front is that not all industries were impacted in the same way: some were hit harder than others, and the recovery path is still uncertain. She just wants to add that we saw both the traditional statistical and econometric models break down, as well as the newer algorithms such as machine learning.

Both types of models had trouble coping, so financial institutions had to apply their own human judgment and put adjustments and management overlays in place on top of the models.

Based on Terisa’s work with regulators and financial firms around the world, what are some of the kinds of AI/ML applications that are delivering tangible benefits? What she sees with the use of AI and machine learning for risk management is that it is really good at jobs that require repetition and that can be automated. AI and machine learning are good at trawling through volumes and volumes of structured and unstructured data and picking up patterns that the human eye might miss. It is possibly less glamorous than what we see in the movies, but it is definitely starting to pay off and deliver tangible benefits in areas such as credit scoring.

So being able to offer access to credit to minority groups where perhaps credit bureau files are not available, and also in the areas of fraud detection and cybersecurity.

And what did Terisa learn specifically as a risk management practitioner? Because with new technologies come new risks, what are the main challenges that firms are facing in the use of these new technologies? In the area of risk management, she thinks there is still a lot of caution in the industry. There are known risks associated with the use of these more complex and sophisticated algorithms, such as a lack of transparency: it is not so easy to explain to the various stakeholders how the model works. But great strides are also being made in addressing the explainability of these algorithms.

The other challenge often associated with AI and machine learning is that it may perpetuate and amplify bias that might be present in your training data. Historically we might have some individual and societal biases baked into the data, and these algorithms are not smart enough to adjust for them. So we might be amplifying those biases with their use, and we have seen this in the news: with facial recognition systems, and in the credit space, where in approving credit cards some big technology companies gave women a much lower credit limit because of the historical data. Having said that, bias is definitely a concern, but there are ways to address it.

And once bias has been addressed in these models, AI and machine learning might even help us make more consistent, data-driven decisions. So it is definitely not an insurmountable challenge.
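As an illustration of how a bias concern like the credit-limit example might be checked, here is a minimal sketch of an approval-rate ratio comparison across groups. The data, column names and the "four-fifths"-style reading of the result are assumptions for illustration only, not a method described in the interview.

```python
import pandas as pd

def approval_rate_ratio(decisions: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate per group, divided by the highest group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Illustrative decisions; a ratio well below 1.0 for a group is a signal to investigate,
# not proof of unfair treatment on its own
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,    0,   0,   1,   1,   1,   0,   1],
})
print(approval_rate_ratio(decisions, "gender", "approved"))
```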

But are existing model governance frameworks sufficient for AI/ML, or should new ones be developed and approved by different bodies? The answer to that is yes; Terisa has had people ask: how about AI validating AI? Can we use AI models to also monitor these models? Of course, the shelf life of some of these models might be shorter than that of traditional models because of the sophistication of the patterns they have detected, and perhaps their robustness. So firms need to step up their levels of validation to make room for AI and machine learning, and that is putting a lot more demand on validation teams.

Feature engineering needs to be validated as well, along with the hyperparameter tuning methods that have been employed. So the use of AI and machine learning puts a lot more demand on model governance, and that demand is certainly not going away.
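One common way validation and monitoring teams check whether a model's score distribution has drifted since calibration is a population stability index (PSI). The interview does not prescribe any specific metric, so the sketch below, including the simulated score distributions and the rule-of-thumb threshold, is only an illustrative assumption.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (expected) and a monitoring (actual) sample of scores."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # open-ended outer bins
    e_counts, _ = np.histogram(expected, bins=cuts)
    a_counts, _ = np.histogram(actual, bins=cuts)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)                # scores at the time the model was approved
monitoring = rng.beta(2, 3, 10_000)              # a shifted score distribution later on
psi = population_stability_index(baseline, monitoring)
print(f"PSI = {psi:.3f}  (a common rule of thumb: > 0.25 often triggers revalidation)")
```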

So how can firms take a more strategic approach to the use of this new technology in risk management, in her opinion? What Terisa and her team have seen is that the process it takes to put models, whether traditional or more innovative, into production is still long: on average six to nine months for the financial services industry. So, for banks to take new innovation in modeling seriously, they would need to rethink the model life cycle process and look for efficiency gains, perhaps in standardizing the data that goes into the models as well as in the deployment.

Ideally, if you think about it, a traditional model might have a handful of input variables, whereas an AI or machine learning model can easily have hundreds or thousands of input variables. So if you have a manual process for deploying those models into your decision architectures, for example, that might not be scalable as the models grow in number.
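To illustrate why manual deployment stops scaling once a model has hundreds of inputs, here is a hypothetical sketch that packages a scoring function together with the exact feature list it expects, so the input check happens programmatically rather than by hand. The class, names and stand-in scoring function are invented for illustration and are not SAS functionality or anything described in the interview.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DeployableModel:
    """Bundle a scoring function with the feature list it was trained on."""
    name: str
    features: List[str]
    score: Callable[[Dict[str, float]], float]

    def predict(self, record: Dict[str, float]) -> float:
        missing = [f for f in self.features if f not in record]
        if missing:
            raise ValueError(f"{self.name}: missing inputs {missing[:5]}...")
        return self.score({f: record[f] for f in self.features})

# Illustrative: hundreds of engineered features handled programmatically, not mapped by hand
features = [f"feature_{i:03d}" for i in range(300)]
model = DeployableModel(name="credit_score_ml",
                        features=features,
                        score=lambda x: sum(x.values()) / len(x))  # stand-in for a real model
record = {f: 0.5 for f in features}
print(model.predict(record))
```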

So what should risk managers start doing in this field of new AI/ML technology that they are not doing, or, the other way around, what should they stop doing that they are doing now? Terisa thinks there is a hesitation to embrace new technologies. She understands that, because there is also a lot of hype around computer vision and robots doing people's work, but that is not the case; that is more science fiction than fact. If we look at the science facts, there are areas where innovation can make a real difference, and we see it in customer experience.

By having these models to estimate income, for example, you can gain efficiency and get much more accurate data. So, rather than replacing all your traditional models with machine learning, look at where AI and machine learning models can give you additional value, perhaps in an auxiliary function: in your data quality processes, as well as in the feature engineering process, which is typically quite a manual part of the model development life cycle.

Just to summarize the major takeaways: if someone listening to this interview would like to walk away with one or two major takeaways, what would they be from Terisa’s point of view?

When it comes to AI and machine learning, we need to rethink our architecture: the infrastructure behind the data management, the modeling, as well as the deployment of the models, making sure that the infrastructure is future-proof and can handle the sophistication of these models. And we should look for areas where manual processes can potentially be improved with automation, for those efficiency gains and for scaling, because the operational efficiencies are quite substantial.