Bank of England paper finds deep learning models risky for financial firms
The researchers highlight the inherent risks of using these deep learning models in financial markets, regulation, supervision, and policymaking
The Bank of England (BoE) published a working paper arguing that deep learning models are “fragile” and could pose significant risks to the financial sector.
Increasingly prevalent in the finance industry, deep learning, a subset of machine learning, is an artificial intelligence (AI) method based on neural networks structured into multiple layers. Given the opacity of the models deployed for internal and consumer-facing use, there is growing concern over the transparency, reliability, and trustworthiness of their results.
A team of researchers from the Bank of England, Reserve Bank of Australia, and University College London studied not only the predictions of deep learning models but also the explanations of how those models arrive at their outcomes.
The paper suggests that deep learning models' reliance on extensive datasets makes them fragile: small changes in the underlying data can lead to disproportionately different outputs.
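The kind of fragility described can be sketched with a toy, two-feature linear classifier. This is an illustrative example with hypothetical numbers, not the researchers' actual models: when learned weights are large and nearly offsetting, a tiny perturbation to one input feature flips the predicted class.

```python
# Toy illustration (not from the BoE paper) of model fragility:
# large, nearly offsetting weights amplify small input changes.
# All weights and inputs below are hypothetical.

def score(x, weights):
    # Linear model: weighted sum of the input features.
    return sum(w * xi for w, xi in zip(weights, x))

def classify(x, weights):
    # Decision rule: positive score -> class 1, otherwise class 0.
    return 1 if score(x, weights) > 0 else 0

weights = [100.0, -99.0]   # large, nearly offsetting weights
x = [1.0, 1.0]             # score = 1.0   -> class 1
x_perturbed = [1.0, 1.02]  # score = -0.98 -> class 0

print(classify(x, weights), classify(x_perturbed, weights))  # 1 0
```

A 2% change in one feature reverses the decision, which is the sort of sensitivity to data changes the paper flags as risky in high-stakes financial settings.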
Why does it matter?
Because these models are increasingly used in financial markets, regulation, supervision, and policymaking, the researchers see their fragility as an inherent risk across the sector. They suggest that more research is needed to design alternative methods that could improve the trustworthiness and explainability of deep learning models. Additionally, financial institutions should promote human-AI collaboration to strengthen and validate risk-assessment processes and to reduce the systemic risks associated with the fragility of these models.