EU watchdog sets AI guidelines for banks

The statement highlights the potential benefits and risks of AI, with a focus on ensuring that firms prioritize the best interests of their clients.


The European Securities and Markets Authority (ESMA) has issued its first statement on AI, emphasising that banks and investment firms in the EU must uphold boardroom responsibility and legal obligations to safeguard customers when using AI. ESMA’s guidance, aimed at entities regulated across the EU, outlines how these firms can integrate AI into their daily operations while complying with the EU’s MiFID securities law.

While AI offers opportunities to enhance investment strategies and client services, ESMA underscores its inherent risks, particularly concerning the protection of retail investors. The authority stresses that management bodies remain ultimately responsible for decisions, regardless of whether humans or AI-based tools make them. ESMA emphasises the importance of acting in clients’ best interests, irrespective of the tools firms choose to employ.

ESMA’s statement extends beyond the direct development or adoption of AI tools by financial institutions, also addressing the use of third-party AI technologies. Whether firms utilise platforms like ChatGPT or Google Bard with or without senior management’s direct knowledge, ESMA emphasises the need for management bodies to understand and oversee the application of AI technologies within their organisations.

ESMA’s guidance aligns with the forthcoming EU rules on AI, set to take effect next month, which establish a potential global standard for AI governance across various sectors. Additionally, efforts are underway at the global level, led by the Group of Seven (G7) economies, to establish safeguards for the safe and responsible development of AI technology.