FESE High-level principles on artificial intelligence

Digital finance | 27 Jun 25

FESE welcomes the European Commission’s innovation-friendly approach and the recently published AI Continent Action Plan, an initiative aimed at positioning Europe as a global leader in AI innovation. Financial market infrastructures (FMIs) are actively exploring AI to enhance their internal workflows, strengthen risk management, and improve market surveillance, ensuring more resilient and adaptive financial ecosystems. FESE recommends the following key principles that we believe should shape AI regulation and governance.


  1. Principle-based approach
  • AI governance should adhere to a principle-based approach. For example, the “same business, same risks, same rules” principle is fundamental to constructing AI rules and should be applied consistently. In our view, existing regulatory frameworks in the EU, such as the AI Act and the Digital Operational Resilience Act (DORA), are sufficient to mitigate potential AI risks and regulate most AI-driven activities within the European financial sector.
  2. Solid data governance & cloud policies
  • As a prerequisite to developing AI models, financial industry participants need to establish robust data governance. Policymakers should also approach cloud regulation in a balanced manner, recognising its potential impact on AI advancement rather than unintentionally restricting it.
  3. Simplification & clarification of the AI Act’s requirements
  • Simplification of the AI Act is encouraged, with the aim of reducing administrative burdens and reporting obligations, thereby fostering innovation and enhancing the global competitiveness of the EU.
  4. “Human in the loop” approach
  • To mitigate potential financial risks, it is important to maintain human oversight, i.e. a “human in the loop” approach, so that final decision-making remains with humans.
  5. Innovation-friendly approach
  • Regulatory frameworks should follow an innovation-friendly approach and enable responsible AI development and deployment without excessive barriers. As technology rapidly evolves, the regulatory environment should be designed with built-in flexibility, allowing for regular reassessment and adaptation of rules.
  6. Ethical & ESG considerations
  • A comprehensive ethical framework should be integrated into AI regulation to ensure transparency, fairness, accountability, and the protection of fundamental rights. It is also important to enhance collaboration between stakeholders to address the environmental footprint of AI.
  7. International cooperation
  • Structured cooperation between policymakers, regulators, and market participants is essential to ensure the responsible integration of AI technologies in securities markets, as well as a common understanding of their potential risks.