
CSSF position on artificial intelligence innovations

Nov 05, 2021 - Newsflashes - Publications

Luxembourg currently has six open initiatives related to Artificial Intelligence. Meanwhile, the legislator has been continuously adjusting the legal order to accommodate different technologies, most recently through the Law of 22 January 2021 amending the law of 5 April 1993 on the financial sector and the law of 6 April 2013 on dematerialised securities. The financial sector regulator, the Commission de Surveillance du Secteur Financier (the “CSSF”), has also remained attentive to financial innovations, including Artificial Intelligence.

On 8 February 2021, the CSSF published a 15-page document on its website entitled “Financial Innovation: a challenge and an ambition for the CSSF” (the “Financial Innovation Document”). Its aim is “to further describing the involvement and the work of the CSSF regarding the Financial Innovation”, given that the CSSF “regularly publishes documents on financial innovation topics such as Artificial Intelligence, Cloud Computing, Robo-Advice and Digital on-boarding”.

These topics are only summarily addressed in the final pages of the Financial Innovation Document. Its six paragraphs on “Artificial Intelligence” simply reprise a three-year-old press release (Press release 18/41) and do not provide any definition of “Artificial Intelligence”. Practical use cases for the financial sector and discussions of possible opportunities or risks related to AI are likewise absent. Instead, the Financial Innovation Document refers to a white paper on artificial intelligence prepared and published by the CSSF on its website.

CSSF’s White Paper


This White Paper, entitled “White Paper – Artificial Intelligence: Opportunities, risks and recommendations for the financial sector”, dates from 21 December 2018 and has never been updated (the “White Paper”). Its 80 pages (including annexes) are available only in English and have no binding value for CSSF-supervised institutions. Their sole purpose is therefore to provide “the foundations for a constructive dialogue with all the stakeholders of the financial sector for a deeper understanding of the practical implementations of AI technology and its implications”.

The White Paper provides a definition of AI and a short historical context, describes the existing AI types (narrow and broad AI, AI subfields, machine learning), presents selected use cases specific to the financial sector (robotic process automation, chatbots, robo-advisors, fraud detection, money laundering investigations, terrorism financing, credit scoring, etc.), briefly introduces explainable AI (model-agnostic explanation, explanation specific to the learning algorithm), and describes AI security and robustness (data poisoning, adversarial attacks, model stealing, video forgery), as well as opportunities, risks and recommendations.

Unfortunately, the White Paper contains few references to scientific or technical sources and, since it dates from the end of 2018, some websites referenced in its footnotes now appear to be broken. It also does not cover any of the numerous AI-related Resolutions recently adopted by the European Parliament, including those on civil law rules on robotics, industrial policy, ethical aspects, civil liability, intellectual property rights, criminal matters, and the education, culture and audio-visual sector. The European Commission’s Guidelines, its white paper and the Proposal for a Regulation on Artificial Intelligence are likewise absent from the White Paper, although their broad material and territorial scope equally concerns the financial sector: they prohibit certain AI practices, classify some AI systems as high-risk, create an EU database for such systems, and provide for penalties and administrative fines in case of non-compliance.

The same observation applies to the works published by the European Banking Authority (EBA) and by the OECD, including the “Recommendation of the Council on Artificial Intelligence” (adopted on 22 May 2019, updated in 2021), the OECD Report on “Artificial Intelligence in Society” (11 June 2019), the OECD Paper “Scoping the OECD AI principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO)” (November 2019), the OECD Paper on the “State of implementation of the OECD AI Principles: Insights from national AI policies” (June 2021) and, more importantly (since it directly concerns the financial sector), the OECD Report on “Artificial Intelligence, Machine Learning and Big Data in Finance” (2021).

Artificial Intelligence data and liabilities


AI usage grows every year, both in its capabilities and in its application to the financial sector. Its use is often synonymous with increased efficiency (it needs no sleep, rest or holidays, nor does it fall ill) and greater precision and quality (complex calculations are performed in a fraction of a second). In turn, this translates into reduced costs and higher customer satisfaction for the products and services delivered.

However, numerous challenges and risks associated with this growth remain largely unregulated, and legal uncertainties arise daily, deserving consideration by specialised legal practitioners. Some of these issues have been identified and partially addressed in publications by national and international institutions (including the CSSF).

Two categories of issues may be mentioned here by way of example: ‘data management’ and ‘legal responsibility’.

On the first issue, it goes without saying that collecting data is essential for any AI. Yet an AI has not necessarily been ‘trained’ to identify data relevance and to separate accurate from inaccurate data. Furthermore, mass data collection used for profiling and predictive strategies may violate rules on data protection, privacy, confidentiality and cybersecurity. Finally, AI individual decision-making based on wrong or insufficient data may result in bias and discrimination between people.

On the second issue, it suffices to say that institutions rarely discuss it. The European Parliament has, however, explained in its resolutions that the legal responsibility for some acts of AIs may not be traceable back to a specific actor, and it has even called for a specific legal status for robots in the long run, so that the most sophisticated AIs could have the status of electronic persons responsible for any damage they may cause. The CSSF has nevertheless been actively participating in different initiatives related to Artificial Intelligence since the publication of its White Paper in December 2018.

On 24 October 2019, a partnership agreement was signed between the CSSF and the SnT (the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability and Trust) with the goal to “contribute to position Luxembourg as a European centre of excellence and innovation in the field of artificial intelligence applied to state-of-the-art financial data processing techniques”. The partnership addresses the “increased need for security and reliability” and aims at “automating investment-fund documentation processing”.

On 18 December 2020, a member of the CSSF’s “IT supervision” team, who is also a national delegate to the standardization technical subcommittee “ISO/IEC JTC 1/SC 42 Artificial Intelligence” of the International Organization for Standardization (ISO), published an article on the ILNAS website explaining that technical standardization promotes the adoption of artificial intelligence, the protection of consumers and risk control for financial institutions.

Curiously, none of these initiatives is mentioned in the recent Financial Innovation Document, whose content is summarily presented on an updated CSSF webpage.

All CSSF-supervised entities using Artificial Intelligence for robotic process automation, chatbots, robo-advisors, fraud detection, money laundering investigations, terrorism financing or other purposes are advised to ensure that their written AI procedures, agreements and terms of use comply with, and take into account, the various AI legal texts that are continuously being adopted – those published or mentioned by the CSSF as well as those adopted by international and EU institutions.


Our team specialised in Fintech, DLT and Artificial Intelligence is at your disposal should you have any questions or require assistance:

Erwin SOTIRI, Partner

Ruben MENDES, Associate