AI in the Supervisory Board – Opportunities, Duties and Risks from a Corporate Law Perspective

The use of Artificial Intelligence (AI) is becoming increasingly important in companies. In the realm of corporate governance, particularly at the supervisory board level, new possibilities are emerging: AI can support decision-making processes, identify risks, or monitor ESG criteria. At the same time, the use of such technologies raises a multitude of corporate law issues, especially regarding liability, documentation obligations, and due diligence requirements.

Contact

Richard Hoffmann
Partner, Lawyer in Heidelberg, Ladenburg
Tel.: +49 6203 95561 2600

Opportunities: efficiency and informational advantage

AI can help the supervisory board fulfill its monitoring duties more efficiently. For example, AI can quickly analyze and thoroughly evaluate large volumes of data such as financial figures, market analyses, or risk reports. In large corporations with subsidiaries abroad, such as in China, AI can also assist in identifying important developments on the ground at an early stage and incorporating them into risk assessments.

Duties: no free pass for technology

Despite all technological capabilities, legal responsibility remains with humans. According to § 111 of the German Stock Corporation Act (AktG), the supervisory board is responsible for monitoring the management, including the technologies employed.

Likewise, the management board is obligated to carefully manage and document the use of AI. This follows from § 93 AktG. The board must ensure that AI systems are transparent, legally compliant, and controllable. Incomplete documentation of AI use can make it more difficult to defend against liability claims in the event of erroneous decisions.

For both governing bodies, due diligence includes the selection, implementation, and ongoing review of AI systems. This especially involves evaluating whether the systems operate reliably, without discrimination, and in compliance with data protection laws. The traceability of AI-supported outcomes is a key aspect in this context.

Risks: liability and transparency

If AI-supported decisions lead to damage, the question of liability arises. The Business Judgement Rule (§ 93 (1) Sentence 2 AktG) protects board members only if decisions are made on an adequate informational basis. Blindly relying on AI or failing to critically assess its results can result in personal liability, including for supervisory board members.

Expectations for transparency are also increasing. Stakeholders and investors are increasingly demanding openness about a company’s use of AI. Missing or flawed communication can not only undermine trust but also give rise to liability risks, particularly for publicly listed companies.

Conclusion

The use of AI in the supervisory board undoubtedly offers opportunities, especially for data-driven, timely, and objective oversight of corporate management. At the same time, new legal requirements arise: due diligence, documentation, transparency, and control are indispensable. Only board members who understand the technology and recognize its limitations can act with legal certainty and use its advantages responsibly.

Contact us!

We support you in international law – providing clear structures, secure decisions, and sustainable success worldwide!
