AI AND COMPANY DIRECTORS 

This post has been contributed by Professor Chris Riley, Module Convenor for Company law.


The recent pace of AI development has been breathtaking.  Companies are often at the forefront of these changes, where AI is already transforming how they operate and how they are managed.  What implications does this revolution have for the role and the legal duties of their directors?  

Initially, there was speculation that AI might take over much of the decision-making currently undertaken by directors.  There were suggestions that whole boards could be replaced by ‘Robodirectors’, producing the equivalent of the driverless car – ‘the self-driving corporation’.  At least for now, such possibilities look remote.  Many of the decisions which directors currently take – both individually, and collectively as a board – involve exercising a large degree of context-specific ‘business judgement’.  Learning to do this may well be one of the most difficult things for AI to replicate.  It seems likely that human directors will remain essential for the foreseeable future, and that they will have to continue taking most of the ‘higher level’ and ‘strategic’ decisions which remain their responsibility.

AI is, instead, being used in areas of corporate decision-making that are more routine and are repeated at large volume – recruitment decisions, employee or customer relationship matters, and so on.  These are typically the sorts of decisions that boards would already have delegated to others – perhaps to individual executive directors or, more often, to lower-level employees.  Does the re-assignment of this already-delegated decision-making, from human employees to machines, matter, so far as directors’ duties are concerned?

[Image: Business meeting with employees and a humanoid robot.]

It may still do so, in a number of ways.  First, replacing human workers with AI may improve the company’s profits, and be good for shareholders, but its impact on other stakeholders, such as employees, may be more mixed.  Whilst it might allow some employees to spend less time dealing with repetitive and tedious tasks, and more time doing creative and fulfilling work, it might also be used to cut jobs, increase surveillance, automate disciplinary and grievance mechanisms, and so on.

All this raises what has always been the most fundamental issue within company law, namely whose interests directors must prioritise.  UK company law answers this, now, through the duty found in section 172 Companies Act 2006.  This, it will be recalled, requires directors to put shareholder interests first, and to consider non-shareholder interests only as a means of better calculating what will be best for shareholders.  Section 172, at least in its current version, requires directors to deploy AI within their companies whenever doing so will, overall, enhance shareholder value, notwithstanding the negative impacts it might have on, say, employment conditions or opportunities.  There have, of course, been longstanding efforts to reform section 172 and make it more ‘stakeholder friendly’, and critics of the current law would argue that the need for reform becomes ever-more compelling in a world of AI.   

The other directors’ duty which is clearly relevant when directors are deciding whether and how to deploy AI is the duty of care, skill and diligence, found in section 174.  Directors need to do ‘due diligence’ to ensure that decisions to deploy AI will indeed be beneficial (to shareholders), rather than ending up harming them.  And once deployed, the directors’ core responsibility, recognised by cases such as Lexi Holdings v Luqman [2009] EWCA Civ 117, to monitor the ongoing management of the company, means directors must ensure there are adequate systems in place for checking and reviewing the performance of the AI they have chosen to deploy.     

These elements within the section 174 duty of care were worked out long before AI became so commonplace.  No doubt courts can try to bend and shape the general obligations in section 174 to fit, to some extent, this new reality.  However, doing so will throw up challenges of its own. Given the innovative nature of AI, and its ‘black box’ quality, it may be unusually difficult for directors to monitor how well AI is performing, or for judges to decide how well directors are doing so.  Added to this, there are concerns about whether directors have the technical skills to address these AI challenges, and whether section 174 is an effective tool in ensuring they do.  

Of course, beyond the hard law of directors’ legal duties, soft law also has a part to play in ensuring that companies are well-governed.  Our module addresses this in Topic 14.  The UK Corporate Governance Code already has much to say about issues like the qualities and attributes that the members of the board should possess, and the use of specialised ‘board sub-committees’ to address key governance challenges.  Small extensions to the Code – such as requiring at least some directors to possess technological expertise, or recommending that companies consider the need for a specialised ‘technology sub-committee’ (to sit alongside the audit, remuneration and nomination board-committees which are now standard) – might be valuable supplements to the obligations implicit in the hard law of section 174.
