The UK Intellectual Property Office report “Artificial Intelligence, A worldwide overview of AI patents and patenting by the UK AI sector” shows exponential growth, with around 28,000 patent applications first published in 2017. As with other revolutions, AI will present – and is already presenting – considerable challenges. Not least the “mutant” algorithm used to calculate exam grades for pupils at UK schools.
The much-maligned “algorithm” has visibly mutated in the face of political pressure. Even so, it is still blamed for “its” unfair behaviour by students, parents and politicians alike. That episode could have a lot to answer for: firstly, by introducing the idea of an “algorithm” to the widest public audience in a distinctly negative way; and secondly, by providing a target – the “algorithm” – that politicians and the public alike can blame for politicians’ and administrators’ decisions. Echoes of the riots of the Industrial Revolution, of GMOs and trampled crops, or of protests against stem cell research? Is there a danger that publicly blaming algorithms in such a sensitive area as education could create (or reinforce) a groundswell of opinion against using machines, machine processes or other AI (all of which use algorithms) to make or influence decisions or actions – from medical diagnosis to the offering of tailored ads – all innovations which are transforming modern life? And could that present another challenge to the adoption of innovation through AI?
Key issues in relation to the examination assessment algorithm were fairness and bias. Those issues in the use of AI are already well recognised. The Information Commissioner’s Office recently issued Guidance on AI and Data Protection, summarised in our note here, and guidance on Explaining decisions made with AI, noted here. Those documents give examples of how AI may introduce bias, and set out the requirements for explaining decisions. One example of potential bias is the use of recruitment algorithms that rank candidates based on characteristics typical of white, male, middle-class data, or on data which under-represents women. A tempting AI application is to rank potential recruits on the basis of their likely acceptability to the client – but this is very likely to reinforce any bias the client already had. One well-publicised early example of a “biased” machine was a hand drier that failed to respond to darker skin tones.
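The recruitment example can be made concrete. Below is a minimal, purely illustrative sketch – the candidate attributes, the scoring rule and the data are all invented – of how ranking candidates by similarity to past successful hires simply reproduces whatever skew the historical data contains:

```python
# Hypothetical historical hiring data, skewed toward one group.
historical_hires = [
    {"gender": "male", "school": "private"},
    {"gender": "male", "school": "private"},
    {"gender": "male", "school": "state"},
    {"gender": "female", "school": "private"},
]

def similarity_score(candidate, past_hires):
    """Score a candidate by how often their attributes match past hires."""
    return sum(
        sum(candidate[k] == hire[k] for k in candidate)
        for hire in past_hires
    )

candidates = [
    {"gender": "male", "school": "private"},
    {"gender": "female", "school": "state"},
]

ranked = sorted(
    candidates,
    key=lambda c: similarity_score(c, historical_hires),
    reverse=True,
)
# The male/private candidate outranks the female/state candidate purely
# because women and state-school pupils are under-represented in the
# historical data - no explicit rule about gender or school is needed.
```

The bias here is not programmed in anywhere; it emerges entirely from the composition of the training data, which is exactly the risk the ICO guidance highlights.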
But the ICO guidance still belies the difficulty there can be in implementing safeguards and effective risk assessment within AI to ensure that it is data protection compliant. The examination assessment algorithm was not really even AI: it was machine processing using a statistical algorithm. It was, of course, an innovation. No-one had previously faced providing a robust grading assessment for all candidates for school exams in England when they had taken no exams and had missed six months of face-to-face teaching. Even so, the end result caused chaos, including perceived bias against classes of pupils. We will have to wait for further analysis of why the problems arose. Overcoming these challenges – such as demonstrating the fairness of a decision-making process, or deleting personal data and its influence from AI applications – may itself need innovative solutions.
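Although the actual moderation model was far more elaborate, the basic mechanism widely reported – fitting this year’s teacher-ranked pupils to a school’s historical grade distribution – can be sketched in a few lines. Everything below (the names, grades and moderation rule) is invented for illustration; it is not the real algorithm:

```python
# Simplified, hypothetical sketch of distribution-based moderation:
# pupils keep their teacher-assessed rank order, but grades are
# reassigned from the school's historical grade distribution.

def moderate(ranked_pupils, historical_grades):
    """Map pupils (best first) onto last year's grades (best first)."""
    assert len(ranked_pupils) == len(historical_grades)
    return dict(zip(ranked_pupils, sorted(historical_grades, reverse=True)))

# Teachers rank this year's cohort, best first:
ranking = ["Asha", "Ben", "Cara", "Dev"]
# The school's grades last year (numeric scale, higher is better):
history = [4, 3, 3, 2]

grades = moderate(ranking, history)
# However strong Asha's teacher assessment, she cannot score above the
# best grade her school achieved last year - one source of the perceived
# unfairness toward able pupils at historically weaker schools.
```

The sketch shows why the statistical approach, however defensible in aggregate, could look systematically unfair at the level of an individual pupil.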
There is another challenge arising from the examination assessment algorithm for innovators in AI. An algorithm is just a process with steps, typically in a mathematical or computing context. It has no morals – and even if “morals” are programmed in, they can only reflect the programming or data that drives them. Nevertheless it is a convenient scapegoat for politicians. Politicians repeatedly tell us that their response to COVID is “guided by the science”. The science can provide the results of research; it can learn how the virus behaves, and how the human body responds to the virus; it can report how the public have behaved in similar situations. This creates an impression that science can provide definite and accurate answers, even if those answers appear to change week by week. For school pupils, the science can analyse student performance; it may even be able to provide probable outcomes, if the data it uses is sufficiently robust. But what policy makers fail to communicate is that the science of statistics, probability and modelling, together with the inherent uncertainty of novel research and investigation, does not provide definitive answers. The science has to be evaluated, and often involves externally determined matters of judgment. It is the policy makers who must make the decisions, whether that is apparent at the time or embedded in the AI.

We already see similar problems in AI areas such as autonomous vehicles. Policy makers (and the public) would like a way of attributing blame and responsibility to an autonomous vehicle itself, so that they can justify a basis for allocating compensation after an accident. But the responsibility, even for a perfect autonomous vehicle, will lie with the person(s) who chose how the vehicle should behave in a critical situation. These issues are already significant obstacles for public-facing AI.
Blaming AI – or a mutant algorithm – for decisions which are, or should be, made by policy makers makes no sense. Although it may be politically convenient, it is likely to undermine public confidence in this valuable technology. There are many other challenges in relation to the development and use of AI. Policy makers across the world, including in the UK, are investing considerable effort to address a significant number of tough issues. Examples are the use of personal data, and the legal responsibility for the actions of an autonomous agent, mentioned above. Another example – crucial to investment in innovation – is how AI innovations can be protected. Key issues, identified and being discussed internationally, include:
- are innovations in a number of areas of AI – especially where they make or support commercial or business decisions – patentable?
- where an AI develops its own internal rules to analyse a problem and provide an “inventive” solution, it may be difficult to provide a sufficient description of how the invention is implemented to satisfy patenting requirements
- people question whether an AI-developed solution is even an invention
- so far, intellectual property authorities, such as the US Patent and Trademark Office, require an invention to have a (human) inventor
- and even if a patent has been secured, how difficult will it be to determine whether the invention has actually been used?
- if an AI innovation is not patented, do the requirements to explain decision-making undermine any secrecy about the algorithms used?
- and the most powerful protection for many AI innovations will often be the collection of data used to train an AI engine. Aside from the privacy issues already identified, what about concerns over monopolistic and unfair control of that data, as well as issues around ownership?
In this context, making a scapegoat of AI not only makes no sense, it is counterproductive. It risks doing real damage to the innovative opportunities AI presents, or driving them to more receptive jurisdictions, as well as undermining the very investment policy makers are making elsewhere to meet the challenges of AI.
Our content explained
Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.