Despite the growing ubiquity of AI in the workplace, many organisations do not yet have an AI policy. A recent KPMG/University of Melbourne survey (the survey) of over 48,000 adults from 47 countries found that:
- 58% use AI for their work on a regular basis (83% among students).
- 70% said they use free, publicly available GenAI tools such as ChatGPT (42% use AI tools provided by their employer).
- 41% report that their organisation has no AI policy.
Reasons you might not have an AI policy yet (and why you should!)
1. I’ve been meaning to create a policy, but thinking about AI is overwhelming and I don’t know where to start.
Do you have some ideas about how you would (and would not) want people in your organisation to use AI? Start with that. There can be a lot of value in clear “do and don’t” rules. For instance:
- Don’t share personal data or information that is not in the public domain with publicly available GenAI tools.
- Do assess any response produced by GenAI for potential biases and factually inaccurate information.
- Do critically assess whether GenAI outputs might infringe third-party intellectual property rights, in particular copyright.
You can build on this foundation as needed.
2. It hasn’t got to the top of my priority list yet – what’s the worst that could happen?
The survey highlights a number of risks, for example:
- 48% have uploaded sensitive company information into a publicly available GenAI tool.
This is a problem. Information shared in the chat function of a publicly available GenAI tool may be retained by the provider and used to train future versions of the model. Once that happens, fragments of it could resurface in responses to queries made by users outside your organisation. The more of your company’s confidential information is shared in this way, the greater the chance of an unexpected crisis. In 2023, Samsung employees in Korea pasted proprietary source code into ChatGPT and asked it to spot errors, prompting the company to ban employees from using publicly available GenAI tools for work. Amazon imposed similar restrictions after incidents of its own.
Turning a blind eye to this kind of “shadow AI” use is a bit like ignoring a dripping tap in the house. My water company tells me that a tap dripping 30ml of water a minute would waste 15,768 litres (or 197 bathtubs) of water over the course of a year. If you ignore the tap for too long, there is always a chance a jet of high-pressure water will spurt out sideways and put you on the phone to a plumber.
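(If you want to check the maths: 30ml a minute is 30 × 60 × 24 × 365 = 15,768,000ml a year, or 15,768 litres – and, assuming a standard bathtub holds roughly 80 litres, that comes to about 197 bathtubs.)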
Other risks highlighted by the KPMG survey include:
- 56% have made mistakes in their work due to AI use.
- 66% have relied on AI output without evaluating its accuracy.
- 72% have put less effort into their work due to AI.
Despite growing awareness, some users of GenAI still place too much faith in these tools’ ability to generate accurate responses to their questions. They may also be unaware that the AI’s output could infringe a third party’s intellectual property rights. The risks associated with the thoughtless use of AI are not only (and perhaps not even predominantly) legal. They are also commercial, reputational and – at least according to many of the leaders of the companies behind the leading AI models – existential.
3. Why should I bother with a policy if it is not a legal requirement?
This isn’t one of those situations where ticking a regulatory compliance box is the primary reason why you should have a policy. You should have an AI policy because:
- Your employees are using AI.
- There are good and bad ways to use AI.
- The good ways can do your organisation a lot of good.
- The bad ways can do you a lot of harm (legally, commercially and reputationally).
- Your policy can steer employees towards the good and away from the bad.
Putting an AI policy in place now could also put you ahead of the game on compliance, as AI-related regulatory requirements are very likely to multiply over time. Organisations providing or deploying AI in the EU are already subject to a duty to ensure that their staff have a sufficient level of AI literacy (under Article 4 of the EU AI Act). The process of drafting a policy may be helpful in itself, as it will force you to have conversations about which forms of AI use to encourage and which to restrict. These conversations should be lively, as there is plenty of room for reasonable people to disagree.
Comment
Our market awareness and legal expertise can help you craft an AI policy that addresses the specific opportunities and risks of AI use in your industry, sector and organisation. Please get in touch if you’d like help drafting a policy, or with your wider AI governance strategy.