Navigating the future - AI in Telecoms: friend or foe?

It’s been a busy month for AI, with the Global AI Safety Summit at Bletchley Park followed by a meeting between the Prime Minister, Rishi Sunak, and Elon Musk, who shared their views on AI and its future risks. At the same time, Deborah Dawson attended an event hosted by Cambridge Wireless at the Digital Catapult offices in London, focusing on the Future of Telecoms – the Impact of AI on Security and Privacy. We summarise the themes from the event, then look at some of the current legal risks posed by the development of AI and how these could be addressed.

AI is already integrated into customer-facing applications such as chatbots like ChatGPT, Face ID, e-payments and virtual assistants. These are just some examples of AI that we use daily, and AI is here to stay. As humans we need to work with it to ensure that human intellect is amplified and that we remain in control of its enhancements.

Approaching AI and GenAI: navigating pitfalls

One of the focal points was how to approach AI and Generative AI (GenAI) responsibly. With the power of AI comes the responsibility to avoid pitfalls, especially concerning breaches of privacy and cyber-attacks. As we harness the potential of AI, understanding its ethical boundaries and ensuring data privacy are paramount.

Generative AI and machine learning: unravelling the future

GenAI describes algorithms, such as ChatGPT, that can be used to create new content, including audio, code, images and text. Event participants delved into the intricacies of Generative AI and its interplay with machine learning. Exploring the potential and limitations of these technologies is essential to harnessing their benefits effectively.

Preparing for the quantum era: security in the face of advancing technologies

Quantum computing is still largely in the lab, at a more theoretical stage, although some experts say we are nearing the “quantum gate” and will arrive there faster than initially thought possible. As quantum technology advances, it was suggested that we should be prepared to invest so that quantum capabilities supporting economic prosperity and national security are developed accordingly.

Quantum chat is an emerging technology that combines quantum computing principles with communication systems. Unlike classical chat systems, quantum chat can process and transmit information in fundamentally different ways, promising faster and more secure communication.

One of the key features of quantum chat is Quantum Key Distribution (QKD), a method that ensures secure communication by detecting any eavesdropping attempts. QKD uses quantum properties to exchange encryption keys between users, making it practically impossible for a third party to intercept the key without being detected. Quantum chat holds the potential to revolutionise various fields, including cryptography, finance, and artificial intelligence. Researchers are continuously exploring ways to harness quantum properties for more robust and efficient chat systems, paving the way for a new era of communication technology.
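To make the eavesdropping-detection idea concrete, below is a minimal, purely classical Python sketch of BB84, one well-known QKD protocol. It involves no real quantum hardware, and the function name and parameters are our own illustrative choices rather than anything discussed at the event.

```python
import secrets

def bb84_key_exchange(n_bits=64, eavesdrop=False):
    """Toy, purely classical simulation of the BB84 QKD idea (no real qubits).

    Alice encodes random bits in random bases; Bob measures in random bases.
    Only positions where their bases match are kept as the shared key.
    An eavesdropper must guess each basis; a wrong guess randomises the
    forwarded bit, producing errors that Alice and Bob can detect by
    comparing a sample of their sifted key.
    """
    alice_bits = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases = [secrets.randbelow(2) for _ in range(n_bits)]

    # What actually travels over the channel, possibly disturbed by Eve
    transmitted = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = secrets.randbelow(2)
            # Measuring in the wrong basis randomises the bit Eve passes on
            bit = bit if eve_basis == basis else secrets.randbelow(2)
        transmitted.append((bit, basis))

    # Bob's measurement is reliable only when his basis matches the sending basis
    bob_bits = [
        bit if b_basis == basis else secrets.randbelow(2)
        for (bit, basis), b_basis in zip(transmitted, bob_bases)
    ]

    # Alice and Bob publicly compare bases (never bits) and keep matching positions
    sifted = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in sifted]
    bob_key = [bob_bits[i] for i in sifted]

    # Any disagreement in the sifted key signals noise or an eavesdropper
    errors = sum(a != b for a, b in zip(alice_key, bob_key))
    return alice_key, bob_key, errors

if __name__ == "__main__":
    _, _, errors = bb84_key_exchange(eavesdrop=True)
    print("Disagreements caused by eavesdropping:", errors)
```

Run without the eavesdropper and the two sifted keys agree exactly; switch eavesdropping on and roughly a quarter of the sifted bits disagree, which is precisely the signal QKD relies on.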

The role of encryption: safeguarding the AI ecosystem

In the context of AI, encryption emerged as a critical component in ensuring data security. The speaker analysed the role of encryption in the world of AI, including encryption of sensitive data, ongoing auditing of AI models for vulnerabilities, and real-time monitoring to detect and react to risks. As AI becomes more integrated into our daily lives, encrypting sensitive information becomes paramount.
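By way of illustration only (the event did not go into implementation detail), the short Python sketch below uses the open-source cryptography library to encrypt a record before it enters an AI pipeline; the record, workflow and key handling are hypothetical examples, not a description of any particular system.

```python
from cryptography.fernet import Fernet

# Hypothetical example: protecting a record before it is passed to an AI pipeline.
key = Fernet.generate_key()      # in practice, keep this in a key management service
cipher = Fernet(key)

record = b"customer: Jane Doe, account ending 4321"
token = cipher.encrypt(record)   # ciphertext is safe to store or transmit
assert cipher.decrypt(token) == record  # only holders of the key can recover the data
print(token[:16], b"...")
```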

While the benefits of AI and machine learning are undeniable, the discussions highlighted the importance of education, safety, and risk management. To harness the potential of AI, it is crucial to keep it safe, human-centric, trustworthy, and responsible. This responsibility extends to individuals, organisations, and policymakers alike.

Current AI risks from a legal perspective

From a UK legal standpoint, the development of AI poses three key risks: data protection compliance, intellectual property (IP) infringement and contractual uncertainties.

The oceans of data used in AI often include personal data, which can’t lawfully be used unless data protection laws have been complied with. Perhaps surprisingly, technology suppliers are tending to include AI in their products and services without telling their customers and users, or without telling them enough to comply with the law. As a result, many organisations are falling at the first data protection hurdle. To address this, organisations can use stronger supplier due diligence, tendering procedures and internal governance to integrate data protection compliance into their use of AI.

The oceans of data also include IP, ie material in relation to which there are intellectual property rights. AI doesn’t care where the data comes from, but the intellectual property rights owners do! There are exceptions, but the current generation of AI is generally not good at recording its sources and acknowledging them as part of the output. If the AI doesn’t keep track of where its data came from, it’s likely that the operator of the AI doesn’t know who the rights owners are and doesn’t have their permission. In such cases, the AI is systematically infringing third-party rights, and so, probably, are the organisations and people who use the AI.

Operational risks stem from AI’s inherent unpredictability and lack of memory, necessitating human supervision. From the AI user’s perspective, contractual obligations are a way of ensuring that the provider of the AI (or the supplier that is using AI) puts the necessary human resource in place. From the AI supplier’s perspective, contract provides an opportunity to impose enforceable limits on its liability for things like IP infringement, or unwanted outcomes (some of them unforeseen) resulting from the AI’s operation. These legal concerns underscore the imperative for transparent AI integration and comprehensive contractual safeguards.

In conclusion, the need for a collective approach in navigating the future of AI in telecommunications is clear. As we stand on the precipice of ground-breaking technological advancements, it’s our responsibility to ensure that AI remains a friend, not a foe. By ensuring its safety and adopting rigorous risk management strategies, including legal protections, we can unlock the true potential of AI while safeguarding our privacy, security and ethical integrity.

Written by Deborah Dawson and David Hall

