Artificial Intelligence (AI) continues to be a hot topic, and many businesses are now embracing AI rather than fearing it, for instance by offering employees training and specialised AI tools such as Microsoft Copilot.
If you have used AI yourself, you may already have realised that it can be applied to a huge variety of tasks, including research, calculations, drafting written text such as emails or case studies, and even producing images and videos. Employers are also recognising that AI can save employees significant time and substantially increase productivity.
Different types of AI
If you are considering using AI in a work capacity, you should first consider what your employer allows. Many employers are implementing safe-use AI policies or are blocking certain types of generative AI altogether.
These policies often block 'public' generative AI platforms such as ChatGPT, which anyone can access free of charge without a specific account or licence. Public generative AI platforms use the data provided to them to train their underlying models; essentially, they learn from what you give them in order to produce more human-like responses.
However, even if your employer does not have a strict policy on the use of AI, you should always consider what happens to the data you provide. Some of the main risks of AI are:
- Confidentiality risks: As noted above, public generative AI platforms may use the data you provide for training. If you submit confidential material, such as client names, trade secrets or, in the worst case, information that would be reputationally damaging if released, the platform may learn from it and could present it to other users in response to their prompts.
- Hallucination: This is the generation of false or misleading information presented as fact. Hallucinations can occur even when the AI appears confident and coherent, leading users to trust inaccurate outputs. In some contexts, such as legal or academic work, these errors can have serious consequences, including misinformed decisions, reputational damage, or regulatory breaches. Hallucinations can also be difficult to detect, especially where users are not well versed in the source material.
- IP plagiarism: Because public generative AI platforms learn from the information they are given, data subject to IP protection can be inadvertently reproduced as copyrighted or proprietary content without proper attribution. When prompted, AI may generate text, code, or creative works that closely resemble existing IP, exposing users to potential legal liability. The difficulty of tracing outputs back to their original sources further complicates compliance with copyright law. As AI becomes more integrated into professional workflows, safeguarding against IP infringement is becoming a critical ethical and legal priority.