The ethical risks of using artificial intelligence to generate legal work are, with one exception, too obvious to warrant comment: AI’s hallucinations are notorious, and ChatGPT’s knowledge of the world extends only to January 2022.
Yet the Legal Practitioners Liability Committee, the NSW Bar Association, the journal of the Queensland Law Society and the Victorian Legal Services Board + Commissioner have each published earnest guidance, all of it available on the internet. Respectively:
- ‘Limitations and Risks of Using AI in Legal Practice’
- ‘Issues arising from the use of ChatGPT and other AI Language Models in Legal Practice’
- David Bowles’s excellent ‘Artificial Intelligence: Do you have a usage policy?’, Proctor, April 2023
- ‘Generative AI and Lawyers’, and ‘Tips for Developing Legal Self-Help Tools’.
The exception is the confidentiality risk: some AI platforms use the information disclosed to them to train their models. Some offer opt-outs, of which lawyers should avail themselves.
To disclose client information to an AI platform, whether on an anonymised basis or otherwise, is prima facie impermissible without client consent: barristers’ conduct rule 114 (‘A barrister must not disclose … or use in any way confidential information unless or until … the person has consented’).
Bowles points out that providing unusual proprietary client information to an AI platform risks the re-use of that information by competitors on the same platform, which will have been trained on it, even if the data is uploaded anonymously.
It would be prudent, at the least, to know the usage policy of the platform you are using, and to obtain client consent before uploading such information to an AI platform that trains itself on queries. It is instructive that some, if not all, Victorian government departments are banned from using AI at all.
None of this earnest guidance refers to the ethical implications of the creation of AI, or to the legions of poor (an estimated two million in the Philippines alone) toiling in digital sweatshops across the global south, whose livelihoods depend on performing, for less than US$1.50 per hour on US-owned micro-tasking platforms, the repetitive tasks used in developing AI models. Think of the tasks the whole internet-using world is sometimes compelled briefly to perform in establishing that it is ‘not a robot’ in CAPTCHA puzzles.
Time reported the trauma experienced by such ‘taskers’ in Kenya, obliged to review swathes of hate speech and other vile, toxic text from the sewers of the internet, with a view to helping ChatGPT identify and suppress such material in formulating its responses.