Not that I’m a lawyer! But I’m giving a short speech on AI regulation at the 2025 Asian Law Schools Association (ALSA) Law and Technology Conference in Sydney, on July 11.
Abstract
The phenomenon of deep fakes, where Generative AI creates realistic still and moving images mimicking real individuals, is deeply troubling for actors, authors and public figures. Some people have tried to assert intellectual property rights over their likenesses, but these approaches have proved problematic, and legal reform in this area would be complicated and time-consuming.
Here I propose a simpler way to legally protect appearances, by applying established technology-neutral data privacy law to facial images and voice recordings.
Note carefully that this is not to say that faces and voices are necessarily “private”; instead, the point is to appeal to data protection principles which simply operate to restrain the flow of certain types of information, namely personal information (PI).
My argument in brief goes as follows:
- Facial images and voice recordings constitute personal information under the Australian legal definition, namely any “information … about an identified individual, or an individual who is reasonably identifiable”. Indeed, the Office of the Australian Information Commissioner (OAIC) has advised that photos and videos are treated as personal information if the identity of individuals “is clear or could reasonably be worked out”.
- Under technology-neutral privacy law, privacy principles apply to personal information whether it is collected directly or indirectly. The OAIC has developed specific guidelines for “collection by creation”, interpreting collection broadly to cover “gathering, acquiring or obtaining personal information from any source and by any means”, including “when information … generated from other information”.
- So, if a Generative AI model creates a visual and/or acoustic likeness of a real-life individual, Alice, then we can regard the model as having collected personal information about Alice. The use and disclosure of the generated likeness would then be subject to legislated privacy principles. In particular, consideration would usually have to be given to Alice’s consent for likenesses of her to be produced and disseminated.
I conclude that technology-neutral data privacy laws — such as Australia’s Privacy Act (1988), the European Union General Data Protection Regulation (2016) and the American Privacy Rights Act (Updated House Draft, 23rd May 2024) — contain powerful and proven legal mechanisms that could help limit certain adverse effects of generative AI that are otherwise proving difficult to contain.