A creative response to Generative AI faking your voice

With Generative AI being used to imitate celebrities and creators, a question arises: is your likeness a form of intellectual property (IP)? Can you trademark your face or copyright your voice?

These questions are on the bleeding edge of IP law and could take years to resolve. But I think there may be a simpler way to legally protect personal appearance.

On my reading of technology-neutral data protection law, generating likenesses of people without their permission could be a privacy breach.

Let’s start with the generally accepted definition of personal data as any data that may reasonably be related to an identified or identifiable natural person. Personal data (sometimes called personal information) is treated in much the same way by the California Privacy Rights Act (CPRA), Europe’s General Data Protection Regulation (GDPR), Australia’s Privacy Act, and the new draft American Privacy Rights Act (APRA).

These regulatory approaches to privacy place limits on how personal data is collected, used and disclosed. If personal data is collected without a good reason, or in excess of what’s reasonable for the purpose, or without the knowledge of the individual concerned, then privacy law may be breached.

What’s more, technology neutrality in privacy law means it does not matter how personal data comes to be held in a storage system; if it’s there, it may be deemed to have been collected.

Collection may be done directly and overtly via forms, questionnaires and measurements, or indirectly and subtly by way of acquisitions, analytics and algorithms.

To help stakeholders deal with the rise of analytics and Big Data, the Australian privacy regulator developed the Guide to Data Analytics and the Australian Privacy Principles, which explains that:

“The concept of ‘collects’ applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when information is created with reference to, or generated from, other information” (underline added).

That guidance should apply to Deep Fakes, for what are digital images and voices if not data?

Digital recordings are sequences of ‘ones and zeros’ representing optical or acoustic samples that can be converted back to analog form to be viewed or heard by people. If those sounds and images are identifiable as a natural person (that is, if the output looks like or sounds like someone in particular), then logically that data is personal data about that person.
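To make that less abstract, here is a minimal sketch in Python. It is purely my own illustration, not anything drawn from a regulator or a court: it writes half a second of a 440 Hz test tone to a WAV file, showing that a ‘recording’ is nothing but a sequence of numbers which any player turns back into sound. The file name and the tone are placeholders.

```python
# A minimal sketch (illustration only): a digital "recording" is just a list of
# numbers, written to disk as ones and zeros, which any player converts back to sound.
import math
import struct
import wave

SAMPLE_RATE = 8000   # samples per second
DURATION = 0.5       # seconds of audio
FREQ = 440.0         # a 440 Hz test tone stands in for a voice here

# 1. The acoustic samples: each one is simply an integer.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

# 2. Store those integers as ones and zeros in a WAV file.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<" + "h" * len(samples), *samples))

# 3. Any audio player will convert the stored numbers back to analog sound.
#    If what comes out is identifiable as a particular person, the argument
#    here is that the stored numbers are personal data about that person.
```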

[Aside: If it seems like a stretch to label digitally sampled light and sound as personal data, then consider digital text. That too is merely ‘ones and zeros’, in this case representing coded characters, which can be converted by a display device or printer into human-readable form. If those characters form words and sentences that relate to an identifiable individual, then the ones and zeros from which they are derived are clearly treated by privacy law as personal data.]
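For completeness, the same demonstration for text, again purely illustrative and using a made-up name:

```python
# The same point for text (illustration only; "Alice Example" is a made-up name):
# the stored form is just bytes, yet decoded it plainly relates to an individual.
raw = "Alice Example lives at 1 Sample Street.".encode("utf-8")

print(raw.hex())            # the 'ones and zeros' as stored, shown in hexadecimal
print(raw.decode("utf-8"))  # the same data rendered human-readable
```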


And it ought not to matter under technology-neutral privacy law whether an identifiable image or sound was recorded from real life or synthesised by software: the law would apply in both cases.

The same interpretation would seem to be available under any similar technology-neutral data protection regime.

That is, if a Generative AI model makes a likeness of a real-life individual, Alice, then we can say the model has collected [by creation] personal information about Alice, and the operation of the model could be subject to privacy law.

I am not a lawyer, but this seems to me easy enough to test in a ‘digital line-up’. If a face or voice is presented to a sample of people, and an agreed percentage of them say the face or voice reminds them of Alice, then that would be evidence that personal data of Alice has been collected.
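To sketch how such a line-up might be scored, here is a toy example in Python; the responses, the panel size and the 50 per cent threshold are all assumptions invented for illustration, not drawn from any law or case.

```python
# A hypothetical 'digital line-up': show a generated face or voice to a panel and
# count how many say it reminds them of Alice. The names, the panel of ten and the
# 50% threshold are all invented for this sketch.
responses = [
    "Alice", "Alice", "Bob", "Alice", "don't know",
    "Alice", "Carol", "Alice", "Alice", "Alice",
]

THRESHOLD = 0.5  # the 'agreed percentage' would have to be settled in advance

share = responses.count("Alice") / len(responses)
print(f"{share:.0%} of the panel identified Alice")

if share >= THRESHOLD:
    print("Evidence that the generated likeness is personal data about Alice")
else:
    print("Identification not made out on this panel")
```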

Moreover, if it were found that the model had actually been prompted to mimic someone, then the case would be pretty strong, shall we say, on its face.