Tim Friedlander and Carin Gilfry, voice actors and co-founders of the National Association of Voice Actors (NAVA), recently spoke with Casting Networks about their work with lawmakers to protect actors from predatory AI technology and contracts. The two meet with elected officials regularly and have played a role in current AI bills moving through Washington.
In the first installment of our two-part feature, Friedlander and Gilfry discussed working with policymakers, the current state of AI law and how actors can protect themselves by adding AI riders to their contracts.
Today, we’ll go a step further as they discuss what agents and unions can do to shield actors from AI, the latest AI trends to look out for, and advice for new and aspiring actors joining the entertainment industry in an AI world.
Author’s note: To support the voiceover community in Los Angeles, NAVA has set up a California Wildfire Relief Fund where those affected can apply for relief. To donate or request relief, visit NAVA’s Cal-Fire Fund page.
Insights: What You Need to Know About AI in Voiceover
- Educate yourself on AI in voice work and use NAVA’s checklist to ask the right questions before accepting jobs.
- Include AI riders in contracts to maintain control over the use of your voice and ensure fair compensation.
- Work with ethical AI companies and under union contracts when available for greater protection and support.
What should agents and unions be doing to protect actors from AI?
Tim Friedlander: We have met with and tried to educate agents, casting directors and producers throughout the industry, and the first step is [determining] what questions to even ask.
A lot of times, people don’t even know what they should be asking. They’ll see TTS (text-to-speech), IVR (interactive voice response) or machine learning mentioned and not know that they should be asking questions about what all that entails.
One of the other things that we have on our website is our 13 questions to ask on our AI page, which is kind of a checklist of things to ask if you’re working with any company that has an AI component. Asking the companies these questions will give you some information on whether this job is safe to do or not.
One of the things that we’re starting to see is language that says, “Hey, don’t worry. This is a safe AI job because it’s only going to be used for training,” or “It’s only going to be internal,” or it’s only going to be used for maybe this very specific thing.
One of the dangers right now is that, without the ability to trace, track or monitor what happens to our audio files, that company could eventually be sold, the data could be sold off to somebody or the data could be stolen in a breach somewhere [and then be used to train AI].
It’s really [about] educating yourself to know what questions to ask. From there, you can then decide if you want to do the job, and if you feel safe. If you do feel safe doing the job, include the AI rider in that process.
I’m using the term loosely, but are there any “ethical” AI companies that actors should know about?
TF: There are none currently on the market, but there is one that’s coming out shortly called Ethovox, which is doing something completely different than all of the other voice companies. They’re starting from the very, very beginning and building their foundational model from the ground up with informed consent of the voice actors who are involved in that.
A lot of these other companies are using foundational models that are built on public domain or fair use arguments, or scrape data from the internet. The Ethovox website is now available, so you can see and learn about them. They’ll be creating a system that is ethical from the ground up, not just from the usage standpoint.
Carin Gilfry: I want to say, too, that there are a few AI companies, and there are more coming, who have signed contracts with SAG-AFTRA.
TF: Currently it’s just Narrativ and Replica Studios. (After we spoke, Ethovox did sign a SAG-AFTRA agreement in line with the other two companies.)
CG: As we said before, when you’re working under a union contract, you have the protection of SAG-AFTRA built into that contract, so that’s a good thing.
I don’t personally know how those companies’ foundational models are built. It could be that they used data from the public domain or scraped internet sources, or something like that, but the fact that they have made a deal with SAG-AFTRA is a very good step in the right direction.
And if you’re Fi-Core, is there anything different that might need to be done? Can you work under the SAG agreement, or do you have to send your own AI rider out as well?
CG: For any actor—union, non-union, Fi-Core, anyone—if you’re working under a union contract, you have the protections of SAG-AFTRA. If you are not working under a union contract, you don’t have the protection of SAG-AFTRA.
It doesn’t actually matter what your union status is. You could be totally non-union, working under a union contract without being a member of SAG-AFTRA, but if that union contract is breached, the union would come in and go to bat for you.
While AI is considered a threat, there’s also a cooling and acceptance that we’re seeing, and there’s even some eye-rolling now toward AI in general. Have you noticed this trend with voiceover? For example, clients that went all in on AI last year and are now reverting to human VO.
TF: We are seeing ancillary reports of people saying, “This company came to me to replace the AI voice that they had,” or, “I went in and they had an AI scratch track, but they brought me in as the human to do this.”
One of the things we talk about a lot is that it’s hard to tell if there is job loss, because we just don’t see the auditions anymore. A company is not going to put out an audition looking for either an AI voice or a human voice and then decide to go, “We decided to cast an AI instead.” They’re just not going to put the audition out in the first place.
It’s hard to tell if we’re seeing a trend in that direction or not, but we are hearing people saying, “I came in and replaced an AI voice on this.” Or, “I had a client that came back. They tried to use an AI voice and they came to me because they couldn’t get what they wanted,” or that it took longer for them to get the AI voice where they wanted it than it would have taken just to hire the human in the first place. We’re seeing that, and I think also we’re seeing a little bit of pushback from consumers.
We’re seeing that people who play games don’t want to interact with AI-generated voices or AI-generated content. I think we’re seeing a little bit of pushback in that area as well.
CG: I think consumers almost universally don’t like AI when they can tell it’s AI, but the problem is the technology keeps getting better and better.
Consumers can tell they don’t like it, but we’re getting into a situation where the technology is getting so good that [they] don’t notice it anymore. And that’s going to be dangerous for all of us down the line unless we as actors can use this technology to augment our human abilities instead of replacing us completely.
I think our main goal as NAVA, as actors and as people in general is that AI should help augment our abilities and not replace us.
What advice do you have for actors and aspiring voice actors who are coming into an AI world as the technology continues to improve?
CG: I would say learn as much as you can, educate yourself as much as possible, and then do everything in your power to protect yourself.
If you’re going to work with this technology, do it in a way that gives you consent, control and compensation over the synthetic version of you that’s being created. But even in your regular work, it is beyond time to talk about AI on every job and make sure that you’re only doing that job, and that they’re not taking a version of you and propagating it and using it forever and ever.
TF: To add to that, I would say that there’s never been a better time to turn down work if you don’t feel comfortable working on that job. We all know there are times, especially if you’re having a slow period where you’re not booking, when you get to a point of desperation, even if it’s just minor desperation, where you’ve just got to book something.
A few years ago, [that job] may not have been as dangerous as it is now, when a new talent can go into a predatory contract and take a job just for the quick paycheck. That job could now have long-lasting ramifications and damage your career in the long term if there aren’t protections in place.
To add to that, hopefully, when the NO FAKES Act goes through, one of the things it’s going to do is prohibit in-perpetuity contracts for AI, AI voices and AI projects. Before that goes into place, put some kind of limit on the usage of your voice, say five years or 10 years, and try to get rid of in-perpetuity terms whenever possible, [especially] when they would be damaging to your contract.
Disclaimer: This article is for informational purposes only and should not be considered legal advice. For professional legal assistance, please consult a licensed attorney.
Ready to find your next role with Casting Networks? Sign up for a free trial today!
You may also like: