The SAG deal sends a clear message about AI and workers

On Monday, the leadership of the Screen Actors Guild-American Federation of Television and Radio Artists held a members-only webinar to discuss the contract the guild tentatively agreed to last week with the Alliance of Motion Picture and Television Producers. If the contract is ratified, it will officially end the longest labor strike in the union’s history.

For many in the industry, AI has been one of the most controversial and feared issues of the strike. Over the weekend, SAG released details of its AI consent terms, an expanded set of protections that require consent and compensation for all actors, regardless of their status. With this agreement, SAG went far beyond the Directors Guild of America and the Writers Guild of America, both of which reached agreements with the AMPTP before it did. That is not because SAG succeeded where the other unions failed, but because actors face a more direct existential threat from the advancement of machine learning and other computer-generated technologies.

The SAG deal is similar to the DGA and WGA deals in that it requires protections wherever machine-learning tools are used to manipulate or exploit members' work. All three unions have described their AI agreements as "historic" and "protective," and whether or not one agrees, these deals serve as important guideposts. Artificial intelligence is not only a threat to writers and actors; it has ramifications for workers in all fields, creative or otherwise.

For those who look to Hollywood's labor struggles as a blueprint for handling AI in their own disputes, it matters that these deals contain proper protections, so I understand those who have questioned them or pushed for them to be more stringent. I am among them. But there comes a point at which we are pushing for things that cannot be achieved in this round of negotiations, and that perhaps do not need to be pushed for at all.

To better understand what the public at large calls AI, and the threat it is perceived to pose, I spent months during the strike meeting with senior engineers and machine-learning experts at major technology companies, as well as legal scholars specializing in copyright law.

The gist of what I learned comes down to three key points. The first is that the most dangerous threats are not the ones we often hear about in the news: most of the people machine-learning tools will negatively impact are not the wealthy but low-income and working-class workers, marginalized groups, and minorities, because of the biases inherent in the technology. The second is that studios are threatened by the rise of Big Tech and its unregulated power just as much as the creative workforce is, something I wrote about in detail earlier in the strike here and which WIRED's Angela Watercutter cleverly expands on here.

