Film festival directors Peter Isaac Alexander and Marisa Cohen said that when they first saw the film, they thought it was interesting, creative and unlike anything their review committee had seen before. The flick, a historical movie about an artist by an Italian director, met all the necessary criteria to be screened in front of a live audience, and so it was, during last year's Portland Festival of Cinema, Animation and Technology. But after the film was over, some members of the audience started loudly booing. The reason? A disclosure in the credits saying the film was "a blend of artificial intelligence and human creativity."
Out of the 180 or so films screened at last year's festival, only a few had generative AI elements — many submissions didn't make the cut because it was clear that AI had been used to create the whole movie, which the festival doesn't allow. Despite Alexander and Cohen's personal reservations and serious concerns around generative AI, they know AI has become a popular tool for moviemakers.
"It's hard to know what to do as a film festival director, because we want to be fair. We want to show interesting art. We want people to see what tools are available that they could use," Cohen said in an interview with CNET. "Some filmmakers don't have enough money to buy fancy software [or] have a team of animators, and if they want to tell their story, should they use AI?"
This incident highlights how common generative AI is becoming in the creation of movies, despite AI provoking widespread fears and frustration about job security, potential theft and the diminishment of human creativity and its intrinsic value.
It has been two and a half years since ChatGPT exploded in popularity and set off a new race among tech companies to develop the most advanced generative AI. Like nearly every online service, creative software programs got major AI makeovers, including everything from Photoshop to video editors. AI image generators took off, needing only a simple text description called a prompt to create artistic visions ranging from worthy efforts to unmitigated slop.
Despite the near ubiquity of AI in artistic computer programs, there is an intense power struggle raging behind the scenes. While some people brag about AI optimizing creation, others decry the tech as the end of human creativity. Nowhere is this struggle more evident than in the entertainment industry.
The story of AI in Hollywood is less of the traditional "good versus evil" comic book story and more of a complicated, truly tangled mess. Some studios and networks are all-in on AI. Others have serious legal concerns. Unions — which protect hundreds of thousands of entertainment workers — have tried to guide the implementation of AI on sets, with tales of success varying depending on who you ask. Creators of all kinds, from writers to actors to visual effects artists, have been ringing alarm bells over the development and deployment of AI since the tech started rapidly expanding a few years ago.
The entertainment business has always been an ultracompetitive industry. But the industry in 2025 is a different beast, thanks to rising costs that are sending productions overseas and creating a job market that's "in crisis." AI is touted both as the solution to these woes and the very thing that threatens to make these problems permanent.
Every decision that entertainment leaders make today sets the foundation for how AI will affect the next generation of films and the people behind them. Studios, streamers and organizations like the Motion Picture Academy, Television Academy and labor unions are all exploring their options. For the rest of us, the power, money and influence of Hollywood mean that those decisions about AI will undoubtedly have seismic consequences for every creative industry and creator going forward. They will also set a standard for what's a normal and acceptable amount of gen AI in movies and TV shows, which affects all of us as viewers.
This is what you need to know to untangle the web of the biggest factors influencing Hollywood's experience and attitudes toward AI.
Lights, camera, AI: How AI fits into filmmaking
Computer-generated imagery isn't new. What makes generative AI different is that anyone can use it to make a lot of content very quickly. Old barriers, whether it be money, education or practical skill, are eroding as AI makes it easier and cheaper than ever to create digital content. The latest wave in this evolution is AI video generators, which create video clips using text-to-video and image-to-video technology.
Most major tech companies and a number of AI startups have announced or released some version of an AI video model. OpenAI, the company behind ChatGPT, released Sora at the end of 2024, followed by Adobe's Firefly and Google's Veo models. Each model has its own quirks, but in general, they all produce AI video clips between 5 and 10 seconds long. The next step for these companies will be focusing on creating longer and higher-resolution videos. Both of those upgrades will prove critical in determining whether AI video generators can be useful enough for professionals.
Even pushing AI videos up to 30 seconds long, Alexander told me, would help "cover pretty much most of what you see in modern filmmaking," in terms of scene length. Only one video generator, Google's Veo 3, is able to produce audio, and even that addition is new and often clunky. None of the others can create audio natively in these clips, which is another thing making AI video models less useful for professionals.
Not all generative AI tools are for wholesale creation. AI has also accelerated the evolution of video editing software. Adobe's Premiere Pro, considered one of the main professional video editing programs, got its first generative AI tool, called Generative Extend, in April. Traditional editing software that can remove objects and de-age actors can also now incorporate some level of generative AI. This generative editing further blurs the line between what content is human-generated, traditionally retouched and AI-generated.
As AI development races along, the tools get better — fewer incidents of 12-fingered people or weird hallucinations. Today's limitations could be removed in the near future, making it more likely for AI to infiltrate editing and post-production processes.
AI has long been a sci-fi villain. Now it's on the cast list
Despite technical limitations, many entertainment leaders are investigating how they can take advantage of the new AI tech. There are multiple motivations behind the entertainment industry's interest in AI. The most obvious is that studios and networks are hoping it will save them money.
Renowned director James Cameron (of Titanic and Avatar fame) said on Meta CTO Andrew Bosworth's podcast in April that to continue producing VFX-heavy films, "We got to figure out how to cut the cost of that in half." He quickly added that he's not talking about laying off half the people who work on those projects, but instead using generative AI to speed up the process for those workers. An expert in creating CGI and VFX-heavy movies, Cameron joined the board of directors at Stability AI, an AI creative software company, in September 2024.
Speeding along production is surely a concern on the big-budget projects like those Cameron leads, both for the crews working on them and for the viewers who are too used to waiting years for the next season of Stranger Things or Bridgerton. But for smaller productions — especially for amateurs — AI is already being used for efficiency and cost savings.
Netflix's co-CEO Ted Sarandos said on an earnings call after Cameron's podcast appearance that he hopes AI can "make movies 10% better," not just cheaper. And that's certainly what some pro-AI celebrities are hoping for. Natasha Lyonne just announced that her sci-fi directorial debut will be made in partnership with Asteria, an AI production studio she co-founded that uses so-called "clean" AI models. Horror studio Blumhouse participated in a pilot program for Meta's AI video project Movie Gen. Ben Affleck has been vocal in the past about embracing AI in future movie-making to reduce the "more laborious, less creative and more costly aspects of filmmaking."
One of the most notable recent cases of AI being used in moviemaking came up this past awards season. Adrien Brody won an Oscar for his work in The Brutalist, but the film came under fire when the movie's editor, Dávid Jancsó, revealed that gen AI voice tech was used to improve Brody's and his co-star Felicity Jones's Hungarian dialogue. Brody isn't a native Hungarian speaker, so an AI program called Respeecher was used to refine specific pronunciations. But it was also about saving time and money, according to Jancsó. The backlash was instant and intense.
Adrien Brody accepting the Oscar for Best Actor in a Leading Role for The Brutalist. (Patrick T. Fallon/AFP via Getty Images)
The Academy of Motion Picture Arts and Sciences, the organization behind the storied award show, later clarified that AI usage would "neither help nor harm" a movie's chances of winning. The organizations behind the Emmys, the TV-focused award show, said AI-edited submissions will be judged on a case-by-case basis.
And we'll certainly see more AI usage in at least a few future blockbusters, thanks to the biggest current collaboration between AI companies and studios. An AI video company called Runway and Lionsgate, the studio behind blockbuster films like the John Wick series and TV shows like Mad Men, have teamed up. The deal gives Runway access to Lionsgate's catalog — all its movies and TV shows — to create custom, proprietary AI models that can be used however the studio sees fit. Lionsgate filmmakers are reportedly already using the new AI, according to the company's motion picture chair, Adam Fogelson, in a 2024 earnings call.
It's a one-of-a-kind deal, Robert Rosenberg, former general counsel at Showtime Networks and an IP lawyer, said in an interview with CNET.
"I guarantee you, everybody's kicking the tires [on AI]. Everybody is trying to understand it and figure out, are there benefits to this, in addition to the potential harms," said Rosenberg. "But I do find it very telling that you haven't seen a lot of stories about other studios climbing aboard the way that Lionsgate has."
While AI enthusiasts and AI-curious folks are dipping into AI — or diving in, in Lionsgate's case — a number of big players are still hanging back. OpenAI has had a hard time shopping Sora around, and its chief operating officer, Brad Lightcap, recently said the company needs to build "a level of trust" with studios. Studios are wary for good reason: there are a number of serious concerns that come with generative AI use in entertainment.
Pulling back the curtain on generative AI
While some leaders may be hoping to incorporate AI and cut costs, there is a lot of anxiety and apprehension around the actual implementation of AI, specifically the legal and ethical consequences. One of the biggest concerns is around copyright — specifically, whether AI companies are using copyrighted materials to train their models without the creators' permission.
There are over 30 ongoing copyright-specific lawsuits between AI companies and content creators. You've probably heard of the most notable, including The New York Times v. OpenAI and, on the image generator side, a class action lawsuit brought by artists against Stability AI. These cases allege that AI companies used creator content illegally in the development of their models, and that AI outputs are substantially similar to, and infringe on, protected intellectual property.
Christian Mammen, an intellectual property lawyer and San Francisco office managing partner at Womble Bond Dickinson, said in an interview with CNET, "The plaintiffs in all of those cases are concerned that having all of their work used as training data is indeed eroding not only their ability to earn a livelihood, but also the importance and value of their copyrights and other IP rights."
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
While AI companies and publishers duke it out in court, AI companies are free to keep operating as is. There's a bit of guidance from the US Copyright Office, but there's a lot of debate about how state and federal governments should (or shouldn't) legislate around AI. In all likelihood, the question of AI and copyright will be left to the courts to decide on a case-by-case basis. But the potential of using technology that's built from stolen work is not only legally dicey, it's an ethical breach many creators won't stand for.
Protecting IP elements like visual style is also a concern, going hand in hand with copyright. For example, many directors spend their careers crafting the looks that define their movies. Think the iconic, angsty blue hue that colors the city of Forks in Twilight. Or literally any movie by Wes Anderson, with his signature colorful style. The visual identities of movies are painstakingly created by teams of directors of photography, lighting and visual effects artists, and color grading experts. Feeding all of that content into an AI image or video generator runs the risk of anyone being able to mimic it.
This isn't theoretical; it's something we've already seen. When OpenAI launched its native image generator in ChatGPT earlier this year, people started churning out anime-looking images in the style of Studio Ghibli. Studio Ghibli is one of the most popular animation studios, the maker of hits like Spirited Away and My Neighbor Totoro. It was a depressingly ironic trend, as many critics pointed out that the founder of Studio Ghibli, Hayao Miyazaki, had said in a 2017 interview that AI is "an insult to life itself."
"changed my pfp but maybe someone will make me a better one," OpenAI CEO Sam Altman (@sama) posted on X on March 26, 2025.
This is a troubling possibility for studios. "Say you're Lionsgate. You don't want the world that the LLM has been able to create, [like] the John Wick world, to all of a sudden show up in somebody else's storyboard, right?" said Rosenberg. "So I think there's a security issue above all… giving away of your trade secrets, your intellectual property, is really first and foremost in the minds of the studios and networks."
Many AI generators have guardrails around creating images of specific people, like celebrities and politicians. But these guardrails can be flimsy, and even if you don't use a director's or actor's name, you can describe the look and feel until the AI content is essentially indistinguishable.
Lionsgate's AI models should be exclusive to the company, but the deal highlights how the same concern hits studios and individual creators differently. Studios need to protect their IP; creators don't want anyone to be able to copy their style. There's also the risk of reputational harm from these uses. For example, if you didn't know about the Ghibli ChatGPT trend, it could appear as though Studio Ghibli made a cartoon of a crying woman being deported, as shown in one AI image shared by the White House's official X/Twitter account.
These big-picture concerns help explain why it's been hard for tech companies to sell their AIs to entertainment leaders en masse. As entertainment leaders investigate and begin to implement AI, creators' concerns are elevated by labor unions.
Most people affected by AI aren't celebrities. That's where labor unions come in
While some celebrities have been able to fight back against AI encroaching on their work and likeness, like Scarlett Johansson and Keanu Reeves, the majority of people don't have the resources of a celebrity. That's why union protections are so important when it comes to AI, said Duncan Crabtree-Ireland, SAG-AFTRA national executive director and chief negotiator, in an interview with CNET.
AI was a key issue during the 2023 strikes by unions representing writers, screen actors, directors and stage performers. The WGA and SAG-AFTRA contracts that emerged from those strikes outlined specific guidelines around the use of AI.
In the SAG-AFTRA contract, one of those protections concerns digital replicas: the process of scanning people's faces and bodies so that moviemakers can insert synthetic versions of actors into a scene after it's been filmed.
Before the contract was enacted, actors were worried that if they chose to sell their likeness, studios could pay them once and then use their replicas ad infinitum, which could ultimately limit future job opportunities. Without the guardrails the contract sets against that, the process would be "akin to digital indentured servitude," said Crabtree-Ireland.
"We're not trying to stop people from allowing others to create digital replicas of them. We just want people to know what it is they're agreeing to when they agree to it, and that that agreement can't just be perpetual and without boundaries," said Crabtree-Ireland.
Union guardrails like the ones around digital replicas are step one of a longer path toward finding an equitable balance between innovation and protecting labor interests. To the dismay of some members, the union isn't trying to outright ban generative AI, Crabtree-Ireland said.
"Past history teaches us that unions that just try to block technology, they fail. Technological progress cannot be held back by sheer force of will," said Crabtree-Ireland. Instead, the union wants to keep one hand on the wheel. "We're going to use every bit of leverage, power and persuasion we can bring to channel these things in the right direction, rather than trying to block them," said Crabtree-Ireland.
Unions like SAG-AFTRA protect thousands of workers in the entertainment industry. The power they wield can be used to help industry titans navigate new AI, but more importantly, unions can help guide corporations away from abusive, disastrous or straight-up dumb uses of AI. Union contracts can set important precedents. Not everyone who works in entertainment is eligible for union membership, but by raising the bar and setting limits around AI use, unions can still ensure a healthier work environment and stabilize the future of the industry for current and future creators.
Can AI make an emotional connection?
There's no shortage of hype surrounding AI in Hollywood, though technical limitations, legal uncertainties and ethical concerns have held it back from the full-throttle invasion some technologists might have envisioned. But continued innovation and evolving legal postures might entice studios and networks to start exploring AI more aggressively and more loudly.
For Alexander and Cohen, generative AI will continue to be an issue to grapple with on the festival circuit. But for their own work, a sci-fi miniseries called The Cloaked Realm, the duo said they spent thousands of hours over several years hand-drawing and animating the show.
"We didn't even really consider [using AI] because we really care about the depth, the nuance, all these things that we feel like come organically with 2D animation," said Cohen. "I think it emotionally hits people at a different level, and then intellectually, also, people appreciate knowing a human created everything."
"Human touch can be replicated, but I often wonder, will the feel, the emotion that gets produced in someone, is that going to be replicated?" said Alexander. "You know the old saying, no plan survives contact with the enemy? I wonder when these AI models, even as they get extremely polished and perfected, will touch people's souls the way that something that's created by humans can."