The Future of AI Filmmaking Is a Parody of the Apocalypse, Made by a Guy Named Josh


The filmmaker could not get Tiggy the alien to cooperate. He just needed the glistening brown creature to turn its head. But Tiggy, who was sitting in the passenger’s seat of a cop car, kept disobeying. At first Tiggy rotated his gaze only slightly. Then he looked to the wrong side of the camera. Then his skin turned splotchy, like an overripe fruit.

The filmmaker was not on a movie set, or Mars. He was sitting at his home computer in Los Angeles using a piece of AI software called FLUX Kontext to generate and regenerate images of the alien, waiting for a workable one to appear. He’d used a different AI tool, Midjourney, to generate the very first image of Tiggy (prompt: “fat blob alien with a tiny mouth and tiny lips”); one called ElevenLabs to create the timbre of Tiggy’s voice (the filmmaker’s voice overlaid with a synthetic one, then pitch-shifted way up); and yet another called Runway to describe the precise shot he wanted in this scene (“close up on the little alien as they ride in the passenger seat, shallow depth of field”).


The creator of Neural Viz edits a clip featuring Tiggy.

Photograph: Neftali Barrios

The AI kept getting things wrong. In one shot, Tiggy looked inexplicably jacked. In another, his back was too dry. When the filmmaker told one piece of software to give the back of Tiggy’s head “frog-like skin,” it superimposed an entire frog’s face. The AI seemed to resist depicting Tiggy naked, but Tiggy does not wear clothes. When the director asked for a “short shirtless alien,” he got an error message, presumably because of the tool’s safeguards. “Because I said the word shirtless,” he guessed.

Narratives around AI tend to be all-or-nothing: Either we’re cooked or it’s all hype. Watching the filmmaker work with AI software—morning iced coffee in hand, brown hair and beard lightly unkempt—is quirkier and less dramatic than all that. It’s like dropping in on puppy school. The tools keep ignoring instructions, making odd choices, or veering entirely off course. But with care and patience, he reins them in, eventually coaxing out eight minutes of densely scripted original TV.

In this case, those eight minutes constituted the latest episode in the sci-fi cinematic universe that the filmmaker has created under the name Neural Viz. The project started in 2024 with a mockumentary web series called Unanswered Oddities, a talking-head TV show from a future where the Earth is inhabited by creatures called glurons, who engage in Ancient Aliens–style speculation about their human predecessors. Each episode explores a different (and badly mispronounced) aspect of “hooman” civilization, like America, exercise, or the NFL. At first it seemed like a funny, self-contained bit.

But then the universe, known as the Monoverse, started to expand. Neural Viz churned out episodes of different series from the same gluron TV network, Monovision: a documentary cop show, a UFC-style show about fighting bugs. Then came podcasts, street interviews. Subplots and arcs started to emerge between videos, with romances forming, religious cults lurking in the background, and grainy archival footage surfacing about the true circumstances that wiped out humanity. Before long, the filmmaker had built an entire world with its own language, characters, and lore, all of it made with AI.

Neural Viz became a cult hit—a favorite of Redditors and AI nerds on Twitter—then a hit-hit, with individual videos racking up hundreds of thousands of views on YouTube and millions on TikTok and Instagram.

But beyond any measures of popularity, Neural Viz counts as a historic accomplishment: It is among the first pieces of AI filmmaking that truly does not suck. The words “AI video” tend to conjure the worst possible associations: hippos on diving boards, babies flying airplanes, Will Smith eating spaghetti, Trump and Barack Obama kissing. In other words, slop. The medium’s reputation is understandably negative, for reasons both aesthetic and political. The bots will ruin Hollywood and destroy jobs, the argument goes, and drive audiences even deeper into their algorithm-induced stupor.

Neural Viz shows a different path forward. In a world of bottom-of-the-barrel, lowest-possible-effort AI dreck, the channel’s author is creating original work, executing a vision as specific and lovingly imagined as any series out there. A couple of important details: Even as he’s writing prompts to help fulfill nearly every other role on a set, the creator of Neural Viz is writing scripts the old-fashioned way. He’s also playing all the characters himself, wearing AI as a mask. Once he has all his shots set up, the filmmaker uses Runway’s facial motion-capture tool to bring Tiggy to life by performing the alien’s lines for him—like Andy Serkis playing Gollum without leaving his swivel chair.

A clip from Unanswered Oddities featuring the character Bobo Fuggsnucc.

The same performance by Neural Viz’s creator—before being run through AI facial motion-capture software.

Just as Trey Parker and Matt Stone reinvented cartoons by reaching for the cheapest tools available, the man behind Neural Viz is taking a technology many people consider beneath them and using it to push the medium in a new direction. He just might be the first AI auteur.

He has also maintained near-total anonymity in this role—until now.

The youngest of three brothers, Josh Wallace Kerrigan grew up in a small town outside Wichita Falls, Texas, watching movies like Tremors and Jurassic Park. When he was 9 or 10, he and a friend used the video camera on top of his desktop computer to make a short film about a baseball-player serial killer. (Tagline: “Three strikes, you’re out.”) He studied film at Minnesota State University Moorhead, and after graduating in 2012, Kerrigan moved to Los Angeles.

For the next decade, he followed the 2010s aspiring-comedy-writer-in-LA handbook. He took a series of day jobs, working as a barista at “a Starbucks inside a Target,” as an assistant to the director who cowrote the frat comedy Neighbors, and as a producer of behind-the-scenes and promotional videos for movies like Mufasa: The Lion King and the John Cena–Awkwafina comedy Jackpot! He formed a sketch group called Hush Money and made a video every week for a year, which appeared on Funny or Die’s YouTube channel. (The group specialized in genre satires, including a Saw parody that got props from director James Wan.) In 2021, he directed a low-budget horror feature and sold a TV pilot to Disney.

Kerrigan accumulated gobs of experience—he got to the point where he could play every role on a set, from cinematographer to gaffer to sound guy—but struggled to gain lasting traction. The pandemic and its aftermath wrecked the traditional Hollywood writer path. The streaming bubble burst, and writers’ rooms shrank. Strikes by the writers’ and actors’ unions froze work for months, and the contracts they eventually signed reflected an ever-shrinking pie, as well as a fear of AI’s encroachment.

In 2023, Kerrigan started playing around with 3D modeling software like Blender and Unreal Engine. He was interested in animation—he liked the idea of building characters and sets he could return to at any time—and wanted to see what he could make on his own. He soon learned about a handful of new generative AI apps like Midjourney and Hedra and found that they automated and sped up the most difficult parts of 3D modeling.

When most people first encounter generative AI tools, they tend to start with the zaniest thing they can imagine. These “crazy” ideas are often surprisingly generic: dragons in space, kittens crying, robot uprisings. Kerrigan took the opposite approach: He paid close attention to AI’s limitations and worked around them. He noticed that the tools were bad at action sequences but good at talking heads, so he decided to make something in a documentary style. He wanted to avoid the uncanny-valley effect of simulated humans, so he opted for bulbous alien creatures. And to mask the imperfections of the renderings, he gravitated to the old-school grainy look of ’80s and ’90s TV. Hence Unanswered Oddities, whose deadpan homage to NBC’s Unsolved Mysteries is unmistakable.

Kerrigan’s early episodes look a bit rough, but they quickly establish the show’s off-kilter comedic voice and ambitious vision. They also set up some of the Monoverse’s core elements and conflicts: the autocratic godlike Monolith that rules the planet, the “Resistance” trying to overthrow it, and the fast-talking conspiracy theorist Tiggy Skibbles, who thinks “hoomans” aren’t real—and who then mysteriously disappears.

Characters from the Monoverse.


Courtesy of Neural Viz

For Kerrigan, discovering generative AI apps felt like unlocking new powers. “The first time you start to see those weird creatures talking and whatnot, it is pretty mind-blowing,” he says. He felt like the meme of the guy standing in the corner at the party watching everyone dance and thinking to himself, They don’t know.

On Reddit, users were impressed by Neural Viz’s decision to lean into the idiosyncrasies, and even the flaws, of AI. Kerrigan got props from other creators too, who speculated about the identity of the artist behind the channel. “I thought he was Mike Judge hiding under a pseudonym,” says Zack London, who creates AI videos under the moniker Gossip Goblin and has over a million followers on Instagram.

Encouraged by the initial response, Kerrigan decided to make more episodes, but he had no idea where it was going. “There was no plan,” he says, so he decided to keep his identity secret. Kerrigan experimented with new formats, driven by his knack for genre satire and a desire to keep himself interested. He created The Cop Files, an X-Files meets Cops spinoff series, in which a detective investigates Tiggy’s disappearance; later came Human Hunters, a parody of Ghost Hunters.

The series also evolved with technology. With new generative AI apps dropping frequently, Kerrigan was intent on trying as many as possible. (Using a new piece of software early can help attract tech-curious viewers.) When he first started, he would record snippets of dialog—that is, recite lines into his microphone—and the AI would roughly match the mouth flaps of the characters to the spoken words as best it could, adding some basic facial movements. This gave Kerrigan some command over performances, but not a lot. Then, in October, Runway released its motion capture tool, Act-One. Now he could act out lines in front of his computer’s camera and the software would map his delivery—both voice and facial movements—onto the model of the character. This gave him much more control over the characters’ look and behavior. It also made the content more him than ever. (On the other hand, the characters started to seem more uniformly him, at least to my eye. Kerrigan says he’d like to hire other actors to diversify the performances, but for now it’s easier to play every part himself.)

Sometimes a new tool would open up storytelling possibilities. When Google’s Veo 2 video generator became available, Kerrigan made a video showing a “flashback” to the moment when the Monolith wiped out humanity—the series’ first narrative sequence. The Cop Files became more narrative too; instead of talking directly to the camera, characters were now moving around, interacting with each other, and setting off on quests.

The technological changes even influenced the show’s lore. In one episode released in April, Tiggy’s skin is noticeably smoother than usual, because the video generation tool Kerrigan was using at that point, called Sora, struggled with character consistency. To cover for this flaw, Kerrigan had Tiggy explain that he is “metamorphosizing” because he can no longer afford his “morph inhibitors.” This fit nicely with the theory—advanced by some characters in the show—that glurons are mutating versions of humans. Since that episode, “morph inhibitors” have become a recurring bit.

The machines’ mistakes would often provide creative fodder like this. A knife-obsessed gluron rancher named Reester Pruckett—many Neural Viz characters are alien versions of the American southerners around whom Kerrigan grew up—has the strange tic of starting sentences with an extremely long vowel, e.g. “Iiiiiiiiiiiiiiiiiiiiiii came out here to practice my switchblade.” This started as a glitch in the software, but it was so delightful that Kerrigan decided to keep it as Pruckett’s signature.

In late 2024, Hollywood executives started DM-ing Kerrigan on social media. He spoke with “almost all of the major studios,” he told me, as well as producers and creators who wanted to talk about collaborating. Many commenters on YouTube told Kerrigan that his videos should be on Adult Swim. But when he met with producers affiliated with Adult Swim, he said, one of them suggested that he might not need them; that the power had shifted to creators. “That sentiment has come up multiple times in meetings with other various studios,” Kerrigan said.

The meetings resulted in two job offers. One was to work in-house at a studio, focusing on AI projects. Kerrigan turned it down in favor of making his own TV pilot (unrelated to the Monoverse) with an independent producer. He was also planning to debut a non-AI film short, which he’d codirected, at SXSW in the spring of 2025. Between his new contract for the TV pilot and the revenue Neural Viz was generating on YouTube and TikTok, Kerrigan now had enough money to live on. So in January, for the first time since moving to LA, he quit his day job.

In June, I attended the AI Film Festival in New York City, an event organized by the AI software company Runway. Hundreds of attendees packed Lincoln Center’s Alice Tully Hall to watch what were billed as the 10 best AI film shorts of 2025, selected from 6,000 submissions.

I found the whole thing depressing. The films were visually stunning but conceptually and narratively weak. The event, which featured a bromide-heavy Q&A with the musical artist Flying Lotus and a partially AI-generated music video for a song by J Balvin, seemed laboratory-made to bolster the case of skeptics who say AI art is all surface with no heart. (The one exception—a clever, disquieting film essay called “Total Pixel Space”—won the top prize.)

It’s a paradox of the AI film scene that, despite the speed and sophistication of these tools, the number of creators producing memorable work is small. I already mentioned Zack London, aka Gossip Goblin, who creates ominous, impressionistic videos about a future overtaken by computers. The musician Aze Alter makes eerie horror-adjacent shorts. A comedy writing duo who release videos under the name TalkBoys Studio (and who are friends with Kerrigan) make animated shorts featuring talking animals and dinosaurs.

Much more common are prompt-and-play AI videos designed to go viral. When Google’s Veo 3 debuted in May, making multimodal video generation as easy as typing a prompt into a box, social media feeds overflowed with—for reasons only the algorithm knows—vlogs of Bigfoot talking into a front-facing camera. One influencer even bragged about setting up an automated LLM-to-video pipeline that generated a Bigfoot clip every hour and pushed it to TikTok. OpenAI’s late-September release of Sora 2, which allows users to scan their own faces and insert themselves into videos, has only hastened the slopocalypse.

Part of the reason Neural Viz has broken through the noise is that Kerrigan takes such a traditional approach to so many parts of the craft. He always starts by writing—slug lines, action lines, dialog, camera movements. Then he storyboards each shot of the episode; for each panel, he creates a still using an image generator like Flux or Runway or sometimes ChatGPT. He makes sure lighting is consistent. During dialog scenes, he maintains sight lines. He takes care to make backgrounds legible—AI tools tend to blur objects—and set the mood of the scene. To get a handheld camera effect, he’ll film his monitor with his iPhone and then map that natural motion onto the AI footage: a hack that bridges real and virtual cinematography. “Everything I do within these tools is a skill set that’s been built up over a decade plus,” he says. “I do not believe there’s a lot of people that could do this specific thing.”


Kerrigan still uses many tried-and-true human filmmaking processes in his work.

Photograph: Neftali Barrios

One day over Zoom, I watched as Kerrigan worked on one of his most challenging scenes yet: After being taken hostage and then rescued, Tiggy meets up with a leader of the Resistance, and things don’t go as planned. The scene called for subtle physical movement, precise timing, suspense, and a major plot twist that would need to land just right. Each element presented a unique challenge. Kerrigan kept adjusting the proportions of one character’s head. When the character pointed a gun, he tried to line up its aim properly. He thought about how to get the Resistance leader to remove his hood in a way that would look natural.

At one point, while Kerrigan was getting ready to perform as Tiggy, I received an email from a Runway spokesperson. He told me that their new motion capture software, Act-Two, would be coming out later that week. I relayed this information to Kerrigan, who decided to stop working on the episode right then. Better to wait and see what the new tool could do.

Toward the end of our day together in Los Angeles, Kerrigan and I visited the Academy Museum of Motion Pictures, a 10-minute drive from where he lives. We walked through the exhibits dedicated to film technologies past: the zoetrope, the Cinerama camera, animatronic monsters. After spending the day looking at AI-generated glurons, I thought that even some of the more recent technology featured in the exhibitions—Bong Joon Ho’s storyboards and monster models, VFX for The Avengers—looked obsolete.

We stopped to take in an early hand-tinted color film that shows a dancer waving her flowing psychedelic robes for the camera. Kerrigan pointed out that the impulse to paint celluloid was probably more about experimentation than making history or some profound artistic statement. “They’re not thinking, like, This is gonna be in a museum one day,” he said.

Kerrigan resists grand pronouncements about the future of filmmaking. (Indeed, he resisted going to the museum altogether.) He doesn’t see himself as part of a movement and argues that AI is a tool like any other. In addition to his AI projects, he’s working on a traditional horror feature based on the short he codirected, which ended up winning an audience prize at SXSW. “I’m here to tell stories, and these tools are a part of the workflow,” he says. “They’re not the end-all-be-all, nor do I think they will be anytime soon.”

Yet Hollywood is preparing for an earthquake. Studios are integrating AI into their workflows. James Cameron has joined the board of an AI company, while Darren Aronofsky recently founded an AI-focused studio that is partnering with Google’s DeepMind. During the latest contract negotiations, the writers’ and actors’ unions fought for AI-related job protections.

Kerrigan says he has received some criticism online for using AI, and he acknowledges that the technology could disrupt Hollywood’s labor models. But the bigger, preexisting problem, he says, is that studios control narrative content. Whereas Disney bought and now owns the pilot he made in 2021, AI enables him to create and own work himself. “There is a version of these tools that allows people to become more independent of the system, and I think that’s probably a good thing,” he said. One downside: He worries about burnout. For all the benefits of being able to produce a studio-caliber video every couple of weeks, he now feels pressure to produce a studio-caliber video every couple of weeks.

The TalkBoys Studio writers, Ian McLees and Dan Bonventre, say the initial response to their AI work was mixed. “Our friends who are sitcom writers, feature writers, were like, ‘This isn’t worth your time, this is gonna kill jobs,’” says McLees. “We’re like, the jobs are already gone, the studios killed it.” He likened the shift to previous disruptions in the film industry, including the transition from hand-drawn to 3D animation. “We wanted to be at the table and not on the menu,” he says.

London says he gets blowback from fellow illustrators who are “very, very, very dogmatically opposed and fucking hate it.” He says he has little patience for the knee-jerk detractors. “Bro, you draw, like, furry fan art,” he says. “You don’t have to freak out at the first new thing that challenges whatever you thought was creative.”

So far, it seems the losers in the visual AI war will be the craftspeople—those extremely good at doing one technical task. The winners will be the idea people: writers, directors, storytellers. Idea people who can also wield the tools? They’ll be gods.

While some new AI tools are facilitating the prompt-and-play approach, others are providing more levers for human fine-tuning. When Kerrigan resumed making his Cop Files episode using Runway’s new software, Act-Two, it managed to capture the nuances of his performances even better than Act-One had. In one shot, as Tiggy delivers an emotional line, his lip trembles.

An ongoing mystery within the lore of the Monoverse is how humanity died off. One character says it’s widely believed they were killed by escalators, sucked into the moving cracks one by one, “taken out by their own dumb invention.” This feels like a reference to AI. In one episode, a news reporter discusses the escalator threat while standing in front of one in a mall. As he was designing the shot, Kerrigan could have left the space around the escalator blank. Instead, he inserted a flight of stairs, and a figure nonchalantly walking up them.


