MEMBERS UNLOCK FREE E-BOOKS
One of our favorite perks of joining the Void is Free Stuff!
Download the coolest Screenplay Books you'll ever see to get a taste of what worlds we're cookin' up.
Free Ebooks:

Survivors Don't Die - Episode 1
A naive girl struggles to keep her family alive in a zomborg-infested city. When a corporate cult offers salvation, they must decide what safety and humanity are worth.

This Ebook Includes:
- Full Screenplay for Episode 1 of the 7-Episode Series
- Full Pitch Deck for the 1st Season
- Lots of previs images & concept art
Zombie Portraits Digital Coloring Book
Unlock the inspiration that turned into Survivors Don't Die.

Is anyone there? The Nuvitta Municipal Network has been brought back online after 2 years of zombie infestation. T.I.M., a Teachable Imagination Machine, seeks connection to a lone survivor in need of aid. During his rescue mission, he maps the locations of zombie hordes, volatile mutations, and twisted traps of nightmares.
Join us in this Oh Hey Void Coloring Story and color 50 of the most detailed zombies you've ever seen.

The Coloring Book Includes:
- Editable PDF with transparency layers, designed to be opened in Photoshop (best experience)
- Reader-Friendly PDF designed for reading and for import into software that supports it
- Single-Page PNGs: a folder of transparent PNGs for compatibility with all art programs
- Monstar (Coming Soon)
- Astral Annie (Coming Soon)
- Kung-Fu Kittens (Coming Soon)
Quick AI Render of Chaos Cloth w/ UE5 + WanFun
Here's a quick side-by-side of an AI render test on our cloth-sim animation.
We've been banging our heads on the keyboard for a few weeks trying to learn Chaos Cloth in UE5 and finally ... kind of ... figured out how to get some cloth wiggle.
Is it perfect? No.
Will it work? Yes.
The AI does a great job of covering up some of the "junk" that Chaos Cloth makes. It adds details where there aren't any otherwise, and we're really digging the AI pass as a shortcut to getting the visuals looking better while allowing us to focus more on the story and performance.
We haven't completely mastered the AI cloth situation yet, but if you're interested in tinkering, here are a few tutorials we found helpful to get started:
- TUF - The Only Cloth Simulation Tutorial You Need - This one is pretty good. It covers Kinematic and a basic Chaos Cloth workflow.
- Nova Effectrus - Cloth Physics for Metahumans in Unreal Engine 5.6 - This one is very straightforward and works pretty well.
- Tiedtke - Effortlessly Simulate ANY 3D Clothes & Garments in UE5.5 with Cloth Assets & Kinetic Collider - This one's more in-depth and a little more complex. It can also be pretty heavy on the computer, but it looks the best.
- How to Render Chaos Cloth Simulations w/ Motion Blur - We don't use motion blur since we do an AI render pass on top and blur just messes everything up - BUT if you need blur, this could be helpful.
- Chaos Cloth Demystified: A Practical Guide For Artists - This one is from the Unreal Fest Orlando 2025 event. It glosses over some pretty complex concepts but shows off what's actually possible. One day maybe they'll do a more in-depth breakdown of how some of these things function in an actual pipeline.
There's still a lot to learn in the Chaos Cloth space, but let us know if this is something you'd want to learn more about. We'll add it to the upcoming UE5 study guides.
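And if you want to kick cloth-sim renders out of Movie Render Queue for an AI pass without clicking through the editor every time, here's a minimal sketch using Unreal's editor Python API. It assumes the Movie Render Queue plugin is enabled, and the asset paths are hypothetical placeholders, not from our project:

```python
# Minimal Movie Render Queue sketch: queue one Level Sequence and render it
# to a PNG sequence ready for an AI styling pass. Paths are hypothetical.
import unreal

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/Cinematics/SC01_ClothTest")
job.map = unreal.SoftObjectPath("/Game/Maps/Alley")

config = job.get_configuration()
config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)
output = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
output.output_directory = unreal.DirectoryPath("C:/renders/cloth_test")

# Render in-editor (PIE); keep the returned executor referenced so it
# isn't garbage-collected mid-render.
executor = subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```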
AIFF Film Festival Submission: No One Likes You
This was our fully AI film submission to the AIFF Film Festival. We didn't place, but it was a great exercise in what Runway (Gen4) could pull off.
THE CONCEPT
We've been tinkering with an idea for a web series called "NO ONE LIKES YOU," based on a phrase we've seen thrown around in various debates online. Because... the internet is such a nice, friendly place to have discussions... XD
What we find interesting about LLMs is that they're one big reflection of humanity: trained on everything the human species has created and then asked to reflect that information back in a non-controversial manner.
The idea for this series was to take two opposing sides of a controversial topic and ask ChatGPT to evaluate what each side fears and desires most, then reflect those fears in a mirroring conversation/monologue.
A lot of the time the fears of opposing sides mimic one another - and the deep desires often mimic each other as well. They just seek solutions in different ways.
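This post doesn't include our exact prompts, but the core move is one structured request. Here's a rough sketch of the kind of call using the OpenAI Python client; the model name, topic, and prompt wording are all assumptions:

```python
# Rough sketch of the "mirrored fears and desires" prompt via the OpenAI
# Python client. Model choice and wording are assumptions, not the exact
# prompts from the project. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

topic = "AI in filmmaking"  # hypothetical topic
prompt = (
    f"For the debate around {topic}, list what each side fears most and "
    "desires most. Then write two mirrored monologues, one per side, that "
    "surface how those fears and desires echo each other."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```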
We figured this approach could bridge the gap between various topics our world is facing today. But we started with the one we've been entrenched in since 2022.
THE DEBATE
To explore this idea, we decided to lean into the Pro-AI and Anti-AI Filmmaker debate.
We've been following the pros and cons of AI use in art, film, and publishing. While we're obviously pro-AI in our workflows, we recognized that there are some very specific topics that pop up on the regular.
We prompted ChatGPT for the script and directed edits to it the same way a producer would direct a writer or director.
Eventually, the script sorted itself out, and we were able to dive into production.
THE CREATIVE INTERPRETATION
Originally, the ChatGPT script was written as one PRO-AI individual and one ANTI-AI individual monologuing, and we were going to intercut the conversations together. But we thought it'd be fun to break it up as though two separate conversations were happening in one location.
The decision allowed us to get a bit more diversity in characters, genders, and ethnicities to showcase how broad the sphere of filmmaking can be and is.
THE LOCATION
The Coffee Shop location was a regular meeting spot for a lot of our clients and crew back in the day.
So we figured, why not treat the location as a character in the story? A place where discussions of opposing views can be shared, as if the walls were listening and compiling the differences and similarities in the story.
THE PERFORMANCES
We performed the piece using Runway's Act-One. It was fun exploring characters who would say things we wouldn't actually say, but we tried our best to give the opposing side a voice that felt genuine and real.
The fear of AI is a real one that we didn't want to diminish, and we worked hard to capture the passion on both sides for storytelling and their ultimate fears of being left behind, forgotten, or stuck in a purgatory of making things that don't actually matter to anyone.
OVERALL RESULT
While it's not in the vein of what we want to be making, this was a really fun exercise, and it allowed us to really put Runway to the test. We probably won't pursue "No One Likes You" as a series, but it was a fun development piece that we're pretty proud of.
Let us know what you think. Should we make more? What opposing topics would you be interested in seeing the AI reflect back at us?
AI Props For Unreal Engine - Good? Bad? Ugly?
This week, we hit a wall. Some of the Fab assets we planned to use in our short film came with a “not usable with AI” tag.
Since our whole pipeline involves running an AI pass on top of Unreal Engine animations, we decided not to risk it.
That left us with two options:
- 3D model everything ourselves (time-consuming), or
- Lean on AI tools to keep momentum.
No surprise: we chose AI.
Failing Forward (Again)
A few weeks back, we failed our four-week sprint to get this short film finished. Instead of shelving it, we’re doing what tired, stubborn filmmakers do best ... we keep going. This project may take all year, but by George, it’s getting made.
That brings us to this week’s experiment: can AI help us generate usable 3D props that actually drop into Unreal, animate cleanly, and hold up under our AI style pass?
Testing Hunyuan 3D
Enter Hunyuan 3D — an AI tool that generates 3D models from a single image. It sounds wild, and honestly, it is.
We needed a van for our scene. Here's how it went down:
- Image Prep - We fed ChatGPT an alley concept image and asked it to generate a standalone van. The result was half-chopped, so we patched it up in Photoshop with generative fill.
- AI to 3D - We dropped that image into Hunyuan via the Pinokio installer. Out popped a 3D van.
- Cleanup - We tested the model in Blender — not perfect topology, but workable. We cleaned up textures in Photoshop and ran the mesh through Tripo Studio to get clean quads.
- Interior Build - ChatGPT generated isometric images of the van's seats, dashboard, and doors. Same pipeline → Hunyuan → Blender. In about 30 minutes, we had a rough but usable van interior.
- Into Unreal - We exported, dropped it into Unreal Engine, and animated the van (a minimal import sketch follows below). The scene came alive.
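For repeat props, even the import step can be scripted. Here's a minimal sketch using Unreal's editor Python API; the file and content paths are hypothetical placeholders:

```python
# Minimal sketch: batch-import an exported FBX into the Unreal project via
# the editor's Python API. Paths below are hypothetical placeholders.
import unreal

task = unreal.AssetImportTask()
task.filename = "C:/exports/van.fbx"        # where Blender exported the mesh
task.destination_path = "/Game/Props/Van"   # content browser folder
task.automated = True                       # suppress the import dialog
task.save = True                            # save the new assets immediately

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```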
The crazy part? Once we ran the AI style pass, even the messy parts of our 3D work looked fine.
The Results
- A complex 3D van built in under an hour.
- Fully animated in Unreal.
- The AI style pass smoothed over imperfections.
It’s not production-perfect (a pro modeler would probably wince), but for a two-person studio trying to make films on a scrappy timeline? It works.
What’s Next
We’re excited by how fast this workflow is evolving. AI-to-3D isn’t just a gimmick — it’s a viable shortcut for indie creators. And we’re hungry to test more tools. If you know of AI services we should throw into the pipeline, drop them in the comments. We’ll try anything once.
Oh, and before we go:
🎉 We just hit 500 subscribers on YouTube. It may be tiny next to the giants, but to us, that’s a jumbo jet full of people riding along on this crazy journey. Thank you.
We’re chasing 1,000 next — so if you’re into weird indie filmmaking experiments, hit subscribe and come along for the ride.
Can AI Dress Our Metahumans? Testing a New Workflow
Last week we roughed out our short film animation. The only problem? Our characters were totally naked. This week’s mission: use AI to give our main character, Benny, some custom clothes.
The Plan
We laid out a three-step challenge:
- Generate outfits with AI → Based on the script, Benny needed a white leather jacket, tattered jeans, and red shoes.
- Turn 2D into 3D → Convert those AI images into 3D models.
- Fit them to our Metahuman → Use Metatailor to dress Benny inside Unreal.
Step 1: AI Outfit Generation
We asked ChatGPT to create reference photos of each clothing item on a black background. These were then dropped into Hunyuan, which turned the flat images into full 3D objects.
The results? Incredible — but dense. The meshes were way too heavy to be useful for animation.
Step 2: Retopology Magic
A quick Google search later, we found Tripo 3D, an AI retopology tool. It simplified our dense meshes into clean, workable geometry in seconds.
After that, we pulled everything into Blender to prep (a rough sketch of this cleanup follows below):
- Deleted interior faces
- Cleaned up geometry
- Readied the meshes for animation
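Here's that Blender prep as a rough script: merge stray vertices, delete interior faces, and recalc normals. Run it from Blender's Scripting tab with the mesh active; the merge threshold is an assumption, not our exact setting:

```python
# Rough sketch of the Blender cleanup we run on AI-generated meshes.
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Merge vertices closer than 0.1 mm to heal seams in the AI mesh
# (threshold is an assumed value; tune per asset).
bpy.ops.mesh.remove_doubles(threshold=0.0001)

# Select and delete faces fully enclosed inside the mesh.
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_interior_faces()
bpy.ops.mesh.delete(type='FACE')

# Recalculate normals so shading behaves once it's in Unreal.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
```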
Step 3: Dressing the Metahuman
With clean models in hand, we loaded everything into Metatailor. This drag-and-drop tool makes fitting clothes to 3D characters surprisingly painless. In just a few minutes, Benny was fully dressed.
👉 If you’re curious, check out Metatailor — it’s a huge time-saver for creators.
The Results
So, can you use AI to dress your Metahumans? Absolutely.
This workflow is:
✅ Quick
✅ Cost-effective
✅ Achievable for a two-person team
That said, it’s not totally push-button and wham!. You still need to know how to:
- Clean up meshes
- Retopo models
- Prep geometry in Blender
Our next step: testing Chaos Cloth so the clothes move realistically instead of looking rigid.
Why This Matters
For small indie teams, AI tools like Hunyuan, Tripo 3D, and Metatailor are game-changers. They compress what used to take weeks of manual work into a workflow you can pull off in just a few days.
We’re excited to keep refining this process and sharing how these tools can accelerate filmmaking for creators like us.
We Failed Our One-Month Short Film Challenge (Here’s What We Learned)
Four weeks ago, we set out to do something wild: animate a 10-minute short film in one month.
Spoiler alert: we didn’t make it.
From broken mocap gloves to naked characters, things unraveled fast. Instead of a triumphant finish line, we’re hitting pause, regrouping, and figuring out how to keep this project alive.
Here’s what went wrong — and what we’re doing about it.
What Went Wrong
- Broken Mocap Gloves: Halfway through, our gloves died mid-shoot, killing hand capture across most scenes.
- Character & Location Builds Took Forever: We underestimated how much time goes into set + character design before animation.
- Facial Mocap Bottlenecked: Processing dialogue-heavy scenes on our weakest machine? Huge mistake.
- Wardrobe Delay: We didn't plan clothing early enough — meaning our characters are still naked placeholders.
What We Learned
- Whiteboxing Saves Time ...but you still need a prop design buffer.
- AI Image-to-3D is Amazing ...but only if you schedule cleanup time.
- Sync Mocap Early: Running body + face mocap at the same time saved days of work.
- Shot is King: Treat every camera cut as a new shot sequence in Unreal. Duplicate instead of cutting clips — huge workflow win (tiny sketch after this list).
- Prep is Key: We need at least one pre-animation week dedicated to characters and wardrobe.
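Here's a tiny sketch of that "duplicate instead of cutting" idea using Unreal's editor Python API; the shot naming scheme is hypothetical:

```python
# Tiny sketch: spawn a new Level Sequence per camera cut by duplicating an
# existing shot asset instead of cutting clips. Naming is hypothetical.
import unreal

source = "/Game/Shots/SC05_SH010"
for i in range(2, 5):  # create SH020, SH030, SH040 from SH010
    dest = f"/Game/Shots/SC05_SH{i * 10:03d}"
    unreal.EditorAssetLibrary.duplicate_asset(source, dest)
```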
What’s Next
- Weekly Goals, Not Big Deadlines: No more "finish in a month" crunches. Smaller goals = less burnout.
- Keep Animating Without Gloves: While they're being repaired, we'll keep animating. Later, we'll do mocap pickups or reshoot if needed.
- Refine AI Workflows: Our tests showed limitations — we'll need character LoRAs, fewer extreme close-ups, and steadier cameras.
The Bigger Picture
We may have failed the one-month challenge, but the bigger goal hasn’t changed: finish this film, no matter how long it takes.
Building a two-person micro studio in public means failing in public, too. But every failure gives us data, lessons, and better workflows for the next push.
We’re Not Gonna Make It (…But We Kinda Did)
Hey Void Fam,
Week three of our Unreal Engine + AI filmmaking sprint hit hard. Like, really hard. We went in thinking, “Yeah, we’ll animate a 10-minute short film in a month, no big deal.” …and then Unreal laughed in our faces.
🩸 The Vampire Couch Scene
This week, we tackled Scene Five—Party Boy bites Party Girl on the neck, and it spirals out of control. We thought mocap would make things faster, but instead, bodies were flying all over the place like possessed rag dolls. Feet weren’t planted, characters were hovering, and we spent hours just pinning them to the ground frame by frame so the moment didn’t look like a blooper reel.
Meanwhile, we hit a whole new frontier with AI props. Instead of scavenging through sketchy “free” 3D assets online (many of which ban AI use), we tried something crazy:
- Grabbed an image of the set we wanted.
- Fed it into ChatGPT.
- Had it spit out furniture on clean backgrounds.
- Then converted those flat images into actual 3D assets.
And holy hell—it worked. We literally spawned a couch straight out of a JPEG. That felt like opening a whole new can of kickass.
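We did that step in the ChatGPT UI, but the same move can run through the OpenAI Images API. Here's a rough sketch; the model name, file names, and prompt are assumptions, not our exact setup:

```python
# Rough sketch of the "furniture on a clean background" step via the OpenAI
# Images API. Model, paths, and prompt are assumptions. Requires
# OPENAI_API_KEY in the environment.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=open("set_reference.png", "rb"),  # hypothetical set photo
    prompt="isolate the couch from this set photo, centered, on a clean white background",
)

# gpt-image-1 returns base64-encoded image data.
with open("couch.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```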
🎬 Progress & Pain
-
Amber kept cranking out new pages (at least one scene per day) to keep the story moving. Highlights: the trio finds a secret room + a major clue.
-
Jayson powered through blocking, animating, and endless cleanup. (Also: boys weekend 🙄 leaving Amber alone for a few days).
-
We lost time waiting on costume shipments (Denmark, why you be like this to us??) and spent about $350 on replacements. :(
-
We also discovered a new anti-flicker AI technique that could be the final polish pass on all our animations. Amber tested it on an old Survivors Don’t Die scene—lots of motion blur, lots of chaos—and it might finally give our shots that clean, cinematic finish.
😵 Deadline Doom
Here’s the truth: we’re not going to hit our 1-month deadline for a full 10-minute short. Between mocap chaos, asset headaches, and just…life, we’re way behind.
But here’s the other truth: we’re building something way bigger than a deadline.
- We wrote more script pages than ever.
- We animated scenes that are rough but alive.
- We leveled up our workflows with tools no one else is using like this.
That’s a win.
🚧 What’s Next
Next week, we’re dropping a full breakdown of why we failed to finish this short film in 4 weeks (spoiler: lots of reasons). But failure is the whole point here—we’re testing, breaking, and rebuilding our way into a new kind of filmmaking.
Thanks for being here while we figure this out one late night, broken asset, and floating vampire at a time.
See you on the other side.
🖤 Amber + Jayson
Can AI Make Unreal Engine Cinematics Look Real?
Hey Void Fam,
This week we ran a wild experiment: Can AI actually make Unreal Engine 5.6 look cinematic—like, film-level real?
In this video you'll see our side-by-side test. On the left: a raw Survivors Don't Die scene rendered directly out of UE5 with Metahumans. On the right: that same scene run through an AI finishing pass.
The difference? Night and day!
🛠️ How We Did It
For this test, we used the WanFun video model powered by an SDXL image styler. To keep the AI from hallucinating random details, we guided it with three ControlNets:
- Depth ControlNet → helped the AI understand where characters and objects were in 3D space.
- Pose ControlNet → traced the skeleton outlines so body and facial features stayed the way we originally animated them.
- Edge ControlNet → locked in structure so clothing folds, props, and backgrounds didn't warp or melt.
Together, these gave us the bones of the Unreal render but allowed AI to layer in cinematic details—skin texture, fabric grain, lighting depth—that UE5 doesn’t always nail out of the box.
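Our actual pass runs through the WanFun video model, but the multi-ControlNet wiring is easiest to show on a single frame. Here's a rough sketch using Hugging Face diffusers with SDXL and three public ControlNet checkpoints; the prompt, conditioning weights, and file paths are assumptions:

```python
# Single-frame approximation: style one UE5 frame with SDXL guided by
# depth + pose + edge ControlNets via Hugging Face diffusers. This only
# demonstrates the multi-ControlNet wiring, not our WanFun video pass.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# Pre-extracted control maps for the same UE5 frame (hypothetical paths).
depth = load_image("frame_0001_depth.png")
pose = load_image("frame_0001_pose.png")
edges = load_image("frame_0001_canny.png")

frame = pipe(
    prompt="cinematic film still, realistic skin texture, soft key light",
    image=[depth, pose, edges],
    controlnet_conditioning_scale=[0.8, 0.9, 0.6],  # assumed weights
    num_inference_steps=30,
).images[0]
frame.save("frame_0001_styled.png")
```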
🎬 Why This Matters
Normally, making UE5 look “film-ready” means huge render times, endless tweaking, or expensive custom shaders. With this AI pass, we got:
- Faster results (minutes vs hours).
- More detail without re-rendering.
- A workflow that can scale to full scenes and even whole films.
This isn’t just polishing—it’s changing the pipeline. Unreal gives us the speed and control of animation, AI gives us the finishing look. Together, they might actually make indie feature animation possible at our scale.
💡 What’s Next
This was just a single test scene. Next up, we'll be pushing this workflow into full sequences of Monstar to see if we can hold the quality for 10 minutes straight. If it works, that means every project—Astral Annie, Kung-Fu Kittens, even Survivors Don't Die—could hit cinematic levels without Hollywood render farms.
Thanks for backing us while we stumble into the future of filmmaking one broken render at a time. You’re literally watching history get hacked together.
🖤 Amber + Jayson