We live like THIS... | Survivors Don't Die | Act 2

We just dropped Act 2 of Survivors Don't Die! This episode ramps up the stakes as the crew battles dwindling supplies, mounting tension, and a horde of zombies hot on their heels—all while trying to reconnect with the outside world. In a gripping finale, the team's desperate broadcast finally gets a response: someone named Rosie596... but can she be trusted?

We also shared exclusive insights into the creative process, emphasizing the delicate balance of hope, survival, and trust that drives the story forward.

Thanks for helping greenlight this project into a fully animated series. Every step brings us closer to making this show a reality! 🌟

AI Audio Table Reads | Survivors Don't Die | Act 1

Alright, folks! 

The audiobook for Survivors Don't Die - Episode 1 took us about a month to figure out. We tinkered with different ways we could do voice changers with AI. We tapped a number of services and even trained some of our own RVC models with our voices. 

It was a grand learning experience that ended with us being BLOWN AWAY by ElevenLabs, an AI voice platform that is miles above the rest. 

We really enjoy working with this particular tool because it pays the actors who trained their voices for every generation we do. 

Everything you hear was performed by us. Any dialogue was run through the voice2voice tools at ElevenLabs, and we got some pretty decent emotion. 
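
For the curious, the conversion step itself is tiny in script form. Here's a rough sketch of the kind of call we mean, hitting ElevenLabs' speech-to-speech endpoint; the API key, voice ID, model name, and file names are placeholders, so check the current docs before copying this.

```python
# Rough sketch of a voice2voice (speech-to-speech) conversion call.
# The API key, voice ID, model name, and file names are placeholders -
# double-check the current ElevenLabs docs before relying on this.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"
VOICE_ID = "your_character_voice_id"   # the trained/licensed character voice

url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"

with open("our_raw_performance.wav", "rb") as src:
    resp = requests.post(
        url,
        headers={"xi-api-key": API_KEY},
        files={"audio": src},                              # our own recorded take
        data={"model_id": "eleven_multilingual_sts_v2"},   # assumed model name
    )

resp.raise_for_status()
with open("character_take.mp3", "wb") as out:
    out.write(resp.content)   # the converted line, in the character's voice
```

The performance (timing, emphasis, breaths) comes from the source recording; the platform mostly swaps the timbre.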

NOW, we've been tinkering with different ways to present the Survivors Don't Die Episode 1 audiobook. ACX/Audible doesn't allow for any AI voice productions on their site (unless you use their really shitty AI voice tool). Which sucks when you're using legitimate tools, but at the same time - I get it. 

Anyway, we're distributing it ourselves. Huzzah! 

We've got the full 1hr Audiobook experience on Spotify and on YouTube. We noticed that the hour-long audio-only video wasn't exactly engaging. 

Until...

Jwall had a brilliant idea to shorten the length of the stories into more bite-sized single-Act episodes for our micro-podcast. Because everyone needs a podcast now, or something, right? 

He even went a step further and edited the actual book into the visuals for this one. 

We played with some usual podcast banter, lol, we'll get better. We promise. But we're excited to present each pitch and pilot this way in the future. 

Would love to know what you think on this one! 

- Amber & Jwall 

New Teaser Trailer!!!

Welcome to the new website! 

Today, we're thrilled to dive into our Survivors Don’t Die teaser trailer, giving you a behind-the-scenes look at the creative and technical process behind our latest peek into the world of Nuvitta.

From the initial sparks of inspiration to the AI magic that helped bring it all together, here’s everything that went into creating the teaser that’s got us counting down to October 31st!

Scene Breakdown & Inspiration

The purpose of this teaser was to immerse you into the tense world of Survivors Don’t Die and bring pages of our screenplay to life!

 Inspiration came from some of our favorite zombie films (Train to Busan and World War Z) and the suspenseful editing style of A Quiet Place.

While we had plenty of thrilling scenes we wished we could include, we focused on the strongest visuals to capture the story's core atmosphere.

Character Deep Dive

In this teaser, Yuki takes the lead as she’s swept up by the Hive's large drones, leaving viewers with questions about where she’s headed and what lies behind the massive walls of Nuvitta. We wanted to give glimpses into Yuki’s curiosity, Tim’s bravery, and Ji’s humor—each revealing hints of their personalities without too many spoilers!

Our original plan had more dialogue scenes, but we decided to dial it down to meet our Oct. 31 deadline. More trailers (and crazier videos) are on the way, and we can’t wait to reveal more about the full team.

Production - Production - Production!

Creating this teaser with just two of us was an exciting test of our skills! We knew we had to tackle Unreal Engine consistently, so we built a daily habit of 8-hour sessions to level up our animation chops. It was these consistent reps that allowed us to power through and bring this ambitious world to life.

As a two-person team, accessible tools like Unreal Engine, Blender, and Rokoko mocap were game-changers. With AI (like MetaHuman Animator for faces and some visual blending), we were able to handle high-quality, complex scenes much faster than we could’ve dreamed!

AI & Animation Process Insights

UE5 has been our animation workhorse, but with recent advances in AI video, we were able to supercharge the process! We ran some of the zombie and previs footage through Minimax, blending these visuals with our animated shots for a truly immersive experience. This AI pass was also key for polishing color correction and smoothing Metahuman textures.

AI in creative work has been a revelation, allowing us to speed up tasks and create visuals that hint at the epic scale we envision once greenlit.

Moving Forward

Up next? Even more promotional material and immersive sneak peeks! Our current goal is to reach 1,000 downloads of our pilot by year’s end to help make Survivors Don’t Die a fully realized series.

Is there a specific part of this world you’d love to explore further? Comment below, and let us know—we can’t wait to share more!

SceneTest | Scene 14

✒️SCENE 14✒️

Zola is checking on their grow house, while Oli is worried about their dwindling supplies. Tensions are rising!

This scene is in a sequence of "everyday life" scenes where we see how our five heroes have survived the last two years. In the sequence, we explore their food supply dwindling, their water and energy issues, and the uncertainty of whether fixing the Nuvitta Network is a good idea. 

Because two years is a long time when the world’s gone to shit, we felt like the sequence this scene falls into was important. How do they eat, what do they eat? Where do they live? What's their day-to-day like? How do these seemingly unconnected people work as a family unit? 

ZOLA & OLI

In this particular scene, we wanted to explore the subtle character dynamic between Zola and Oli, hinting at their friendship, trust, and fears—all with little dialogue. When you're around people who want to succeed with you, silence can be a powerful communicator.

It’s a simple scene on paper, but if done properly in the final edit, it provides a nice slowdown from the first act and previous action scenes. It also shows some nice independent and collective worries for each character regarding their dwindling supply situation. 

ANIMATING THE SCENE

MOCAP & SCENE PACING

Animating this scene was a fun one. It was our first "talkie." 

We needed to animate this scene to work through the production workflow of animating a talking scene with one mocap suit. 

How do you get audio into UE5? What does the pipeline look like to incorporate Eleven Labs Voice2Voice? 
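
For the first of those questions, one approach is to script the import with the editor's Python API (with the Python Editor Script Plugin enabled). This is just a sketch with placeholder file paths and content folders, not a claim that it's the only way in.

```python
# Sketch: batch-import converted dialogue WAVs as SoundWave assets using
# Unreal's editor Python API (requires the Python Editor Script Plugin).
# File paths and the content folder are placeholders.
import unreal

def import_dialogue(wav_paths, destination="/Game/Audio/Scene14"):
    tasks = []
    for wav in wav_paths:
        task = unreal.AssetImportTask()
        task.filename = wav                  # e.g. "D:/SDD/audio/zola_line_01.wav"
        task.destination_path = destination  # content-browser folder for this scene
        task.automated = True                # skip the import dialogs
        task.save = True                     # save the new assets right away
        tasks.append(task)
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks(tasks)

import_dialogue([
    "D:/SDD/audio/zola_line_01.wav",
    "D:/SDD/audio/oli_line_01.wav",
])
```

Once the SoundWaves are in the project, they drop onto an audio track in Sequencer like any other asset.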

We found that rehearsal was important so the timing wasn't so terribly off once we got into the scene pacing. It was refreshing to discover the power of animating and editing for a shot, as opposed to animating for a scene. 

If we were shooting this live, we'd have used a traditional coverage style of shooting. I'd have estimated at least half a 10-hour day, if not the full day, for this scene. 

With the real-time animation workflow, we were able to rehearse and mocap the scene in 2 hours and have pacing for the shot animation edit done the following day. 

ENVIRONMENTS

The big thing I wanted to dip into was the importance of environment building. The previous scenes were done with very sparse set design. While we’d made some strides with the actual workflow, it was important that we fill the room with stuff.

Having Nanite and learning you can nest Blueprints inside other Blueprints was a game-changer. So was having access to KitBash3D and Megascans. 

It was really important to get some depth in the scene. Ambiance helps create a particular character element that’s unspoken on screen, and we wanted to try our hand at it for the first time. 

SPRITES & FLIPBOOKS

This is a minor detail, but this scene helped us learn the power of the flipbook workflow in UE5. The little glowing screens on the lockers were our first little foray into animated screens for this project. The flipbook and sprite workflow was shockingly simple, and we'll definitely be using more of those. 

THE GROW ROOM

The choice to have this scene in a grow room served two purposes. We knew we needed to explain where their food supply came from. And we wanted to play with intense lighting.  

Because the team set up their bunker in a Network Building, we decided they’d made a home in an office building—a place with a lot of rooms, space, and fluorescent lighting. We also knew that because they were 100 years in the future, chances were they’d have some better growing techniques than present day. 

The other driving force for this scene was that we really wanted to emulate some of the lighting choices done in one of our favorite movies, Attack The Block. 

Jayson was adamant about using colored fluorescents in the scene, and doing so in a grow room was a nice homage/touch. 

In future renditions I'd like to explore adding more hydroponic systems hanging from the ceiling, maybe build out a wall unit and a water system that we could get cool inserts from. 

AI WORKFLOW

It was in this scene we realized we could do a pretty standard Stable Diffusion AI process on top of our characters. We’d tested SD in our previous scenes but did it for each shot, which was time-consuming and generally messy on the file management side. 

For this scene, we batched the entire sequence and then just spot cleaned frames where the AI might have given Oli a man's face or made Zola too feminine. Because we're editing image sequences, updating the shot was faster this way. 

The technique offers a subtle AI filter that just helps the Metahumans feel a little less plastic and a little more "squishy." It also does some really clever color correction that speeds up the process. 
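
To give a concrete picture of what "batching the sequence" means, here's a minimal sketch of that kind of loop against Automatic1111's img2img API (webui launched with --api). The prompt, folders, and settings are placeholders for illustration, not our exact production values.

```python
# Minimal sketch of batching an img2img AI pass over a rendered image sequence
# via the Automatic1111 webui API (launch the webui with --api).
# Folder names, prompt, and settings below are placeholders.
import base64, os, requests

WEBUI = "http://127.0.0.1:7860"
SRC_DIR = "renders/scene14"        # frames straight out of Unreal
OUT_DIR = "renders/scene14_ai"
os.makedirs(OUT_DIR, exist_ok=True)

for name in sorted(os.listdir(SRC_DIR)):
    with open(os.path.join(SRC_DIR, name), "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [frame_b64],
        "prompt": "photo of zola, cinematic lighting",  # placeholder prompt
        "denoising_strength": 0.3,   # low denoise keeps the underlying animation intact
        "steps": 12,
        "cfg_scale": 7,
    }
    r = requests.post(f"{WEBUI}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()

    out_b64 = r.json()["images"][0]
    with open(os.path.join(OUT_DIR, name), "wb") as out:
        out.write(base64.b64decode(out_b64))
```

From there, spot cleaning is just re-running the individual bad frames with the same payload and swapping them back into the sequence.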

The other AI technique that was fun to play with in this scene was the use of Eleven Labs' Voice2Voice option. We learned that when you rush lines or mumble things, the AI struggles to capture the emotion. But it did its job. 

It was also fun coming up with new lines and adding elements to the scene that weren't in the script. The "Computer play my jams" and "Playing Zola's jams" lines were a fun spice that helped the opening of the scene. And they were done with text-to-speech, something we would have had to ADR at a later date to make happen.

But with AI, we did it right in the edit. 
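
For reference, generating a pickup line like that with text-to-speech is only a few lines of script. A hedged sketch, with a placeholder API key, voice ID, and model name:

```python
# Sketch: generating a new line ("Playing Zola's jams") with ElevenLabs
# text-to-speech right in the edit. Key, voice ID, and model are placeholders.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"
VOICE_ID = "bunker_computer_voice_id"   # placeholder voice

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Playing Zola's jams.", "model_id": "eleven_multilingual_v2"},
)
resp.raise_for_status()
with open("computer_jams.mp3", "wb") as out:
    out.write(resp.content)
```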

MOVING FORWARD

We learned some subtle things on this one, but I think when they all come together, they compound into more tools that will help us fine-tune this project into shape. 

Now, even though we tried our hand at interior environments, you can still feel the emptiness in the environment. I think adding more to the wall textures and filling some of the empty space in the frame with more “human clutter” will amp up the space a bit more.

There is also a desperate need to learn cloth simulation—the joint action in this scene is definitely in need of work. 

But not too shabby for the first pass. 

I'm excited to revisit this one when we circle back for the full production. 

THOUGHTS? 

What were your thoughts on this one? Any tips or ideas to make it better? Any feedback helps us become better so that when it comes time for full production of the episode, things smooth out, feel fuller, and look great. 

It's ALIVE | Scene 13 - Episode 1

About Scene 13

  • The purpose of scene 13 is that Tim and Yuki finally get the Nuvitta Network back on after months of hard technical problem solving. This scene is pivotal to the story because it's the moment that sparks everything else into action for the rest of the episodes. If the network signal hadn't worked at this time, they would have never gone on their future adventures.

  • The inspiration behind this scene was to see if Amber and I could act out all the characters individually, then blend the performances back together to feel as if four characters were actually interacting at once.

  • The visual of this scene was inspired by the reference photos from the script book. We wanted to get the moody lighting vibe with the large server racks behind them.
  • We actually added a few lines in this scene that weren't in the original screenplay, just to add some more spice. Which line do you think was added? Let us know in the comments.

Tim & Yuki Turn On the Network

  • Tim & Yuki know this could be a huge communication signal to help find more survivors, but Ji isn't interested in any new changes. 

  • This scene develops the relationship between all four characters to show that Yuki & Tim are more connected and see Ji as a grump. Oli, meanwhile, is keeping everyone on task as the unofficial bunker mom.

  • The character choice that was improvised during production was the high five between Yuki & Tim at the end. We added that while shooting to show their excitement and connection... but also secretly to see if we could animate the two characters interacting inside our 3D software. What do you all think of that frickin' hand clap!?

The Production Process

  • Staging this scene was tricky from a production perspective because there are only two of us at our home studio, but we needed to perform all four characters... It was fun to learn how all the separate performances needed to be captured for everything to line up properly. It was a challenging problem to solve.

  • The main challenge we faced on this one was lighting the scene properly. I don't think we captured the exact vibe we were going for, but it still works. There's still a mountain of things to learn about lighting 3D characters. The journey continues.

  • The main tools and software we used for this scene were Rokoko for the body motion capture, MetaHuman Animator for the face mocap, and Unreal Engine for putting the full scene together.

The World

  • The world is set in 2123, two years after a zombie pandemic, and our characters are trying to find literally anyone they can talk to. Anyone!
  • This scene is really the catalyst for the entire rest of the season and was an epic challenge to bring to life from the page to the screen.

AI & Animation Process

  • A specific AI technique we used for this scene was changing all the actors' voices. Amber and I acted out all the characters, then we used Eleven Labs to transform our performances into each character's actual voice.

  • We got the look and layout of the server room from our script book photos that were generated with MidJourney & Stable Diffusion. This made set design so much easier and faster!
  • We usually use an AI overpass as the final look of our scenes, but this is one of the only scenes where we didn't add the AI overpass layer at the end. The picture is straight out of Unreal Engine. Can you notice any difference? If so, let us know!

What's Next? 

  • In the next scenes we want to figure out a way for the AI voices to carry more emotion when they are transformed into the characters. 
  • Things we want to create more of in the future are character talking scenes. We usually do action scenes from the script for the trailers and promos, but it will be fun to do more slow, character-driven scenes in the near future. Stay tuned! 

Scene Test | Scene 30 - 33

The Warehouse Scene - Scene 30-33

In this sequence of scenes, we experience Yuki on her own as she encounters the first Zomborg in the series. Zomborgs are individuals who opted for cybernetic enhancements prior to being infected by the X80 virus. Now that they're zombies, they are some of the most dangerous creatures we could possibly encounter. That is, until Yuki is captured by a pesky Hive Bee. 

YUKI, TIM & ZOMBORG 1

This was our first Tim and Yuki scene. It was fun seeing them move around in an environment together. This is a moment for both of them where desperate situations call for them to separate. Tim places Yuki in a spot he thinks is safe while leading away the Hive Bee that was tailing them. Yuki's interaction with the Zomborg is one we wanted to feel terror-inducing. 

UE5 & THE ANIMATION 

This sequence of scenes was one that we had practiced with back in Sept 2023 as our Zomborgs Teaser Trailer. We thought it'd be a good one to circle back to and explore now that we're branded as Survivors Don't Die. What we realized was we could do so much more than we could with the Move.AI pipeline. 

Soo... we've been busy these last few months getting our animation practice in. These four scenes are the result and we're so very excited to do more. We've come out of this sequence exploding with new ideas and a better understanding of the UE5 animation pipeline.

Unfortunately, we have yet to learn how to customize metahuman body meshes in the manner we want for the end product. So the Zomborg aspect of this edit doesn't really come through. Eventually, we want to update this particular zomborg to have bionic arms and legs seen in the screenplay. 

For this particular screen test we were really focused on capturing the chase aspect and getting better acquainted with cameras and subsequences. Jayson took control of scenes 30-31, while Amber learned on scenes 32-33, both with equal merits and places to improve.

BIGGEST TIP

A big tip that helped was learning not to pre-set up all of your shots in a sequence. 

Instead, build as you go. Pace your animation and cameras for each moment you have mocapped. Then duplicate that shot for different camera angles. Treat each camera like its own setup (like real life).

DO NOT time your cameras to the animation like capturing a play with camera cuts.

You lose your ability to "cheat" the frame when you need to. By working in multiple shots with the same animation timing you can move props/walls/cameras/actors just like you would on set... in real life.
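
If you like scripting, that duplicate-per-camera habit can even be automated with the editor's Python API. A small sketch with made-up asset paths and camera names (we do this by hand in Sequencer just as often):

```python
# Sketch: duplicating a shot-level sequence once per camera angle so each
# angle can be cheated independently, using Unreal's editor Python API.
# The asset path and camera names below are placeholders.
import unreal

SHOT = "/Game/Cinematics/SDD_Sc30/Sc30_Shot010"   # the mocapped base shot
ANGLES = ["Wide", "YukiCU", "ZomborgOTS"]          # hypothetical camera setups

for angle in ANGLES:
    dest = f"{SHOT}_{angle}"
    if not unreal.EditorAssetLibrary.does_asset_exist(dest):
        # Each duplicate keeps the same animation timing, so props, walls,
        # and cameras can be moved per angle just like on a real set.
        unreal.EditorAssetLibrary.duplicate_asset(SHOT, dest)
```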

MOVING FORWARD

We learned so much working on these four scenes. Places for improvement and exploration include getting better at pacing in-place mocap. We had to keep the camera really close as our feet were sliding all over the damn place. 

Character design upgrades in the wardrobe and hair department. Things still feel too "game-y." Eventually we want to learn how to customize metahuman bodies so we can get those mechanical arms on our metahuman.

Then there's the element of fine-tuning how we communicate with one another. We both have very distinct styles of shooting and we'll need to find a way to communicate our expectations for each moment. That way scenes feel cohesive. 

In the meantime, we'll continue doing test scenes! Let us know what scene you'd like to see come to life. 


Zomborgs Rebrand + Paperback Announcement

Alright, folks. 

We have begun the process of registering our screenplay book with the copyright office, and before we did so, we realized we probably needed to address the branding issue we'd been ignoring until now. 

Our flagship screenplay book was originally code-named ZOMBORGS. And for a while, we were very much okay with the name. Until we started doing the legwork of copyright.

When you search Zomborgs a video game comes up that we are not associated with. And for a while we were like, "Hmm... they haven't posted in a few years... and it's a video game. We should be fine." 

But when we started imagining where we wanted this story and world to go we realized it also included video games. So it was in our best interest to change the name. 

It also helped that we polled friends and relatives about names, and they all agreed they would rather watch a show called "Survivors Don't Die" instead of "Zomborgs." 

So... here we are. Officially rebranding the story to SURVIVORS DON'T DIE. 

It's got a nice ring to it right?

Oh.... 

Also..... 

In celebration and to lock it all in - we've...

 LAUNCHED OUR AMAZON PAPERBACK! 

For a limited time we're selling the paperback at cost so that folks who are into the story can get it at a discounted rate. 

Honestly, the paperback is my favorite way to experience Survivors Don't Die. The page turn is so satisfying and the large format (8x11) book feels good in the hands. 

We look forward to seeing how it all plays out! 

Rokoko Challenge! Top 100

SCENE 26

This scene is the "Hive Bee" scene where the audience is introduced to a surveillance drone. It's right in the middle of our team being chased by a horde of Zombies. We wanted it to feel like a "we're pinched!" moment. 

YUKI & THE HIVE BEE

Yuki is feeling a lot of things in this scene. Moments before, she had taken out a zombie that landed on some trash cans. That noise alerted the horde the team was trying to avoid. So there is definitely some guilt. Now add the threat of being captured by Hive Bees?

The Terror! 

This is the first scene in the pilot where we introduce the Hive Bees. They're The Sanctuary District's surveillance and recovery droids. They patrol an area of Nuvitta called Salt Town. And it's implied that they "clean out" areas of the city by removing survivors. 

The initial idea was that these Bees were new or retired tech that our team of five hadn't encountered before the X80 outbreak. All they know is that these Mech Bees are picking up people and taking them to the walled-off area they call "The Hive."

A place where no one is ever seen again. Dun-dun-duh!

We'll learn in later episodes that those outside of the Sanctuary District have created myth and legend around what's behind the 100-foot wall and the droids that kidnap survivors.

When we were naming the Hive Bees, it was important that it feel like something a pre-teen would call them. In our heads, "Bees!" is definitely a term the five survivors use amongst themselves whenever they are on provision runs. 

ROKOKO CHALLENGE

This is only a partial moment in the scene as we were submitting to the Rokoko Intergalactic Challenge. A 7-sec video moment meant to get people to stop channel surfing through an intergalactic TV-feed. 

We were so excited when we placed in the Top 100 of submissions. It was really cool seeing how the judges reacted to the scene, especially because it was our first test run with Rokoko and the UE5 workflow. 

ANIMATING FOR THE CHALLENGE

When we decided to enter the Rokoko Intergalactic Challenge, we knew we wanted to do something with Zomborgs. But which scene to choose? 

Why... let’s pick one of the hardest scenes we could possibly choose for our first mocap-to-UE5 scene. That's a good idea, right? 

In the end... yes. It was a brilliant idea.

Jayson had just finished his enrollment with Josh Toonen at UnrealForVFX.com, and the knowledge he brought to the project was so very helpful. The community in the course was also great for feedback on this one.

By forcing ourselves into one of the hardest scenes, it pushed us to learn things fast. 

MOCAP

This was our first go using Metahuman Animator and the Rokoko Suit, so we were very excited to explore bringing Yuki to life. Amber mo-capped Yuki's performance, and we were pleasantly surprised how simple the data capture was. Even the setup was pretty straightforward once we got it going.

Now, yes, there were some connectivity issues - but that's nothing new to us. Tech fails. Then it works. Gotta flow with it, ya know? The only gripe we had was that our glove had a malfunction where the middle finger just wiggled out of control. We had to send the glove back to Rokoko for repair. Not awesome. But we expect a return soon. 

Despite the glove malfunction, we were able to capture mocap for about 20 characters in the same day - which was... amazing. The cleanup was pretty minimal on this one once we learned the power of control rigs and the use of cameras to hide the ugly stuff. 

CAMERA & EDITING

Speaking of cameras - the experience of being able to go in and fine-tune shots and angles long after the action is shot was... so very exciting. 

We come from a background of traditional pre-pro, production, post. The animation workflow is similar, but man does it change the production process. 

Coming from the post pipeline there have been sooo many projects we've edited where we said, "Man, I wish this shot was just a little to the left!" Or "Where's your 180 rule, people?" Being able to go in and adjust the shot was something we were both so excited about. 

It's pretty awesome that production IS the editing process with this workflow. It feels like a superpower.

CHARACTER DESIGN

We used Metahumans for all of our Zomborgs characters and have really enjoyed the process of using Blender and UE5's Mesh to Metahuman workflow. It helped us and our limited character design skills get closer to what we had previs’d using Midjourney and Stable Diffusion. 

In this scene, we really only had one character that had hair that wasn't "Metahuman Hair." Because let’s be honest, hair is one of the first giveaways when watching stuff. That and their teeth... but that's a challenge yet to be tackled.

Thankfully, Prompt Muse came to the rescue with her Hair Tutorial the same day Amber went to buckle in and learn it. Who doesn't love a serendipitous moment on a journey?

It was our first attempt at custom hair, definitely not the best, but man, for this video it felt like magic. 

IT'S AI-FREE

So for this one, we actually went the route of no AI. We used the UEforVFX post-processing workflow to get a pretty fun look. 

MOVING FORWARD

Moving forward, we're going to be putting in the reps. We really want to get more practice in the engine, so we'll probably pick out a few scenes to tackle from the Survivors' screenplay.

At some point we'll tackle hair and wardrobe issues again. But right now we just want to get better at the scene building. 

Let us know in the comments what scenes you want to see previs for! 

Scene Test | Scene 32

Hey Y'all!

We're thrilled to unveil the Zomborgs Teaser Trailer, a project that reflects months of learning and experimentation with UE5 & AI technologies. It's been about 2 months since our AI Test with Metahumans & Stable Diffusion, and we're excited to showcase what we've achieved by building upon that foundation.

Our journey has been one of trials, errors, and a ton of learning. It's taught us new skills and reminded us that even with AI's remarkable ability to elevate our work, it takes a combination of expertise in script writing, performance, 3D design, animation, and sheer determination to create something worthy of sharing.

For those of you interested in the process, here are some of the details of what it took to make this thing happen. 

 

🛠️ TOOLS & SETTINGS USED 🛠️

+ Unreal Engine 5 (on a GTX 1070) 

Unreal Engine 5 became our mainstay for heavy lifting. Learning this software has been an ongoing journey, but the real-time animation workflows, asset libraries, and lighting capabilities have proved invaluable for elevating our storytelling.

 

+ Metahuman & Metahuman Animator w/ iPhone 12-mini 

We continued our exploration of facial performance capture using Metahuman tools. Its accuracy and ease of use saved us significant time, making it an indispensable part of our toolkit.

Pro tip 1 - Subscribing to iCloud is worth every penny for larger projects.

Pro Tip 2 - Matching your intended AI actor's head shape/features with a Metahuman character improves AI performance, and diverse head shapes help AI produce desired effects more easily.

+ Quixel Mixer (For custom Zombie)

While Substance Painter is a popular choice, Quixel Mixer shouldn't be underestimated. Our budget constraints led us to Mixer, which offers quality UE5 and Megascans assets. It's a smart alternative for those looking to maximize resources.

 

+ Move.AI (Experimental 4 Camera MoCap)

Move.AI's multi-camera mocap trial served us well, even with minor hiccups. The credit system, while occasionally unpredictable, proved useful in the end. We used their experimental camera setup with various cameras (1 iPhone 12, 1 Galaxy Note 9, 1 Galaxy Note 10, and 2 GoPros) and got the body movement for both characters in the piece.

We're also currently exploring options like Cascadeur, especially its new MoCap Alpha.

Other AI-video-mocap options we're keeping an eye on include Rokoko Vision, which just released a dual-cam version, as well as projects like FreeMoCap.com and ReMoCap.com that we hope to test in the future.

 

*UPDATE* We are happy to announce that this piece won their Discord #show-off competition and got us 600 Move.ai credits, which we're pretty pumped about!

 

+ Stable Diffusion 1.5 w/ Automatic 1111

I know, I know. SDXL is out! What the heck are we doing still on 1.5? Look, okay? We run a GTX 1070 with low VRAM. While ComfyUI is cool - it makes my brain melt - plus batching and ControlNet weren't an option when we needed them. So while Automatic1111 works out the SDXL bugs, we did this one on 1.5 and it turned out pretty nicely.

 

Basic Settings Used:

Steps: 12 | DPM++ 2M Karras | CFG 25 | Denoise .3

ControlNet (Lineart-Realistic | Normal Bae | Softedge-Hed)

Models: LifeLikeDiffusion Model and the Detail Tweaker Lora (1.0 setting)

This round we followed a tutorial by HilarionOfficial that had some great tips for temporal consistency. This involved a higher CFG setting which allowed us to use a higher Denoise in our img2img process.

The LifeLikeDiffusion Model was really great to work with and does a great job with our diverse cast. The Detail Tweaker Lora was used for Yuki's shots as it added some nice skin texture and hair strands in some of the shots. 
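
If you drive the webui through its API instead of the UI, those settings translate into an img2img payload roughly along these lines. Treat it as an illustration only: the prompt, Lora names, and ControlNet model names are placeholders, and the ControlNet field names can vary between extension versions.

```python
# Illustration of the settings above expressed as an Automatic1111 img2img
# API payload. Prompt, Lora names, and ControlNet model names are placeholders,
# and the "alwayson_scripts" field names depend on the ControlNet extension version.
payload = {
    "prompt": "photo of yuki, <lora:detail_tweaker:1.0>",  # placeholder prompt
    "init_images": ["<base64 frame>"],
    "sampler_name": "DPM++ 2M Karras",
    "steps": 12,
    "cfg_scale": 25,              # high CFG per the temporal-consistency tutorial
    "denoising_strength": 0.3,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {"module": "lineart_realistic", "model": "<lineart model>",  "weight": 1.0},
                {"module": "normal_bae",        "model": "<normal model>",   "weight": 1.0},
                {"module": "softedge_hed",      "model": "<softedge model>", "weight": 1.0},
            ]
        }
    },
}
```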

Edit: When we shared this process with some folks I realized they didn't really understand "WHY" we had to include an AI PASS at all. For reference here are comparison shots of the "Metahuman" and what the AI Pass offered as an alternative. 

Metahuman Frame on the Top VS AI Pass Frame on Bottom

 

To us, the AI Pass looks more "realistic" than the Metahuman frames. It removes that Metahuman uncanny valley thing and puts a layer of "Human Looking" on top, upgrading our quality significantly, considering we've only just started learning the inner workings of 3D design, 3D animation, and character design. 

+ Midjourney + ControlNet + KohyaSS Lora Training

 

For this project we did train a new Lora for Yuki. Her original Lora was trained on Midjourney version 2 & 3 character concepts, which were prompted using Metahuman source images, so the training data had a bit of a CGI/concept-art feel.

 

By using our original Lora, LifeLikeDiffusion, and ControlNet, we generated new dataset images that were more realistic and less "cartoony." This helped us get a little more consistent head turns and realistic skin textures on the final video AI pass.

 

In addition to a new Yuki Lora, we had to build a custom Metahuman for our Zomborg. As mentioned above, we used Quixel Mixer to customize the Metahuman textures. We then rendered out some stills and prepped them for a general Lora dataset trained on Kohya.

 

Because the LifeLikeDiffusion model was having trouble zombie-fying our Zomborg, we needed to train a general zombie style Lora. We used Midjourney to create this dataset with basic text-to-image prompting, using our Metahuman zombie as an image reference, to get a variety of different zombies. 

We then trained a style Lora for this general Zombie dataset. This made it possible for us to mix both the Zomborg Lora and the Zombie Lora in a single prompt to get more intense "zombie features." 
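
In Automatic1111 terms, mixing the two ends up looking something like this in the prompt itself. The Lora file names and weights below are made up for illustration.

```python
# Hypothetical prompt mixing the character Lora with the zombie style Lora
# using Automatic1111's <lora:name:weight> syntax. Names and weights are examples.
prompt = (
    "photo of a zomborg, rotting skin, exposed cybernetics, "
    "<lora:zomborg_metahuman:0.8> <lora:zombie_style:0.5>"
)
```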

+ Audio Clips from: Freesound.org | Mixkit.co | Pexels.com (Mixed by Amber)

Sound design was pivotal, and we used free resources from Freesound.org, Mixkit.co, and Pexels.com. I then mixed them in Adobe Audition.

👨🏽‍🏫 TUTORIALS THAT GUIDED US 👨🏽‍🏫

This is a collection of tutorials used for this screen test.
Getting Started with Metahuman Animator - @UnrealEngine A great, straightforward way to set up and use Metahuman Animator for face performance. This one came in handy, and it's from the folks who made the app. Huzzah! (https://youtu.be/WWLF-a68-CE)
Move AI | Complete Guide to Getting Set Up - @JonathanWinbush He goes over the iPhone setup. His approach was helpful even though we used the experimental workflow. (https://www.youtube.com/watch?v=MY7c6... )
Move AI - Set Up Playlist - @moveai This is a great playlist for the Move.Ai motion capture process. (https://www.youtube.com/watch?v=ZpvjD... )
3D Animation w/ AI Pass - @promptmuse This was the general workflow we approached this with, but instead of Reallusion products we used UE5 & Metahumans. We did not use the multiscreen script for this one - but hope to in the future. (https://www.youtube.com/watch?v=T2nw9... )
Animation w/ ControlNet Only - @HilarionOfficial This was a great tutorial that explained concepts and settings very well. It's what allowed us to really force our custom Lora onto the characters. (https://www.youtube.com/watch?v=Z6pR7...)


CONCLUSION

Our Zomborgs Teaser Trailer was a project-based learning experience. We harnessed disruptive technologies, pushed our creative boundaries, and used our limited budget wisely enough that we're looking forward to making another project very soon. We hope this peek behind the curtain offers insights into the journey.

If you haven't already, don't forget to download Zomborgs... now titled Survivors Don't Die!

Until the next adventure,
J-Wall & Amber
Oh Hey Void

Getting AI Actors to Talk using Metahumans and Stable Diffusion

The journey has begun! 

We recently "soft" launched our digital download of ZOMBORGS Episode 1. A sci-fi horror series about a girl, an android, a corporate cult, and lots of zombies with a bit of extra "holy-eff!" 

We're determined to see how far UE5, Blender, and Synthography can take us to make a high-end trailer for the script book currently available for download (here). The goal isn't to "just get it done." We want to create Hollywood-quality visuals for a series that could be an epic tale of human identity and social conformity. 

Sure, we've worked in video production for all of our adult lives, but in order to do the high-end visuals we want we'd need a massive budget, epic gear, and a vfx powerhouse propelling us forward to a big fat Hollywood "maybe." None of which we have at the moment. 

What we do have, however, is self-taught 3D design skills, a general understanding of animation in UE5 & Blender, comfort in Stable Diffusion, and an iPhone. 

That should get us where we gotta go, right? 

Maybe. 

The Test

The test we ran this week explored what the workflow would look like if we face-captured with UE5's new Metahuman Animator, ported that into a metahuman, and then took those frames into stable diffusion to get our custom character talking using a voice2voice trained model. 

The Models

Thankfully, we already had our Loras trained from our script-book project. And we already knew that we liked the LifeLikeDiffusion model found on CivitAI. It does a great job of capturing our ethnically diverse cast and life-like proportions & expressions. 

We also already had our MetaHuman base designed, as we used these metahumans to influence our synthetic actors' looks with Midjourney img2img and blend workflows. 

The Face MoCap

With the release of MetaHuman Animator earlier this month, we splurged on a refurbished iPhone 12-mini and quickly got to work on testing the setup process. It took a minute, but Jwall is a genius and powered through to the fine performance demonstrated in the test footage. Needless to say, we were impressed with the mocap capabilities and very pleased with the first test. 

The Voice2Voice

Next up, we wanted to be able to use AI voices - but we didn't like any of the text-to-voice options. They sounded sterile, disturbing, and inhuman. Meaning we had to get acquainted with the voice2voice AI options out there. Unfortunately, I was unable to get the local option I wanted to try installed properly. So, for the sake of time, we went with the social-media-popular Voice.ai (referral link). 

A few years back I (Amber) had tried my hand at narrating audiobooks. It was one of the most grueling, fun, and underpaid gigs I'd done in my entire career. I narrated two books and ended my VO career quickly after. The bright side of having done it was I now have hours of audio that I could use to train my voice with. And so I did. Within an hour I had an AI voice model of myself. I ran Jwall's voice performance through it and BAM! We got us a girly voice! 

It was really important to us that the voice be able to capture performance in tonality and expression. Voice.ai did a pretty good job. I just wish it wasn't so distorted. I ended up having to do A LOT of post-cleanup and it still sounds a bit off. But... we had a voice that was closer to that of a 13-year-old girl - so we moved on. 

Stable Diffusion

The final step was putting our synthetic actress' face onto the performance. Using Runpod.io (referral link) we ran an RTX 3090 and hoped for the best. Using our previously trained Lora, img2img, ControlNet, and the Roop extension, we hobbled our way through it. 

The biggest issue I ran into was anything with a denoise over .15-.2 failed to capture the mouth movement. After generating 600 frames of failed small batch testing I was definitely grateful for the batch image option within Stable Diffusion. 

Prompt Muse's "Taking AI Animation to the NEXT Level" tutorial was helpful in suggesting the scribbleHed option. 

However, I wasn't really digging the fact that I couldn't get the LORA to manipulate the output more. Maybe I was doing something wrong... 

But I eventually stumbled across Nerdy Rodent's "Easy Deepfakes in Automatic1111 with Roop" and everything began sorting itself out. 

Here are the SD settings I ended up using as I recall them. 

Sure, I probably could have got away with using the Roop extension only, but I wanted to run the lora as high as I could get it then apply roop to make sure the face looked as consistent and accurate as possible. Because, remember... I needed the girl to talk. 

In the end... the results were a satisfying start. 

For our next adventure we're planning to do more than just a single shot, maybe even throw in some body movement too. So this test definitely proved that it's possible to get the images to chatter and look semi-lifelike. We still have some work to do on the voice2voice ai, the flickering, and the additional facial movements. But we have faith that it's possible. 

If you have any ideas on how to improve our workflow, DO SHARE! Either comment to this blog or the video above. Whichever you're more comfortable with.