Scene Test | Scene 14

✒️SCENE 14✒️

Zola is checking on their grow house, while Oli is worried about their dwindling supplies. Tensions are rising!

This scene is in a sequence of "everyday life" scenes where we see how our five heroes have survived the last two years. In the sequence, we explore their food supply dwindling, their water and energy issues, and the uncertainty of whether fixing the Nuvitta Network is a good idea. 

Because two years is a long time when the world’s gone to shit, we felt like the sequence this scene falls into was important. How do they eat, what do they eat? Where do they live? What's their day-to-day like? How do these seemingly unconnected people work as a family unit? 

ZOLA & OLI

In this particular scene, we wanted to explore the subtle character dynamic between Zola and Oli, hinting at their friendship, trust, and fears—all with little dialogue. When you're around people who want to succeed with you, silence can be a powerful communicator.

It’s a simple scene on paper, but if done properly in the final edit, it provides a nice slowdown from the first act and previous action scenes. It also surfaces each character's individual worries, and their shared ones, about the dwindling supply situation. 

ANIMATING THE SCENE

MOCAP & SCENE PACING

This scene was a fun one to animate. It was our first "talkie." 

We used it to work through the production workflow of animating a talking scene with a single mocap suit. 

How do you get audio into UE5? What does the pipeline look like to incorporate Eleven Labs Voice2Voice? 

We found that rehearsal was important so the timing wasn’t terribly off once we got into pacing the scene. It was also refreshing to discover the power of animating and editing for a shot, as opposed to animating for a whole scene. 

If we were shooting this live, we’d have used a traditional coverage style, and I'd estimate at least half a 10-hour day, if not the full day, for this scene. 

With the real-time animation workflow, we were able to rehearse and mocap the scene in 2 hours and have pacing for the shot animation edit done the following day. 

ENVIRONMENTS

The big thing I wanted to dip into was the importance of environment building. The previous scenes were done with very sparse set design. While we’d made some strides with the actual workflow, it was important that we fill the room with stuff.

Having Nanite, and learning you can nest Blueprints inside other Blueprints, was a game-changer. So was having access to KitBash3D and Megascans. 

It was really important to get some depth in the scene. Ambiance helps create a particular character element that’s unspoken on screen, and we wanted to try our hand at it for the first time. 

SPRITES & FLIPBOOKS

This is a minor detail, but this scene helped us learn the power of the flipbook workflow in UE5. The little glowing screens on the lockers were our first little foray into animated screens for this project. The flipbook and sprite workflow was shockingly simple, and we'll definitely be using more of those. 

THE GROW ROOM

The choice to have this scene in a grow room served two purposes. We knew we needed to explain where their food supply came from. And we wanted to play with intense lighting.  

Because the team set up their bunker in a Network Building, we decided they’d made a home in an office building—a place with a lot of rooms, space, and fluorescent lighting. We also knew that because they were 100 years in the future, chances were they’d have some better growing techniques than present day. 

The other driving force for this scene was that we really wanted to emulate some of the lighting choices in one of our favorite movies, Attack the Block. 

Jayson was adamant about using colored fluorescents in the scene, and doing so in a grow room was a nice homage/touch. 

In future renditions I'd like to explore adding more hydroponic systems hanging from the ceiling, maybe build out a wall unit and a water system that we could get cool inserts from. 

AI WORKFLOW

It was in this scene we realized we could do a pretty standard Stable Diffusion AI process on top of our characters. We’d tested SD in our previous scenes but did it for each shot, which was time-consuming and generally messy on the file management side. 

For this scene, we batched the entire sequence and then just spot cleaned frames where the AI might have given Oli a man's face or made Zola too feminine. Because we're editing image sequences, updating the shot was faster this way. 

The technique offers a subtle AI filter that just helps the Metahumans feel a little less plastic and a little more "squishy." It also does some really clever color correction that speeds up the process. 
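For anyone curious what "batching the entire sequence" can look like in practice, here's a rough sketch of scripting a pass like this against the AUTOMATIC1111 web UI's img2img API. The folder names, prompt, and settings below are placeholders rather than our production values, so treat it as a starting point, not our exact recipe.

```python
# Rough sketch: push a rendered image sequence through img2img via the
# AUTOMATIC1111 web UI API (served at http://127.0.0.1:7860 by default).
# Paths, prompt, and settings are placeholders, not our actual setup.
import base64
import pathlib
import requests

API = "http://127.0.0.1:7860/sdapi/v1/img2img"
FRAMES_IN = pathlib.Path("renders/scene14")       # frames out of UE5
FRAMES_OUT = pathlib.Path("renders/scene14_ai")   # AI-passed frames
FRAMES_OUT.mkdir(parents=True, exist_ok=True)

for frame in sorted(FRAMES_IN.glob("*.png")):
    payload = {
        "init_images": [base64.b64encode(frame.read_bytes()).decode()],
        "prompt": "photo of a young woman, realistic skin",  # placeholder
        "denoising_strength": 0.2,  # keep low so the performance survives
        "steps": 20,
        "cfg_scale": 7,
    }
    response = requests.post(API, json=payload, timeout=600)
    response.raise_for_status()
    result_b64 = response.json()["images"][0]
    (FRAMES_OUT / frame.name).write_bytes(base64.b64decode(result_b64))
    print(f"done: {frame.name}")
```

Spot-cleaning then just means re-running that loop on the handful of frame numbers the AI got wrong and dropping the fixed frames back into the image sequence.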

The other AI technique that was fun to play with in this scene was the use of Eleven Labs' Voice2Voice option. We learned that when you rush lines or mumble things, the AI struggles to capture the emotion. But it did its job. 

It was also fun coming up with new lines and adding elements to the scene that weren't in the script. The "Computer play my jams" and "Playing Zola's jams" were a fun spice that helped the opening of the scene. And it was done with text-to-speech, something we would have had to ADR at a later date to make happen.

But with AI, we did it right in the edit. 
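For anyone wondering what the Eleven Labs side of that looks like outside the web app, here's a minimal sketch of the two calls we lean on, written against Eleven Labs' public REST API as we understand it: speech-to-speech for re-voicing a performer's recorded line, and text-to-speech for quick added lines like the computer's "Playing Zola's jams." The voice IDs, model names, and file paths are placeholders.

```python
# Minimal sketch of the two Eleven Labs calls: speech-to-speech (keeps the
# performance's timing and emotion, swaps the voice) and text-to-speech
# (our stand-in for ADR pickups). Voice IDs and model names are placeholders.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"
HEADERS = {"xi-api-key": API_KEY}
BASE = "https://api.elevenlabs.io/v1"

def voice_to_voice(voice_id: str, wav_path: str, out_path: str) -> None:
    """Re-voice a recorded performance into a character voice."""
    with open(wav_path, "rb") as audio:
        response = requests.post(
            f"{BASE}/speech-to-speech/{voice_id}",
            headers=HEADERS,
            files={"audio": audio},
            data={"model_id": "eleven_english_sts_v2"},
        )
    response.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(response.content)

def text_to_speech(voice_id: str, line: str, out_path: str) -> None:
    """Generate a brand-new line from text (no ADR session required)."""
    response = requests.post(
        f"{BASE}/text-to-speech/{voice_id}",
        headers=HEADERS,
        json={"text": line, "model_id": "eleven_multilingual_v2"},
    )
    response.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(response.content)

voice_to_voice("OLI_VOICE_ID", "takes/oli_line_03.wav", "audio/oli_line_03_ai.mp3")
text_to_speech("COMPUTER_VOICE_ID", "Playing Zola's jams.", "audio/computer_jams.mp3")
```

The audio comes back ready to drop into the timeline, which is why the turnaround on those added lines was basically instant.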

MOVING FORWARD

We learned some subtle things on this one, but I think when they all come together, they compound into a set of tools that will help us fine-tune this project into shape. 

Now, even though we tried our hand at interior environments, you can still feel the emptiness in the room. I think adding more to the wall textures and filling some of the empty space in the frame with more “human clutter” will amp up the space a bit more.

There is also a desperate need to learn cloth simulation—the joint action in this scene is definitely in need of work. 

But not too shabby for the first pass. 

I'm excited to revisit this one when we circle back for the full production. 

THOUGHTS? 

What were your thoughts on this one? Any tips or ideas to make it better? Any feedback helps us become better so that when it comes time for full production of the episode, things smooth out, feel fuller, and look great. 

It's ALIVE | Scene 13 - Episode 1

About Scene 13

  • The purpose of scene 13 is that Tim and Yuki finally get the Nuvitta Network back on after months of hard technical problem-solving. This scene is pivotal to the entire story because it's the moment that sparks everything else into action for the rest of the episodes. If the network signal hadn't worked, they would never have gone on their future adventures.

  • The inspiration behind this scene was to see if Amber and I could act out all the characters individually, then blend the performances back together so it feels as if four characters are actually interacting at once.

  • The visuals of this scene were inspired by the reference photos from the script book. We wanted to get the moody lighting vibe with the large server racks behind them.
  • We actually added a few lines in this scene that weren't in the original screenplay, just to add some more spice. Which line do you think was added? Let us know in the comments.

Tim & Yuki Turn On the Network

  • Tim & Yuki know this could be a huge communication signal to help find more survivors, but Ji isn't interested in any new changes. 

  • This scene develops the relationship between all four characters, showing that Yuki & Tim are more connected and see Ji as a grump. Oli, meanwhile, is keeping everyone on task as the unofficial bunker mom.

  • The character choice that was improvised during production was the high five between Yuki & Tim at the end. We added that while shooting to show their excitement and connection... but also secretly to see if we could animate two characters interacting inside our 3D software. What do you all think of that frickin' hand clap!?

The Production Process

  • The technical aspects of staging this scene were tricky from a production perspective: there are only two of us at our home studio, but we needed to perform all four characters... It was fun to learn how all the separate performances needed to be captured for everything to line up properly. It was a challenging problem to solve.

  • The biggest challenge we faced on this scene was lighting it properly. I don't think we captured the exact vibe we were going for, but it still works. There's still a mountain of things to learn about lighting 3D characters. The journey continues.

  • The main tools and software we used for this scene were Rokoko for body motion capture, MetaHuman Animator for facial mocap, and Unreal Engine for putting the full scene together.

The World

  • The world is set in 2123, two years after a zombie pandemic, and our characters are trying to find literally anyone they can talk to. Anyone!
  • This scene is really the catalyst for the entire rest of the season and was an epic challenge to bring to life from the page to the screen.

AI & Animation Process

  • A specific AI technique we used for this scene was changing all the actors' voices. Amber and I acted out all the characters, then used Eleven Labs to transform our performances into the actual actor voices.

  • We got the look and layout of the server room from our script book photos that were generated with MidJourney & Stable Diffusion. This made set design so much easier and faster!
  • We usually use an AI overpass for the final look of our scenes, but this is one of the only scenes where we didn't add that layer at the end. The picture is straight out of Unreal Engine. Can you notice any difference? If so, let us know.

What's Next? 

  • In the next scenes we want to figure out a way for the AI voices to carry more emotion when they are transformed into the characters. 
  • We also want to create more talking scenes. We usually pull action scenes from the script for trailers and promos, but it will be fun to do slower, character-driven scenes in the near future. Stay tuned! 

Scene Test | Scene 30 - 33

The Warehouse Scene - Scene 30-33

In this sequence of scenes, we experience Yuki on her own as she encounters the first Zomborg in the series. Zomborgs are individuals who opted for cybernetic enhancements prior to being infected by the X80 virus. Now that they're zombies, they are some of the most dangerous creatures we could possibly encounter. That is, until Yuki is captured by a pesky Hive Bee. 

YUKI, TIM & ZOMBORG 1

This was our first Tim and Yuki scene. It was fun seeing them move around in an environment together. This is a moment for both of them where desperate situations call for them to separate. Tim places Yuki in a spot he thinks is safe while leading away the Hive Bee that was tailing them. Yuki's interaction with the Zomborg is one we wanted to feel terror-inducing. 

UE5 & THE ANIMATION 

This sequence of scenes was one we had practiced with back in Sept 2023 as our Zomborgs Teaser Trailer. We thought it'd be a good one to circle back to and explore now that we're branded as Survivors Don't Die. What we realized was how much we could now do that we couldn't with the Move.AI pipeline. 

Soo... we've been busy these last few months getting our animation practice in. These four scenes are the result and we're so very excited to do more. We've come out of this sequence exploding with new ideas and a better understanding of the UE5 animation pipeline.

Unfortunately, we have yet to learn how to customize MetaHuman body meshes in the manner we want for the end product, so the Zomborg aspect of this edit doesn't really come through. Eventually, we want to update this particular Zomborg to have the bionic arms and legs seen in the screenplay. 

For this particular screen test we were really focused on capturing the chase and getting better acquainted with cameras and subsequences. Jayson took control of scenes 30-31, while Amber learned on scenes 32-33. Both came out with equal merits and places to improve. 

BIGGEST TIP

A big tip that helped was learning not to pre-set up all of your shots in a sequence. 

Instead, build as you go. Pace your animation and cameras for each moment you have mocapped, then duplicate that shot for different camera angles. Treat each camera like its own setup (like real life).  

DO NOT time your cameras to the animation like you're capturing a play with camera cuts.

You lose your ability to "cheat" the frame when you need to. By working in multiple shots with the same animation timing, you can move props/walls/cameras/actors just like you would on set... in real life.

MOVING FORWARD

We learned so much working on these four scenes. Places for improvement and exploration include getting better at pacing in-place mocap; we had to keep the camera really close because our feet were sliding all over the damn place. 

We also need character design upgrades in the wardrobe and hair department. Things still feel too "game-y." Eventually we want to learn how to customize MetaHuman bodies so we can get those mechanical arms on our MetaHuman.

Then there's the element of fine-tuning how we communicate with one another. We both have very distinct styles of shooting, and we'll need to find a way to communicate our expectations for each moment so scenes feel cohesive. 

In the meantime, we'll continue doing test scenes! Let us know what scene you'd like to see come to life. 

Zomborgs Rebrand + Paperback Announcement

Alright, folks. 

We have begun the process of registering our screenplay book with the copyright office, and before we did so, we realized we probably needed to address the branding issue we'd been ignoring until now. 

Our flagship screenplay book was originally code-named ZOMBORGS. And for a while, we were very much okay with the name. Until we started doing the legwork of copyright.

When you search Zomborgs a video game comes up that we are not associated with. And for a while we were like, "Hmm... they haven't posted in a few years... and it's a video game. We should be fine." 

But when we started imagining where we wanted this story and world to go we realized it also included video games. So it was in our best interest to change the name. 

It also helped that we polled friends and relatives about names, and they all agreed they would rather watch a show called "Survivors Don't Die" instead of "Zomborgs." 

So... here we are. Officially rebranding the story to SURVIVORS DON'T DIE. 

It's got a nice ring to it, right?

Oh.... 

Also..... 

In celebration and to lock it all in - we've...

 LAUNCHED OUR AMAZON PAPERBACK! 

For a limited time we're selling the paperback at cost, so folks who are into the story can get it at a discounted rate. 

Honestly, the paperback is my favorite way to experience Survivors Don't Die. The page turn is so satisfying and the large format (8x11) book feels good in the hands. 

We look forward to seeing how it all plays out! 

Rokoko Challenge! Top 100

SCENE 26

This scene is the "Hive Bee" scene where the audience is introduced to a surveillance drone. It's right in the middle of our team being chased by a horde of Zombies. We wanted it to feel like a "we're pinched!" moment. 

YUKI & THE HIVE BEE

Yuki is feeling a lot of things in this scene. Moments before, she took out a zombie that landed on some trash cans, and the noise alerted the horde the team was trying to avoid. So there is definitely some guilt. Now add the threat of being captured by Hive Bees?

The Terror! 

This is the first scene in the pilot where we introduce the Hive Bees. They're The Sanctuary District's surveillance and recovery droids. They patrol an area of Nuvitta called Salt Town. And it's implied that they "clean out" areas of the city by removing survivors. 

The initial idea was that these Bees were new or retired tech that our team of five hadn't encountered before the X80 outbreak. All they know is that these Mech Bees are picking up people and taking them to the walled-off area they call "The Hive."

A place where no one is ever seen again. Dun-dun-duh!

We'll learn in later episodes that those outside of the Sanctuary District have created myth and legend around what's behind the 100-foot wall and the droids that kidnap survivors.

When we were naming the Hive Bees, it was important that it feel like something a pre-teen would call them. In our heads, "Bees!" is definitely a term the five survivors use amongst themselves whenever they are on provision runs. 

ROKOKO CHALLENGE

This is only a partial moment from the scene, as we were submitting to the Rokoko Intergalactic Challenge: a 7-second video moment meant to get people to stop channel surfing through an intergalactic TV feed. 

We were so excited when we placed in the Top 100 of submissions. It was really cool seeing how the judges reacted to the scene, especially because it was our first test run with Rokoko and the UE5 workflow. 

ANIMATING FOR THE CHALLENGE

When we decided to enter the Rokoko Intergalactic Challenge, we knew we wanted to do something with Zomborgs. But which scene to choose? 

Why... let’s pick one of the hardest scenes we could possibly choose for our first mocap-to-UE5 scene. That's a good idea, right? 

In the end... yes. It was a brilliant idea.

Jayson had just finished his enrollment with Josh Toonen at UnrealForVFX.com, and the knowledge he brought to the project was so very helpful. The course community was also a big help when it came to feedback on this one.

Forcing ourselves into one of the hardest scenes pushed us to learn things fast. 

MOCAP

This was our first go using Metahuman Animator and the Rokoko Suit, so we were very excited to explore bringing Yuki to life. Amber mo-capped Yuki's performance, and we were pleasantly surprised how simple the data capture was. Even the setup was pretty straightforward once we got it going.

Now, yes, there were some connectivity issues - but that's nothing new to us. Tech fails. Then it works. Gotta flow with it, ya know? The only gripe we had was a glove malfunction where the middle finger just wiggled out of control. We had to send the glove back to Rokoko for repair. Not awesome. But we expect a return soon. 

Despite the glove malfunction, we were able to capture mocap for about 20 characters in the same day - which was... amazing. The cleanup was pretty minimal on this one once we learned the power of control rigs and the use of cameras to hide the ugly stuff. 

CAMERA & EDITING

Speaking of cameras - the experience of being able to go in and fine-tune shots and angles long after the action is shot was... so very exciting. 

We come from a background of traditional pre-pro, production, post. The animation workflow is similar, but man does it change the production process. 

Coming from the post pipeline there have been sooo many projects we've edited where we said, "Man, I wish this shot was just a little to the left!" Or "Where's your 180 rule, people?" Being able to go in and adjust the shot was something we were both so excited about. 

It's pretty awesome that production IS the editing process with this workflow. It feels like a superpower.

CHARACTER DESIGN

We used MetaHumans for all of our Zomborg characters and have really enjoyed the process of using Blender and UE5's Mesh to MetaHuman workflow. It helped us and our limited character design skills get closer to what we had previs’d using Midjourney and Stable Diffusion. 

In this scene, only one character had hair that wasn't stock "MetaHuman hair." Because let’s be honest, hair is one of the first giveaways when watching stuff. That and their teeth... but that's a challenge yet to be tackled.

Thankfully, Prompt Muse came to the rescue with her Hair Tutorial the same day Amber went to buckle in and learn it. Who doesn't love a serendipitous moment on a journey?

It was our first attempt at custom hair, definitely not the best, but man, for this video it felt like magic. 

IT'S AI-FREE

So for this one, we actually went the route of no AI. We used the UEforVFX post-processing workflow to get a pretty fun look. 

MOVING FORWARD

Moving forward, we're going to be putting in the reps. We really want to get more practice in the engine, so we'll probably pick out a few scenes to tackle from the *Survivors'* screenplay.

At some point we'll tackle hair and wardrobe issues again. But right now we just want to get better at the scene building. 

Let us know in the comments what scenes you want to see previs for! 

Getting AI Actors to Talk using Metahumans and Stable Diffusion

The journey has begun! 

We recently "soft" launched our digital download of ZOMBORGS Episode 1. A sci-fi horror series about a girl, an android, a corporate cult, and lots of zombies with a bit of extra "holy-eff!" 

We're determined to see how far UE5, Blender, and Synthography can take us to make a high-end trailer for the script book currently available for download (here). The goal isn't to "just get it done." We want to create Hollywood-quality visuals for a series that could be an epic tale of human identity and social conformity. 

Sure, we've worked in video production for all of our adult lives, but in order to do the high-end visuals we want, we'd need a massive budget, epic gear, and a VFX powerhouse propelling us forward to a big fat Hollywood "maybe." None of which we have at the moment. 

What we do have, however, is self-taught 3D design skills, a general understanding of animation in UE5 & Blender, comfort in Stable Diffusion, and an iPhone. 

That should get us where we gotta go, right? 

Maybe. 

The Test

The test we ran this week explored what the workflow would look like if we face-captured with UE5's new MetaHuman Animator, ported that performance onto a MetaHuman, and then took those frames into Stable Diffusion to get our custom character talking, using a voice2voice-trained model. 

The Models

Thankfully, we already had our LoRAs trained from our script-book project. And we already knew that we liked the LifeLikeDiffusion model found on CivitAI. It does a great job of capturing our ethnically diverse cast and life-like proportions & expressions. 

We also already had our MetaHuman base designed, as we used these metahumans to influence our synthetic actors' looks with Midjourney img2img and blend workflows. 

The Face MoCap

With the release of MetaHuman Animator earlier this month, we splurged on a refurbished iPhone 12 mini and quickly got to work testing the setup process. It took a minute, but Jwall is a genius and powered through to hit the fine performance demonstrated in the test footage. Needless to say, we were impressed with the mocap capabilities and very pleased with the first test. 

The Voice2Voice

Next up, we wanted to be able to use AI voices - but we didn't like any of the text-to-voice options. They sounded sterile, disturbing, and inhuman. Meaning we had to get acquainted with the voice2voice AI options out there. Unfortunately, I was unable to get the local option I wanted to try installed properly. So, for the sake of time, we went with the social-media-popular Voice.ai (referral link). 

A few years back I (Amber) had tried my hand at narrating audiobooks. It was one of the most grueling, fun, and underpaid gigs I'd done in my entire career. I narrated two books and ended my VO career quickly after. The bright side of having done it is that I now have hours of audio I could use to train a voice model with. And so I did. Within an hour I had an AI voice model of myself. I ran Jwall's voice performance through it and BAM! We got us a girly voice! 

It was really important to us that the voice be able to capture performance in tonality and expression. Voice.ai did a pretty good job. I just wish it wasn't so distorted. I ended up having to do A LOT of post-cleanup and it still sounds a bit off. But... we had a voice that was closer to that of a 13-year-old girl - so we moved on. 

Stable Diffusion

The final step was putting our synthetic actress' face onto the performance. Using Runpod.io (referral link) we ran an RTX 3090 and hoped for the best. Using our previously trained LoRA, img2img, ControlNet, and the Roop extension, we hobbled our way through it. 

The biggest issue I ran into was that anything with a denoise over 0.15-0.2 failed to capture the mouth movement. After generating 600 frames of failed small-batch testing, I was definitely grateful for the batch image option within Stable Diffusion. 

Prompt Muse's "Taking AI Animation to the NEXT Level" tutorial was helpful in suggesting the scribble_hed option. 

However, I wasn't really digging the fact that I couldn't get the LoRA to manipulate the output more. Maybe I was doing something wrong... 

But I eventually stumbled across Nerdy Rodent's "Easy Deepfakes in Automatic1111 with Roop" and everything began sorting itself out. 

Here are the SD settings I ended up using as I recall them. 

Sure, I probably could have gotten away with using the Roop extension only, but I wanted to run the LoRA as high as I could get it, then apply Roop to make sure the face looked as consistent and accurate as possible. Because, remember... I needed the girl to talk. 
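If it helps to see those knobs in one place, here's an illustrative stand-in: roughly how a setup like this translates into an AUTOMATIC1111 img2img API payload, with the LoRA loaded through the prompt syntax and the denoise kept low per the mouth-movement issue above. The LoRA name, weight, prompt, and paths are made-up placeholders, not my actual values.

```python
# Illustrative only: an img2img request shaped like the settings described
# above. LoRA name/weight, prompt, and file paths are placeholders.
import base64
import requests

frame_b64 = base64.b64encode(open("frames/talk_0001.png", "rb").read()).decode()

payload = {
    "init_images": [frame_b64],
    "prompt": "<lora:synthetic_actress_v1:0.8> photo of a young girl talking, realistic skin",
    "negative_prompt": "blurry, deformed, extra fingers",
    "denoising_strength": 0.15,  # much above ~0.2 and the mouth movement gets lost
    "steps": 25,
    "cfg_scale": 7,
    "sampler_name": "Euler a",
    # Roop and ControlNet were toggled in the web UI for this test; scripting
    # them means filling in "alwayson_scripts", which we haven't automated yet.
}

result = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload).json()
with open("frames_out/talk_0001.png", "wb") as out:
    out.write(base64.b64decode(result["images"][0]))
```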

In the end... the results were a satisfying start. 

For our next adventure we're planning to do more than just a single shot, maybe even throw in some body movement too. This test definitely proved that it's possible to get the images to chatter and look semi-lifelike. We still have some work to do on the voice2voice AI, the flickering, and the additional facial movements. But we have faith that it's possible. 

If you have any ideas on how to improve our workflow, DO SHARE! Either comment to this blog or the video above. Whichever you're more comfortable with.