In the comedy anime “Zombie Land Saga,” girls revived as zombies struggle as local idols in Saga Prefecture while hiding their true identities.
The series became famous for the over-the-top energy of producer Kotaro Tatsumi (voiced by Mamoru Miyano), its eccentric story, and the members of the idol group “Franchouchou,” who won viewers’ sympathy by confronting adversity head-on. Its sequel, “Zombie Land Saga Revenge,” began broadcasting in April 2021.
At “Anitsuku 2021,” a comprehensive anime event held over three days from September 18, 2021, one of the sessions was a seminar dissecting the behind-the-scenes production of this work.
As in the other sessions, the production staff spoke about techniques and craftsmanship in 3DCG and digital anime production for anime fans and next-generation creators.
The sequel’s goals were to “enhance expression with the same models as the previous work” and to “raise the average quality of the live scenes.”
The lecture, titled “3D Work Commentary: Pursuing Attractive Characters and Live Expression in ‘Zombie Land Saga Revenge,’” began with commentary on the schedule of the live scenes.
Work on the live scenes started about half a year before broadcast. The dance sequences follow two patterns: hand-drawn episodes and 3DCG episodes.
Episode 12, the final episode, took the most effort.
Besides most of its B part (the latter half of the episode) being one long live scene, there was the formidable task of handling an audience of about 30,000.
Regarding this point, 3DCG director Ai Kuroiwa said, “We had to review the workflow and the audience placement before starting the sequel,” describing the areas scaled up beyond the previous work.
“Zombie Land Saga Revenge” aimed to see “how much expression could be enhanced using the same models as the previous work” and to “use the know-how of the previous work to raise the average quality of the live scenes.”
To that end, the team tried as much as possible not to let broken drawings appear on screen.
Experienced staff show their true potential! Live scenes with heightened excitement
Let us look at examples of the work that was actually done.
First, the line-drawing character settings were created, and the 3D character models were built based on them.
To enhance expression while reusing the 3D models from the previous work, the models’ movable mechanisms, known as “rigs,” were reworked throughout.
First, the so-called “swaying parts” that move semi-independently, such as hair, were given many bones on the assumption that the script “Spring Magic” would be used. With flexible dance scenes in mind, the rigging, particularly around the arms, was reinforced to reduce unnatural movement.
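“Spring Magic” applies simple spring dynamics to bone chains so that hair lags behind the body and settles after a move. As a rough illustration of that underlying idea (not the actual script; the function and all parameters here are made up), a minimal Python sketch:

```python
# Minimal sketch of the spring-follow dynamics a tool like "Spring Magic"
# applies to hair bones. All names and parameters are illustrative.

def step_spring(pos, vel, target, stiffness=0.2, damping=0.8, dt=1.0):
    """Advance one bone tip toward its animated rest target with a damped spring."""
    accel = (target - pos) * stiffness   # pull toward the rest pose
    vel = (vel + accel * dt) * damping   # damp so the sway settles instead of jittering
    return pos + vel * dt, vel

# A hair bone lagging behind a moving head: it overshoots, then settles.
pos, vel = 0.0, 0.0
for _ in range(60):
    pos, vel = step_spring(pos, vel, target=1.0)
print(round(pos, 3))  # approaches 1.0 as the sway dies out
```

With more bones chained together (each targeting its parent), the same step produces the whip-like lag the article describes for long hair.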
The skirts were likewise rigged with a program built in a “script controller” so that they move in conjunction with the legs. This reduced the time and effort spent correcting intersections and achieved movement free of breakdowns.
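A “script controller” in 3ds Max evaluates an expression each frame to drive one object’s transform from another object’s. The skirt-to-leg linkage described above might look like this in spirit (a hedged Python sketch; the blend weights and follow factor are invented):

```python
# Hedged sketch of the "script controller" idea: each frame, a skirt bone's
# swing angle is derived from the leg rotations, so the cloth follows the
# step automatically instead of being keyed by hand. Names are illustrative.

def skirt_bone_angle(left_leg_deg, right_leg_deg, weight_left):
    """Blend the two leg swings by how close the skirt bone sits to each leg."""
    blended = weight_left * left_leg_deg + (1.0 - weight_left) * right_leg_deg
    return 0.7 * blended  # follow at reduced amplitude so the hem trails the leg

# Front-left skirt bone during a step: the left leg kicks forward 40 degrees.
print(round(skirt_bone_angle(40.0, -10.0, weight_left=0.8), 2))  # 21.0
```

Because the skirt bone stays inside the envelope traced by the driving leg, the leg can no longer punch through the hem, which is why the correction workload dropped.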
From here, let us focus on the problem episode, the 12th.
Because episode 12’s dance scenes were made in 3D, the background art was also created as 3DBG (3D backgrounds).
For the 3DBG model used in episode 12, the background company began modeling early in “Blender.” Once the “art boards,” colored versions of the art settings, were delivered, texturing of the 3D stage proceeded.
For the motion capture, the performance and choreography were planned so that the characters look appealing while dancing. However, since animation-like movement cannot be reproduced completely, animators ultimately brush up the recorded data.
Once the 3D models, the 3DBG, and the motion capture are ready, screen-making begins. When the storyboard comes up, the team decides case by case, against the schedule, which parts to make in CG and which to hand-draw, choosing the most effective method for each cut.
The motion capture is used not only for CG footage but is also provided to the animators as a movement guide for hand-drawn cuts.
A miniature model and a virtual camera were used in the “CG meeting.”
The virtual camera served in discussions such as “How do we show off the huge live venue?” and helped convey the director’s vision smoothly to staff working remotely.
After the CG meeting, it is finally the CG animators’ turn. The shots are refined through three stages: “layout,” “primary,” and “secondary.”
“Layout” is mainly the work of determining camera positions. In the “primary” stage, facial expressions, swaying parts, poses, and timing are adjusted; in the “secondary” stage, the image is finished.
Character movements are created based on motion capture.
Motion capture yields raw movement, so it must be adjusted to look its best: changing the angle of an arm so a pose reads as adorable, or fixing moments where the choreography hides a character’s face. In the primary stage, such parts are processed and the whole is tidied up.
Of course, not all of the raw captured movement is discarded; detailed gestures that would be difficult to reproduce by hand are sometimes left as they are.
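One common way to brush up capture data without discarding it is to layer animator-keyed offsets on top of the raw curves, so the captured nuance survives wherever no fix is keyed. A hedged sketch of that general idea (the function and values are illustrative, not the studio’s actual tooling):

```python
# Hedged sketch of "brushing up" mocap: keep the raw captured curve and
# layer per-frame animator offsets on top, rather than re-keying from scratch.

def brushed_up(mocap_curve, offsets):
    """Apply per-frame animator offsets (degrees) to a captured arm-angle curve."""
    return [angle + offsets.get(frame, 0.0)
            for frame, angle in enumerate(mocap_curve)]

raw = [10.0, 12.0, 15.0, 14.0]  # captured arm angles, one per frame
fix = {2: 5.0}                  # open the pose wider on frame 2 only
print(brushed_up(raw, fix))     # [10.0, 12.0, 20.0, 14.0]
```

Frames without an offset pass through untouched, which matches the article’s point that detailed captured gestures are often left as they are.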
What the production team was most conscious of was the characters’ facial expressions. Because motion capture adds delicate body movement, the result reads as conspicuously CG unless the facial expressions are worked in equal detail. With attention to such parts, the characters’ charm is brought out.
Among the facial expressions, the “squinting” pattern received particular emphasis.
This work added detailed expression patterns close to the original designs. A half-closed eye and the designed squint have different shapes, so without this added pattern it would be impossible to draw the in-between expression of the eyes narrowing when a character laughs.
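The point can be illustrated numerically: linearly blending the open eye toward the closed eye never passes through the designed squint shape, which is why a dedicated in-between target is needed. A toy sketch with made-up one-dimensional “vertex heights” for an upper eyelid:

```python
# Toy sketch of why an explicit squint target is needed: lerping
# open -> closed never produces the designed squint shape at 50%.
# The three "vertex height" lists are invented for illustration.

OPEN   = [0.0, 0.0, 0.0]
CLOSED = [1.0, 1.0, 1.0]
SQUINT = [0.5, 0.9, 0.5]   # designed 50% shape: lid bows, corners lag behind

def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

def eyelid(t):
    """Piecewise blend that passes through the dedicated squint target at t = 0.5."""
    if t <= 0.5:
        return lerp(OPEN, SQUINT, t / 0.5)
    return lerp(SQUINT, CLOSED, (t - 0.5) / 0.5)

print(lerp(OPEN, CLOSED, 0.5))  # [0.5, 0.5, 0.5] -- not the designed squint
print(eyelid(0.5))              # [0.5, 0.9, 0.5] -- matches the design
```

With the extra target in place, every intermediate value of `t` stays close to the designed shapes, so the eyes narrow naturally when a character laughs.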
The splendor of a live show lies in the audience!
The idols are not the only ones who make the stage in a live scene. The fans waving penlights in the audience are part of the performance too.
In episode 12, however, about 30,000 audience members had to be placed, and working on each one individually was not realistic.
Therefore, this time the team refined the automation functions and increased the number of crowd motions, creating an environment for more efficient and effective work.
Rather than creating individual models for the audience, two models were prepared, one for close-ups and one for distant views, to reduce the amount of data.
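In effect this is a two-level level-of-detail (LOD) scheme: each spectator instances either the close-up model or the distant model depending on its distance from the camera. A hedged sketch (the function name and threshold are invented):

```python
# Hedged sketch of the two-model audience approach: per spectator, pick the
# close-up or distant model by camera distance. The threshold is made up.

def pick_audience_model(distance_to_camera, lod_switch=25.0):
    """Return which of the two prepared audience models to instance."""
    return "closeup" if distance_to_camera < lod_switch else "distant"

crowd = [3.0, 18.0, 40.0, 120.0]  # distances of four spectators (meters)
print([pick_audience_model(d) for d in crowd])
# ['closeup', 'closeup', 'distant', 'distant']
```

Since the vast majority of a 30,000-seat venue sits far from the camera, almost all instances use the lightweight distant model, which is where the data savings come from.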
Also, while in the previous work the penlight colors were switched manually, this work uses a plug-in that assigns colors randomly, with the color ratios adjustable to match the image color of the character standing center stage.
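As described, the plug-in’s behavior amounts to weighted random color assignment across the crowd. A hedged Python sketch (the palette, weights, and function are invented for illustration, not the actual plug-in):

```python
# Hedged sketch of the penlight plug-in's described behavior: give each
# audience member a random color, with the ratios weighted toward the image
# color of whoever is at center stage. Palette and weights are illustrative.

import random

def assign_penlights(n, center_color, palette=("red", "blue", "yellow", "pink")):
    """Weight the center character's image color 3x over the rest of the palette."""
    weights = [3.0 if c == center_color else 1.0 for c in palette]
    return random.choices(palette, weights=weights, k=n)

random.seed(0)  # deterministic for the example
lights = assign_penlights(30000, center_color="red")
print(round(lights.count("red") / len(lights), 2))  # roughly 0.5 (3 of 6 total weight)
```

Changing `center_color` as the formation rotates re-tints the whole 30,000-member crowd in one call, replacing the previous work’s manual switching.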
In this way, the dynamic live scene of episode 12 was completed.
The official YouTube channel of Avex Pictures features the live scenes from this work, so why not take another look with this making-of in mind?