At what stage of the developing process of a game mocap is recorded? How much time does it usually take? And in case of expansions/DLCs are actors called again or they record this content together with the content for the main game?
Motion/Performance capture usually happens fairly early in development, because animators need time to turn the captured data into actual performances. Despite what some actors might suggest, there's a huge amount of work, skill, and artistry needed to turn a recorded motion into a performance that reads well from the viewer's perspective. Here's an example:
The raw motion capture looks rather generic, and it isn't immediately clear on a first viewing exactly what is happening. The edited animation reads far better - the motions are more exaggerated, the victim's struggles are easier to follow, the attacker's stab is more pronounced, and so on - the entire action is clearer, more readable, and even a bit faster than the raw mocap. Much of what animators do during production is this kind of performance capture cleanup and adjustment.
We try to schedule performance capture sessions as early as we can because they involve aligning many separate schedules - the capture studio, the actors, and the developers who need to oversee all of the action. We usually aim for around the end of pre-production, but it all depends on the availability of the studio, the actors, and the developers. The performance capture session itself is fairly quick - typically a week or less to capture all of the necessary movements - but expensive because of the scheduling, number of people, equipment, and travel involved. It normally costs tens of thousands of dollars per hour to pay for everyone's time, equipment, and labor, and everyone has a busy schedule - actors have other jobs lined up, the capture studio has other customers hoping to do their own recording sessions, and the developers have a whole game to build.
Because recording is so difficult to schedule and so expensive, the art and animation team must put together a move list of all the actions they need to capture before the recording session. They can modify, update, and change the list up until the deadline, but it's pencils down once the session begins. Reshoots are extremely expensive for the same reasons - aligning the schedules of the recording studio, all of the needed actors, and the developer representatives a second time can be a nightmare. For DLC/expansion content, we usually try to reuse as many assets as we can and schedule any new recording sessions as soon as we get the green light.
[Join us on Discord] and/or [Support us on Patreon]
How do paired animations work? Like I've heard that even getting two characters to pass an apple can be tough, so how do games manage to make throws and grapples and stuff look good/right?
Each participant in a synchronized animation is positioned and oriented relative to some common point and performs all of its actions within a specific space around that point. If any participant is off-angle or off-position, things won't look right. For the animation to line up, all participants must maintain exactly the right distances and orientations while their synchronized animations play at the same time.
This means that, as long as the environment and conditions are conducive (e.g. the ground is flat, there's enough space, there are no objects to clip through, etc.), we can choose an in-game location to serve as the synchronized animation's common point, play the synchronized animations on the participants, and things should line up. Any mismatch and things will look weird. If the ground is not flat, for example, we could see our characters standing in midair. If the positions are not correct, a punch could visibly miss by a mile. If the angles are not correct, it would look like a phantom judo throw. If the environment is not clear of other objects, we would see our participants walking/flying/falling through seemingly solid objects. Only when all of the conditions are met will the synchronized animation look right.
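To make the "common point" idea concrete, here's a minimal sketch (not any particular engine's API - the function name and the 2D simplification are my own assumptions) of how each participant's world position and facing can be derived from a shared sync point plus an authored local offset:

```python
import math

def align_to_sync_point(sync_pos, sync_yaw, local_offset, local_yaw):
    """Place one participant relative to the shared sync point.

    sync_pos:     (x, y) world position of the common point.
    sync_yaw:     world heading of the common point, in radians.
    local_offset: (x, y) authored offset of this participant in sync-point space.
    local_yaw:    authored facing of this participant relative to the sync point.
    Returns (world_x, world_y, world_yaw).
    """
    cos_a, sin_a = math.cos(sync_yaw), math.sin(sync_yaw)
    ox, oy = local_offset
    # Rotate the authored offset into world space, then translate.
    world_x = sync_pos[0] + cos_a * ox - sin_a * oy
    world_y = sync_pos[1] + sin_a * ox + cos_a * oy
    return world_x, world_y, sync_yaw + local_yaw

# Hypothetical setup: the attacker was authored 1m "behind" the sync point,
# facing along it; the victim stands on the point itself, facing back.
attacker = align_to_sync_point((10.0, 5.0), 0.0, (-1.0, 0.0), 0.0)
victim = align_to_sync_point((10.0, 5.0), 0.0, (0.0, 0.0), math.pi)
```

Because both transforms are computed from the same sync point, moving or rotating that one point moves both participants together, which is what keeps the paired motions lined up.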
The TLDR is that every synchronized animation was created with very specific dimensions, angles, and positions in mind. If we can recreate an environment with those dimensions, angles, and positions, we can play the synchronized animations on the participants and they'll line up. In games where you can perform synchronized animations in arbitrary locations (e.g. executions in Call of Duty), the engineering team has built a system to find good places near the position and orientation of the action that meet all of the necessary criteria in order to play the synchronized animation.
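A sketch of what that placement-finding system might look like, under stated assumptions: the predicate functions (`is_flat`, `has_clearance`) stand in for whatever raycasts and collision queries a real engine would use, and the candidate list stands in for however the engine samples spots near the action.

```python
def find_sync_spot(origin, facing, candidates, is_flat, has_clearance):
    """Return the candidate position nearest the action that satisfies
    every requirement of the synchronized animation, or None if no
    candidate qualifies (the caller would then fall back to a generic,
    non-synchronized animation)."""
    best, best_dist_sq = None, float("inf")
    for pos in candidates:
        # Reject spots that fail any environmental requirement.
        if not (is_flat(pos) and has_clearance(pos, facing)):
            continue
        dist_sq = (pos[0] - origin[0]) ** 2 + (pos[1] - origin[1]) ** 2
        if dist_sq < best_dist_sq:
            best, best_dist_sq = pos, dist_sq
    return best

# Toy usage: only spots with x > 0 count as "flat", everything is clear.
spot = find_sync_spot(
    origin=(0.0, 0.0),
    facing=0.0,
    candidates=[(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    is_flat=lambda p: p[0] > 0,
    has_clearance=lambda p, f: True,
)
```

The key design point is that validation happens before the animation starts: the game never begins a paired animation at a spot that would cause clipping or floating, it either finds a spot that meets all the criteria or it doesn't play the synchronized version at all.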