This is the process of importing the handheld shots collected via Jetset: applying the data recorded from a capture session to cameras and syncing them so that they can be used in the edit track.
- Categories:
- Training
Using the prototype software ‘Jetset’, we can transfer our scenes to an iPhone, where they can be ‘filmed’ handheld as though the operator were standing in the scene. Jetset can animate MetaHumans within your scene, along with the audio track, so you can shoot with the hands-on physicality and visual language the film industry has been developing since its beginning.
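Once a take comes back from the phone, the camera data needs to be pulled into the project. Here is a minimal sketch of that import, assuming a simple per-frame CSV export with frame, x, y, z, pitch, yaw, roll columns; the column layout is an assumption, and Jetset’s actual export format may differ.

```python
# Sketch: load per-frame camera transforms from a capture take.
# The CSV layout (frame, x, y, z, pitch, yaw, roll) is assumed here;
# check the actual export before relying on these column names.
import csv
from dataclasses import dataclass

@dataclass
class CameraKey:
    frame: int
    location: tuple   # (x, y, z)
    rotation: tuple   # (pitch, yaw, roll) in degrees

def load_camera_take(path):
    keys = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            keys.append(CameraKey(
                frame=int(row["frame"]),
                location=(float(row["x"]), float(row["y"]), float(row["z"])),
                rotation=(float(row["pitch"]), float(row["yaw"]), float(row["roll"])),
            ))
    return keys

# keys = load_camera_take("take_003_camera.csv")
# Each CameraKey then becomes a keyframe on the camera's transform track.
```

From there, keying the transforms onto a CineCamera in Sequencer can be done by hand or scripted through Unreal’s Python editor API.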
Here you apply the soundtrack used during motion capture to your sequence so that dialog and sound effects are synced to one another. As it did during the capture session, this soundtrack helps drive the scene. Here too, as other elements are visualized in-engine, more sound effects can be added, each one informing the filmmaker where the camera should be pointing and when.
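In practice the sync comes down to timecode arithmetic. A minimal sketch, assuming both the sequence and the audio take carry SMPTE timecode at the same integer frame rate (no drop-frame handling):

```python
# Sketch: how many frames to shift an audio clip so it lines up with
# the sequence. Assumes matching integer frame rates, no drop-frame.
def timecode_to_frames(tc, fps):
    hours, minutes, seconds, frames = (int(p) for p in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def audio_offset(sequence_start_tc, audio_start_tc, fps=24):
    """Frames to shift the audio clip so it starts in sync."""
    return timecode_to_frames(audio_start_tc, fps) - timecode_to_frames(sequence_start_tc, fps)

# Audio recorded 2 seconds after the sequence start, at 24 fps:
print(audio_offset("01:00:00:00", "01:00:02:00"))   # 48
```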
This means placing characters, along with their motion capture, into the scene sequence they belong to and syncing each character’s movements to the others’. Getting that dance or fight to land right is what’s happening here: the interaction between characters, or in the case of solo scenes, simply the best take for the intention.
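As a sketch of the timing half of this step, here is one way to shift each character’s take so a shared sync event, say the moment a punch lands, falls on the same sequence frame. The take names and frame numbers are illustrative, not from any real session:

```python
# Sketch: align two characters' takes on a shared sync event.
from dataclasses import dataclass

@dataclass
class Take:
    character: str
    event_frame: int   # frame of the sync event, local to the take

def align_takes(takes, target_frame):
    """New sequence start frame for each take so the events coincide."""
    return {t.character: target_frame - t.event_frame for t in takes}

takes = [Take("Hero", 37), Take("Villain", 52)]
print(align_takes(takes, target_frame=120))
# {'Hero': 83, 'Villain': 68}  ->  both events now land on frame 120
```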
This refers to importing processed motion capture into the scene and retargeting it to the characters that need it. It sounds simple enough, but frankly it feels needlessly complicated, though presently necessary. Imagine your capture actor was 6 ft tall and you applied that motion data to a gnome character: the tall skeleton’s data would stretch the gnome to basketball-player proportions. Retargeting tells one skeleton how to translate its motion onto another.
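A stripped-down sketch of the core idea: rotations carry across skeletons more or less directly, while bone translations are scaled by the ratio of limb lengths. Real retargeters, such as Unreal’s IK Retargeter, do considerably more than this, but the scaling is the part that keeps the gnome gnome-sized:

```python
# Sketch: the translation-scaling half of retargeting. Rotations copy
# across; translations scale by the ratio of target to source bone length.
def retarget_translation(source_translation, source_bone_length, target_bone_length):
    """Scale a local bone translation from the source skeleton to the target."""
    scale = target_bone_length / source_bone_length
    return tuple(component * scale for component in source_translation)

# A 90 cm step from a 100 cm source thigh, mapped onto a 40 cm gnome thigh:
print(retarget_translation((0.0, 90.0, 0.0), 100.0, 40.0))   # (0.0, 36.0, 0.0)
```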
This refers to situations beyond cleanup. Is the character supposed to pick up a cup of coffee but reaching five inches too far to the left? This stage corrects errors like that and ensures all characters hit their marks. This type of animation can work in tandem with set design: sometimes it is easier to bring props or interactive elements closer to the action than the other way around. Each situation calls for its own solution.
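For the coffee-cup case, one simple fix is to measure the miss at the contact frame and blend a corrective offset in and out around it so the fix doesn’t pop. This is a sketch only; in practice you would reach for IK or an additive animation layer:

```python
# Sketch: ease a positional correction in and out around a contact frame.
def contact_fix(frame, contact_frame, blend_frames, correction):
    """Weighted share of the correction vector to add at this frame.

    correction: vector from the hand's captured position to where it
    should be at the contact frame.
    """
    distance = abs(frame - contact_frame)
    if distance >= blend_frames:
        return (0.0, 0.0, 0.0)
    t = 1.0 - distance / blend_frames   # 1.0 at contact, 0.0 at the edges
    weight = t * t * (3.0 - 2.0 * t)    # smoothstep easing, avoids a pop
    return tuple(c * weight for c in correction)

# Hand lands 12.7 cm (5 in) too far left at frame 100; pull it back,
# blending over 10 frames on either side:
print(contact_fix(100, 100, 10, (12.7, 0.0, 0.0)))   # (12.7, 0.0, 0.0)
print(contact_fix(105, 100, 10, (12.7, 0.0, 0.0)))   # (6.35, 0.0, 0.0)
```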
Otherwise known as “animation”. To date, all but the most expensive capture solutions require cleanup. The amount of cleanup can range from filtering and smoothing to extremely involved manual animation. This stage should not be undervalued, as it affects every character in your film. AI tools are being developed to reduce the cleanup burden, but until they prove themselves, this can get expensive, fast.
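At the cheap end of that range, “filtering and smoothing” can be as simple as a centered moving average over a joint’s per-frame positions, which knocks down jitter at the cost of softening fast motion. A minimal sketch; window size is a per-shot judgment call:

```python
# Sketch: centered moving average over one axis of a joint's positions.
def smooth(positions, window=5):
    """Simple jitter filter; window should be odd."""
    half = window // 2
    out = []
    for i in range(len(positions)):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        out.append(sum(positions[lo:hi]) / (hi - lo))
    return out

noisy = [0.0, 0.2, 0.1, 5.0, 0.1, 0.2, 0.0]   # frame 3 is a jitter spike
print(smooth(noisy))
```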
- SPACE: No matter the capture solution, actors will need room to move around and perform the actions.
- TECHS: Required to run sessions.
- DIRECTOR: Both a technical and a dramatic director are required.
- ACTORS: Consider systems that can capture two people at once, so your scenes have good interaction.
- PROCESSING: It is not ‘automatic’; a great deal of time and care is needed to organize and prepare data for the processing stage, and to manage that data once processed.
The most current and best solution for face capture is MetaHuman Animator. It uses depth-camera information to create great facial animation from a performance. The required tools are an iPhone and a facial-capture helmet. This recent tech is a game changer for motion capture and, as mentioned previously, the depth information will extend to body capture as well. Before this, animating faces was daunting and out of reach for most. With this ‘solved’, more filmmakers are likely to try their hand at making an Unreal movie.
Capture methods are constantly changing, aiming for better fidelity while delivering cheaper and easier results (such as markerless capture). Multicam capture using GoPros has been deprecated by move.ai and is now only available in their ‘experimental’ mode. In September they intend to release a depth-camera-based system, but it uses only a single view (one camera). Our feeling is that a true cleanup-free solution will use both depth and RGB (visual) cameras in a multi-cam setup; this would help avoid occlusion and produce the richest data from which to choose the most reliable bone positioning.
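To make the occlusion argument concrete, here is a toy illustration (not move.ai’s actual algorithm) of why multiple views help: each camera contributes a joint estimate with a confidence that drops when the joint is hidden from that view, and a confidence-weighted average naturally favors the reliable views:

```python
# Sketch: fuse one joint's position from several camera estimates.
# Each estimate is ((x, y, z), confidence); occluded views get low confidence.
def fuse_joint(estimates):
    total = sum(conf for _, conf in estimates)
    if total == 0.0:
        raise ValueError("joint occluded in every view")
    return tuple(
        sum(pos[axis] * conf for pos, conf in estimates) / total
        for axis in range(3)
    )

views = [
    ((10.0, 50.0, 100.0), 0.9),   # clear view
    ((10.4, 49.8, 101.0), 0.8),   # clear view
    ((25.0, 60.0, 90.0),  0.1),   # joint mostly occluded from this camera
]
print(fuse_joint(views))   # lands near the two confident estimates
```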