Below are the general categories you must observe when making a film in Unreal Engine. We are focusing on films made entirely with engine-generated content, as opposed to green-screen and composited actors, though we will likely feature that pipeline in the future.

The “READ MORE” links will not be active until the accompanying video tutorials are made. Ignore them for now; we will post updates as they become available.

Good luck and have fun!

To get a stronger likeness, a metahuman can be ‘re-baked’ – a process where a high-resolution model is used as a morph target to reshape a previously created metahuman. The mesh-to-metahuman system loses many details that this ‘re-bake’ process (cinemotion patent pending) restores. Beyond that, special additions such as moles, scars and makeup effects are added on top of the result.
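As a rough sketch of the morph-target idea (not cinemotion's actual re-bake process), the high-resolution model's per-vertex offsets from the base metahuman become a morph target that can be dialed in at any strength. The function and mesh data below are purely illustrative:

```python
import numpy as np

def rebake(base_verts: np.ndarray, scan_verts: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """Reshape a base mesh toward a high-resolution scan by applying the
    per-vertex difference as a morph target. Assumes both meshes share the
    same vertex count and ordering (i.e. a wrapped/conformed scan)."""
    deltas = scan_verts - base_verts       # the morph target itself
    return base_verts + weight * deltas    # weight 1.0 = full likeness

# Toy example: a 3-vertex "mesh"
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
scan = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0], [0.0, 1.0, 0.3]])
half = rebake(base, scan, weight=0.5)      # halfway between base and scan
```

At weight 1.0 the base mesh lands exactly on the scan; lower weights let you blend the restored detail in partially.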

Using a program called Marvelous Designer, special clothing is created, in some cases from scratch, to fit each specific metahuman. It works like traditional clothing design: a sewing pattern is digitally draped onto the character, then stitched together and simulated for hang, thickness, fabric type, friction and so on.
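To give a feel for what "simulated for hang" means, here is a minimal position-based cloth strip: a chain of particles pinned at the top, settling under gravity with distance constraints. This is an illustration of the core idea only; real solvers like Marvelous Designer and uDraper add bending, thickness, fabric models, friction and collisions:

```python
import numpy as np

def simulate_drape(n=5, rest_len=0.1, iterations=200, gravity=-9.8, dt=0.016, damping=0.98):
    """Minimal Verlet cloth strip: n particles pinned at the top, kept
    rest_len apart, settling under gravity. Returns final positions."""
    pos = np.array([[0.0, -i * rest_len] for i in range(n)])
    prev = pos.copy()
    for _ in range(iterations):
        # Verlet integration with damping (a crude stand-in for air friction)
        vel = (pos - prev) * damping
        prev = pos.copy()
        pos = pos + vel
        pos[:, 1] += gravity * dt * dt
        # Satisfy distance constraints a few times per frame
        for _ in range(5):
            pos[0] = [0.0, 0.0]                  # pin the top particle
            for i in range(n - 1):
                d = pos[i + 1] - pos[i]
                dist = np.linalg.norm(d)
                corr = d * (dist - rest_len) / (2 * dist)
                pos[i] += corr
                pos[i + 1] -= corr
        pos[0] = [0.0, 0.0]
    return pos

strip = simulate_drape()   # hangs straight down from the pin
```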

Traditionally, before a clothing item can be worn by a character, it needs to be ‘weight painted’, a lengthy and tedious process where each metahuman bone is assigned to drive parts of the clothing mesh. It is a constant battle of values per bone, per vertex. With uDraper, though, the entire garment is simulated based on collisions with the metahuman wearing it. Weight painting is not applied here, but the garments must be prepared using the uDraper plugin.
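The "values per bone, per vertex" battle is linear blend skinning: every vertex carries a painted weight for each bone that influences it, and its deformed position is the weighted blend of each bone's transform. A minimal sketch:

```python
import numpy as np

def skin_vertex(rest_pos, bone_transforms, weights):
    """Linear blend skinning: a vertex is driven by several bones, each
    with a painted weight; the weights for a vertex should sum to 1.
    bone_transforms are 4x4 matrices taking rest pose to current pose."""
    v = np.append(rest_pos, 1.0)                          # homogeneous coords
    blended = sum(w * (T @ v) for w, T in zip(weights, bone_transforms))
    return blended[:3]

def translation(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

# A vertex weighted 50/50 between two bones; bone B moves +1 on X, bone A stays put.
A = np.eye(4)
B = translation([1.0, 0.0, 0.0])
moved = skin_vertex(np.array([0.0, 0.0, 0.0]), [A, B], [0.5, 0.5])
```

With 50/50 weights the vertex moves half as far as bone B did; getting those weights right across every vertex of a garment is exactly the tedium uDraper's collision-based simulation avoids.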

Capture methods are constantly changing with the goal of better fidelity at cheaper and easier rates (such as markerless capture). The multicam capture using GoPros is now deprecated by move.ai, only available in their ‘experimental’ mode. In September they intend to release a depth-camera-based system, but it only uses a single view (one camera). Our feeling is that a true cleanup-free solution will utilize both depth and RGB (visual) cameras in a multi-cam setup. This helps avoid occlusion and produces the richest data from which to choose the most reliable bone positioning.
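To illustrate why multiple views help with occlusion, here is one hypothetical way to combine per-camera joint estimates: weight each camera's proposal by its confidence, so an occluded view barely influences the result. This is only a sketch of the idea, not move.ai's (or anyone's) actual algorithm:

```python
import numpy as np

def fuse_bone_position(estimates, confidences):
    """Confidence-weighted average of per-camera 3D joint estimates.
    A camera whose view of the joint is occluded reports low confidence
    and therefore contributes little to the fused position."""
    estimates = np.asarray(estimates, dtype=float)
    w = np.asarray(confidences, dtype=float)
    return (estimates * w[:, None]).sum(axis=0) / w.sum()

# Three cameras estimate an elbow; camera 3 is mostly occluded (confidence 0.1)
views = [[1.0, 2.0, 0.9], [1.1, 2.1, 1.0], [3.0, 0.0, 0.0]]
elbow = fuse_bone_position(views, [0.9, 0.9, 0.1])
```

The fused elbow lands near the two confident views, which is the payoff of a multi-cam setup: no single blocked line of sight can drag the bone off position.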

The most current and best solution for face capture is metahuman animator. It uses depth-camera information to create great facial animation from a performance. The tools required are an iPhone and a facial capture helmet. This recent tech is a game changer for motion capture and, as mentioned previously, depth information will extend to body capture as well. Before this, animating faces was daunting and out of reach for most. With this ‘solved’, more filmmakers are likely to try their hand at making an Unreal movie.

SPACE: No matter the capture solution, actors need room to move around and perform the actions.
TECHS: Required to run sessions.
DIRECTOR: Both technical and dramatic directors are required.
ACTORS: Consider systems that allow two people to be captured at once, so your scenes have good interaction.
PROCESSING: It is not ‘automatic’; a great deal of time and care is needed to organize and prepare data for the processing stage, and to manage that data once processed.

Otherwise known as “Animation”. To date, all but the most expensive capture solutions require clean-up. The amount of clean-up can range from filtering and smoothing to extremely involved manual animation. This stage cannot be undervalued, as it affects every character in your film. AI tools are being developed to mitigate the clean-up, but until they prove themselves, this can get expensive, fast.
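The "filtering and smoothing" end of the clean-up spectrum can be as simple as averaging each frame with its neighbours to kill capture jitter. A minimal sketch (real tools use better filters such as Butterworth or Savitzky-Golay, but the idea is the same):

```python
import numpy as np

def smooth_curve(values, window=5):
    """Moving-average filter for a mocap channel: averages each frame
    with its neighbours to remove jitter. Edges are padded with their
    own values so the curve keeps its length and endpoints."""
    pad = window // 2
    padded = np.pad(values, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# Noisy joint rotation: a steady 45-degree pose with capture jitter
rng = np.random.default_rng(0)
noisy = 45.0 + rng.normal(0, 2.0, size=120)
clean = smooth_curve(noisy)   # same length, visibly less jitter
```

Filtering like this is cheap; the expensive end of clean-up is re-animating limbs by hand when the solve is simply wrong.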

This refers to situations beyond clean-up. Is the character supposed to pick up a cup of coffee but reaches five inches too far to the left? This stage corrects errors like that and ensures all characters hit their marks. This type of animation can work in tandem with set design, as you might want to bring props or interactive elements closer to the action instead of the other way around. Each situation calls for its own solution.
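One common fix for the coffee-cup problem is an additive offset: measure how far the hand misses the mark at the key frame, add that offset there, and feather it out over surrounding frames so the rest of the performance is untouched. The helper below is a hypothetical illustration of that technique:

```python
import numpy as np

def nudge_to_mark(hand_positions, frame, target, blend=10):
    """Shift an animated hand track so it hits `target` exactly at `frame`,
    blending the correction linearly to zero over `blend` frames on each
    side so the motion before and after is unaffected."""
    positions = np.array(hand_positions, dtype=float)
    offset = np.asarray(target, dtype=float) - positions[frame]
    for i in range(len(positions)):
        d = abs(i - frame)
        if d < blend:
            positions[i] += offset * (1.0 - d / blend)   # falls off linearly
    return positions

# Hand is 5 units too far left for the whole take; the cup sits at x=0, y=1
track = [[-5.0, 1.0, 0.0] for _ in range(60)]
fixed = nudge_to_mark(track, frame=30, target=[0.0, 1.0, 0.0])
```

At frame 30 the hand lands exactly on the cup; ten frames away it is back on the original capture, which is why this kind of fix is invisible on screen.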

This refers to importing processed motion capture into the scene and retargeting it onto the characters that need it. It sounds simple enough, but frankly it seems needlessly complicated, though presently necessary. Imagine that your capture actor was 6ft tall and you applied that motion data onto a gnome character: the tall skeleton's data would stretch the gnome to be basketball-player-ready. The act of retargeting tells one skeleton how to translate its motion to another.
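At its core, the gnome problem is about translations: rotations usually copy across skeletons as-is, but bone and root translations must be scaled by the ratio of skeleton sizes or the short character gets stretched to the actor's proportions. A minimal sketch of that one piece (engines like Unreal's IK Retargeter do far more, with per-chain bone mapping and IK fix-ups):

```python
import numpy as np

def retarget_translation(source_translation, source_height, target_height):
    """Scale a captured translation by the ratio of skeleton heights so a
    tall actor's stride fits a short character instead of stretching it."""
    scale = target_height / source_height
    return np.asarray(source_translation, dtype=float) * scale

# A 1.83 m (6 ft) actor steps 0.7 m forward; the gnome is 0.9 m tall
step = retarget_translation([0.0, 0.7, 0.0], source_height=1.83, target_height=0.9)
```

The gnome takes a proportionally shorter step rather than a 6-footer's stride, which is exactly the correction retargeting exists to make.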