Lukas Kapp - Rigging & CFX
by lukaskapp on 25 May 2023 for Rookie Awards 2023

Hey! I'm Lukas and I'm about to graduate in Technical Directing at the Animationsinstitut of the Filmakademie Baden-Württemberg! I'm excited to present the rigging and CFX work that brought our Ghoul to life, as well as my ongoing research project exploring the use of Machine Learning for character work. Enjoy!

My latest Creature TD reel

A Calling. From the Desert. To the Sea - The Ghoul

The Ghoul is the creature featured in the short film "A Calling. From the Desert. To the Sea" (21st VES Award for "Outstanding Visual Effects in a Student Project"). It is a hybrid between a human (upper body) and many different creatures like ostrich, crocodile, dinosaur and kangaroo (lower body). From a rigging standpoint, there were a number of challenges: using the upper body both as a human (bipedal) and as an animal walking on all fours (quadrupedal), creating a believable facial rig, and doing a complete muscle simulation in ZivaVFX. And of course, making everything as realistic and as pleasant to animate as possible, while keeping the rig fast.

Modular Rigging Tool

Rigging began in the concept phase to explore locomotion and make changes to the concept from that perspective. Initially the rig was created by hand, but with model changes coming up, it was clear that this wasn't the way to go. We needed a more flexible rigging solution so that model iterations could be easily handled, so I created this modular rigging tool.

All of the rigging was now scripted in Python, so the rig could be rebuilt at any time and adapted quite easily, which made it very flexible to changes. You can see the rough structure below.

The rig is divided into several parts or modules, such as arm, leg or spine. These modules work independently but share common features - so if you update one of those features, every module picks up the change.
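To give an idea of how such a build script can be organized, here is a minimal, hypothetical sketch (module, joint and control names are placeholders, not the actual tool): each module builds its own joints and controls independently, and rebuilding the whole rig is just re-running the script.

```python
# Hypothetical sketch of a rebuildable, modular rig script (not the actual tool).
from maya import cmds


class RigModule(object):
    """Base class: every module builds independently and can be rebuilt any time."""

    def __init__(self, name, side="C"):
        self.name = name
        self.side = side
        self.root = None  # top group of the module

    def build(self):
        self.root = cmds.group(empty=True, name="{}_{}_module".format(self.side, self.name))
        self.create_joints()
        self.create_controls()

    def create_joints(self):
        raise NotImplementedError

    def create_controls(self):
        raise NotImplementedError

    def connect_to(self, parent_module):
        # attach this module to another one (e.g. arm under spine)
        cmds.parentConstraint(parent_module.root, self.root, maintainOffset=True)


class ArmModule(RigModule):
    def create_joints(self):
        # joint placement would normally come from stored guide positions
        self.joints = [cmds.joint(position=p) for p in [(0, 15, 0), (3, 12, 0), (6, 9, 0)]]

    def create_controls(self):
        self.fk_controls = [cmds.circle(name=j + "_FK_ctrl")[0] for j in self.joints]


# rebuilding the rig is just re-running a script like this
arm = ArmModule("arm", side="L")
arm.build()
```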

As scripting control shapes can be very tedious, I used a template approach: I first created the control shapes I wanted, saved them to a file, and then load and apply them during the finalisation step.
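A minimal sketch of that template idea might look like this (file path and naming convention are placeholders): dump the CV positions of every control curve to JSON, then push them back onto the freshly built controls.

```python
# Sketch of a control-shape template: save curve CV positions, re-apply them later.
import json
from maya import cmds


def save_control_shapes(controls, path):
    data = {}
    for ctrl in controls:
        # world-space positions of all CVs of the control curve, as flat triples
        points = cmds.xform(ctrl + ".cv[*]", query=True, worldSpace=True, translation=True)
        data[ctrl] = [points[i:i + 3] for i in range(0, len(points), 3)]
    with open(path, "w") as f:
        json.dump(data, f, indent=2)


def load_control_shapes(path):
    with open(path) as f:
        data = json.load(f)
    for ctrl, points in data.items():
        if not cmds.objExists(ctrl):
            continue
        for i, pos in enumerate(points):
            cmds.xform("{}.cv[{}]".format(ctrl, i), worldSpace=True, translation=pos)


# example: store every control matching a placeholder naming convention
save_control_shapes(cmds.ls("*_ctrl", type="transform"), "ctrl_shapes.json")
```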

After that, all the modules are connected, a few settings are applied to the master control, and you end up with this rig.

In Maya it then looks like this. On the surface it is just a bunch of buttons being pressed, but the real work happens in the code behind them.


Body Rig

The body rig was a cross between a biped and a quadruped rig and had the standard features like FK/IK switching, space switching, soft IK, reverse foot roll, etc. It makes heavy use of matrix math (mainly for the constraints) and is built in a parallel-friendly way to increase the performance of the rig.
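As an illustration of the matrix-based constraints, here is the common multMatrix/decomposeMatrix pattern that replaces a parent constraint (node and object names are placeholders; offsets and joint orients would need extra handling in a real rig). Because everything stays plain node connections, Maya's parallel evaluator can schedule it cheaply.

```python
# Sketch of a matrix-based "parent constraint": multiply the driver's world
# matrix by the driven node's parentInverseMatrix and decompose the result.
from maya import cmds


def matrix_constraint(driver, driven):
    mult = cmds.createNode("multMatrix", name=driven + "_mmx")
    decomp = cmds.createNode("decomposeMatrix", name=driven + "_dcm")

    # driver world matrix * driven parent inverse matrix = driven local matrix
    cmds.connectAttr(driver + ".worldMatrix[0]", mult + ".matrixIn[0]")
    cmds.connectAttr(driven + ".parentInverseMatrix[0]", mult + ".matrixIn[1]")
    cmds.connectAttr(mult + ".matrixSum", decomp + ".inputMatrix")

    cmds.connectAttr(decomp + ".outputTranslate", driven + ".translate")
    cmds.connectAttr(decomp + ".outputRotate", driven + ".rotate")
    cmds.connectAttr(decomp + ".outputScale", driven + ".scale")
    # note: for joints, jointOrient would also have to be taken into account


matrix_constraint("L_arm_ctrl", "L_arm_drv_jnt")
```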

The special thing about the rig was the so-called "Biped Quadruped Switch" in the arms - you can switch between a biped-like and a quadruped-like rig, which gives the animator a lot of flexibility when dealing with the creature's different locomotions. I also wrote a script that adjusts and switches the mode accordingly, so you can even start in one mode and switch to the other later in the shot without a problem.
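A very simplified sketch of the idea behind that switch script (attribute and control names are hypothetical): store the wrist's world transform, flip the mode, then snap the matching control of the new mode back onto it so the pose doesn't jump.

```python
# Simplified mode-switch sketch (names are placeholders, not the actual setup).
from maya import cmds


def switch_arm_mode(settings_ctrl, wrist_jnt, biped_ik_ctrl, quad_ik_ctrl, to_quad=True):
    # remember where the wrist currently is in world space
    wrist_matrix = cmds.xform(wrist_jnt, query=True, worldSpace=True, matrix=True)

    # flip the hypothetical "bipedQuadruped" enum on the settings control
    cmds.setAttr(settings_ctrl + ".bipedQuadruped", 1 if to_quad else 0)

    # snap the IK control of the new mode onto the stored wrist transform
    target = quad_ik_ctrl if to_quad else biped_ik_ctrl
    cmds.xform(target, worldSpace=True, matrix=wrist_matrix)
    cmds.setKeyframe([settings_ctrl, target])
```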

Anatomical Skeleton Rig

Since I was planning to do a muscle simulation later, the skeleton had to be rigged as well. You have to pay special attention to anatomical features that can be ignored when rigging the skin/body itself. So I added the automatic gliding of the scapula over the ribcage (very important, since a lot of different muscles connect to the scapula), the gliding of the patella between the femur and the tibia, the twisting of the radius and ulna when rotating the wrist, and last but not least the expansion of the ribcage when breathing in, so that the breathing animation could be included in the muscle simulation later on. Without all these features the muscle simulation wouldn't feel quite right.

Face Rig

For the face rig I mainly used a FACS-based blendshape system, but since the Ghoul only had one expression (angry) there was no need to include every shape/action unit in the system. This also saved a lot of time that could be spent on other parts of the rig.
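Hooking a sculpted action unit into such a system is straightforward; here is a minimal sketch (mesh and attribute names are placeholders, not the actual Ghoul setup):

```python
# Wire a sculpted FACS shape into a blendshape node and expose it on a face control.
from maya import cmds

# add the sculpted action-unit mesh as a target on the head's blendshape node
blendshape = cmds.blendShape("AU4_browLowerer", "head_mesh", name="face_bs")[0]

# expose it as a 0-1 attribute on a face control and drive the target weight
cmds.addAttr("face_ctrl", longName="browLowerer", attributeType="double",
             minValue=0, maxValue=1, keyable=True)
cmds.connectAttr("face_ctrl.browLowerer", blendshape + ".AU4_browLowerer")
```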

For the mouth, I used a joint-based ribbon system to give the animator more flexibility when deforming the lips themselves. You can also see the extra wireframe mesh used for the rivets in this system. This ensured that the rig stayed parallel-friendly, so performance did not suffer.

The eyes are a crucial part of selling a creature, so I implemented a "Fleshy Eyes" system that deforms the area around the eye when it moves - mimicking the different muscles and tissues that cause this deformation. I started with the different FACS shapes for the eyes, such as looking up, looking down, blinking, etc. These get you far, but to also recreate the sliding of the eyelids over the eyeball, I used a shrinkwrap that pushes the eyelids onto the eyeball and lets them slide over it. For a better result I added a delta mush in front of it to relax this area. Both deformers change the model drastically, so another blendshape was needed on top to add back the detail lost to these deformers. To nail the final deformation I used myself as a reference and tried to match the deformation of a short eye-movement test.
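A rough sketch of that deformer stack (it assumes a shrinkWrap deformer called lid_shrinkWrap was already created and targeted at the eyeball; mesh names and values are placeholders):

```python
# Fleshy-eyes stack sketch: delta mush relaxes the lid area, the shrinkwrap
# slides it over the eyeball, and a corrective blendshape restores lost detail.
from maya import cmds

face = "head_mesh"

# relax the eyelid region before the shrinkwrap
mush = cmds.deltaMush(face, smoothingIterations=10, smoothingStep=0.5)[0]

# corrective shape sculpted against the shrinkwrapped result
corrective = cmds.blendShape("lid_corrective", face, name="lid_corrective_bs")[0]
cmds.setAttr(corrective + ".lid_corrective", 1)

# nudge the deformer order so mush -> shrinkwrap -> corrective evaluate in that
# order (worth double-checking the resulting order in the channel box)
cmds.reorderDeformers(mush, "lid_shrinkWrap", face)
cmds.reorderDeformers("lid_shrinkWrap", corrective, face)
```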

Muscle Simulation

To make the creature more believable and realistic, I created a full body muscle/fat/skin simulation in ZivaVFX. In the end I simulated 83 individual muscles that had been modelled by our creature modeller Till beforehand.

Normally a muscle simulation is a multi-step process - first simulating the muscles themselves, then caching them and using them as a collider in the fat simulation, and finally a skin simulation on top. I took a different approach because I had better results in the past when I coupled the muscle and fat simulations so that the fat could react to the muscles and vice versa. This resulted in better deformations and fewer collision errors, but was harder to tweak as you always needed the fat in the simulation, which increased the simulation times.

Below is the final result and also a breakdown of the individual layers:

I started by converting the bones of the skeleton to Ziva bones/collider meshes and the muscle and fat meshes to Ziva tissues. Then I constrained each muscle to its origin and insertion point with fixed attachments, and used a mix of fixed and sliding attachments between the muscles to shape their behaviour during the simulation, e.g. to make them stick together. You can see all the constraints used in the first image - fixed attachments are red, sliding ones are purple/blue.
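For anyone curious what those steps look like in script form, here is a rough sketch using the ziva MEL command (flags and attribute names as I recall them from the Ziva VFX documentation; mesh names are placeholders, and in production much of this goes through the UI or zBuilder):

```python
# Rough Ziva setup sketch: solver, bones, tissues and one attachment.
from maya import cmds, mel

# create a solver for the scene
mel.eval("ziva -s")

# bones: the anatomical skeleton meshes become colliders
cmds.select("skeleton_geo")
mel.eval("ziva -b")

# tissues: every muscle mesh becomes a simulated Ziva tissue
for muscle in cmds.ls("*_muscle_geo"):
    cmds.select(muscle)
    mel.eval("ziva -t")

# attachment: select the tissue first, then the body it attaches to
cmds.select(["biceps_muscle_geo", "humerus_geo"])
mel.eval("ziva -a")

# switch the new attachment to sliding (attribute name from memory: 1 = fixed, 2 = sliding)
attachment = cmds.ls(type="zAttachment")[-1]
cmds.setAttr(attachment + ".attachmentMode", 2)
```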

Ziva fibers are then added, which mimic muscle fibers and allow the tissue to contract - resulting in the typical muscle behaviour you would expect. You can see these fibers as the lines in the second image. They represent the direction of contraction - so when the muscle is excited, it contracts along those lines. Although Ziva creates these directions automatically, I checked them carefully - how good they are depends on the muscle model itself, and they can sometimes be off, resulting in a contraction that isn't quite what I want. But as with most things in Ziva, you can adjust this by simply painting the attribute correctly.

Speaking of painting, another important part of Ziva is the materials, which describe how the muscle behaves during the simulation, such as how stiff or soft it is and how easily it compresses. Setting these attributes is mostly a matter of trial and error, but after a while you get a feel for what values to start with. For more realistic results, I added tendon materials to some muscles where the tendon is quite prominent, such as the triceps. Tendons are much stiffer than muscles and need different material values to simulate them correctly. As with all areas - reference is key, so I carefully compared my setup with various anatomy books, animal videos and videos of people contracting their muscles and made adjustments accordingly.

The contraction of the muscles happens automatically, driven by curves attached to the animated skeleton. Most muscles have their own curve because they have different functions/triggers. When the curve shortens, the muscle contracts - as you can see below. But in reality the muscle contraction is what moves the skeleton, so I cached these curves and imported them again with a slight offset to create the impression that the muscle actually moves the skeleton and not vice versa. Below the curves you can see the triggering of each muscle - red means the muscle is not contracted and yellow means it is fully contracted.
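The curve-driven contraction boils down to mapping how much a curve has shortened onto the fiber's excitation. A simplified, baked-per-frame sketch (the zFiber excitation attribute name is from memory; curve/fiber names, frame range and rest length are placeholders):

```python
# Drive a fiber's excitation from the shortening of its attached curve.
from maya import cmds


def set_excitation_from_curve(curve, fiber, rest_length, gain=2.0):
    # how much shorter the curve is compared to rest drives the contraction
    current_length = cmds.arclen(curve)
    shortening = max(0.0, (rest_length - current_length) / rest_length)
    excitation = min(1.0, shortening * gain)
    cmds.setAttr(fiber + ".excitation", excitation)


# bake per frame, e.g. for a biceps curve/fiber pair
for frame in range(1001, 1101):
    cmds.currentTime(frame)
    set_excitation_from_curve("biceps_crv", "zFiber_biceps", rest_length=12.0)
    cmds.setKeyframe("zFiber_biceps.excitation")
```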

With the muscles set up, it was time for the fat tissue. But before I could start, I had to model the fat tissue first. To do this, I used the creature's body/skin mesh and adjusted it for the simulation, e.g. fixing topology errors and closing holes. Then I pushed this mesh towards the underlying anatomy with a cloth simulation in Ziva to get my inner fat mesh - much like using a shrinkwrap. Combining these two meshes results in the final fat mesh, ready for simulation. The thickness of the fat mesh is important in the simulation, so we kept this in mind when creating the body model itself.

For the fat simulation itself, I used different materials to shape the simulated deformation. The inner fat mesh gets its own material with a high pressure value so that it sticks to the muscles and slides over them. But since I'm coupling the simulation, I can't use really high pressure values, because that would push the muscle anatomy inwards, which would end up making the body mesh slimmer. To compensate for this, and also to adjust the deformation in certain areas, I used a number of different sliding attachments to stick the fat to the muscles. In the end you get nice sliding over the inner anatomy as you can see below.

The most challenging part was the sliding of the fat over the ribcage. The problem was that the shape of the ribcage was already modelled into the fat mesh, so when the fat slid over the ribcage in the simulation it looked as if the ribcage itself was deforming, which shouldn't be the case. To solve this, I smoothed this region on the fat mesh so that the ribcage shape was no longer baked in, and let the simulation create that deformation by itself.

After the fat simulation, the result needed to be transferred back onto the animation, since the head, hands and feet weren't simulated. To do this, I wrap-deform a copy of the skin mesh to the simulated fat and use it as a blendshape target on the animated skin mesh from the animation rig. By painting the target weights I can blend between the deformation of the animation rig and the muscle simulation - for the head, hands and feet the animation rig is used, the rest is the muscle simulation. Below you can see a comparison between the two deformations. It's important that the difference stays small, because you don't want to completely change the deformation the animator created.
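The blending itself can be done with the standard per-vertex target weights of a blendShape node; a small sketch (mesh names and vertex ids are placeholders):

```python
# Blend the simulation back over the animation rig's mesh via painted target weights.
from maya import cmds

anim_mesh = "ghoul_body_anim"
sim_target = "ghoul_body_sim_wrapped"

bs = cmds.blendShape(sim_target, anim_mesh, name="simTransfer_bs")[0]
cmds.setAttr(bs + "." + sim_target, 1)

# per-vertex weights of the first target: 0 = keep the animation rig,
# 1 = take the muscle simulation (here some head vertices fall back to the rig)
head_vertices = [12, 13, 14]  # placeholder vertex ids
for vtx in head_vertices:
    cmds.setAttr("{}.inputTarget[0].inputTargetGroup[0].targetWeights[{}]".format(bs, vtx), 0.0)
```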

The last step was to simulate the skin. I used the transferred deformation as a base and applied a denser skin mesh to it so I could create finer wrinkles. By using different materials, I can change the size of the wrinkles, which are generated dynamically depending on the deformation of the fat mesh. The effect of the skin simulation is subtle, but it definitely adds to the realism.

Inverse Rig Mapping


As part of the Technical Directing course at the Filmakademie Baden-Württemberg we have to do a research project on a topic of our choice. I wanted to do something rigging related, but also wanted to dive into machine learning. Luckily I found an interesting paper by Daniel Holden called "Learning an Inverse Rig Mapping for Character Animation" in which he uses machine learning to basically map skeletal animation back to rigs.

Rigs are pretty complex systems with a lot of different controls and attributes that ultimately control the transformations of the joints - so basically taking something complex and reducing it to something simple. Going back from the skeletal/joint animation to the rig is therefore quite tricky, as you have to guess which attributes and controls are used for the specific joint. It often involves custom solutions that may only work for a specific rig, or reducing the rig itself to make remapping easier.

Machine learning can solve this problem. You can use it to learn the relationship between the controls (and their attributes) and the joints. After training an ML model on this relationship, you can take animated joints and predict which rig controls and attribute values were used to produce them. This results in a more flexible and stable solution when working with skeletal animation data, e.g. motion capture or animation synthesis.

I'm still working on it, but you can see the current state below.


To get started, you need data - as with anything AI related. So I created an algorithm that automatically generates random poses from the selected rig controls and joints and exports them as two dataframes, one for the rig controls and one for the joints. For the rotations I use rotation matrices, because Euler angles are hard to train on - they are discontinuous and flip at certain angles. Below you can see parts of the generated dataframe and how it is generated in Maya.
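A simplified sketch of that sampling loop (control/joint names, ranges and file paths are placeholders; pandas is assumed to be available in the Maya interpreter):

```python
# Randomize the rig controls, read the joint world matrices, collect both
# sides into dataframes for training.
import random
import pandas as pd
from maya import cmds

controls = ["ik_arm_ctrl.tx", "ik_arm_ctrl.ty", "ik_arm_ctrl.tz"]
joints = ["shoulder_jnt", "elbow_jnt", "wrist_jnt"]

ctrl_rows, joint_rows = [], []
for _ in range(2000):
    # random pose on the controls, within a sensible range
    pose = {attr: random.uniform(-10.0, 10.0) for attr in controls}
    for attr, value in pose.items():
        cmds.setAttr(attr, value)

    # read the resulting joint world matrices (16 floats per joint, no Euler flips)
    row = {}
    for jnt in joints:
        matrix = cmds.xform(jnt, query=True, worldSpace=True, matrix=True)
        row.update({"{}_m{}".format(jnt, i): v for i, v in enumerate(matrix)})

    ctrl_rows.append(pose)
    joint_rows.append(row)

pd.DataFrame(ctrl_rows).to_csv("ctrl_data.csv", index=False)
pd.DataFrame(joint_rows).to_csv("joint_data.csv", index=False)
```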

These dataframes are then used to train a Gaussian process regression model in PyTorch using the GPyTorch library. Since adding extra Python libraries to Maya's Python environment can be quite tedious, I run the training in a subprocess with a separate interpreter that already has all the necessary libraries. Before training, I convert the values of the dataframes into PyTorch tensors and normalize the input data, i.e. the joints, to be between -1 and 1 to improve the learning performance. After that, the model is trained, which only takes a short time, and is exported as a TorchScript file so that it can be used later without re-training. You can see the training process below - the main work is done in the background:
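For reference, here is a minimal GPyTorch exact-GP regression following the standard multitask pattern from the library's documentation - the kernel and likelihood choices are my assumptions rather than the exact setup used here, and I save plain weights instead of the TorchScript export described above:

```python
# Minimal multitask exact-GP regression sketch with GPyTorch.
import torch
import gpytorch


class RigGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, num_tasks):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.MultitaskMean(gpytorch.means.ConstantMean(), num_tasks=num_tasks)
        self.covar_module = gpytorch.kernels.MultitaskKernel(gpytorch.kernels.RBFKernel(), num_tasks=num_tasks)

    def forward(self, x):
        mean = self.mean_module(x)
        covar = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean, covar)


# train_x: normalized joint features (N x D), train_y: rig control values (N x T)
train_x = torch.rand(500, 48) * 2 - 1
train_y = torch.rand(500, 9)

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=train_y.shape[1])
model = RigGP(train_x, train_y, likelihood, num_tasks=train_y.shape[1])

model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "rig_gp.pth")  # stand-in for the TorchScript export
```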

After training, I can use the trained model to predict the rig control values. To do this, the selected animation is exported as a dataframe and fed into the prediction algorithm, which exports another dataframe. This is then used in Maya to apply the predicted values to the various rig control attributes. In the end, the rig animation (blue) should match the skeletal data (red), which it almost does, as you can see below.
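Applying the prediction back in Maya is then just a matter of keying the dataframe values onto the control attributes; a small sketch (file and column names are placeholders matching the earlier export):

```python
# Read the predicted dataframe and key the values onto the rig controls per frame.
import pandas as pd
from maya import cmds

predictions = pd.read_csv("predicted_ctrl_data.csv")  # one row per frame

for frame, row in enumerate(predictions.itertuples(index=False), start=1001):
    cmds.currentTime(frame)
    for attr, value in zip(predictions.columns, row):
        cmds.setAttr(attr, value)
        cmds.setKeyframe(attr)
```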

Currently, the whole system only works with smaller, simpler rigs like this IK arm rig, but my goal is to eventually have it work with full body rigs.

This concludes my entry!

Big thank you for making it this far! I hope you enjoyed it! :)


