Lighting Demo Reel - Laura Ludwig
by lauraludwig on 11 May 2022 for Rookie Awards 2022

Welcome to my entry for this year's Rookie Awards! Here I will showcase and break down the lighting demo reel I created during my education at PIXL VISN media arts academy last year.


Hello everyone! In this entry I'd like to showcase my student demo reel which I created during my 18 months of education at PIXL VISN media arts academy. I'll try to give a detailed breakdown of three projects and some deeper insights into my process of creating those projects. If you're interested in a breakdown of the rest of my reel, feel free to check out my entry from last year!

The Bay

When I started planning my demo reel, I made a list of different aspects and skills I wanted to cover and showcase. One of these was creating and setting up a matte painting in Nuke, so I thought about what kind of project would be suitable for this. I figured it would need a larger scale than my other projects, so I could create different layers for the background that would make sense and behave more naturally than in a project where everything is super close, with a lot of parallax. As I had already been carrying around the idea of a project with a ship for a long time, I decided to go with that.

Getting started | Preparation and Layout

I already had a certain image in my head, consisting of a ship in a bay surrounded by big rocks, so I looked up different artworks as well as photography of ships and boats in bays for reference.

On Artstation, I found some artworks that were very similar to my idea so I found them to be good reference for my overall layout and composition. I looked at all my collected reference and picked the parts of each image that I liked the most.

As some of these references were so close to what I had in mind, it was important to me to pick the elements I like while still creating my own, individual artwork.

In Maya, I started to create a loose blockout. As I wanted the ship to be the focal element of the scene, I downloaded a model from the internet and began the blockout by placing it, to make sure the camera was close to what I wanted it to be in the final image. Initially, I set the camera at a really low angle, which I later changed to face more downwards to create a better sense of scale.

Having a rough layout, I started modeling the rocks. For the base, I just modified my blockout models very roughly to get some bigger variations in shape. After that, I brought them into ZBrush and used different rock alphas to create all the specific shapes. I exported the displacement maps and brought them back into Maya. To cover the rocks with lots of vegetation, as they would be in real life, I placed different trees and bushes across their surfaces. Since they needed to be placed in a specific way, always facing upwards and partly intersecting the mesh, I decided to place everything by hand. This was a time-consuming process, but it gave me a lot of freedom and control and an overall more natural result, because visually it worked much better than just scattering them procedurally. For more efficiency while rendering and working, all of the trees and bushes were converted to .ass files (Arnold StandIns) beforehand, so not every single one needed to be calculated separately for the render.

Composition-wise, this project is really driven by leading lines. All the rocks' edges run in a similar direction, creating a space in the middle of the image that follows this direction down to the ship. Other lines lead to the ship as well: it cuts right through the horizon line and creates a long, bright reflection on the water that points towards it in a straight line. Because of all these elements, lines point towards the ship from all directions to really set the focus there. Another point that strengthens this focus is that the ship is a dark (but, because of its lights, colorful) asset on top of a brighter, more desaturated and slightly defocused background, so it stands out more.

The Surfacing | Shading, Texturing and Lookdev

For the texturing of the rocks, I simply projected different texture scans of rocks onto them with triplanar projection and blended them with a soft noise. This was already enough to get nice-looking rock textures, especially considering that most of the surface is covered with trees anyway.

As there weren’t any textures provided for the ship, I textured it quickly in Substance Painter with some basic materials and adjusted everything later in the lookdev process, especially depending on the lighting conditions of the scene. To give the sails more of a used look, I added some noises on top of the base color.

It was tricky to get the water the way I wanted it. As a base, I used a big plane on which I simulated basic waves with the BOSS system. My computer wasn't able to handle all the fine details, so I created them by layering multiple noises with different frequencies and sizes on top of each other and used the result as displacement. I gave the water a rather basic shader with some color and depth. Since the correct calculation and look of depth depends on it, I created another plane below to imitate the ground.
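Layering noises of increasing frequency and decreasing amplitude like this is essentially fractal (fBm-style) noise. As a rough illustration, here is a minimal pure-Python sketch; the hash inside `value_noise_1d` and all parameter values are invented for demonstration and have nothing to do with Maya's actual noise nodes:

```python
import math

def value_noise_1d(x, seed=0):
    """Deterministic pseudo-random value in [0, 1] at integer lattice points,
    smoothly interpolated between them (a simple 1D value noise)."""
    def lattice(i):
        # Toy integer hash; any decent hash works here.
        h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    i0 = math.floor(x)
    t = x - i0
    t = t * t * (3 - 2 * t)          # smoothstep for softer blending
    return lattice(i0) * (1 - t) + lattice(i0 + 1) * t

def layered_displacement(x, octaves=4, base_freq=1.0, base_amp=1.0,
                         gain=0.5, lacunarity=2.0):
    """Sum several noise octaves: each octave raises the frequency and
    lowers the amplitude, adding finer and finer wave detail."""
    height = 0.0
    freq, amp = base_freq, base_amp
    for o in range(octaves):
        height += amp * value_noise_1d(x * freq, seed=o)
        freq *= lacunarity
        amp *= gain
    return height
```

Sampling `layered_displacement` across the plane would give the kind of height values that get plugged into displacement: the low octaves form the big swells, the high octaves the fine ripples.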

For the shore parts on the left, I added a sand material from Quixel Megascans and modified the texture inside Substance Painter. Since the parts at the water's edge get wet, I painted a kind of "waterline" onto it, which I gave a darker color and lower roughness to recreate the wetness effect.

Creating the mood | Lighting, Rendering, Compositing

With all the textures and shaders ready, I started with the lighting. I wanted a very soft, hazy and foggy feeling to the scene with a mysterious atmosphere. As this kind of lighting doesn’t really have a significant key light to it, I simply added a skydome with an overcast sky HDRI to get very soft, diffused lighting. Since the lower side of the rock on the left-hand side was pretty dark, I added some fill lights to lift the darks in that area. I also added a big fill light to the front of the ship as it was really dark and hard to read with little shaping.

Now that I had all my basic natural lights set, I moved on and added some warm lanterns to the ship and put lights into the inner part of the ship. This really helped to emphasize the subsurface scattering of the sails. When I placed the lights for the lanterns, I had to make sure there weren't any bright, distracting light reflections in the water, so I carefully placed them in certain spots. To integrate the ship's light better into the scene, I added a warm light to the big rock. It's located in the same area as the ship and moves along with it.

Some time later, I realized that a part of the palm on the right-hand side was really in shadow, so I added another fill light to lift it.

At that point, I was pretty satisfied with the lighting and had the feeling that adding more lights would only make it feel artificial. To reach the foggy and mysterious look I was going for, I added some atmosphere/fog. For this, I tried different approaches, like using aiAtmosphereVolume, aiFog or real fluids. Each had advantages and disadvantages: the fluids, for example, weren't affected by my lights, so I had less control, which I didn't like. I decided to go with a mix of aiAtmosphereVolume and aiFog, which gave me an overall hazy feel to the scene as well as a separate fog layer that sits just above the water. To make the atmosphere a bit less uniform, I connected a noise to the density so everything is more broken up, as it would be in the real world.
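Conceptually, a fog layer that hugs the water and gets broken up by a noise comes down to a height-based density modulated by a noise value. A toy Python sketch of that idea (this is not Arnold's actual aiAtmosphereVolume/aiFog math; the function and parameter names are invented):

```python
import math

def fog_density(height, base_density=0.15, falloff=0.8, noise=1.0):
    """Height-based fog: densest at the water line, thinning out
    exponentially with height, and modulated by a noise value in [0, 1]
    to break up the uniformity."""
    layer = base_density * math.exp(-falloff * max(height, 0.0))
    return layer * (0.5 + 0.5 * noise)   # noise remaps density to 50-100%
```

Feeding a 3D noise into the `noise` argument per sample point is what keeps the fog from looking like a perfectly even gradient.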

To have total control, I rendered out all my shading-related AOVs for each light. In Nuke, I first added the AOVs of each light group together to recreate the light and then added the different lights on top of each other. I slightly graded some, did color correction on some assets and added my atmosphere and fog on top. Overall, I tried to match the colors and mood from my references. I noticed that most of them are very desaturated and have a lot of blues and greens in them.
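The rebuild is purely additive: each light group is the sum of its shading AOVs, and the beauty is the sum of the light groups, so grading a light before the final sum effectively relights the image. In Nuke this is a tree of Shuffle and Merge (plus) nodes; the toy Python version below uses made-up pixel values just to show the arithmetic:

```python
# Toy single-pixel values for two light groups, each split into shading AOVs.
aovs = {
    "key":  {"diffuse": 0.30, "specular": 0.10, "sss": 0.05},
    "fill": {"diffuse": 0.12, "specular": 0.02, "sss": 0.01},
}

def rebuild_light(light_aovs, gain=1.0):
    """Re-assemble one light group by adding its AOVs, with an optional grade."""
    return gain * sum(light_aovs.values())

def rebuild_beauty(aovs, gains=None):
    """Add the re-assembled light groups together to get the beauty back."""
    gains = gains or {}
    return sum(rebuild_light(v, gains.get(k, 1.0)) for k, v in aovs.items())
```

Passing a gain for one light (say, halving the key) changes only that light's contribution, which is exactly the control this workflow buys.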

The next step was the matte painting. For this, I collected a set of different photographs of cliffs, bays and big rocks with trees and plants.

In Affinity Photo I edited everything together. I isolated the parts of each image that I wanted to use and built up a background consisting of different layers. The biggest problems with this were, for one, the image quality, which I noticed was too low compared to my render, and also that I wasn't able to achieve the feeling of depth I wanted. Despite these issues, I didn't have the time to redo everything, and I also couldn't find suitable images of higher quality.

When I was ready, I imported my files into Nuke and projected each layer onto a sphere. At first, I used cards, but found it tricky to align them with my render camera, which was rotated on several axes. So I decided to project everything onto spheres instead, which worked fine. For matte paintings, it is especially important to have logical distances between the layers to avoid overly strong parallax effects. I adjusted everything the way I thought it should be, but then noticed that my camera movement is so small that you can't really tell whether there's a big parallax effect. Still, I learned a lot from the process, which is very valuable to me.
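How little parallax a small camera move produces can be estimated with a simple pinhole-camera formula. The focal length, sensor and resolution numbers below are arbitrary assumptions, not the values of my actual render camera:

```python
def parallax_shift_px(camera_move_m, layer_distance_m, focal_mm=35.0,
                      sensor_width_mm=36.0, image_width_px=2048):
    """Horizontal image shift (in pixels) of a layer at a given distance
    when the camera translates sideways by camera_move_m."""
    focal_px = focal_mm / sensor_width_mm * image_width_px
    return focal_px * camera_move_m / layer_distance_m
```

With these numbers, a 0.5 m sideways move shifts a layer 200 m away by about 5 px and one 800 m away by barely 1 px, which is why a small camera move makes the layer spacing hard to judge in the first place.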

After that, I added depth of field to my render and background together so they would match. Initially, I wanted to use PGBokeh for this to get physically correct depth of field, but I found that it doesn't work well with alpha maps. To add a bit more movement and another element of realism to the scene, I got some green-screen footage of flying birds from the internet and integrated them into my scene. At the end, I added some final grading as well as lens effects like chromatic aberration and lens distortion.

This project held quite a few learning opportunities for me, like using StandIns for the first time and creating and setting up a matte painting. One of the biggest challenges was definitely rendering the water: I had to use really high render settings and still got noise in it because of the camera's low angle to the water. As I wanted to keep all the detail in the front, I couldn't apply much denoising to it. Another challenge was the placement of the trees: at first I just took trees with a bigger footprint, rotated them a lot so they were pretty much parallel to the rocks, and placed them to cover a lot of surface at once. However, this didn't look good at all, and I realized I was completely ignoring my references, so I did everything again, this time following my reference more closely.

All in all, I’m satisfied with how the project turned out, but I know that there is always room for improvement.

For this project, I used Maya, Arnold, ZBrush, Substance Painter, Mari, Affinity Photo and Nuke. I rendered everything in ACES colorspace.

CG Integration

As lighting artists in VFX have to match the live-action plates they are given, I wanted to create a CG integration for my reel, to show that I can follow this workflow and recreate it myself.

Getting started | Preparation and Shooting

I started off by planning the scene as well as the CG object I wanted to integrate. For my asset, I wanted to choose something rather simple and casual, so it would be easier and more logical to integrate into the scene and wouldn't stand out right away. Eventually, I decided on creating a scene with different stationery and office supplies and integrating a simple pen into it. Additionally, I decided to create and integrate a pin. It's pretty obvious that the pen is my integrated asset, as the camera movement is really centered on it and the rest of the scene is built around it, which was probably not the best decision. With the pin, I was hoping to take the viewer by surprise by revealing another CG object that sits among a group of its kind and therefore doesn't really stand out.

I gathered a few items related to the theme I chose and laid them out on a table while estimating where my CG objects were going to be placed later on. In these spots, I placed tracking markers that would help me track the footage in Nuke.

I took several different shots with the camera, most of them were simple pans from left to right, and I tried to keep the camera as steady and smooth as possible. Later, I stabilized my footage in After Effects to have it a bit smoother.

Let there be Light! | Creating the HDRI and Lighting

The next step was the creation of my HDRI. During my research, I found three different approaches: shooting a chrome ball, taking photos from different angles to capture the full 360° and stitching them together, or using a 360° camera right away. They all have advantages and disadvantages: the chrome-ball solution is the most inaccurate, as the ball has natural grime and scratches that influence the quality of the HDRI. Also, to extract the HDRI from the chrome ball, you have to unwrap it in Photoshop, which is not the most accurate method either. Taking multiple pictures and stitching them together gives the best quality due to the high-resolution images being combined, but it is more work because you have to adjust the camera time and time again.
Because I had the opportunity to use a 360° camera, I went with the third method. The resolution isn't as high, but for my scene this was totally fine and actually helped with optimization. I placed the camera at the exact spot where my pen was going to be located, left the room and took the pictures remotely.

The most important step in creating the HDRI was to shoot with multiple exposure settings, some very low and some very high, which I could then merge together to create an HDRI: a high dynamic range image. This was important to capture accurate lighting with full, unclamped light values. Overall, I took a range of 13 different images to catch all the highlights and shadows.
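The merge itself is a weighted average of the radiance estimated from each bracket (pixel value divided by shutter time), where a hat-shaped weight trusts mid-range values and ignores clipped or black pixels. This is a simplified per-pixel sketch in Python; a real merge (e.g. Debevec-style) also accounts for the camera's response curve:

```python
def merge_exposures(pixels, weight=lambda z: 1.0 - abs(2.0 * z - 1.0)):
    """Merge one pixel from several brackets into a single radiance value.

    pixels: list of (value, shutter_seconds) pairs, value in [0, 1].
    Each bracket estimates radiance as value / shutter; the hat weight
    is 1 at mid-grey and 0 at pure black or a clipped white."""
    num = den = 0.0
    for value, shutter in pixels:
        w = weight(value)
        num += w * (value / shutter)
        den += w
    return num / den if den > 0 else 0.0
```

A pixel that clips in the long exposure simply gets weight zero there, so its radiance comes entirely from the shorter brackets.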

When using an HDRI for lighting, you don't want to use the direct lighting information that is baked into it, as it doesn't allow any further control. So, I extracted all of my light sources from the image and saved them as HDRIs to use as light textures later on. By doing this, you are able to separate the indirect from the direct lighting and adjust it during the lighting/scene setup. To prevent any doubling of the light, I painted out my light sources and created a cleaned-up version of my HDRI.
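In practice this split is done by cropping and painting in an image editor, but the core idea can be sketched as a threshold on the pixel values: everything very bright goes into the light textures, and gets painted down in the cleaned dome. The toy Python version below works on nested lists standing in for a tiny single-channel lat-long image; the threshold value is arbitrary:

```python
def extract_lights(hdri, threshold=10.0):
    """Split a (toy) HDRI into a 'lights' image holding only pixels above
    the threshold and a 'cleaned' image with those pixels knocked down to
    the threshold (a crude stand-in for painting the lights out), so the
    light isn't doubled when both images are used together."""
    lights, cleaned = [], []
    for row in hdri:
        light_row, clean_row = [], []
        for v in row:
            if v > threshold:
                light_row.append(v)
                clean_row.append(threshold)
            else:
                light_row.append(0.0)
                clean_row.append(v)
        lights.append(light_row)
        cleaned.append(clean_row)
    return lights, cleaned
```

The extracted bright regions then become textures on area lights, while the cleaned dome keeps supplying the soft ambient fill.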

The next important step would have been to neutralize the HDRI and the plate, or to grade the HDRI to the plate, so that they match up, especially since they were shot on different cameras. Unfortunately, I didn't know or think about this step back then, which made the result imprecise and created more comp work.

When my HDRIs were all set, I brought my footage into Nuke and used a planar tracker to track the position of the markers on the table and book. As the objects I wanted to integrate weren't as big as the markers and didn't cover them, I had to create a clean plate. For this, I viewed a frame on which I could see the markers and used a rotopaint node and its clone tool to paint over the markers. With the tracking from before, I was able to match the paint over to the movement of the markers.

After that, I created a camera tracker to recreate my camera movement and export it to Maya. I adjusted the camera tracker until it had a good balance of accuracy versus the number of points. I also selected multiple points on the table and set them as the ground plane to get a correct alignment to the camera. I brought the camera into Maya and constrained my footage to it. I then recreated the basic geo of objects that were significant to the scene, like the integrated assets and the reflection and shadow catchers. For everything to work correctly, I tried to keep all models at real-world scale. For the shadow catchers, I assigned an aiShadowMatte material, so that shadows are cast correctly and behave as if they fell on the real objects. For the reflection catchers, I created a chrome material that catches all the reflections. To extract only the reflections created by my CG objects, the secondary specular bounces, I just had to shuffle out the specular indirect pass.

For full control, I separated everything into render layers: pen, pen shadow, pen reflections, pin, pin shadow and pin reflections.

When I got to the lighting, I created area lights and plugged in the textures that I extracted from my HDRI. It was important to have them in the correct scale and position as in the real world. Usually you'd shoot a chrome and grey ball on set as a reference for your light positions, intensities and colors, but unfortunately I didn't take this step.

To be able to adjust everything later in Nuke, I rendered out AOVs for every light.

The finishing touches | Compositing

In Nuke it was mainly about bringing everything together. I took my clean plate as a base, then I started by layering everything on top: the pen shadow, beauty and reflection and then the pin shadow, beauty and reflection pass. Of course, I couldn't just take the raw render and leave it as is, but had to integrate it by matching the quality and defocus, adding motion blur as well as grain. I also did some slight color grading to match everything better and worked on the shadow intensity. All those steps required a lot of fine-tuning and took some time to really find the best values. I added a final grade to everything and rendered it out.
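Layering each premultiplied pass on top of the plate is the classic "over" operation: the foreground plus the background scaled by whatever the foreground's alpha leaves uncovered. A minimal sketch for one RGB pixel:

```python
def over(fg_rgb, fg_a, bg_rgb):
    """Premultiplied 'over': foreground plus background scaled by what the
    foreground's alpha leaves uncovered."""
    return tuple(f + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bg_rgb))
```

With alpha 1 the foreground fully replaces the background; with alpha 0 the background shows through untouched, which is why clean alphas in the shadow and reflection passes matter so much.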

Conclusion

As I never really learned the correct workflow for creating an HDRI and using it to integrate something into a plate, the whole process was very new to me, which is why this project was in a way one of the hardest for me. I had to research a lot but didn't really find a proper, in-depth workflow explanation, which is why, from my current point of view, I can see a lot of room for improvement. However, I see this as something positive, because I learned a lot along the way as well as afterwards, and if I were to do a CG integration again, I know several things I could improve. I'm glad I decided to do this project: now that I'm working in VFX, those tasks are part of my daily work, and going through this process once completely on my own strengthened my understanding of it a lot.

For this project I used Maya, Arnold, Nuke, Affinity Photo and After Effects.

Chester the Chimpanzee

This project was actually not something I had planned for my demo reel until a few days before we started working on it, but it was still a good opportunity to showcase different skills and techniques, like managing and lighting fur. My teammates Stefan Klosterkötter (Texturing) and Jessica Wicher (Grooming) approached me with the idea of creating a scene with a realistic chimpanzee, and I decided to cover the layout, lighting and compositing.

Getting started | Preparation and Layout

With this idea in mind, we started collecting references for the overall scene and the setting the shot should take place in. At first, we planned to create a very young chimpanzee, but to show more variation and complexity in the texturing and grooming, we decided to go for an adult chimpanzee.

Since none of us planned to specialize in modeling, we got a detailed, high-quality model from Christian Leitner. We searched for layout ideas, especially photography as we wanted to go towards realism. At one point we found a concept we really liked and which we wanted to follow.

As we had this image as our main layout reference, there wasn't much left to change in the layout. Our shot is a bit wider than the reference, so I tried to place the chimp on the screen-right third rather than simply having him sit in the middle of the frame. For the plants, I tried to follow the style of the reference: some very defocused plants and leaves in the foreground, some sharper than others, and a heavily defocused background. To get a better feeling of the depth, I frequently activated the in-camera depth of field and rendered with really low settings.

I tried to match the pose as closely as possible, but since the head shape is really different in some areas, one eye wasn't fully visible anymore, which didn't look very pleasing. So, I rotated the head a bit until I could see both eyes properly.

The Surfacing | Shading, Texturing and Lookdev

All the plants and trees we used were Quixel Megascans assets, so they came with textures. This time, I did not create a separate lookdev scene for every asset like I usually do, but only for a few, and applied those adjustments to the other assets, as they were all similar and a lot of them were defocused anyway, so you couldn't really tell the small differences. Obviously, I checked the render to see if everything looked good and didn't just leave it blindly.

To make the tree in the front, which the chimp is holding onto, look more interesting, I added moss on top of it. For this, I used XGen interactive grooming and utilized the density brush to remove it in certain areas so it looks more organic. I also layered several shades of green on top of each other and blended them with noises to get a good, organic color breakup and to not have everything look so uniform.

The chimp's texturing and main lookdev were done by Stefan Klosterkötter, so I only made very small changes to adjust it to the lighting in the scene. For example, I slightly reduced the SSS amount, as it was too intense with our strong light.

Very new to me was the hair shading, as I had never really worked with hair/fur before, except for my scorpion project. At the beginning, I simply used melanin, redness and roughness, but the result was completely different from my reference. It was way too soft and didn't reflect the light on all the different strands, even though the roughness was really low.

For my final outcome, I didn't use melanin and redness at all, but instead controlled everything through base color, IOR, transparency tint and roughness. The base color drove the overall color, IOR and roughness created the reflectivity I was looking for, and transparency tint regulated the brightness of the hair. I tweaked the parameters until I roughly reached the look of the reference. Of course, it was no 1:1 match, since the look depends heavily on the grooming, and the groom in the reference was longer, wilder and noisier.


Creating the mood | Lighting, Rendering, Compositing

For the lighting, I started with our main reference as a base. The most important part was to get the face lighting right, as the face is what the viewer usually looks at first. I noticed that the eyebrow area casts a significant shadow on the face, which gives it more depth. Also, the terminator of the key light runs along one line from the side of the mouth, along the cheek, up to the eyebrow area.

In addition to our main reference, I collected a lot of other references too, as this helps to get the right look. For this, I used images from documentaries as well as movies.

What I noticed and liked a lot in some of them was a strong key/side light, which a) made the heads look more interesting and better shaped and b) created a strong feeling of sunlight. As we wanted the chimp in our shot to be drawn towards and looking at the sun, I tried to implement this in our scene. This was also a significant aspect of our main reference, as you can see the strong key light on the arm.

So, I started off by placing a directional light for the sun until I was satisfied with the mix of light and shadows it created on the face. Since it's supposed to be sunlight, I gave the light a warm tint. Next, I added a skydome light with a simple, soft HDRI to fill up the dark areas and lift the shadows. To separate the chimpanzee even more from the background and to enhance the key light, I added a rim light to the arm, which has a similar color as the key light.

To get a better shaping and more depth, I added rim lights to the screen-right side, which affected the chimp's head and side of his body. To get a bit more color contrast, I colored these lights in a slightly cool tone which contrasts to the warm tones of the other lights. These rim lights aren't too bright to avoid the feeling of an artificial light source without any motivation in the scene.

As some of the parts in the front were still quite dark, I added a big soft fill light to make them a bit brighter. The important aspect while doing this was to not have the light affecting the rest of the chimp, like the face, too much so it doesn't look flat.

Since this scene was supposed to be rather natural, I didn't add any more lights contributing to the assets' lighting, to avoid the scene feeling fake and artificial. What I did, though, was add atmosphere to the scene; to have more control over it, I created two lights that contribute only to the atmosphere. By doing this, I was able to move the lights and the illuminated parts of the volume until I liked them.

One of the lights was for an overall atmospheric effect in the background. It was broad and soft so that the effect is really subtle. The other light was a spotlight to mimic god rays; to achieve this, I plugged in a tree/leaves gobo texture, which partly blocked the light and created the effect of rays. Both lights got a warm tint, as they also represent the sun, though the first, broad light has a cooler color than the second one.

To have full control and to avoid artifacts when doing depth of field in Nuke, I rendered everything out in several render layers. I created layers for the background, foreground, the middle ground which contained the chimp body and some plants, the groom, the eye reflections as well as one for each atmosphere light.

I put the groom on a separate render layer, as it is really expensive to render and I wanted to use the most efficient settings without oversampling any other assets. Groom, for example, needs a lot of specular sampling to reduce noise, whereas other parts of the image, like the plants, don't. It is very important to work as efficiently as possible and optimize your scene setup where you can. As the eyes are usually the first thing the viewer looks at, I wanted to be able to control the placement of the reflections in the eyes, as they play an important role in the look and feel of the eyes. If placed wrong, they can really distract and make the face look strange.

When creating the render layers, it was especially important to work in a very structured and organized way, as they all have to combine correctly at the end. You have to look out for assets covering each other, for the effect on the alpha channel, and for not doubling anything.

I imported everything into Nuke and started building up my beauty again. For every render layer, I used shuffle nodes to split everything into light groups, which I was then able to modify. After the light groups, I defocused the image, and since the camera moves, I also had to animate the focal point so the chimp stays properly in focus. Then, I added a grade/color correction to some layers before layering all of them on top of each other. Next, I added a background, but since the geo in my background render got heavily defocused, you couldn't really see much of the image.

After adding and adjusting the atmosphere, I added some particles to make the background area look more interesting. I added a last color grade and several lens effects like lens distortion, chromatic aberration and film grain to get a more realistic look. Overall, there wasn't a lot of compositing needed in this project, as everything was really straightforward; it was more about polishing the image and creating a nice final look.

For the look, I decided to push the warmth/sun aspect a lot more, so in the end, I moved a bit away from our main reference, but I think from an artistic point of view, it was reasonable and added a nice touch to it.


Conclusion

While working on this project, I noticed how important it is to check the compatibility of every element in the pipeline. Even though we were frequently sharing progress and feedback with each other, we noticed that not all of the parts worked as well anymore once we brought them together, so we reworked those parts.

Also, I ran into a lot of technical issues while working on and exporting the compositing, which I didn't see coming. This led to a lot of stress, as these things couldn't be troubleshot easily. It showed me that it's always important to leave enough room in your schedule for unplanned situations, no matter how unnecessary it may seem.

Considering the short time we had for this project, I am very happy with how it turned out. I learned a lot about hair shading and rendering which may come in handy in the future.

For this project I used Maya, Arnold, Nuke and I rendered everything in ACES colorspace.

Thank you for taking the time to read my entry, I hope you liked these insights into the creation of my reel!
If you have any questions, feel free to reach out to me!

https://www.linkedin.com/in/laura-ludwig-012/

https://www.artstation.com/lauraludwig

[email protected]

