Right off the bat I'll admit two things: I know that the 4K video from the GH4 is somewhat noisy at ISO 800 and above. Not deal-killer noisy, but noisy in the shadows. I'll also admit that I am not a veteran video colorist. But I'll make the point that this gap in my resumé gives me some advantages over the people who grew up in the video and motion picture film business, because it lets me come at the new paradigms of video with a cleaner slate.
Before I jump in I want to present a conundrum from our collective transition from actual film to digital and how we changed our practices. When we shot transparency film we routinely "metered for the highlights." That meant, practically, that we were frightened to overexpose our film and lose our highlights to "clear film" (the 255+ of yesteryear). We slightly underexposed our slide film to make sure we had ample detail in the highlights and we let the shadows fall where they may. Or we filled the deep shadows with light from flashes or reflectors.
When we started shooting with digital cameras we ported over the same mentality, and it made sense. If you stepped over the line at 255 you had blown highlights and they were never coming back. But digital was different from film, and weak, noisy shadows were the result when we started pulling up the shadow exposures in post processing. Then we discovered the practice of ETTR (expose to the right), which pushed us to expose brighter and move the histogram closer and closer to the right hand side of the scale to precisely nail highlights while bringing the shadows up into a usable range. Now we have raw files with lots more latitude for highlight and shadow recovery. Almost like negative film. Most of us are no longer paranoid about blown highlights and our images look great. It doesn't hurt that the latest Sony sensors are beasts when it comes to the lower part of the tonal scale and resist noise almost as effectively as my wallet resists hundred dollar bills. We're through with the last century methodology of shooting and processing. We have successfully changed the way we shoot and process, and we get better quality images as a result.
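For the numerically inclined, here's a toy sketch of why ETTR wins in the shadows. This is my own crude illustration, not a real sensor model: the point is simply that pushing an underexposed file brighter in post multiplies the noise floor along with the signal, while exposing brighter in camera and pulling back down in post does not.

```python
import numpy as np

# Toy model (not real sensor physics): a deep-shadow tone plus a fixed
# per-exposure noise floor, handled two different ways.
rng = np.random.default_rng(0)

signal = 20.0                         # a deep-shadow tone, arbitrary sensor units
read_noise = rng.normal(0, 4, 10000)  # fixed noise floor, same for both exposures

# Underexposed shot, pushed 2 stops (x4) in post: the noise is multiplied too.
pushed = (signal + read_noise) * 4

# ETTR shot, 2 stops brighter in camera, pulled back down in post:
# the signal grew at capture time, the noise floor did not.
ettr = (signal * 4 + read_noise) / 4

snr_pushed = pushed.mean() / pushed.std()
snr_ettr = ettr.mean() / ettr.std()
print(snr_pushed, snr_ettr)  # the ETTR shadows come out markedly cleaner
```

Same final brightness in both cases; the only difference is where the brightening happened.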
So, what does this have to do with the GH4 and video? Well, the GH4 gets slammed for two things. The first is noise, and the second is that the files don't do well in the older and still pervasive paradigm of video shooting and post processing. If you thought still photographers in the early days of digital were a bit nervous about losing highlight detail, the video guys were scared to death about having too much contrast and too much saturation in their files. Saturation was an issue because once it was baked into a file it was hard to squeeze that file's normal gamut into the extremely tiny gamuts the broadcast standards demanded. The same with contrast. The tiny sensors that most video cameras used fell apart in high contrast scenes.
At the same time, film-shooting cinematographers got used to using negative stock to shoot features. They could slightly (or profoundly) overexpose the color negative stock while shooting and then compensate in development to create an image that was more or less bulletproof against overexposure, or too much contrast, or too much saturation. That made it easier to shoot contrasty scenes, and the idea was that color saturation and overall contrast could be more easily handled in post processing.
Now we're in the future. We have cameras that can shoot pretty wide-ranging scenes without special handling. And the newest computer monitors can deliver two or three (or four) stops more dynamic range than CRTs and old TVs. But what the old schoolers are doing is setting up their new video cameras the same way they did in the bad old days: dialed to their lowest contrast, lowest sharpness and lowest color saturation levels, in effect sucking 90% of the information out of the files... and then bitching when they can't restructure that information in Final Cut Pro X, DaVinci Resolve or Premiere to look as good as or better than the info they just sucked out. It's just a bit insane.
The video mavens are overlaying antiquated techniques onto the new tools for no other reason than THAT'S THE WAY THEY'VE ALWAYS DONE IT.
But it doesn't have to be that way. We don't have to start with damaged footage to make content that looks great on screen. I may not be as smart as the online cinematographers, but I can look at files and methodologies and run tests that tell me today's video cameras, be they Alexas or GH4s, were created to make beautiful images in the cameras. And all of the editing tools I just mentioned will take those perfectly exposed, nicely tone-mapped and medium-saturated images and make absolutely great video files. In an absolute sense they can look better, because they don't have buckets of vital information (pixel data) stripped out of them before use.
I do get the logarithmic encoding used by S-Log profiles, and I understand that it provides real increases in dynamic range, but that DR still needs to be compressed to fit most display gamuts. The problem with the old method is that it works best with huge raw files from dedicated video cameras and not as well with the more fragile files from conventional in-camera codecs. Yet those are the codecs most people will turn to for their video work at this point in time. If you are outputting 10 or 12 bit uncompressed raw files from your camera into an outboard digital recorder, you probably know what you need to do to hit your targets and you don't need my advice. But I think about half of the "flat world" videographers do things this way for... fashion, and because the higher level of voodoo tends to create a barrier to newbs.
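To make the log-profile point concrete, here's a toy curve of my own devising (this is emphatically not Sony's actual S-Log math, just the general idea): linear encoding spends almost all of its code values on the brightest stops, while a log curve gives every stop of the scene a roughly even share.

```python
import numpy as np

# 13 stops of scene dynamic range, normalized so the top stop hits 1.0.
stops = np.arange(0, 13)
linear = 2.0 ** stops / 2.0 ** stops.max()   # linear scene luminance, 0..1

# A generic log curve (illustrative only, not any camera's real transfer
# function): map the same scene values into 0..1 logarithmically.
log_encoded = np.log2(linear * 2 ** 12 + 1) / np.log2(2 ** 12 + 1)

# How much of the encoded range each stop receives:
print(np.diff(linear).round(4))       # linear: the bottom stops get almost nothing
print(np.diff(log_encoded).round(4))  # log: every stop gets a comparable share
```

That's the "real increase in dynamic range" in a nutshell: the shadows actually get code values. It also shows why the footage can't be displayed as-is; it has to be re-expanded (graded) back toward a display gamut.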
I just read a review of the GH4 written by the assistant of a famous (and very good) cinematographer who complained that the files they worked with, battered and butchered by the ancient but revered process, looked like... video. Not filmic. I would challenge this cadre and, in fact, I would challenge video experts all over the world to take the risk and embrace the modern tools exemplified by smaller cameras and DSLR cameras and use them the way they were designed to be used to deliver great results. To my mind that means creating really good looking files in the cameras and sending them into the edit universe instead of sending artificially flat and desiccated files.
In the comment section of this poo-poo video review of the GH4, the famous cinematographer repeated over and over again that the camera 'didn't make it' in the process he normally uses. But I'm also guessing he wouldn't get great results souping his transparency film in Riesling wine either. My take is that people underthink and overthink at the same time. I wonder if anyone in his shop bothered to stick a fast card into the camera and shoot some footage at its default settings. If they had used the camera as its makers intended, I suspect they would have been more (and unexpectedly) impressed. Here's my take:
Shoot at the right exposure. Make a good, accurate custom white balance. Set the saturation at its default, "natural" or "neutral" setting. Choose a contrast setting that works well without throwing out the "data babies" along with the bathwater. Then do an A-B test of this against their current methodology, with the usual post processing "magic" they like. You'll have more control over exact saturation levels in post. You might like a bit of contrast in your images; I know the viewers do. Without a true S-Log profile in-camera, shooting faux-flat is a joke, because the camera compresses those non-S-Log files with a much different and more destructive methodology. No way around it. It's like running a JPEG through a raw processor and not understanding why you can't make huge correction shifts without consequences!
As I said at the top, I am not the consummate video editor or colorist. I don't have the years of experience (and indoctrination) that many others do. I may not even be right. I could be missing a huge step here. But I do know digital files and they never come back together again at the same quality once you step on all the parameters and suck out information. Perhaps this works in a RED raw file but not in any of the consumer/prosumer cameras. And not with mainstream codecs.
Don't believe me? Try this: take your Canon, Nikon, Panasonic, etc. still camera and set every control to its absolute lowest setting. Click the adjustments all the way to the left for sharpness, contrast, and saturation. Then go and shoot a portrait or a landscape or a street scene in JPEG. Come back to the studio, open that file in Photoshop, and try to make it look like a good image using every tool in Photoshop. Should be an interesting experiment. Same with the video side of those cameras.
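My hunch about why that flattened JPEG never comes back can be sketched in a few lines. This is a deliberately crude 8-bit model of my own, not any camera's actual processing: crush the tonal range in camera, stretch it back out in post, and count how many distinct tones survive the round trip.

```python
import numpy as np

# Every possible 8-bit tonal level, 0..255.
original = np.arange(0, 256, dtype=np.uint8)

# "Flat" in-camera rendering (toy model): squeeze the whole range into ~76..178.
flat = (original * 0.4 + 76).astype(np.uint8)

# Post production: stretch it back out to full range.
restored = np.clip((flat.astype(float) - 76) / 0.4, 0, 255).astype(np.uint8)

print(len(np.unique(original)))   # 256 distinct tones going in
print(len(np.unique(restored)))   # far fewer coming back out: gaps = banding
```

The stretched file covers the full range again, but with big gaps between levels where smooth gradients used to be. That's the information you "sucked out" and can't restructure.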
Here's an interesting read: http://www.xdcam-user.com/2013/03/to-shoot-flat-or-not-to-shoot-flat/
Curious what my video experts here have to say on the matter.
I know, I know. Most of you couldn't care less about video and care even less about nonsense like codecs and video profiles. Patience, my friends. We'll circle back to real photography soon enough.