Right off the bat I'll admit two things: I know that the GH4's 4K video is somewhat noisy at ISO 800 and above. Not deal-killer noisy, but noisy in the shadows. I'll also admit that I am not a veteran video colorist. But I'd argue that this gap in my resumé gives me an advantage over the people who grew up in the video and motion picture film business: I get to come at the new paradigms of video with a cleaner slate.
Before I jump in I want to present a conundrum from our collective transition from actual film to digital and how we changed our practices. When we shot transparency film we routinely "metered for the highlights." Practically, that meant we were frightened of overexposing our film and losing our highlights to "clear film" (the 255+ of yesteryear). We slightly underexposed our slide film to make sure we had ample detail in the highlights and we let the shadows fall where they may. Or we filled the deep shadows with light from flashes or reflectors.
When we started shooting with digital cameras we ported over the same mentality, and it made sense. If you stepped over the line at 255 you had blown highlights and they were never coming back. But digital was different from film, and weak, noisy shadows were the result when we started pulling up the shadow exposures in post processing. Then we discovered the practice of ETTR (expose to the right), which pushed us to expose brighter and move the histogram closer and closer to the right-hand side of the scale, precisely nailing the highlights while bringing the shadows up into a usable range. Now we have raw files with lots more latitude for highlight and shadow recovery. Almost like negative film. Most of us are no longer paranoid about blown highlights and our images look great. It doesn't hurt that the latest Sony sensors are beasts when it comes to the lower part of the tonal scale and resist noise almost as effectively as my wallet resists hundred dollar bills. We're through with the last-century methodology of shooting and processing. We have successfully changed the way we shoot and process, and we get better quality images as a result.
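If you want the arithmetic behind why ETTR works, here's a back-of-the-envelope sketch (my own illustration in Python, using an idealized sensor rather than any particular camera's data): a linear raw file devotes half of its remaining code values to each successive stop below clipping, so the deep shadows you pull up in post were never recorded with much precision in the first place.

```python
# Idealized 12-bit linear sensor: count the code values available
# to each stop below clipping. (A sketch only; real cameras add
# noise, black-level offsets, and their own processing.)
BIT_DEPTH = 12
full_scale = 2 ** BIT_DEPTH  # 4096 code values at the clipping point

for stop in range(1, 9):
    top = full_scale // (2 ** (stop - 1))
    bottom = full_scale // (2 ** stop)
    print(f"Stop {stop} below clipping: {top - bottom} code values")
```

Run it and the top stop gets 2,048 code values while the eighth stop down gets 16. That's the math behind nailing the highlights and letting the raw converter lift the shadows, rather than the other way around.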
So, what does this have to do with the GH4 and video? Well, the GH4 gets slammed for two things. The first is noise, and the second is that its files don't do well in the older, pervasive paradigm of video shooting and post processing. If you thought still photographers in the early days of digital were a bit nervous about losing highlight detail, the video guys were scared to death of having too much contrast and too much saturation in their files. Saturation was an issue because once it was baked into a file it was hard to squeeze a normal gamut into the extremely tiny broadcast-standard gamuts. The same with contrast. The tiny sensors that most video cameras used fell apart with high-contrast scenes.
At the same time, cinematographers shooting film got used to using negative stock for features. They could slightly (or profoundly) overexpose the negative stock while shooting and then compensate when developing, creating an image that was more or less bulletproof against overexposure, excess contrast, or excess saturation. That made it easier to shoot contrasty scenes, and the idea was that color saturation and overall contrast could be more easily handled in post production.
Now we're in the future. We have cameras that can capture pretty wide-ranging scenes without special handling, and the newest computer monitors can deliver two or three (or four) stops more dynamic range than CRTs and old TVs. But the old schoolers are setting up their new video cameras the same way they did in the bad old days: lowest contrast, lowest sharpness and lowest color saturation, in effect sucking 90% of the information, the math, out of the files... and then bitching when they can't restructure that information in Final Cut Pro X, DaVinci Resolve or Premiere to look as good as or better than the info they just sucked out. It's just a bit insane.
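To make that concrete, here's a toy simulation (my own invented numbers, not any camera's actual signal path) of what happens when you squeeze a scene into a low-contrast "flat" recording inside an 8-bit file and then stretch it back out in the edit suite:

```python
# Toy simulation of "shoot flat, fix it in post" inside an 8-bit file.
# (Illustrative numbers only, not any camera's real processing.)
import numpy as np

scene = np.arange(256, dtype=np.float64)      # an idealized full-range tonal ramp

# Path A: record with a normal-contrast profile, using the whole 8-bit range.
normal = np.round(scene).astype(np.uint8)

# Path B: record "flat" by squeezing the ramp into roughly 30% of the range,
# bake that into 8 bits, then stretch it back to full contrast in the edit suite.
flat_capture = np.round(scene * 0.3 + 90).astype(np.uint8)
restored = np.clip(np.round((flat_capture.astype(np.float64) - 90) / 0.3), 0, 255)

print("distinct tonal levels, normal profile:  ", len(np.unique(normal)))    # 256
print("distinct tonal levels, flat then pushed:", len(np.unique(restored)))  # far fewer
```

Both versions end up at the same overall contrast, but the flat-then-pushed file has permanently lost roughly seven out of every ten tonal steps, which is where banding and plasticky gradients come from.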
The video mavens are overlaying antiquated techniques onto the new tools for no other reason than THAT'S THE WAY THEY'VE ALWAYS DONE IT.
But it doesn't have to be that way. We don't have to start with damaged footage to make content that looks great on screen. I may not be as smart as the online cinematographers, but I can look at files and methodologies and run tests, and those tests tell me that today's video cameras, be they Alexas or GH4s, were created to make beautiful images in the camera. And all of the editing tools I just mentioned will take those well exposed, nicely tone-mapped, moderately saturated images and make absolutely great video files. In an absolute sense they can look better, because they don't have buckets of vital information (pixel data) stripped out of them before use.
I do get the logarithmic progression used by S-Log profiles, and I understand that it provides real increases in dynamic range, but that range still has to be compressed to fit most display gamuts. The problem with the old method is that it works best with huge raw files from dedicated video cameras and not as well with the more fragile files from conventional in-camera codecs. Yet those are the codecs most people will be using for their video work at this point in time. If you are outputting 10- or 12-bit uncompressed raw from your camera into an outboard digital recorder you probably know what you need to do to hit your targets and you don't need my advice, but I think about half of the "flat world" videographers do things in this fashion for... fashion, and because the higher level of voodoo tends to create a barrier to newbs.
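For the curious, here's a simplified sketch of what a log transfer curve actually does (a generic toy curve I made up for illustration, not Sony's real S-Log math): compared with a conventional video gamma, it spends more of the container's code values on the shadows and midtones, which is how it wrings extra scene dynamic range out of the same file, and also why that file then has to be graded back down to look normal on a display.

```python
# Toy comparison: how many 8-bit code values do a conventional gamma curve
# and a generic log-style curve spend on the darkest scene tones?
# (Illustrative only; the constants are invented, not Sony's S-Log spec.)
import numpy as np

def video_gamma(x):
    """Rough stand-in for a conventional video gamma (~1/2.2)."""
    return np.power(x, 1 / 2.2)

def toy_log(x, a=200.0):
    """Generic log-style encoder mapping linear scene values 0..1 to 0..1."""
    return np.log1p(a * x) / np.log1p(a)

linear = np.linspace(0.0, 1.0, 200_001)   # linear scene light
shadow_region = linear <= 0.05            # roughly the darkest four-plus stops

for name, curve in [("video gamma", video_gamma), ("toy log", toy_log)]:
    encoded = np.round(curve(linear) * 255)   # store in an 8-bit container
    used = len(np.unique(encoded[shadow_region]))
    print(f"{name:12s}: {used:3d} of 256 code values cover the darkest scene tones")
```

On these toy numbers the log curve devotes nearly twice as many of the 256 available values to the deep shadows as the plain gamma does, which is where the extra usable dynamic range comes from.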
I just read a review of the GH4 written by the assistant of a famous (and very good) cinematographer who complained that the files they worked with, battered and butchered by the ancient but revered process, looked like... video. Not filmic. I would challenge this cadre, and in fact I would challenge video experts all over the world, to take the risk, embrace the modern tools exemplified by smaller cameras and DSLRs, and use them the way they were designed to be used to deliver great results. To my mind that means creating really good-looking files in the camera and sending them into the edit universe, instead of sending artificially flat and desiccated files.
In the comment section of this poo-poo video review of the GH4 the famous cinematographer repeated over and over again that the camera 'didn't make it' in the process he normally uses. But I'm also guessing he wouldn't get great results souping his transparency film in Riesling either. My take is that people underthink and overthink at the same time. I even wonder if anyone in his shop bothered to stick a fast card into the camera and shoot some footage at its default settings. I wonder whether, if they had used the camera as its makers intended, they would have been more (and unexpectedly) impressed. Here's my take:
Shoot at the right exposure. Make a good, accurate custom white balance. Set the saturation at its default, "natural" or "neutral" setting. Choose a contrast setting that works well without throwing out the "data babies" along with the bathwater. Then do an A-B test of this against your current methodology, along with the usual post processing "magic." You'll have more control over exact saturation levels in post, and you might like a bit of contrast in your images; I know the viewers do. Without a true S-Log profile in-camera, all this flattening is a joke, because the camera compresses with a much different and more destructive methodology, one that ruins non-S-Log files. No way around this. It's like running a JPEG through a raw processor and not understanding why you can't make huge correction shifts without consequences!
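As an aside on the custom white balance step, here's a minimal sketch of the idea behind it (my own toy example with invented numbers; the camera does this internally with its own math): sample something neutral under the scene's light and derive per-channel gains that make it truly neutral, so the color cast is corrected at capture instead of being wrestled out of an 8-bit file later.

```python
# Minimal sketch of a gray-card custom white balance.
# (My own illustration; cameras do this internally with their own math.)
import numpy as np

def white_balance_gains(gray_patch_rgb):
    """Given the average RGB of a gray card under the scene's light,
    return per-channel multipliers that make it neutral."""
    r, g, b = gray_patch_rgb
    return np.array([g / r, 1.0, g / b])   # normalize to the green channel

# Example: warm tungsten-ish light makes the gray card read reddish.
gray_under_tungsten = np.array([180.0, 140.0, 95.0])
gains = white_balance_gains(gray_under_tungsten)

print("gains (R, G, B):", np.round(gains, 3))                        # ~[0.778 1. 1.474]
print("gray card after balancing:", np.round(gray_under_tungsten * gains, 1))  # ~[140 140 140]
```

The point isn't the code, it's that a correct neutral at capture costs you nothing, while fixing a cast later means multiplying already-quantized 8-bit channels.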
As I said at the top, I am not a consummate video editor or colorist. I don't have the years of experience (and indoctrination) that many others do. I may not even be right. I could be missing a huge step here. But I do know digital files, and they never come back together at the same quality once you've stepped on all the parameters and sucked out information. Perhaps this works with a RED raw file, but not with any of the consumer/prosumer cameras, and not with mainstream codecs.
Don't believe me? Try this: take your Canon, Nikon, Panasonic, etc. still camera and set every control to its absolute lowest setting. Click the adjustments all the way to the left for sharpness, contrast, and saturation. Then go and shoot a portrait or a landscape or a street scene in JPEG. Come back to the studio, open that file in Photoshop, and try to make it look like a good image using every tool in Photoshop. Should be an interesting experiment. The same applies to the video side of these cameras.
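If you'd rather simulate the experiment at your desk, here's a rough sketch along the same lines as the earlier ramp example (my own code; "portrait.jpg" is just a placeholder for any test image you have, and Pillow's contrast and saturation controls are only a stand-in for what a camera's picture profile does):

```python
# Desktop version of the experiment: flatten a normal photo the way a
# zeroed-out picture profile would, round-trip it through JPEG, then
# push it back up and count how many distinct tones survive.
from PIL import Image, ImageEnhance
import numpy as np

original = Image.open("portrait.jpg").convert("RGB")   # any test image you have

# "In-camera" flattening: heavily reduced contrast and saturation, saved to JPEG.
flat = ImageEnhance.Color(ImageEnhance.Contrast(original).enhance(0.3)).enhance(0.3)
flat.save("flat_capture.jpg", quality=90)

# "In post": reload the baked file and stretch everything back up.
recovered = Image.open("flat_capture.jpg")
recovered = ImageEnhance.Color(
    ImageEnhance.Contrast(recovered).enhance(1 / 0.3)
).enhance(1 / 0.3)

for name, img in [("original", original), ("flattened + restored", recovered)]:
    luma = np.asarray(img.convert("L"))
    print(f"{name:22s}: {len(np.unique(luma))} distinct luminance levels")
```

The restored file can be pushed back to normal-looking contrast, but counting the distinct luminance levels before and after shows how much tonal information the flat round trip threw away.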
Here's an interesting read: http://www.xdcam-user.com/2013/03/to-shoot-flat-or-not-to-shoot-flat/
Curious what my video experts here have to say on the matter.
I know, I know. Most of you couldn't care less about video, and care even less about nonsense like codecs and video profiles. Patience, my friends. We'll circle back to real photography soon enough.
Brilliant post. VSL rocks the GH4!
This is not what I was taught and yet it makes perfect sense.
Interesting post, even for someone who doesn't usually care for video!
"The video mavens are overlaying antiquated techniques onto the new tools for no other reason than THAT'S THE WAY THEY'VE ALWAYS DONE IT."
Umm... nope, not quite, not all of them. I think you're over-exaggerating and over-simplifying the issue. Much ado about nothing.
Setting all the hype aside, the notion of shooting flat did make sense in some cases (it still might in some), and it was not created by opinionated Luddites, but by trial and error while trying to solve a practical problem created by some popular new cameras with mediocre video capabilities.
But the "problem" with today's instant gratification (online) world is that a lot of people don't bother to delve deeper into the craft and they settle for scratching the surface, often expecting others to solve their problems for them. That creates memes and superficial knowledge, but not understanding. It's the same in both photography and cinematography.
Quite a few people do understand that there is no one-size-fits-all solution to shooting settings and post production. Many people who have been fiddling with the GH4, for example, will take some time and examine what works best with the new tool. They understand what the "shooting flat" meme is (mostly) about.
To them it's obvious that the basic footage that comes out of the GH4, or the H.264 MPEG that comes out of some Canon DSLR, needs different treatment than, say, the ProRes footage shot in Film mode on a BMPCC, which is quite flat by default, unlike the default GH4 footage. The same is true for S-Log2 shot with the A7S. Grading all of those well needs slightly different approaches, and takes some practising and learning.
The BMPCC and GH4 4K footage have more 'stuff' to work with to begin with, compared to the typical H.264/AVCHD footage delivered by many popular mainstream cameras. Shooting and grading those is not the same, and one size does not fit all.
I believe most of the hype and confusion happens among the mainstream shooters. Most of the professionals shooting with today's digital gear have figured out a workflow that works for them and their gear, out of sheer necessity. With a few exceptions, of course, there's always one out there on the internyet. Ho hum.
"I just read a review of the GH4 written by the assistant of a famous (and very good) cinematographer who complained that the files they worked with, battered and butchered by the ancient but revered process, looked like... video. Not filmic."
Well, perhaps you should have taken your criticism directly to the comments section of that particular review, whilst making sure you understood exactly what he meant. After all, we, your readers, have no idea. We only know what you are telling us.
"In the comment section of this poo-poo video review of the GH4 the famous cinematographer repeated over and over again that the camera 'didn't make it' in the process he normally uses."
If it was a public review on the internets, why not give us the link to that blog post and let us draw our own conclusions about the case?
I'm not questioning your powers of comprehension or video sorcery, but it would be fairer, wouldn't it? Then, if we disagree with his views, we, along with you, can aim our critical comments at the right address.
Without that, our comments here are merely adding more noise into the hype.
Hey Anonymous. Please stop telling me how to write my blog.
Wow! Kirk Tuck takes on the video establishment...perfect. For 90% of us practicing photographers and er...film-makers (amateur or pro), your "take" is right on the money. Cameras like the GH4 are not made for Hollywood producers after all, much more for those of us who love to make enjoyable, perhaps useful and meaningful moving pictures. Although I don't have that camera, I use a few (GH2, GH3, GX7) that produce pretty good stuff when I do my job right. Gentle post-processing is not excluded and most of the time that is all I need. 99% of the audience really is not hung up on whether there is a little noise in the shadows or the DR is less than the maximum possible. If anything they are interested in the content - which is where, with so much technical perfection around, we so often seem to go back to the old paradigms.
It's an interesting point, Kirk, as I usually shoot a little flat and then tweak in post. I've been mostly happy with the results, most errors being operator vs. equipment based. The challenge for shooting with the look "baked in" is having a good enough field monitor (and controlled lighting conditions in which to view said monitor) that you can trust WYSIWYG.
I smiled when I read the complaint of the assistant about how files have a "video look" rather than a "filmic" look. I have been frustrated by the fact that my current video-editing software takes interlaced 1080 HD video footage and produces progressive output that is "filmic" rather than interlaced footage that preserves the temporal resolution (a.k.a. "video look") of the original shot. I actually want that "video look". For some reason older software managed to produce the results I wanted when used in a very specific way, but it won't work with my current OS. I should be using FCP X, but (alas) cannot afford the necessary hardware upgrades at this time. Grrr!
I find the overwhelming preference for the "filmic" look to be a bit unfortunate. Back in the mid-1980s I saw two films shot using a process (Showscan?) where the film was shot and projected at a rate of 48fps or 60fps rather than the standard 24fps. The result was stunning, smooth, crystal-clear footage. It is a shame this process was not more widely adopted. I would much rather see a film done with that process than one shot in one of today's 3-D processes.
As is often the case in your writing, I think you've caught the practical side of a deeper historical/philosophical issue.
I attended the Maurice Kanbar Institute of Film and Television at NYU, one of the world's "best" film schools, from 1998 to 2002, and I kicked around a bit in the NYC industry for a few years thereafter. What I saw was screenwriters generally (though not always) getting more training in concept than in conflict, and people wanting to be directors getting more practice manipulating an actor's performance after the fact in the editing room than in working with the actor to actually craft a character and deliver a performance in front of the camera (and to be fair, not every actor in the movie world is capable of doing this). I've been on sets where the whole extent of "rehearsal" is the director giving the actors line-readings in the break room while the lights and sound are set up, and then walking through once for camera before taking each shot. This is in part a reflection of the kind of personalities the industry attracts, and in part about the financial incentives in a sector where you have people putting beaucoup bucks into something today that won't appear for a couple of years, and they want to feel like they can change your sardonic sci-fi western into a stirring medieval epic with a couple of clicks of the mouse if the zeitgeist changes.
What I'm suggesting, I guess, is that while there are often good technical reasons to shoot flat (matching grades across film stocks/video cameras, combining greenscreen and wild footage, &c) it's also deeply baked into the mentality of a workflow where the editor, often with the director and/or executive producer on her shoulder, is the last "writer" of a movie and the process is designed to keep options open and changes available as far down the line as possible.
Hey Anonymous, your first name wouldn't be Shane, would it?
If your camera supports S-Log then by all means you will benefit from shooting flat with that feature, but if your camera doesn't, you are basically tossing away information and then hoping magic pixies will make everything okay for you in the edit suite.
It's a matter of choosing your poison. What's worse, having to fix a shot in post that you overbaked (in camera) or underbaked? An overbaked 8-bit shot is just as much a PITA to fix as an underbaked one.
I aim to underbake my shots a bit, not a ton. My goal is to be able to grade the footage to my liking with gentle use of the sliders, not wild swings. I'm happy with the results. Just thought I'd share. Your mileage will likely vary.
For me this is a particularly relevant topic because I've become caught up in the recent "film making" trend that video-capable DSLRs have caused. Being originally a still photographer, I considered it a bit of a gimmick at first when manufacturers started adding video functions to DSLRs. But when I saw some of the beautiful work being produced with these inexpensive cameras (which wasn't possible with the cheap consumer video cameras I had previously played around with, mostly for family home movies), I was hooked on learning more. One thing I strongly agree with you about (and with the "mystery" famous cinematographer's assistant) is that everyone should run their own tests of these settings. I've been guilty of reading the recommendations for flat settings and then using them with little to no testing on my part. However, my understanding is that most consumer-level products are designed to produce what the manufacturer perceives as the ideal out-of-the-box image most buyers want, even if it is a bit overly saturated or contrasty. Hence most cinematographer types suggested that cameras like the Canon 5D Mark II be used with a neutral picture style and dialed-down contrast, saturation, and sharpness settings, which makes sense because the motion picture aesthetic tends toward more muted tones rather than the wildly over-the-top saturation of a lot of today's attention-getting still images.
A bigger issue for me is the moiré and aliasing with a lot of the DSLRs available now. In spite of shooting RAW video on a Canon 5D Mark II with Magic Lantern on a recent personal project, I'm more excited about using this new-to-me/used GH2 I just got. I absolutely love this (hacked, Driftwood Moon T8) GH2 and I'm pretty certain that at some point, when I have mastered it, I'll be able to get just the look I want. Already the moiré and aliasing issues are minimal to non-existent.
I know there's at least one other well-known cinematographer, Philip Bloom, who is very much on board with the GH4. His review at http://philipbloom.net/2014/06/30/gh4/ is very positive and I wonder if that "other" famous cinematographer would say Bloom's film "Postcard From Phang Nga" (http://vimeo.com/99523009) has a "video" look. I think it's beautifully filmed no matter what anyone calls it.
By the way, Kirk, I'm obviously enamored of the "Postcard from..." device Bloom uses for several of his film making projects, and I'm planning to create my own "Postcard from Austin" film using the GH2. Mostly it's just an excuse to make the drive to Austin to mess around with the GH2 and to visit my son who lives there.
I love that you're also caught up in this video world as I am. And as always, your blog posts are interesting and thought provoking.