I'm always interested in the "what if we just...." preambles to experimentation. We know how to make good photographs with conventional cameras but what if we just screwed around with other kinds of cameras and took a peek at what we got? I've been mulling over the ability of a number of cameras to generate 12 bit CinemaDNG files at 4K, 6K and even 8K resolutions and it got me wondering... just how decent a still photo frame might look if I grabbed it from a raw video stream?
With previous video technologies we were limited first by the resolution of earlier cameras, and then by the file types. It was hard to make a serious case for still frame capture with cameras boasting only SD video or the milder forms of HD video (720p or 1080p). But we were equally hampered by the mindset we brought over from the video world: the idea that files looked best when shot with a 180° shutter angle; or, in rough translation, at a shutter speed of one over twice the frame rate. So, if movies were shot on cameras at 24 fps the shutter was, by convention, set at 1/48th of a second, or something close to it. And once a rule like that becomes established it tends to mute variation. Advances in all kinds of dedicated video cameras and hybrid cameras have changed our capabilities but perhaps our prejudices about procedure have yet to catch up. Or maybe I'm just not finding the right reading materials.
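The 180° rule is simple arithmetic: exposure time is the shutter angle's fraction of a full 360° rotation, divided by the frame rate. A quick sketch (the function name is mine, not any camera's API):

```python
def shutter_speed(fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle_deg / 360.0) / fps

# 24 fps at a 180° shutter angle -> the conventional 1/48th of a second
speed = shutter_speed(24)
print(f"1/{round(1 / speed)} s")  # prints "1/48 s"
```

The same formula explains why shooting 24 fps at 1/320th, as I did below, works out to a shutter angle of about 27°, far narrower than the cinematic convention.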
You can record video in 5.9K and 6K with a range of different cameras if they are connected to an external monitor/recorder, like an Atomos or a Blackmagic. Some cameras will even record the higher res files in camera, but not with 12 bits of information per channel, and not in a raw format. The little Sigma fp will shoot CinemaDNG in 12 bit and you don't need a bulky external monitor since it can write directly to an external SSD drive, as long as it's USB 3. Since the Sigma fp can shoot at 4K, and since 4K is about 8.3 megapixels, I thought I'd dress mine up and take it out for a test run. If the images look great then my next step might be to drag out a bigger Panasonic camera, hook it up to an Atomos Ninja V, and see if the resulting 5.9K ProRes RAW files actually look 50% better.
Today I just wanted to see what the CinemaDNG files looked like if you did a frame grab in a photo editing program like Adobe Lightroom. How would the color look? How sharp would the frames be? And how usable would they be in comparison to JPEG files originated in a conventional stills camera?
The test camera was a Sigma fp set to record CinemaDNG files in 12 bit, at 24 fps, directly into a 1 terabyte Samsung T5 SSD drive hooked to the camera via a USB cable. The exposure was set manually at ISO 100 and a shutter speed of 1/320th of a second, using the f-stop to fine tune exposure. The only other parameter setting was white balance, and I left that at AWB figuring I could tweak the colors a bit in post.
I'm sure someone makes a fancy clamping device that bolts onto an equally fancy "cage" to hold the SSD drive tight to the Sigma fp camera but, in the moment, I depended on the magic of gaffer tape and taped the drive to the top of the enormous Sigma loupe. It seemed to work just fine and it all fit like a glove. I formatted the drive with the camera and headed out of the studio to find suitable test subjects.
I shot in short bursts, in the direct sun, but 24 fps generates so many files that it's hard to find the perfect needle in an imperfect haystack. I started pulling files at random, back in the studio, to include in my "research."
My assessment is this: the camera wastes no time or energy doing any sharpening to the raw files it's writing to the SSD for video. The presumption, I guess, is that video requires a certain "unsharpness" in order to look realistic and good. The multiple frames, coupled with the human persistence of vision, fill in the sharpness "blanks." When I first opened the files I was a bit underwhelmed because they all appeared so soft. This might also be because I've gotten used to the mountains of detail delivered by state-of-the-art 24 and 48 megapixel still cameras. At best today's files were about 8.3 megapixels and profoundly unsharpened. But that doesn't mean that they couldn't be sharpened...
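The sharpening you'd apply in post is, at heart, an unsharp mask: subtract a blurred copy of the image from the original and add the difference back, which exaggerates edges. Here's a toy one-dimensional sketch of the idea; real tools like Lightroom's sharpening slider work on 2-D images with far more sophisticated blur kernels, so treat this as an illustration only:

```python
def unsharp_mask(pixels, amount=1.0):
    """Sharpen a 1-D row of gray values (0-255) by adding back
    the detail lost to a simple 3-tap box blur."""
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, len(pixels) - 1)]) / 3
        for i in range(len(pixels))
    ]
    # original + amount * (original - blurred), clamped to legal pixel values
    return [
        min(255, max(0, round(p + amount * (p - b))))
        for p, b in zip(pixels, blurred)
    ]

edge = [50, 50, 50, 200, 200, 200]      # a soft edge from dark to light
print(unsharp_mask(edge, amount=1.5))   # edge contrast is pushed harder
```

Running this on the soft edge yields `[50, 50, 0, 255, 200, 200]`: the values on either side of the transition overshoot, which is exactly the "halo" effect you see when sharpening is pushed too far.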
By setting the camera to a shutter speed of 1/320th of a second I didn't need a neutral density filter and I was able to freeze normal action. A fast-moving cyclist or runner still caused blur, while casual walking was largely frozen except for feet or swinging hands.
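A rough back-of-the-envelope calculation shows why the cyclists blurred and the walkers didn't. The subject speeds and the 2 m frame width below are my assumed numbers, not measurements from the shoot:

```python
def blur_pixels(speed_m_s, shutter_s, frame_width_m, frame_width_px):
    """Approximate motion blur, in pixels, for a subject crossing the frame."""
    blur_m = speed_m_s * shutter_s  # distance the subject moves during the exposure
    return blur_m / frame_width_m * frame_width_px

# assumed scene: a 3840 px wide frame covering about 2 m of subject, 1/320 s shutter
print(blur_pixels(8.0, 1 / 320, 2.0, 3840))  # fast cyclist (~8 m/s): ~48 px of blur
print(blur_pixels(1.4, 1 / 320, 2.0, 3840))  # casual walker (~1.4 m/s): ~8 px of blur
```

Forty-plus pixels of smear is clearly visible blur; eight pixels on a torso moving this slowly mostly disappears, with only the faster-swinging hands and feet giving the motion away.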
The files I got were "okay" but not great. It's pretty obvious that CinemaDNG is optimized differently than formats dedicated to getting the most from still frames. Even so, I think many of the frames are usable, especially for web use. The files are malleable when it comes to shadow recovery and color elasticity, but I'll have to spend some more time finding a proper sharpening formula.
The exercise was intended to be an opening gambit, to be followed by trying the same thing with cameras that can kick out higher res files. The next logical experiment will be with something like the Panasonic S5 and the Atomos Ninja V writing 5.9K ProRes RAW files. The increase in resolution might be the enhancement I'm looking for.
Why am I doing this? Because I wanted to see if the video frame grab situation had gotten to the point where I could direct talent in front of the camera to do an action, film it continuously, and then pick the peak of action. Or the peak of expression. Would I be able to find a formula that would allow for high frame rates coupled with raw files full of detail? My first experiment leads me to say... maybe.
And be aware that in addition to still frames you will still have full motion video at a very, very high quality to work with. The one caveat is that the faster shutter speed takes away a bit of the optical magic that makes our brains believe in video. The individual frames are too crisp and that makes playback of the video appear choppy. I'm almost certain there is or will be a process to treat the video footage in order to bring back the type and amount of blur that makes watching video feel right even when it's shot at higher shutter speeds/narrower shutter angles.
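One such process is frame blending: averaging each frame with its neighbors approximates the longer exposure a wider shutter angle would have produced. A minimal sketch on rows of gray values (dedicated motion-blur plugins use optical flow to blur along the direction of movement, which is far smarter than this straight average):

```python
def blend_frames(frames, weights=None):
    """Weighted average of corresponding pixels across consecutive frames,
    simulating the smear of a longer effective exposure."""
    if weights is None:
        weights = [1.0] * len(frames)
    total = sum(weights)
    return [
        sum(w * frame[i] for w, frame in zip(weights, frames)) / total
        for i in range(len(frames[0]))
    ]

# three crisp frames of a bright pixel stepping left to right
f0, f1, f2 = [255, 0, 0], [0, 255, 0], [0, 0, 255]
print(blend_frames([f0, f1, f2]))  # the motion smears into [85.0, 85.0, 85.0]
```

Weighting the current frame more heavily than its neighbors keeps the image from going mushy while still softening the choppiness.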
At some point, and maybe it's 8K, video still grabs (at the right shutter speed to freeze motion) will be equal to an industry standard photograph taken with a 24 megapixel camera. Then we'll see some real friction in the market as camera makers concentrate on making better and faster electronic shutters while traditional photographers bemoan the loss of compatibility (via mechanical shutters) with flash, fluorescent lights and other processes that we've used for many decades. It will be a weird and emotionally fraught intersection but at its core it's all about "having one's cake and eating it too." Clients are demanding more and more video, but they still want good photographs of the same subject matter.
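The pixel math supports the 8K guess. The 5.9K dimensions below are the ones commonly quoted for the S5's raw output, so treat that row as an assumption:

```python
# width x height in pixels for common video frame sizes
resolutions = {
    "4K UHD": (3840, 2160),   # ~8.3 MP, today's test
    "5.9K":   (5888, 3312),   # ~19.5 MP (assumed S5 raw-output dimensions)
    "8K UHD": (7680, 4320),   # ~33.2 MP
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
```

An 8K frame, at roughly 33 megapixels, comfortably clears the 24 megapixel bar; even 5.9K is most of the way there.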
It's a hassle to set up and shoot for both video and stills on the same assignment now, even with cameras that do both reasonably well. The advertising "holy grail" is to find a camera methodology that can do both simultaneously and be tweaked in post to yield top quality in both media. That's coming sooner than we might think.
So, be sure to click on the frames to look at them as large as you can. You'll find some rolling shutter, I'm sure. But you might also be surprised at what you can get out of a video frame of a certain kind. I was. And I'm just now digging in to see how I can do this better.