At some point it's all about the rubber chicken.
I'm starting to get more and more selective about the projects I accept. If I get the impression that we're being asked for sheer quantity, a shoot-till-you-drop affair, I won't take the bait. Life is too short. But I'm not above mixing and matching a couple of disciplines if the client is good and the project is comfortable.
While the advertising agencies I usually work with generally have their hearts in the right place, they can't always control their clients, and sometimes communication gets frayed. But in most instances it's just a case of clients not understanding how long it might take to do something, or misunderstanding a bid that was based on usage rather than on high-volume production. That's an important point.
With portraits taken on location, a client often doesn't understand that there is always post production involved, and that post production is the iceberg part of a job: 90% of it is underwater, unseen by them during the shoot.
Here's a case in point from yesterday: part of our job was, on paper, to photograph 5 or 6 people from the blank company. There was an outside location at their H.Q. that they'd used in the past which was shaded by a tree and had a shaded, non-ugly background that could easily be put out of focus. I set up my camera and a flash in a smaller octabox and roughed in my lighting. These were quick portraits, and I shot maybe 20 or 25 images per person instead of the 70-100 I might shoot for a classic studio portrait.
We finished photographing the first six people, but more and more people came out to have their portraits taken. The final tally was 13. This was not something discussed beforehand, but I have a twenty-plus-year relationship with the ad agency and decided not to start a discussion about it with the agency owner in front of the client. It was easy enough and took little extra time in the moment.
But...what the client won't see, and the agency might not explain, is that the amount of post production time goes up a lot. I have to do a global correction for each person's files since skin tones, etc. are all different. There's the editing down to a handful of selections and then the creation of individual galleries for each person. Finally, we'll get back the client's selections over a random time period, piecemeal, which takes any efficiency out of doing a final retouch and delivery process. Adding six people might add only five minutes per person on the front end but might add an hour on the back end. Or more. Next time we bid for that agency we'll just pop in a per-person price up front so they know that each person they add will also increase their final bill.
Next, we moved into the company's manufacturing area and started a quick discussion of how to handle making some beauty shots of products and also how to include some b-roll video. Previously we would have totally separated those two functions. We would have done all our photography sequentially and then packed up that gear and started in on video.
But the creative director presented the idea of shooting high enough quality video to allow for pulling still frames out of the b-roll for use on the website, etc., leaving only a few actual products to be photographed with the still camera, because they might need those images for printed pieces and conference graphics.
I put one camera on a gimbal and set it up to have the best shot at making general content that would also work as a "frame grab" resource. I set another camera up on a tripod so I could make high resolution photographs with lots of depth of field.
The frame grabs might work. They might not. The b-roll will work for b-roll. The high resolution stills will work for print. But I think grabbing lots of good stills from video is still a bit of a crapshoot. With that being said, we decided to shoot at 60 fps on a GH5, set the shutter angle at 180° (1/120th of a second), and shot in the Long GOP format. We weren't doing any fast moves, and I tried to put in static shots as well as moving shots so we'd have a better chance of having a good range of frames in which the action was more or less frozen. I'd love to have shot in All-I, but that's not available at 60 fps in 4K, so we tried a compromise.
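For anyone new to shutter angle, the conversion is just arithmetic: exposure time per frame equals (shutter angle ÷ 360°) ÷ frame rate. A minimal sketch of that math:

```python
def exposure_time(fps: float, shutter_angle: float = 180.0) -> float:
    """Per-frame exposure time for a given frame rate and shutter angle.

    A 360° shutter exposes for the full frame interval; 180° for half of it.
    """
    return (shutter_angle / 360.0) / fps

# 60 fps at a 180° shutter angle -> 1/120th of a second, as above.
print(exposure_time(60, 180))  # 0.008333... (1/120 s)
print(exposure_time(24, 180))  # 0.020833... (1/48 s, the classic cinema look)
```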
While I shot in a Long GOP format in the camera (which allowed for 60 fps, 10 bit, 4:2:2), when I pulled the footage into Final Cut Pro X I transcoded it into ProRes HQ, which is an All-I editing format. After a bit of color correction on the footage I output it for the client in ProRes 422, which created a whopping big file of about 80 gigabytes. It gives the web designers the best shot at pulling good still frames out of the video. We also output the same extended clip in H.264 so they could scrub through it very quickly and find what they need.
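Final Cut Pro X did the heavy lifting here, but for the curious, the same pair of deliverables could be roughed out with a tool like ffmpeg. This is just a sketch, assuming ffmpeg is installed; the file names are hypothetical:

```python
import subprocess

SOURCE = "broll_longgop.mov"  # hypothetical 10 bit, 4:2:2 Long GOP clip

# All-I ProRes 422 master for frame grabbing (prores_ks profile 2 = ProRes 422).
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "prores_ks", "-profile:v", "2",
    "-an",  # b-roll; drop audio
    "grab_master.mov",
], check=True)

# Lightweight H.264 copy the designers can scrub through quickly.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx264", "-crf", "20", "-preset", "fast",
    "-pix_fmt", "yuv420p",
    "-an",
    "scrub_copy.mp4",
], check=True)
```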
But the idea of frame grabbing brings up some interesting questions. At least I find them interesting.
When the Red video cameras were introduced about a decade ago, one of the marketing messages that followed the launch was the assertion that the REDCODE RAW video files were so detailed and good that one could use them as still images instead of limiting usage to video alone. That was a big ask at the time, but there were some sample shoots done by a New York headshot photographer and also by a fashion photographer/videographer which showed some pretty good results.
Ten years on, we have cameras like the S1H that can shoot 6K (18 megapixel) raw files, downsampled from even higher resolution sensors, and we've got so much more control over the post production. Sure, you have to go through a lot of frames to find the perfect images, and the storage demands increase dramatically, but think of how nuanced your selection of the "exact" perfect moment(s) could be.
Camera sensors now have very large dynamic ranges, and in-camera tools like vectorscopes and waveforms can help us drill down to perfect color and exposure settings. Being able to shoot a continuous burst of frames might be just the thing for a "twitchy" portrait subject. And for products it might be easier to put a camera on a tripod, shoot a three-second burst of video raw files and blend them in Photoshop than to switch back and forth between photographs and video on a fast-paced shoot.
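To make the blend idea concrete, here's a minimal sketch with OpenCV standing in for Photoshop's stack modes. The clip name is hypothetical, and it assumes a locked-off camera so the frames already line up:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("product_burst.mov")  # hypothetical 3-second locked-off burst

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame.astype(np.float64))  # accumulate in float to avoid clipping
cap.release()

# Mean-stacking N frames cuts random noise by roughly the square root of N.
stacked = np.mean(frames, axis=0)
cv2.imwrite("product_stacked.png", np.clip(stacked, 0, 255).astype(np.uint8))
```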
One of the downsides I can think of would be the practical inability to use electronic flash. But that's not an issue for most kinds of subjects, and LED lighting has gotten so good that it's a nice substitute for scenes where matching the power of sunlight, or freezing fast-moving subjects, isn't required.
Another interesting point is the pace at which what used to be photography-only jobs are morphing into combo jobs (video and stills), or have progressed to video replacing photography outright. While we aren't used to the idea right now, isn't it reasonable to suppose there will come a time when we can shoot 24 megapixel cinema DNG raw files that look as good as the best Jpegs from the same cameras? After all, Jpegs are currently 8-bit, while raw video can be 10- or 12-bit, and Jpegs have more limited color subsampling.
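The arithmetic behind that bit-depth point is simple: every extra bit doubles the number of tonal steps per channel.

```python
for bits in (8, 10, 12):
    print(f"{bits:>2}-bit: {2 ** bits:>5} levels per channel")

#  8-bit:   256 levels per channel
# 10-bit:  1024 levels per channel
# 12-bit:  4096 levels per channel
```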
I can see a time when we might show up, arrange a scene the way we like it, and then "direct" the subject of a portrait sitting through a range of expressions while running a video/hybrid camera shooting 6K raw video. With a little practice I think we would become proficient at knowing when we got good stuff and not running the camera for too long. Bore down to the ranges of frames you want to consider and erase the rest.
It's pretty instructive to watch what camera makers are launching into the market right now. The Sigma fp is much more about raw video than it is about being a still camera. Same with the Lumix S1H, although it is a much, much better still camera while being more facile than the fp with video. The Canon C70 is a much talked about product in video circles, and still imaging is almost an afterthought with that camera. The Sony A7Siii is resolutely a riposte to Panasonic's S series cameras.
Everything coming out has moved the hybrid imaging game up a lot. The files are much beefier and more detailed. The raw capabilities add so much more control for us over image quality and our ability to set a wider and wider range of codecs and profiles is liberating.
I get that most of us taught ourselves the pleasure of shooting one frame at a time, and that this would be a huge, earthquake-sized change for us to get used to, but in many ways we're already doing it with our iPhones. The same folks who might say, "I never want to shoot video," might not be aware that part of what makes iPhone photos work, squeezing great images from tiny sensors, is the fact that the camera in the phone is blazing away, shooting endless video frames until you press the "shutter button," and then almost instantly stacking large numbers of those frames, dropping out anomalies and noise while integrating all of the color and detail from multiple samples. It's basically shooting video even when it's not shooting video; the processing just takes place under the hood. And it's blisteringly fast.
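The anomaly-dropping half of that trick is easy to sketch: a per-pixel median across a handful of aligned frames throws away outliers that a plain average (like the product blend above) would smear back in. Frame names are hypothetical and alignment is assumed to be done already:

```python
import cv2
import numpy as np

# A handful of already-aligned frames pulled from a short burst (hypothetical names).
paths = ["frame_01.png", "frame_02.png", "frame_03.png",
         "frame_04.png", "frame_05.png"]
stack = np.stack([cv2.imread(p) for p in paths]).astype(np.float64)

# The per-pixel median rejects outliers (a noise spike, a passing object)
# that a straight mean would blend into the final frame.
merged = np.median(stack, axis=0)
cv2.imwrite("merged.png", np.clip(merged, 0, 255).astype(np.uint8))
```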
And the video is set up the same way! We're already shooting the way I'm talking about with our phones and I think it's only a matter of time till it comes to our cameras. Under the hood at first but then fully customizable.
Seems more and more like the future is raining down on us and we can either get soaked or roll with it.
From a hobbyist point of view no one is calling on you to take any action. Keep on shooting exactly as you like. But if you are one of my readers who is trying to make a living shooting some kind of visual content it behooves you to learn and experiment all the time. It's inevitable that your photography clients will ask for moving pictures at some point in the future just as it's inevitable that all you video guys are going to increasingly be asked to pull great frames out of the video content you just shot and re-purpose them as still graphics.
A lot to chew on for a Friday evening.
Currently packing up to do a one-person music video with my friend, Kenny Williams, tomorrow afternoon. A couple of James Bond theme covers.... Nice.
12 comments:
Well said and explained, particularly your points about the iPhone. Now, when can I ditch my Fujifilm and Nikon ILCs for an iCamera? Whoops! I guess I already partially did...
😄
Thanks for confirming what I had cobbled together as an explanation of how the iPhone works. It had become apparent that the lack of motion blur - despite my moving the phone when touching the "shutter" button - meant that something was going on. The only idea that made sense was that the phone was taking a series of photos, buffering however many, then selecting the best frame after the button press.
The compositing to eliminate anomalies hadn't occurred to me.
Wanna buy another barely used G9? I think I probably should free up some cash for a next-generation iPhone.
Kirk wrote, "With a little practice I think we would become proficient in knowing when we got good stuff and not running the camera for too long. Bore down to the ranges of frames you want to consider and erase the rest."
Which is a long version of what Apple does with Live Photo -- the camera records 1.5 seconds before and 1.5 seconds after shutter push and you get to pick the best frame.
The future is oddly unlike the past.
M.M. I'm saving up all my $$$ for the iPhone 12. Probably send everything else to the landfill. It would seem cruel to burden the next generation with archaic, single use cameras...
Kidding, just kidding (kinda).
You make an interesting point and confirmed what I suspected with my little Google phone. The stills are amazing for such a small sensor, and when I pixel peep I've always thought they have a rather HDR look about them. When I examine the jpeg files I see they typically exceed 5MB. I suppose it's all about getting the most out of your tools. So I've begun dabbling with some HDR stills using my S1 and Fuji X-T3, and have been quite surprised at what can be captured when done correctly and under circumstances which warrant such a technique. But where the little smartphone excels, I suppose, is in its use of AI to select and discard the appropriate image(s). As this technology continues to make its way into the APS and FF cameras, I do wonder if the use of AI in the smartphones will ever fully displace the advantages of a larger sensor, with its ability to capture more light and greater dynamic range.
Good piece. The work just gets more and more complicated, and it seems like clients can always come up with something just a little extra. I'm still very much in the video learning process, and all this talk of formats and bit rates reminds me just how much I have yet to learn. But keep writing -- I'm learning a lot here.
The bit about the wonders of phones has me wondering how long it will take makers of "real" cameras to catch on. And if they will have any market left by the time they do. I was watching the grandmothers at a recent family gathering, shooting away with their phones. Not only were they getting better quality than most cameras could deliver a few years ago, they were very aware of lighting and posing and camera angles. I don't know how the photos would print (if anyone prints anymore) but they looked damned good on Facebook. Better than what some of the local photographers are charging for.
I think I've said this before, but odds are my next serious camera purchase will be a really good phone.
Exactly!
Part of what is killing the still camera business is the lack of computational methods to get the most out of the hardware.
I see a future like the Harry Potter newspaper, with movement, even just 2 seconds of it, in all images. As most content is online, it just makes sense to grab your attention.
I think Olympus has this buffering feature called Pro Capture. You half-press the button and it starts capturing frames; if you release the button it deletes them. If you press the shutter, it saves a number of frames before and after. https://learnandsupport.getolympus.com/learn-center/photography-tips/settings/pro-capture-mode
Hi Stephen, I know some cameras can buffer frames so that you nail the moment, but this is, I think, different from a process which samples and combines a large number of frames, processing them to decrease noise and increase detail and sharpness in a single frame. That should be the next step for camera makers like Olympus.
Kirk,
Olympus kind of does have it.
Multi-shot mode on an Olympus camera takes only 2 shots, but it can be used with each at half exposure to sum them, or with each at full exposure to average out the noise.
The new ND feature takes up to 6 images to blur out movement, while anything that isn't moving is left alone. So it's limited by shutter speed and max ISO, but it can be used.
Live Time is used like light painting. It adds only new light to the image.
The HDR mode I haven't played with, but it's also computational.
The biggest problem with Olympus cameras is that they are too good and their features alone didn't sell, since Olympus marketing is total trash. No one knows about these features.
Hi David, Multi-shot on the Panasonic cameras blends eight exposures into one big one and also subtracts noise but it's 99% useless for anything that moves or anyone working without a tripod. I think the cameras need much faster processors and much, much faster sensor read out to really make the computational photography work with the bigger files involved. Soon, I bet.