When files come spilling out of the Leica Q2 they have already been profiled and corrected for things like distortion and vignetting. You don't get to see "under the hood" unless you want to. Is it wrong? Is it better?
There is always something to be contentious about in the world of photography. Who has the fastest AF? Which cameras have the highest dynamic range? Why only dentists can afford Leicas (which seems like a slight aimed at hedge fund managers, corporate CEOs and others....). Whose zoom lens is sharpest? And then there's the whole contentious issue of "who has the best color science?" which sounds a bit like "what's your favorite color?" "Oh! It's blue? Well that's just flat out wrong...."
But one of the issues that tickles me to no end is the combat between people who unequivocally state that to achieve the highest purity of purpose a lens must depend wholly on physical, optical design and construction for best results (anything else is "cheating," "a shortcut," "budget-driven," etc.) and those who embrace the idea that a lens can be designed to correct for some things optically and for other things via mathematical boosters: firmware that tells the camera exactly how to correct for specific lens parameters such as vignetting and distortion.
The biggest argument against electronically correcting something like geometric distortion in a lens has always been that the corners of the corrected frame suffer from lower resolution as a result of the need to interpolate up the corner resolution. Or the increased noise caused by boosting the corner exposure to compensate for optically uncorrected vignetting.
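Just to make the objection concrete, here's a minimal sketch in Python with NumPy (my own illustration, not anything a camera maker actually ships) of what a firmware-style vignetting correction boils down to: a radial gain map that brightens the corners, which necessarily amplifies whatever noise is already sitting there along with the signal.

```python
import numpy as np

def vignetting_gain(height, width, falloff_stops=1.5):
    # Radial gain map that brightens the corners to even out light falloff.
    # Assumes a smooth falloff where the extreme corner is `falloff_stops`
    # stops darker than the center; a stand-in for a measured lens profile.
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2, (width - 1) / 2
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)  # 0 at center, 1 in the corner
    corner_gain = 2.0 ** falloff_stops               # gain needed in the extreme corner
    return 1.0 + (corner_gain - 1.0) * r ** 2        # quadratic ramp toward the corners

# Simulated noisy, corner-darkened frame: the correction restores even brightness,
# but it multiplies the corner noise by the same factor as the corner signal.
flat = np.random.poisson(lam=100, size=(400, 600)).astype(np.float64)
shaded = flat / vignetting_gain(400, 600)
corrected = shaded * vignetting_gain(400, 600)
```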
Like a lot of photo lore, I think some of the anti-math arguments had their origin in the dark times of very low sensor resolution and very noisy CCD sensor performance. If one were photographing with a camera that had 6 megapixels and the corners needed augmentation, then on a big print it's quite possible that the results could be a visual issue, and the lower res in the corners much more obvious. On a camera where ISO 80 was optimal and ISO 200 was an unholy mess of noise, the results of a software driven solution for lens vignetting might also have been obvious: more speckle-y noise in the corners than in the center of the photographic frame. But oh how times have changed.
While I'm certain many camera makers are using firmware and math corrections to make entry level lenses cheaper, smaller and lighter, I'm equally certain that at the higher end of each maker's catalog of lenses the engineers are making careful design choices to wring the most performance out of a combination of software and hardware (actual lens design) implementations.
Many parts of purely optical design are based, by necessity, on compromises. If you want a fast lens (big max. aperture) you'll need a big, curved front element. Physics demands that less light reaches the edges than the center of the lens, while the curvature of the elements means that the actual plane of focus is different at the center of the frame than at the edge or corner. You can correct each issue with added corrective glass elements or more expensive and esoteric glass types, but this will change the visual character of the lens as each element adds its own compromise. Aspheric elements affect out of focus rendering (bokeh) and can make a lens look overly sharp. Additional correction elements add more air/glass interfaces, require finer manufacturing tolerances and can also reduce overall contrast.
As in most endeavors the best practice in lens design is to use as few elements as needed to preserve the contrast, resolution and acuity of the lens. Each added element can have both a positive and a negative effect on overall rendering. Each added element adds to the complexity of actually making the lens correctly. Some correction elements require complex mechanical movements separate from other lens elements and as you can imagine the movements have to be very, very precise which adds to the obligation to consider machine tolerances as part of lens design.
So, a simple double Gauss design 50mm lens might consist of six glass elements in five groups. It will have good sharpness and contrast from nearly wide open, but it will manifest this performance in a broad swath at the center of the frame. The corners, because of the curvature of the elements, will have lower performance until the lens is stopped down. Your options for optical correction might include a more complex optical design (like the Sigma Art series 50mm f1.4), which adds much size, weight and cost. Also, moving heavy collections of dense glass reduces autofocus performance. With added correction elements the lens will now perform "better" even when used wide open.
We like to chant that it's stupid to buy a "fast" lens and then use it at medium apertures, and to an extent I agree. But most of us end up shooting our lenses at f4.0 or f5.6 and above; rarely do we use them in the low light that they are advertised to excel in. Which raises the question of why we are so focused on wringing out great performance at the margins rather than accepting a set of compromises that benefits most photographers more.
Now, if we take a simpler lens design and maximize its overall performance for typical shooting we have a different path. We can have a lens with high and usable center sharpness at its widest aperture and really excellent performance across the frame at the middle apertures. All at a cost and at a size that works better for the vast majority of customers. If we want better performance on two critical issues, geometric distortion and vignetting, then optical designers can exhaustively profile the issues involved and create models of correction that go a long way toward correcting these "faults" (if indeed we perceive them as faults instead of just the personality of the lens). Designers have become quite proficient at correcting both of these issues with corrections applied via lens firmware. The lens will likely never approach the nosebleed performance of a highly optically corrected lens in corner and edge sharpness, because of uncorrected field curvature, but the designers can get it darn close.
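For readers who like to see the mechanics, here's a rough sketch, again in Python with NumPy, of the kind of radial polynomial model a distortion profile typically encodes. The k1 and k2 coefficients below are invented for illustration; a real profile is measured by the manufacturer and varies with focal length and focus distance.

```python
import numpy as np

def distort_coords(xn, yn, k1=-0.08, k2=0.02):
    # Map ideal (corrected) normalized coordinates to where the lens actually
    # rendered them, using a simple two-term radial polynomial.
    # k1 and k2 are illustrative values, not a real lens profile.
    r2 = xn ** 2 + yn ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2
    return xn * scale, yn * scale

def correct_distortion(img, k1=-0.08, k2=0.02):
    # Resample the image through the model. Nearest-neighbor lookup is used
    # here for brevity; real pipelines interpolate, which is exactly where
    # the small loss of corner resolution comes from.
    h, w = img.shape[:2]
    grid = np.mgrid[0:h, 0:w].astype(np.float64)
    ys, xs = grid[0], grid[1]
    xn = (xs - w / 2) / (w / 2)   # normalize to roughly -1..1
    yn = (ys - h / 2) / (h / 2)
    xd, yd = distort_coords(xn, yn, k1, k2)
    src_x = np.clip(np.round(xd * (w / 2) + w / 2), 0, w - 1).astype(int)
    src_y = np.clip(np.round(yd * (h / 2) + h / 2), 0, h - 1).astype(int)
    return img[src_y, src_x]
```

That's the whole trick, conceptually: the profile records where the lens actually put things and the math moves them back to where an ideal lens would have, with the interpolation step being the source of the small corner-resolution penalty mentioned earlier.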
By mapping the lens's personality and making interpolated corrections we can enjoy photography by using comfortably sized lenses with more than enough performance for most users. We can also meet the conundrum in the middle, designing really good optical and mechanical systems and then augmenting and perfecting them with a dose of firmware finesse. I would contend that this is the course of action the designers of better lenses use all the time.
I'm happy using older lenses on my new cameras. Even lenses with fewer elements and no software/firmware corrections. But I am equally happy, when trying to solve specific problems, to depend on the lens designer's solutions of mixing math and glass. No affordable lens is designed and built to deliver maximum performance at every f-stop and at every focusing distance. Some lenses give their best performance at infinity while a bunch are corrected for 10x to 50x their minimum focusing distance. Many world famous macro lenses are corrected for 1x life-size or 1/2x life-size but are middle of the road performers at longer distances and not that great at infinity focus.
The application of software/firmware corrections makes more lenses universally usable. Not perfect, and not always the compromise that everyone wants, but better than using optical formulas alone.
Do we need ultimate performance? Most of us don't. I'd love to have a Leica 50mm APO Summicron-SL for my SL2 camera but the lens is nearly $6,000 and it's heavy and big. It is quite good when used wide open. Probably the best thing around if you shoot at f2.0 and limit your focusing distance from about one meter to about five meters. But is it so much better than a lowly Panasonic or Nikon Z 50mm f1.8 when both are used at f5.6? Maybe you can see a difference......but maybe not.
Finally, the old argument of image quality loss in the corners when using math correction is a bit passé. There might still be a small hit, but sensors now have abundant resolution and the ability to handle so much less exposure with so much less noise that it's largely becoming a non-issue for lens designers using math to fine tune.
From my point of view the bottom line is that a mix of good optical design and equally good processing design in a lens yields benefits nearly everywhere and keeps the cost of really tremendously good optics lower. A lens like the Sigma 85mm f1.4 DG DN gives incredibly good results when corrections are applied. Without them it has vignetting and geometric distortion. Lots and lots of barrel distortion. But it all goes away when the lens profiles are applied. If we depended solely on optical design to get the same performance then the front element and other elements in the lens would have to be much bigger and heavier. There would be more mechanical complexity in the mix. The AF performance might suffer as well. And in the end the price would be multiple times higher --- far out of the comfort zone of most users.
I guess you can always argue in the other direction but I'm not sure you'll be on solid ground.
I'll praise the mix of technologies. It works very well for the vast majority of us.
I have no problem with in-camera profiles or post-processing profiles (e.g., Lightroom, etc.). But I do want the option to disable either or both. Sometimes I want the original, uncorrected image.
DavidB
Indeed, Sigma's "design for things software can't correct" philosophy is an absolute winner in my eyes. Love my 65+90.
Having the lens corrections done well in-camera makes doing it, probably less well, in post, superfluous.
So, this morning (Saturday), I was running a course about an hour away and the first part of the drive meant driving on packed (not loose) snow and ice. Put it this way, I kept all the software controlled aspects of my Toyota AWD active throughout, plus driving according to the conditions. As I always say, you can't change the laws of physics.
The same goes for cameras and lenses, and I prefer the aesthetic of smaller, beautifully made lenses which may not be perfect. Rather like the photographer.
Everything in a digital image is the result of mathematical computation.
Since we actually only exist in the matrix, all our lives are just a result of continuing mathematical computations.
>> "But most of us end up shooting our lenses at f4.0 or f5.6 and above..."
ReplyDeleteIt appears that a large part of these arguments hinge upon the above assumption. Is this really a true statement, esp. for younger generations that did not grow up with "compromise" design lenses? For that matter, there are plenty of examples of Large Format photographers who shot in broad daylight with extremely limited DoF.
Many will start calling the low DoF look a "fad," dumb, or whatever. From there on, it becomes a discussion of artistic merit, taste, and personal preference. But it's too often treated almost axiomatically.
I tend to agree, although the Leica 50/1.4 you mention, like the Panasonic 50/1.4 S Pro I use, are seriously big, heavy, expensive prime lenses AND require software correction for distortion. You have to wonder if that isn't symptomatic of a certain laziness!
Nonetheless, the image quality is excellent, far better than it really needs to be for most purposes. And of course the fact that one might be able to see a difference doesn't mean it matters. It's like MP3 codecs - yes you can 'hear the difference' if you listen very closely and compare with the uncompressed source, but it rarely matters. The differences are of no musical significance.
Cosmic, man.