I think everyone who has upgraded from a camera whose resolution was just a little higher than that of their desktop monitor to one that is much higher has been disappointed to find that the resulting new images just didn't seem much sharper. Or any sharper. Or as sharp. It's interesting to me because I've experienced this situation and I continue to experience it. I wish I understood it better.
The issue seems not to be so much about the actual number of pixels on a camera's sensor as about how small the pixels are and how densely they are packed in. The idea is that denser sensors are prone to a quicker onset of a sharpness-robbing effect known as diffraction. As I understand it, diffraction, the bending of light around the edges of an opening in a lens (or a pixel array), is the root cause, but it's the overlapping of the resulting "Airy disks" on the sensor that actually lowers resolution.
Here's an in-depth and well-done article that I found on diffraction and its various effects: http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
When I look at the details of the "science" I can understand that diffraction makes images progressively less sharp past a certain point. There is a calculator (actually two) in the linked article that shows the effect of pixel density and sensor size on diffraction limits. It shows, theoretically, the f-stop for a given sensor size and pixel density beyond which diffraction rears its mathematical ugly head and starts causing problems vis-a-vis sharpness.
I used the calculator for several different camera sensor sizes and densities. What I found was that the point at which an APS-C sized system becomes "diffraction limited" (where sharpness starts to gradually decline; it's not on or off in a binary sense) depends on the density of the pixel packing. A 24 megapixel sensor (like the one in my D7100) hits the wall at f5.9. If I use a D7000 with 16 megapixels instead, the diffraction limit sets in at f7.3, and if I use a 12 megapixel camera the diffraction limit steps into the equation at f8.4.
If I use my micro four thirds cameras at 16 megapixels we become diffraction limited at f5.9 (the same as the APS-C at 24 megapixels....) and if we were able to wedge 24 megapixels into the next generation of m4:3 sensor we'd see diffraction rear its ever-softening head at f4.8. The best case in the current market with respect to delayed onset of diffraction would be the Sony a7s at 12 megapixels. The calculation shows that lenses on that camera don't become limited until hitting f12.7.
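For the curious, here's roughly the arithmetic that seems to sit behind those numbers, sketched in Python. I'm assuming the common rule of thumb that a system gets called diffraction limited once the Airy disk spans about two pixels; the sensor widths are approximate and the Cambridge calculator's exact criterion may differ, so the results land near, but not exactly on, the figures above.

```python
# A sketch of the "diffraction limited" arithmetic. Assumption: the system
# is called diffraction limited once the Airy disk diameter (~2.44 * lambda
# * f-number) spans about two pixel widths. Sensor dimensions approximate.

WAVELENGTH_UM = 0.55  # green light, mid-visible spectrum, in microns

def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Width of one photosite in microns."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

def diffraction_limited_fstop(pitch_um, pixels_per_disk=2.0):
    """f-number at which the Airy disk spans `pixels_per_disk` pixels."""
    return pixels_per_disk * pitch_um / (2.44 * WAVELENGTH_UM)

cameras = {
    "24MP APS-C (D7100)": (23.5, 6000),
    "16MP APS-C (D7000)": (23.6, 4928),
    "16MP m4:3":          (17.3, 4608),
    "12MP FF (a7s)":      (35.8, 4240),
}

for name, (width_mm, px) in cameras.items():
    pitch = pixel_pitch_um(width_mm, px)
    limit = diffraction_limited_fstop(pitch)
    print(f"{name}: {pitch:.1f} micron pixels, limited near f/{limit:.1f}")
```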
The mind reels, but essentially there's a fixed pattern that tells us you can have some stuff but not other stuff. If you are shooting with micro four thirds cameras of 16 megapixels it really behooves you to buy fast lenses that are well corrected wide open and at wider apertures. By the time you hit f5.6 you've almost got a foot in the optical quicksand. Any gains from stopping down to reduce lens aberrations are probably cancelled out by the advancing onset of diffraction.
So, the mind boggles even more. If I am shooting outside and want maximum depth of field, the numbers tell me that I might be better off shooting with a camera with a less densely packed sensor. If I needed f11 to get sharp focus on a big bridge, for example, I might be better off shooting on a 12 megapixel camera than a 24 megapixel camera. While the depth of field remains the same if the sensors have the same overall geometry, the more densely packed sensor will succumb to unsharpness at lower f-stops. Now, theoretically, if I resized the 24 megapixel image to the same size as the 12 megapixel file I'd get the same level of sharpness. At least that's what I gather. But there are so many other variables.
The optical detail transferred by our lenses is limited by the lens's ability to deliver sharply defined points. The lens's output quality has to do with something called Airy disks, which limit its ability to deliver more resolution beyond a certain point as well. The Airy disk is a 2D mathematical representation of a point of light as delivered by an optical system to film or to a sensor. As the pixels get smaller, more of them are covered by the same single Airy disk delivered by the optical system. Additionally, when Airy disks overlap they lose a certain amount of their resolving ability. Also, there are different sub-calculations for the different wavelengths of the color spectrum.
If the information represented by each Airy disk is spread over more and more smaller and smaller pixels, there can be a reduction of sensor artifacts, but it will be offset by the resolution limits of the actual lens. One of the reasons some lenses are brutally expensive is that the designers have opted to make their lenses as sharp as possible (or diffraction limited) wide open, so that one doesn't need to stop down to get better lens performance. The old way of designing lenses (especially fast ones) was to do the best design you could and aim for highest sharpness two stops down from maximum aperture. You see that in most of the "nifty-fifty" inexpensive normal focal length lenses: lots of aberrations along with unsharp corners and edges when used wide open, but then shaping up nicely by f5.6. Now, with high density sensors, you'll start to find that f5.6 also might become your new minimum f-stop, which, for all intents and purposes, means that your mediocre (wide open) lens has only one usable f-stop: the one right before diffraction sets in.
When you overlay the effect of Airy disks on resolution for a given sensor size with the mathematically earlier onset of diffraction on denser sensors, you can see why an image from a lower density sensor might look better on screen at normal magnifications than the same scene shot with the same lens on a much higher resolution system. The difference is in acuity, or perceived sharpness, because past the diffraction limited point it's the edge effect that gets eroded. The contrast between tones is reduced, which reduces our perception of the sharpness of the image.
What a weird conundrum, but there it is. I started thinking about this when I started shooting a D7100 next to a D7000 and found the D7000 images (16 megapixel sensor) much sharper in appearance. At the pixel level the D7100 was sharper, but on the screen the D7000 images were more appealing. And if the target is the screen, then all of the theoretical information is just more noise.
There are really so many more things at work here than I understand when I compare images from different cameras. There are generational issues having to do with noise reduction and dynamic range that shift the results and our ideas of what constitutes "a good camera." But Sony has done something that seemed at the time driven by the needs of video but is at the same time revelatory of what we can see when we strip away some of the muddying factors that have made us want higher megapixel cameras (= more DR and less overall noise). They recently introduced a full frame camera at 12 megapixels that combines state-of-the-art noise handling with beyond-state-of-the-art dynamic range on that sensor. Now the seat-of-the-pants evaluation and the awarding of "best imaging" prizes to the highest megapixel cameras is called into question. It may be that there will be a trend back toward rational pixel density, driven by the very need for quality that drove us in the other direction. They've changed the underlying quality of the sensors, and that may allow us to go back to being able to stop down for sharpness and to skirt some of the constraints of the laws of physics as they apply to optical systems. And, in the end, benefit with both great looking files and far more flexibility in shooting and lens choice.
But, as I've said, I don't understand all the nuts and bolts of this and this article is an invitation for my smart readers to step in and flesh out the discussion with more facts and less conjecture. Have at it if you want to....
Don't be so hard on yourself about the diffraction limiting apertures. The calculators give you a theoretical aperture where the diffraction begins to set in. I find that it really doesn't make much difference until you are at least a stop beyond that. It is sort of like a rising river, where the flooding is only a few inches deep and inconsequential at the edges, but much more impressive as you wade further out.
You don't have to print this, but if you want the actual physics (nuts and bolts facts), look to Thom Hogan. I bought one of his books on the Nikon D7000 and was amazed at his knowledge and his ability to explain what a digital picture is, pixel size, resolution, etc. As Dolly Parton says, "If you can't explain it so I understand it, then you don't know it." Or something like that.
Good luck
Not that it will change the meat of your discussion, but the Airy disk *is* the result of diffraction. It is the (diffracted) image of a perfect point of light. A real scene can be thought of as the sum of all the Airy disks coming from all the points of the scene (mathematically, it's a complex sum that takes into account the phase).
If you look at lens tests at various apertures, you can see the curve where sharpness typically goes up from wide open until diffraction kicks in and then it goes down again. With a higher res sensor, the point at which sharpness goes down is earlier (lower f-number). Photozone sometimes shows lenses tested on multiple bodies.
But just because it starts dropping off "earlier" (on a higher res sensor) doesn't mean it's worse stopped down. You're never going to be worse off with a high res sensor. As you stop down, you see diminishing returns, approaching a point where you're capturing the same detail, and as you stop down and detail drops off, your high res image is going to look worse at 100%. But it should never look worse printed or viewed at a normalized size on screen.
Sharpness drops off earlier, but it drops off from a higher measurement.
Someone posted today on a forum about the "magic" of a Canon 5D versus a 16MP APS-C (Sony E mount) and whether moving from the APS-C to an A7 would give him the fun of the small camera and the magic of FF. The "magic" is a big unknown - there's no way to know what he's talking about, but it could simply be that those big pixels on a 12MP sensor and a 50mm prime record an image that looks amazing at 100%, versus a higher res (and smaller sensor) image or even a 24MP FF image, which won't look so great at 100% but could look better printed.
Hi Kirk
The sharpness difference between the Nikons is possibly the noise reduction that is always on with the D7100 in JPG mode.
Shooting Raw and using Lightroom, the 24MP files will easily be sharp enough for you.
Probably too sharp for people when using studio flash; all that detail can be unflattering.
Great sensor though, makes A1 prints with ease
Regards
Tony
First things first...the "diffraction limit" of a digital imaging system is not a brick wall. The point calculated by the CiC website widget gives the point where, with an otherwise perfect lens, diffraction effects will begin to appear in the recorded image when viewed at a suitable magnification. That's a mouthful, and should give one pause before freaking out about the whole subject.
Remember, what actually is happening is that the sensor is recording the Airy disks themselves, if you will. As you stop down, the DOF of the image will continue to increase, but the maximal recorded sharpness will not increase as fast. So there are still benefits to stopping down, and one should not think that there's no point in using the full aperture range of your camera if you'd like. There are many many discussions on the net, including the food fights that transpire on DPR, but there are nuggets of information and experience with the phenomenon amongst them. We have to be careful of the semantics obscuring the physics. Sort of a language version of "diffraction limit", you might say.
Kirk, I think the Cambridge calculator is useful as far as it goes. However, one needs to distinguish between diffraction effects on resolution in the absolute sense of being able to resolve two points as separate things; and the effect of diffraction blur on micro contrast. The latter can be detected, in principle, even at large apertures for which the lens is well corrected, and regardless of pixel size. In other words, diffraction blur is always present and always detectable if not obscured by other sources of blur. That is why very subtle diffraction effects can often be detected at larger apertures than the Cambridge calculator might suggest. On a more mathematical note, the use of the Airy disk as a measure of diffraction blur is purely arbitrary. The disk is calculated as the diameter of the central part of the Airy diffraction pattern out to the first minimum. In fact the intensity of the central disk falls off markedly well before the first minimum. I have never seen this discussed on any photography web site. My own tests suggest that a more meaningful estimate of diffraction blur is on the order of 70% of the diameter of the Airy disk as usually calculated. If anyone is interested in seeing the evidence, it can be found at http://philservice.typepad.com/f_optimum/2014/05/optimal-aperture-in-photography-2-testing-the-theory-blur-equivalence.html and http://philservice.typepad.com/f_optimum/2014/05/optimal-aperture-in-photography-3-testing-the-theory-the-diffraction-blur-coefficient.html
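Phil's point about the intensity falloff is easy to check numerically. A quick sketch with scipy; it doesn't reproduce his 70% figure, which comes from his own blur-equivalence tests, but it does show how concentrated the central disk is well inside its conventional edge:

```python
# Numerically locate where the Airy pattern's intensity has already fallen
# to half, relative to the first minimum (the usual "Airy disk" edge).
from scipy.special import j1
from scipy.optimize import brentq

def airy_intensity(x):
    """Normalized Airy pattern intensity at scaled radius x (x > 0)."""
    return (2.0 * j1(x) / x) ** 2

FIRST_ZERO = 3.8317  # first root of J1: conventional edge of the Airy disk

half = brentq(lambda x: airy_intensity(x) - 0.5, 1e-6, FIRST_ZERO)
print(f"Intensity reaches 50% at {half / FIRST_ZERO:.0%} of the disk radius")
# Prints roughly 42%: most of the light sits well inside the nominal disk.
```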
That's an interesting perspective that I hadn't really considered before. I tend to use aperture as a means of controlling DoF without giving much, if any, thought to diffraction. Which means that either I haven't seen diffraction, or it's gotten lost in general bad technique.......
So, I would like to posit that, like many of the subjects discussed on the internet, this may be one of those theoretical issues that doesn't affect the real world unless you go looking for it. Would people who are not technically aware or inclined even notice, or are they more interested in the subject matter?
Good post though Kirk and really enjoy reading your blog
Regards
Andy
Andy, I see it pretty quickly when I am doing product shots. Most typically when doing a server or other product against white and stopping down to keep it all in focus. There is a point at which when I stop down too much everything near to far becomes less sharp. How much less sharp? Very apparent when there are sharp lines or type within the photo...
Good responses from everyone today. Thanks.
Nikon makes a 36Mp FF camera, Canon is rumored to have a 50Mp FF camera coming in 2015 and Lloyd Chambers asks "Are Today’s Lenses Good Enough for a 72 Megapixel DSLR?" So there may still be life left in MaxiMegapixel cameras.
I've had many "diffraction limited" photos printed in ads/catalogs, and so have many of my friends. Sometimes, even with a Tilt & Shift lens, you need to break the "diffraction limit" to get the needed Depth of Field.
Seeing diffraction at 100% doesn't mean you'll see diffraction on a magazine cover or on a web site.
Discussions like this make my head hurt. It's in the class with the recent threads I've encountered on "T" stops, which allege that f/stops on our lenses are wrong because they don't take into account the transmissivity of the glass used in the lenses, which varies from lens to lens. One YouTube video asserts that your lens may be as much as half a stop slower than the marked maximum aperture because the glass is absorbing that much of the light, keeping it from getting to the sensor/film.
Aside from displaying a misunderstanding of what the f/# represents, it is not a relevant consideration for the day-to-day practice of photography, and I see this discussion in the same light. I mean, you can spend your photography time doing complex math calculations if you like, but the real meaning of your photographs is in the content. If your audience is commenting that the eyelashes aren't critically sharp when the image is viewed at 100%, you are missing the boat somewhere IMO, and the problem isn't in your mathematical calculation of the optics.
As an aside, I noted your comment about getting a camera that doesn't match the resolution of your monitor. Any time you interpolate an image from one pixel dimension to another you will alter the image sharpness to a degree, because you are actually altering the tones and hues of adjacent pixels to approximate the original at an enlarged or reduced scale. This is necessary because each pixel can only represent one combination of tone/hue, and if an edge falls in the middle the pixel will average the two contrasting tones. That is why Photoshop has different algorithms for enlarging or reducing, in an attempt to minimize that effect. You can see the effect for yourself with the quick sketch below.
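A throwaway Python sketch with Pillow (version 9.1 or later for the Resampling names; the filename is a placeholder): resize the same image with several algorithms and compare the edges at 100%.

```python
# Resize the same image with several resampling algorithms and compare the
# results at 100%. "photo.jpg" is a placeholder for any image you have.
from PIL import Image

img = Image.open("photo.jpg")
half = (img.width // 2, img.height // 2)

methods = {
    "nearest":  Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,
}
for name, method in methods.items():
    img.resize(half, resample=method).save(f"half_{name}.jpg")
```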
And then there are the anti-aliasing filters in most digital cameras, which blur the image to avoid things like moire and thus require sharpening when interpreting the image from RAW to JPG, TIFF or whatever. As you note, it is very complex, with a lot of variables that are of more interest to engineers who are designing lenses and cameras than to practicing photographers. At least that's how I see it. I'd rather focus (no pun intended) on my subject than on complex optical calculations.
Kirk,
Sorry for another post, but this topic is obviously important to me.
I move that we banish the term "diffraction limited" from all future discussion. The point being that the effects of diffraction on microcontrast and edge detail are generally apparent long before a system becomes "diffraction limited" by the criteria on the Cambridge website. Diffraction blur is always present and in principle can affect image quality at any aperture regardless of pixel size, provided that it is not obscured by other sources of blur. I think that the price/weight/size/performance tradeoffs in lens design and manufacture have been skewed by the desire to make lenses with very fast maximum apertures (e.g., f/1.4). My gut feeling is that better image quality could be obtained by designing lenses with more modest maximum apertures (e.g., f/2.8), but for which the maximum aperture is also the best corrected. That is a strategy for minimizing the effect of diffraction blur on image quality. In this regard, I think it is telling that most very expensive fast lenses (Otus 55 f/1.4, for example - reviewed at lensrentals.com) often have their best MTF50 resolution at f/4 or f/5.6: apertures that produce appreciable diffraction blur.
Yes, sensors with smaller photosites will become "diffraction limited" at larger apertures than sensors with larger photosites by the criterion of the Cambridge calculator. However, a sensor with 10 micron photosites would have no chance of resolving 100 lp/mm no matter how good the lens or how small the diffraction blur. On the other hand, a sensor with 5 micron photosites could, in principle, resolve at that level provided that the lens was up to that sort of resolution. The real point of this discussion, I think, is that if we really want to exploit the greater theoretical resolution of sensors with small photosites, we need to use high quality lenses at the largest aperture that produces excellent results. Of course, depth of field might not be so great. But trade-offs are inevitable.
Phil
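To put quick numbers on Phil's photosite example: the Nyquist limit of a sensor is one line pair per two photosites, and a tiny sketch bears out his figures (real Bayer sensors resolve somewhat less than this ceiling).

```python
# The Nyquist ceiling of a sensor: one line pair needs at least two photosites.
def nyquist_lp_per_mm(pitch_um):
    return 1000.0 / (2.0 * pitch_um)

for pitch in (10.0, 5.0):
    print(f"{pitch:.0f} micron photosites: ceiling ~{nyquist_lp_per_mm(pitch):.0f} lp/mm")
# 10 microns -> 50 lp/mm (so 100 lp/mm is indeed out of reach);
# 5 microns -> 100 lp/mm, if the lens can deliver it.
```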
Jim, I disagree with you on one point. The f-stop is a theoretical geometric description of the lens opening as a fraction of the focal length. T-stops are accurate descriptions of the real light transmission through the lens. We use T-stops all day long in video and motion picture film because we use incident light meters rather than the meters in cameras that take the transmission difference into consideration. Many times I've metered an interior studio scene only to find that the actual exposure was quite different. Sometimes more than a stop. Knowing in advance that the lenses weren't particularly efficient in light transmission meant that I was looking for the inconsistency before it became a client problem. It's good to know basic theory because you never know when it will make a difference. I also like to know about diffraction limitation because while the sharpness loss may be masked by bad technique, etc. there are times when you need to pull out all the stops and make stuff as sharp as you possibly can. Knowledge in the service of art is just another powerful tool for image making. Not a navel contemplation exercise in the least.
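If you want to see the simple math behind that relationship, here's a quick sketch; the 70% transmission figure is made up for illustration.

```python
# f-stops describe geometry; T-stops fold in actual light transmission.
import math

def t_stop(f_stop, transmittance):
    """T-stop for a lens passing `transmittance` (0..1) of the light."""
    return f_stop / math.sqrt(transmittance)

def stops_slower(transmittance):
    """How far the real exposure falls behind the marked f-stop."""
    return -math.log2(transmittance)

print(f"T{t_stop(2.8, 0.7):.1f}")            # an f/2.8 lens passing 70% -> ~T3.3
print(f"{stops_slower(0.7):.2f} stops lost")  # ~0.51, the "half a stop" above
```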
Phil, always happy to continue the discussion and learn new stuff. I never know exactly how I will use the knowledge but I know I will. Thanks.
Diffraction is a real issue if printing large. However, a good deconvolution sharpener will give you maybe 3 stops more when viewed at 200%. A small write-up at http://davidsutton.co.nz/2013/03/16/how-sharp-do-you-want-your-photographs-diffraction-revisited-part-two/
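For the curious, the basic idea of deconvolution sharpening can be sketched in a few lines with scikit-image's Richardson-Lucy implementation. The Gaussian PSF below is a crude stand-in for a real diffraction pattern, the filename is a placeholder, and the iteration count is a guess to be tuned:

```python
# Richardson-Lucy deconvolution: estimate the blur kernel (PSF) and invert
# it iteratively. A Gaussian PSF is a rough stand-in for diffraction blur.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    """Normalized 2D Gaussian kernel used as the assumed blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

image = img_as_float(io.imread("f22_shot.png", as_gray=True))
restored = richardson_lucy(image, gaussian_psf(), num_iter=30)
io.imsave("f22_deconvolved.png", np.clip(restored, 0.0, 1.0))
```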
Does anyone know the original genesis of the phrase "diffraction rears its ugly head?"
ReplyDeleteNow Kirk, I would never call you anything less than an original, but I chuckle now every time I see that phrase because it appears in nearly every written discussion of diffraction on the internet. I do applaud you, however, for at least supplementing the phrase with that stick-poke of an adjective in your entry.
Next time you are waiting for the dentist to see you, Google "diffraction rears its ugly head" and see how many pages you can count that use the phrase before the hygienist calls you in. It's comical!
Such a prevalent phrase and yet I'm not sure how many people could actually tell the difference between diffraction's ugly head and a head that is ugly at any aperture.
I call on all creative writers to pledge that from now on, any talk of the f stop at which diffraction takes away from critical sharpness will not include the phrase "rears its ugly head."
Here's some such-as's:
"At f/16 diffraction begins to swing its rubber mallet at your circles of confusion."
"At f/11 diffraction detracts from the act of differentiating digital facts."
"At f/22 diffraction's side effects can include panic, increased heart rate, depression, nausea and buyer's remorse in chronic overmagnifiers."
"At f/3.5, diffraction is still several stops away from irrevocably turning your art into unbearable smudge."
Isn't that refreshing?
Ezra, How about, "When diffraction uncomfortably manifests itself like a pungent fart in a crowded elevator"....?
ReplyDelete"When diffraction creeps around the edge of the aperture blade hellbent on blurring your carefully constructed reality."
"When diffraction shows up like a slowly intensifying pain. The slowly forming appendicitis of imaging..."
Bravissimo! You have once again made the internet a nice place to visit!
PFI!
I can't stop laughing.
My eyes glaze over with this type of discussion. I read all the comments here and didn't learn a damn thing, it's just beyond me. Does this mean I need to go back to shooting with my 6mp Nikon D50? Come to think of it, the files looked awfully nice at 100%. I always loved that camera.
Regarding the 16MP D7000 appearing sharper than the 24MP D7100, I saw the same effect and result comparing the 16MP Pentax K-5 with their later 24MP K-3, each of which uses the same sensor as the corresponding Nikon model.
In spite of my engineering background (EE) and all the time I spent in my younger years arguing these issues, at this point in time I've come to realize it's irrelevant.
I've come to realize I'd rather spend my precious time out observing the world, looking for those singular moments (to me) to photograph.
Our current display technologies are quite a bit lower in resolution than the sensors in our cameras. So, isn't what we see on the screen still an interpolation of the sensor data? We don't actually see all of the pixels we record now. Correct?
The matter of diffraction limitations will become more of a concern as our displays get higher resolutions. Our cameras currently record at much higher than HD resolution. There are already 4K and 5K displays available. In a couple of years they will be much more affordable and common. At that point, we might find that our older files don't hold up as well as they used to and that our discussions will have shifted in another direction.
A few other things to think about.
As a landscape photographer I'm often concerned with sharpness throughout the frame. So a lower res camera can actually increase apparent sharpness, due to the fact that there is less of a gradient from the sharpest pixels in the middle of the frame to the corners. The middle is limited by the resolution of the camera, and the corners, which can resolve less, aren't as far away from the maximum resolution of the sensor.
Also, your note on monitor resolution and apparent sharpness is actually now being turned on its head. The new iMac with a 5k resolution requires a 15mp image just to fill the screen at 1:1! So an image from a Sony a7s would actually have to be scaled up to fill the screen.
Interesting times.
But the bottom line for me, in the urban landscapes I do, which require a hefty amount of DOF, is that the situation is like the classic economics graph: I am looking for the sweet spot where the line for increasing DOF intersects the line for decreasing resolution as I stop down past the lens's sweet spot. That point I have determined by trial and error. But the idea that I might be better off using a 24MP FX camera instead of my 36MP D800, given my need for DOF, is intriguing. Thanks, Kirk.
I find it weird that people suggest using a 12MP camera over a 36MP camera due to diffraction. As other people wrote here, it's not a process acting immediately, blurring the whole picture. With a higher MP camera you have more options. You can use the resolution to print larger at lower f-stops, where diffraction is not present or is weak (not possible with a low-resolution model). Second, you could resize your image to 12MP, for example, which will give you the same diffraction limit and will furthermore be sharper (downsampling uses high spatial frequency information which is not there in low-MP sensors). Third, you can crop when using diffraction-free f-stops. You furthermore compare across formats. What matters most is the pixel size, which is similar for a 24MP DX sensor and a 16MP m4/3 sensor, or for a D810 compared to a D7000, and which determines the pixel-level diffraction used in many calculators. The Cambridge in Colour calculator is nice, as it takes print size and viewing distance into account too, since the effects are not immediately visible. Personally, I prefer high MP cameras, as they provide more options for me, as long as the lenses can cope with the resolution (easier for FF, as can be seen in the identical pixel pitch of the D810 and D7000).
Fascinating. A little more knowledge is never a bad thing.
Pretty good comments, compared to the usual degeneration into tech-talk, and arguments about petty details.
In my understanding, diffraction can have an impact at the pixel level, but it is rarely important for the overall composition. As you say, product shots may require attention to pixel-level details, and landscape photographers are constantly fighting the DOF vs diffraction problem, but otherwise composition is probably a more important consideration than pixel-level details.
RFF has a good discussion on the same topic today.
HF is right. Downsample the higher pixel-count image with Photoshop's Bicubic Sharper to the size of the lower pixel-count image and it will be at least as sharp, with less noise. More importantly, you have to do your final output sharpening to the files before comparing sharpness. Higher pixel-count files will sharpen up better for the same size output.
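That comparison is easy to approximate outside Photoshop, too. A rough Pillow sketch, with Lanczos standing in for Bicubic Sharper and Pillow's unsharp mask as generic output sharpening (filenames and settings are placeholders):

```python
# Downsample the high-MP file to the low-MP file's dimensions, then give
# both the same output sharpening before judging them side by side.
from PIL import Image, ImageFilter

sharpen = ImageFilter.UnsharpMask(radius=1.5, percent=80, threshold=2)

big = Image.open("d7100_24mp.jpg")     # higher pixel-count file
small = Image.open("d7000_16mp.jpg")   # lower pixel-count file

big_down = big.resize(small.size, resample=Image.Resampling.LANCZOS)

big_down.filter(sharpen).save("compare_24mp_downsampled.jpg")
small.filter(sharpen).save("compare_16mp.jpg")
```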
This very learned and technical post makes me glad that I have just accidentally acquired a Minox camera - 8x11mm image on 9.5mm film, with a fixed aperture. Enlarging the results from it will be interesting, not to mention cutting the film myself from rolls of 135. Uh oh...
I shoot a lot of macro and, as you know, shallow DOF is the bane of my life. For the subjects I shoot, focus stacking is really not a practical option - too much movement. So I regularly dip into the murky pool that is diffraction. As others have noted, it is not like you throw an on/off switch; it is a progressive and gradual effect. So what I have found is that if I open the image up and see a bit of a problem, I make a layer mask using the high pass filter, paint it into the areas where it is needed, and then play around with the opacity. It just boosts the micro contrast on the edges and makes things appear just a bit better. It won't fix mush but it does do quite a bit. Obviously if you're seeing halos then you've gone too far. Also I've found that the values used for the high pass filter depend upon the resolution of the sensor. The 12Mp sensor in my 5D needs very little while the 18Mp Canon sensors need more help.
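Paul's high-pass trick translates outside Photoshop too. A bare-bones sketch with numpy and scipy, assuming a grayscale float image in [0, 1]; the radius and amount defaults are stand-ins for his per-sensor tuning, and the selective masking he describes is omitted for brevity:

```python
# High-pass sharpening: subtract a blurred copy to isolate edges, then add
# a fraction of that back. `amount` plays the role of layer opacity.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_sharpen(img, radius=2.0, amount=0.5):
    """img: 2D float array in [0, 1]; larger radius/amount = stronger effect."""
    low = gaussian_filter(img, sigma=radius)
    high = img - low                      # edges and fine detail only
    return np.clip(img + amount * high, 0.0, 1.0)

# Usage: sharpened = high_pass_sharpen(image_array, radius=1.5, amount=0.3)
```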
Thanks for the good info Paul! I appreciate it.