Can I Use an APS-C Lens on a Full-Frame Camera?
The subject of sensor crop factors and equivalence has become rather controversial among photographers, sparking heated debates on photography sites and forums. So much has been posted on this topic that it almost feels redundant to write about it again. Sadly, with all the great and not-so-great information out there on equivalence, many photographers are only left more puzzled and confused. Thanks to the many different formats available today, including 1″/CX, Micro Four Thirds, APS-C, 35mm/Full Frame and Medium Format (in different sizes), photographers are comparing these systems by computing their equivalent focal lengths, apertures, depth of field, camera-to-subject distances, hyperfocal distances and other technical jargon to prove the inferiority or superiority of one system over another. In this article, I want to bring up some of these points and express my subjective opinion on the matter. Recognizing that this topic is one of those never-ending debates with strong arguments from all sides, I do realize that some of our readers may disagree with my statements and arguments. So if you do disagree with what I say, please provide your opinion in a civilized manner in the comments section below.
Before we get started, let's first go over some of the history of sensor formats to get a better understanding of past events and to be able to absorb the material that follows more easily.
1) The Birth of the APS-C Format
When I first started my journey as a photographer, the term "equivalent" was very foreign to me. The first lens I bought was a kit lens that came with my Nikon D80 – it was the Nikkor 18-135mm DX lens, a pretty good lens that served as a learning tool for a beginner like me. When I researched the camera and the lens, references to 35mm film did not bother me, since I had not shot film (and thus did not use a larger format than APS-C). At the time, Nikon had not yet released a full-frame camera and few could afford the high-end Canon full-frame DSLRs, so the term "equivalent" was generally targeted at 35mm film shooters. But why did the first DSLR cameras have sensors that were smaller than the classic 135 film frame? Why do we even have this question of equivalence that's on the minds of so many photographers?
Today, APS-C (or any other smaller-than-full-frame format for that matter) is marketed as the compact and inexpensive choice, and the market is filled with DSLRs and other compact / interchangeable lens cameras. With smaller sensors come potentially smaller, lighter camera bodies and lenses. But it wasn't always like that, and it certainly was not the reason why APS-C took off as a popular format. Due to technical issues with designing large sensors and their high cost of manufacturing, it was challenging for camera manufacturers to make full-frame digital cameras at the time. So smaller sensors were not just cheaper to make, but were also much easier to sell. More than that, the APS-C / DX format was not originally intended to be "small and compact", as it is seen today. In fact, the very first APS-C cameras by both Nikon and Canon were as large as the high-end DSLRs of today and certainly not cheap: Nikon's D1 with a 2.7 MP APS-C sensor was sold at a whopping $5,500, while Canon was selling a lower-end EOS D30 with a 3.1 MP APS-C sensor for $3K.
As a result of introducing this new format, manufacturers had to find a way to explain that the smaller format does affect a few things. For example, looking through a 50mm lens on an APS-C sensor camera did not provide the same field of view as using that same lens on a 35mm film or a full-frame digital camera. How do you explain that to the customer? So manufacturers started using such terms as "equivalent" and "comparable" in reference to 35mm, mostly targeting existing film shooters and letting them know what converting to digital really meant. Once full-frame cameras became more popular and manufacturers produced more cheap and small lenses for the APS-C format, we started seeing "advantages" of the smaller format when compared to full-frame. Marketers quickly moved in to tell the masses that a smaller format was a great choice for many, because it was (or, rather, had become) both cheaper and lighter.
To summarize, the APS-C format was simply born because it was more economical to make and easier to sell – it was never meant to be a format that competes with larger formats on weight or size advantages as it does today.
2) The Birth of APS-C / DX / EF-S Lenses
Although the first APS-C cameras were used with 35mm lenses that were designed for film cameras, manufacturers knew that APS-C / crop sensors did not use the full image circle. In addition, there was a problem with using film lenses on APS-C sensors – they were not wide enough! Due to the change in field of view, using truly wide-angle lenses made for 35mm film was rather expensive, choices were limited and the lenses were heavy. Why not make smaller lenses with a smaller image circle that can cover wider angles without the heft and size? That's how the first APS-C / DX / EF-S lenses were born. Nikon's first DX lens was the Nikkor 12-24mm f/4G, made to cover wide angles, and Canon's first lenses were the EF-S 18-55mm f/3.5-5.6 and EF-S 10-22mm f/4.5-5.6, which were also released to address similar needs, but for more budget-conscious consumers. Interestingly, despite the efforts by both manufacturers to make smaller and more affordable lenses, neither the DX nor the EF-S line really took off. To date, Nikon has only made 23 DX lenses in total, only two of which can be considered of "professional" grade, while Canon's EF-S lens line is limited to 21 lenses, 8 of which are variations of the same 18-55mm lens. Canon does not have any professional-grade EF-S "L" lenses in its line. So the idea of providing lightweight and smaller lens options of high grade was not something Nikon or Canon truly wanted to pursue, when they could crank out lenses for full-frame cameras.
3) The Need for Lens Equivalence and Crop Factor
Since the APS-C format was relatively new and the adoption rate of 35mm film cameras was very high in the industry, field of view equivalence, often expressed as "equivalent focal length", made sense. It was important to let people know that a 50mm lens gave a narrower field of view on APS-C, similar to a 75mm lens on a 35mm film / full-frame camera. Manufacturers also came up with a formula to compute the equivalent field of view in the form of a "crop factor" – the ratio of the 35mm film diagonal to the APS-C sensor diagonal. Nikon's APS-C sensors measuring 24x16mm have a diagonal of 29mm, while full-frame sensors measuring 36x24mm have a diagonal of 43mm, so the ratio between the two is approximately 1.5x. Canon's APS-C sensors are slightly smaller and have a crop factor of 1.6x. So computing the equivalent field of view got rather simple – take the focal length of a lens and multiply it by the crop factor. Hence, one could easily calculate that a 24mm lens on a Nikon DX / APS-C camera was similar to a 36mm lens on a full-frame camera in terms of field of view.
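To make the arithmetic concrete, here is a minimal sketch of the crop-factor math in Python (sensor dimensions are the nominal published sizes; exact figures vary slightly by model):

```python
import math

def diagonal(width_mm, height_mm):
    """Sensor diagonal from its width and height."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal(36, 24)   # ~43.3 mm
nikon_dx   = diagonal(24, 16)   # ~28.8 mm

crop_factor = full_frame / nikon_dx
print(f"Nikon DX crop factor: {crop_factor:.2f}")  # ~1.50

# Equivalent field of view: multiply the real focal length by the crop factor.
focal_length = 24
print(f"24mm on DX frames like {focal_length * crop_factor:.0f}mm on full frame")  # ~36mm
```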
However, over time, the crop factor created a lot of confusion among beginners. People started to say things like "this image was captured at 450mm focal length", when in fact they shot with a 300mm lens on an APS-C camera. They felt like they could say such things, thinking their setup was giving them longer "reach" (meaning, allowing them to get closer to the action), while all it did was give them a narrower field of view due to the sensor cropping the image frame. So let's establish the very first fact: the focal length of a lens never changes no matter what camera it is attached to.
4) Lens Focal Length vs Equivalent Focal Length
Whether you mount a full-frame lens on a full-frame, APS-C, Micro Four Thirds or 1″ CX camera, the physical properties of the same lens never change – its focal length and aperture stay constant. This makes sense, as the only variable that is changing is the sensor. So those who say that "a 50mm f/1.4 lens is a 50mm f/1.4 lens no matter what camera body it is attached to" are correct, but with one condition – it must be the same lens (more on this below). The only thing that can change the physical properties of a lens is another lens, such as a teleconverter. Remember, focal length is the distance from the optical center of the lens focused at infinity to the camera sensor / film, measured in millimeters. All that happens as a result of a smaller image format / sensor is cropping, as illustrated in the image below:
If I were to mount a 24mm full-frame lens on an APS-C camera to capture the above shot, I would only be cutting off the corners of the image – not getting any closer physically. My focal length does not change in any way. It is still a 24mm lens. In terms of equivalent focal length, the resulting crop would give me a narrower field of view that is equivalent to what a 36mm lens would give on a full-frame camera. However, the key phrase here is "field of view", as that's the only thing that differs. This is why I prefer the term "equivalent field of view" over "equivalent focal length", as there is no change in focal length.
If you were to try a quick experiment by taking a full-frame lens and mounting it on a full-frame camera, then mounting the same lens on different camera bodies with smaller sensors using adapters (without moving or changing any variables), you would get a similar result as the above image. Aside from differences in resolution (a topic discussed further down below), everything else would be the same, including perspective and depth of field (actually, DoF can differ between sensor sizes – see the references to DoF below). So the background and foreground objects would not appear any closer or further away, or look more or less in focus. What you would see is in-camera cropping taking place, nothing more.
The above is a rather simplified case, where we are taking a full-frame lens with a large image circle and mounting it on different cameras with smaller sensors using adapters. Without a doubt, the results will always be the same with the exception of field of view. However, that's not a practical case today, since smaller-sensor cameras now have smaller lenses proprietary to their camera systems and mounts. Few people use large lenses on formats smaller than APS-C, because mount sizes are different and they must rely on various "smart" or "dummy" adapters, which unnecessarily complicate everything and potentially introduce optical problems. Again, there is no point in making large lenses for all formats when the larger image circle goes unused. When manufacturers make lenses for smaller systems, they want to produce lenses as small and as lightweight as possible. So when new interchangeable lens camera systems were introduced by manufacturers like Sony, Fuji, Olympus, Panasonic and Samsung, they all came with their "native" compact and lightweight lenses, proprietary to their lens mounts.
5) ISO and Exposure / Brightness
In the film days, ISO stood for the sensitivity of film. If you shot with ISO 100 film during daylight and had to move to low-light conditions, you had to change out the film for a higher-sensitivity type, say ISO 400 or 800. So traditionally, ISO was defined as "the level of sensitivity of film to available light", as explained in my article on ISO for beginners. However, digital sensors act very differently than film, as there is no varying sensitivity to different light. In fact, digital sensors have only one sensitivity level. Changing ISO simply amplifies the image signal, so the sensor itself is not getting any more or less sensitive. This results in a shorter exposure time / more brightness, but with the penalty of added noise, similar to what you see with film.
To make it easier for film shooters to switch to digital, it was decided to use the same sensitivity scale for digital sensors as for film, so standards were written, such as ISO 12232:2006, which guide manufacturers on how exposure should be determined and ISO speed ratings should be set on all camera systems. After all, ISO 100 film was the same no matter what camera you put that film in, so it made sense to continue this convention with digital. These standards are not perfect though, as the way "brightness" is determined can depend on a number of factors, including noise. So there is potential for deviations in brightness between different camera systems (although usually not by more than a full stop).
However, once different sensor sizes came into play, things got a bit more complex. Since the overall brightness of a scene depends on the exposure triangle comprised of ISO, Aperture and Shutter Speed, there are only two variables that can change between systems to "match" brightness: ISO and Aperture (Shutter Speed cannot change, as it affects the length of exposure). As you will see below, the physical size of the aperture on "equivalent lenses" in terms of field of view varies greatly between formats, due to the drastic change in focal length. In addition, sensor performance can also be drastically different, especially when you compare first-generation CCD sensors to the latest-generation CMOS sensors. This means that while the overall brightness is similar between systems, image quality at different ISO values could differ greatly.
Today, if you were to take an image with a full-frame camera at, say, ISO 100, f/2.8 and 1/500 shutter speed, and took a shot with a smaller-sensor camera using identical settings, the overall exposure or "brightness" of the scene would look very similar in both cases. The Nikon D810 (full-frame) at ISO 100, f/2.8, 1/500 would yield a similar exposure to the Nikon 1 V3 (1″ CX) at ISO 100, f/2.8, 1/500. On one hand this makes sense, as it makes it easy to reference exposure settings. But on the other hand, the way brightness is achieved is different – and that brings a lot of confusion to an already confusing topic. Yes, exposure values might be the same, but the amount of transmitted light might not! The big variable that differs quite a bit across systems is the lens aperture, specifically its physical size. Although the term aperture can mean a number of different things (diaphragm, entrance pupil, f-ratio), in this particular case I am referring to the physical size, or the aperture diameter of a lens as seen from the front of the lens, also known as the "entrance pupil". The thing is, a full-frame lens will have a significantly larger aperture diameter than an equivalent lens from a smaller system. For example, if you compare the Nikkor 50mm f/1.4G with, say, the Olympus 25mm f/1.4 (a 50mm-equivalent field of view relative to full-frame), both will yield similar brightness at f/1.4. However, does that mean the much smaller Olympus lens is capable of transmitting the same amount of light? No, absolutely not. It just physically cannot, due to the visibly smaller aperture diameter. Let's take a look at the math here.
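For those who like to verify the "same settings, same brightness" point numerically, here is a quick sketch using the standard exposure-value formula EV = log2(N²/t) at ISO 100: identical settings produce an identical EV regardless of sensor size, which is why both cameras meter the scene the same way.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number**2 / shutter_seconds)

# Nikon D810 (full frame) and Nikon 1 V3 (1" CX), both at ISO 100, f/2.8, 1/500:
print(f"EV = {exposure_value(2.8, 1/500):.2f}")  # ~11.94 EV on either camera
```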
6) Aperture and Depth of Field
Since the f-number (in this case f/1.4) represents the ratio between the focal length of the lens and the physical diameter of the entrance pupil, it is easy to calculate the size of the aperture diameter on the Nikkor 50mm f/1.4G. We simply take the focal length (50mm) and divide it by its maximum aperture of f/1.4. The resulting number is roughly 35.7mm, which is the physical size of the aperture diameter, or the entrance pupil. Now if we look at the Olympus 25mm f/1.4 and apply the same math, the aperture diameter turns out to be just 17.8mm, exactly half as big! So despite the fact that the two lenses have the same f-number and cover similar fields of view, their aperture sizes are drastically different – one transmits four times more light than the other.
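Here is the same entrance-pupil arithmetic as a short sketch, including the light ratio (light gathered scales with pupil area, i.e. the diameter squared):

```python
# Entrance pupil diameter = focal length / f-number
nikkor_50  = 50 / 1.4   # ~35.7 mm
olympus_25 = 25 / 1.4   # ~17.9 mm

# Transmitted light scales with pupil area, i.e. diameter squared.
area_ratio = (nikkor_50 / olympus_25) ** 2
print(f"Pupil diameters: {nikkor_50:.1f}mm vs {olympus_25:.1f}mm")
print(f"Light gathered: {area_ratio:.0f}x more through the 50mm f/1.4")  # 4x
```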
Let's take a step back and understand why we are comparing a 50mm to a 25mm lens in the first place. What if we were to mount the Nikkor 50mm f/1.4G on a Micro Four Thirds camera with an adapter – would the light transmission of the lens be the same? Yes, of course! Again, sensor size has no impact on the light transmission capabilities of a lens. In this case, the 50mm f/1.4 lens remains a 50mm f/1.4 lens whether it is used on a full-frame camera or a Micro Four Thirds camera. However, what would the image look like? With a drastic "crop", thanks to the much smaller Micro Four Thirds sensor and its 2.0x crop factor, the field of view of the 50mm lens would make the subject appear twice as close, as if we were using a 100mm lens, as illustrated in the image below:
As you can see, the depth of field and the perspective we get from such a shot would be identical on both cameras, given that the distance to our subjects is the same. However, the resulting images look drastically different in terms of field of view – the Micro Four Thirds image appears "closer", although it is really not, as it is simply a crop of the full-frame image (a quick note: there is also a difference in aspect ratio of 3:2 vs 4:3, which is why the image on the right is taller).
Well, such tight framing as seen in the image on the right is typically not desirable to photographers, which is why we tend to compare two different systems with an equivalent field of view and camera-to-subject distance. In this case, we pick a 50mm full-frame lens versus a 25mm Micro Four Thirds lens for a proper comparison. But the moment you do that, two changes take place immediately: depth of field is increased due to the change in focal length, and background objects will appear less blurred due to not being as enlarged anymore. Do not associate the latter with bokeh though – objects will appear less enlarged because of the physically smaller aperture diameter. If you have a hard time understanding why, just do quick math with a 70-200mm f/2.8 lens. Did you ever wonder why at 200mm the background appears more enlarged compared to 70mm? No, depth of field is not to blame for this, not if you frame the subject the same way! If you stand 10 feet away from your subject and shoot at 100mm @ f/2.8, the aperture diameter equals 35.7mm (100mm / 2.8). Now if you double the distance from your subject by moving back to 20 feet and shoot at 200mm @ f/2.8, your aperture diameter / entrance pupil is now significantly bigger – it is 71.4mm (200mm / 2.8). As a result, the larger aperture diameter at 200mm will actually enlarge the background more, although depth of field remains exactly the same. That's why shooting with a 70-200mm f/2.8 lens yields aesthetically more pleasing images at 200mm than at 70mm! Some people refer to this as compression, others call it background enlargement – both mean the same thing here.
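A quick sketch of that computation, applying the same pupil-diameter formula across the zoom range at a constant f-stop:

```python
# At a constant f-stop, doubling the focal length doubles the entrance pupil.
for focal in (70, 100, 200):
    print(f"{focal}mm @ f/2.8 -> entrance pupil {focal / 2.8:.1f}mm")
# 70mm -> 25.0mm, 100mm -> 35.7mm, 200mm -> 71.4mm
```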
A quick note on compression and perspective: it seems like people confuse the two terms quite a bit. In the above example, we are changing the focal length of the lens from 70mm to 200mm, while keeping the framing the same and the f-stop the same (f/2.8). When we do this, we are actually moving away from the subject that we are focusing on, which triggers a change in perspective. Perspective defines how a foreground element appears in relation to other elements in the scene. Perspective changes not because of a change in focal length, but because of a change in camera-to-subject distance. If you do not move away from your subject and simply zoom in more, you are not changing the perspective at all! And what about compression? The term "compression" has been historically wrongly associated with focal length. There is no such thing as "telephoto compression", implying that shooting with a longer lens will somehow magically make your subject appear more isolated from the background. When one changes the focal length of a lens without moving, all they are doing is changing the field of view – the perspective will remain identical.
In this particular case, how close background objects appear relative to our subject has nothing to do with how blurry they appear. Here, blur is an attribute of the aperture diameter. If you are shooting a subject at 200mm f/2.8 and then stop the lens down to f/5.6, the background blur will appear less pronounced, because you have changed the physical size of the aperture diameter. Your depth of field calculator might say that your DoF starts at point X and ends at point Y, and yet the background located at infinity will still appear less blurry. Why? Again, because of the change in aperture diameter. So going back to our previous example where we are moving from 70mm f/2.8 to 200mm f/2.8, by keeping the framing identical and moving away from the subject, we are changing the perspective of the scene. However, that's not the reason why the background is blurred more! The objects in the background appear larger due to the change in perspective; how blurry they appear, though, is because I am shooting with a large aperture diameter. Now the quality of blur, specifically of highlights (a.k.a. "bokeh"), is a whole different subject, and one that hugely depends on the design of the lens.
Going back to our example, because of the change in aperture diameter and focal length, you will find things appearing more in focus or less blurry than you might like, including objects in the foreground and background. Therefore, it is the shorter focal length, coupled with the smaller aperture diameter, that makes things appear less aesthetically pleasing on smaller-format systems.
At this point, there are three ways one could effectively decrease depth of field and enlarge the out-of-focus areas in the background:
- Get physically closer to the subject
- Increase the focal length while maintaining the same f-stop
- Use a faster lens
Getting physically closer to the subject alters the perspective, resulting in "perspective distortion", and increasing the focal length translates to the same narrow field of view issue as illustrated in the earlier example, where you end up too close to the subject.
It is important to note that any comparison of camera systems at different camera-to-subject distances and focal lengths is meaningless. The moment you or your subject move and the focal lengths differ, it causes a change in perspective, depth of field and background rendering. This is why this article excludes any comparisons of different formats at varying distances.
Neither of the first two options is usually a workable solution, so the last option is to get a faster lens. Well, that's where things can get quite expensive, impractical or simply impossible. Fast-aperture lenses are very expensive. For example, the excellent Panasonic 42.5mm f/1.2 Micro Four Thirds lens costs a whopping $1,600 and behaves like an 85mm f/2.5 lens in terms of field of view and depth of field on a full-frame camera, whereas one could buy a full-frame 85mm f/1.8 lens for a third of that. Manual focus f/0.95 Micro Four Thirds lenses from a number of manufacturers produce similar depth of field to an f/1.9 lens, so even those cannot get close to an f/1.4 aperture on full-frame (if you find the aperture math confusing, it will be discussed further down below).
You have probably heard people say things like "to get the same depth of field as a 50mm f/1.4 lens on a full-frame camera, you would need a 25mm f/0.7 lens on a Micro Four Thirds camera". Some even question why there is no such lens. Well, if they knew much about optics, they would understand that designing an f/0.7 lens that is optically good and can properly autofocus is a practically impossible job. That's why such fast lenses with AF capabilities will most likely never exist for any system. Can you imagine how big such a lens would look?
This all leads to the next topic – Aperture Equivalence.
7) Aperture Equivalence
In my previous example, I mentioned that the Panasonic 42.5mm f/1.2 Micro Four Thirds lens is equivalent to an 85mm f/2.5 full-frame lens in terms of light transmission capabilities. Well, it makes sense if one looks at the aperture diameter / entrance pupil of both lenses, which both measure roughly between 34mm and 35mm. Because such lenses would transmit roughly the same amount of light, yield similar depth of field and have a similar field of view, some would consider them to be "equivalent".
As a result of the above, we now have people saying that we should be calculating equivalence in terms of f-stops between different systems, just like we compute equivalence in field of view. Some even argue that manufacturers should be specifying equivalent aperture figures in their product manuals and marketing materials, since giving the native aperture ranges is lying to customers. What they do not seem to get is that manufacturers are providing the actual physical properties of lenses – the equivalent focal lengths are there only as a reference, for the same old reasons that have existed since the film days, basically to guide potential 35mm / full-frame converts. Another key fact is that altering the f-stop results in differences in exposure / brightness. The same Panasonic 42.5mm f/1.2 at f/1.2 will yield a brighter exposure when compared to an 85mm f/2.5 full-frame lens, because we are changing one of the three exposure variables.
And so let'south get another fact straight: smaller format lenses have exactly the same light gathering capabilities as larger format lenses at the same f-finish, for their native sensor sizes. Yes, larger discontinuity diameter lenses do transmit more than light, but more low-cal is needed for the larger sensor, considering the volume and the spread of light also must be large enough to embrace the bigger sensor expanse. The Panasonic 42.5mm f/i.2 may behave similarly to an 85mm f/2.v lens in terms of discontinuity bore / total low-cal manual, field of view and depth of field, but the intensity of low-cal that reaches the Micro Four Thirds sensor at f/1.ii is very unlike than information technology is for an f/2.5 lens on a full-frame photographic camera – the image from the latter will be underexposed by 2 full stops. In other words, the intensity of light that reaches a sensor for one format is identical to the intensity of light that reaches a sensor of a different format at the same discontinuity. It makes no sense to make a Micro Four Thirds lens that covers as big of an paradigm circle every bit a total-frame lens, if all that extra light is wasted. Plus, such lenses would expect ridiculously big on small cameras.
It is important to note that although the comparison above is technically valid, a larger sensor would yield cleaner images and would allow for faster and less expensive lenses, as pointed out earlier.
8) Total Light
"Equivalence" created some other ugly child: total lite. Basically, the idea of total light is that smaller sensors get less total calorie-free than larger sensors just considering they are physically smaller, which translates to worse racket performance / overall image quality. For example, a total-frame sensor looks two stops cleaner at higher ISOs than say Micro Four Thirds, just because its sensor surface area is 4 times larger. I personally observe the thought of "Full Light" and its relevance to ISO flawed. Explaining why i sensor has a cleaner output when compared to a smaller one just considering it is physically larger has one major problem – it is actually not entirely true in one case yous factor in a couple of variables: sensor technology, image processing pipeline and sensor generation. While one cannot contend that larger sensors do physically receive more light than their smaller counterparts, how the photographic camera reads and transforms the light into an image is extremely important. If nosotros presume that the physical size of a sensor is the only important factor in cameras, because it receives more total low-cal, then every total-frame sensor made to date would trounce every APS-C sensor, including the latest and greatest. Consequently, every medium format sensor would crush every full-frame sensor made to date. And nosotros know it is not true – just compare the output of the offset generation Canon 1DS full-frame camera at ISO 800 to a modern Sony APS-C sensor (take a peek at this review from Luminous Landscape) and you lot will see that the the latter looks improve. Newer sensor technologies, improve image processing pipelines and other factors make modern sensors polish when compared to quondam ones. But put, newer is better when it comes to sensor technology. APS-C has come far along in terms of noise functioning, easily beating first generation full-frame sensors in terms of colors, dynamic range and loftier ISO operation. CMOS is cleaner at loftier ISO than old generation CCD that struggled fifty-fifty at ISO 400! Until recently, medium format cameras used to be terrible at high ISOs due to utilise of CCD sensors (which accept other strengths). Simply if we look at "full light" only from the perspective of "bigger is amend", then medium format sensors are supposed to be much better than full-frame just because their sensor sizes are bigger. Looking at loftier ISO performance and dynamic range of medium format CCD sensors, it turns out that it is really not the example. Only the latest CMOS sensors from Sony fabricated it possible for medium format to finally take hold of up with modernistic cameras in handling noise at high ISOs.
My problem with "total light" is that it is based on the assumption that one is comparing sensors of the same technology, generation, analog-to-digital conversion (ADC), pixel size / pitch / resolution, RAW file output, print size, etc. And if we look at the state of the camera industry today, that's almost never the case – sensors differ quite a bit, with varying pixel sizes and resolutions. In addition, cameras with the same sensors could potentially have different SNR and dynamic range performance. The noise we see on the Nikon D4s looks different than on the Nikon D810, the Canon 5D Mark III or the Sony A7s, even when all of them are normalized to the same resolution…
So how can one rely on a formula that assumes so much when comparing cameras? The results might be generally accurate given the state of the camera industry today (with a few exceptions), so it is one's individual choice whether "mostly good enough" is acceptable or not. Total light only holds if you are looking at cameras like the Nikon D800 and D7000, which have the same generation of processors and the same pixel-level performance. In all other cases, it is not 100% safe to assume that a sensor is going to perform relative to its physical size. Smaller sensors are getting more efficient than larger sensors, and bigger is not always better when you factor in size, weight, cost and other considerations. In my opinion, it is better to skip such concepts when comparing systems, as they can potentially create a lot of confusion, especially among beginners.
9) Circle of Confusion, Print Size, Diffraction, Pixel Density and Sensor Resolution
Here are some more topics that will give you a headache quickly: circle of confusion, print size, diffraction, pixel density and sensor resolution. These five bring up additional points that make the subject of "equivalence" a truly never-ending debate. I won't spend a whole lot of time on this, as I believe it is not directly relevant to my article here, so I just want to throw a couple of things at you to make you want to stop reading this section. And if your head hurts already, just move on and skip all this junk, since it really does not matter (actually, none of the above really matters at the end of the day, as explained in the Summary section of this article).
9.1) Circle of Confusion
Every image is made out of many dots and circles, because light rays reaching the film / sensor always form circular shapes. These circular shapes or "blur spots" can be very small or very big. The smaller these blur spots are, the more "dot-like" they appear to our eyes. Basically, circle of confusion is best defined by Wikipedia as "the largest blur spot that will still be perceived by the human eye as a point". Any part of an image, whether printed or viewed on a computer monitor, that appears blurry to our eyes is only blurry because we can tell it is not sufficiently sharp. When you get frustrated with taking blurry pictures, it happens because your eyes are not seeing enough detail, so your brain triggers a response: "blurry", "out of focus", etc. If you had bad vision and could not tell the difference between a sharp photo and a soft / blurry one, you might not see what others can. That's why the subject of circle of confusion is so confusing – it does not take into account that your vision could be below "good", i.e. below the ability to resolve or distinguish five line pairs per millimeter when viewing an image at a 60° angle and a viewing distance of 10 inches (25 cm). So the basic assumption is that the size of the circle of confusion, or the largest circular shape that you still perceive as a dot, is going to be approximately 0.2mm, based on the above-mentioned five lines per millimeter assumption (a line every fifth of a millimeter equals 0.2mm). What does this have to do with equivalence, you might ask? Well, it affects it indirectly, because it is closely tied to print size and a few other things.
9.2) Print Size and Sensor Resolution
Believe it or not, most camera and sensor comparisons we see today directly relate to print size, as strange as that may sound! Why? Because it is automatically assumed that we take pictures in order to produce prints, the final end point of every photo. Now the big question that comes up today, which probably sparks as many heated debates as the subject of equivalence, is "how large can you print". This is where circle of confusion creates more confusion, because how big one can print hugely depends on what one deems "acceptable" in terms of sharpness perception at different viewing distances. If you listen to some old-timers who used to or still shoot 35mm film, you will often hear them say that resolution and sharpness are not important for prints at all and that they used to make huge 24×36″ or 30×40″ prints (or larger) from 35mm film, which looked great. You will probably hear a similar story from early digital camera adopters, who will be keen to show you large prints in their living rooms from cameras that only had 6-8 megapixels. At the same time, you will also come across those who will tell you all about their super-high-resolution gigapixel prints that are more detailed than what your eyes can distinguish, telling you how life-like and detailed their prints look.
Who is right and who is wrong? Well, that's also a very subjective matter that will create heated debates. Old-timers will laugh at the high-resolution prints, telling you that you would never be looking at them that close anyway, while others will argue that a print must be very detailed and should look good at any distance to be considered worthy of occupying your precious wall space. And successful photographers like Laura Murray, who almost exclusively shoot with film, will be selling prints of scanned film like this at any size their clients desire, while some of us will still be debating which camera has the best signal-to-noise ratio:
A big spoiler for pixel-peepers – there is not a whole lot of detail in such photographs. Film shooters working in fast-paced environments like weddings rarely care about making sure that the bride's closest eye looks perfectly sharp – they are there to capture the moment, the mood, the environment. Very few film shooters will be busy giving you a lecture on circle of confusion, resolution, diffraction or other non-relevant, unimportant (for them) topics. So who is right?
No matter which side you are on, by now you probably do recognize the fact that the world is moving towards more resolution, larger prints and more detail. In fact, manufacturers are spending a lot of their marketing dollars on convincing you that more resolution is better, with all the "retina" displays, 4K TVs and monitors. Whether you like it or not, you are most likely already sold on it. If you are not, then you represent a small percentage of the modern population that does not crave more megapixels and gigapixels.
In fact, if you have been on the web long enough, you probably remember what the early days of the web looked like, with tiny thumbnail-size images that looked large on our 256-color VGA screens. We at Photography Life do recognize that the world is moving towards high resolution, and many of our readers are now reading the site on their "retina"-class or 4K monitors, expecting larger photographs for their enjoyment. So even if some of us here at PL hate the idea of showing you more pixels and how the new 36 MP sensor is better at ISO 25,600 than the previous-generation 36 MP sensor, the world is moving in that direction anyway and there is not much we can do about it.
Let's go back to our super-technical, not-so-important discussion about why print size dictates our comparisons. Considering that printers are limited in how many dots per inch they can print (and that limitation bar is also being raised year after year), the math currently applied to how big you should print for an image to look "acceptably sharp" at comfortable viewing distances is anywhere between 240 dots per inch (dpi) and 300 dpi, with certain prints sometimes accepted down to 150 dpi. Well, if you correlate pixels and dots at a 1:1 ratio, how large you can print with, say, a 16 MP image versus a 36 MP image (assuming that both contain enough detail and sharpness), without enlarging or reducing prints, is simple math – divide the horizontal and vertical resolution by the dpi you are aiming for, and you get the size. In the case of a 36 MP image from the Nikon D800/D810, which produces files at 7,360×4,912 resolution, that translates to 24.53×16.37 inches (7360/300 = 24.53, 4912/300 = 16.37). So if you want a good-quality print, the maximum you can produce out of a D800/D810 sensor is a 24×16″ print. Now what if we look at the Nikon D4s, which produces just 16 MP files with an image resolution of 4,928×3,280? Applying the same math, the maximum print size you would get is 16×11″! What the heck – that's a $6,500 camera that can only give you 16×11″ versus a $3,000 camera that can print much larger? What's up with that? Well, this is when things get messy, bringing the whole print-size debate into big question. But wait a minute – if all that matters for print size is the pixel resolution, what about comparing the Nikon D4s with the Nikon D7000 or Fuji X-T1, which have the same 16 MP sensor / pixel resolution? Ouch, that's when things get even more painful and confusing, as it is hard for someone to wrap their brain around the concept that a smaller sensor can produce prints as big as a large-sensor camera. And this is where we get into another can of worms: pixel density.
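Here is the print-size arithmetic from this section as a small sketch (print dimension = pixels / dpi, at a 1:1 pixel-to-dot ratio):

```python
def max_print_inches(width_px, height_px, dpi=300):
    """Largest print at the given dpi without enlargement."""
    return width_px / dpi, height_px / dpi

print(max_print_inches(7360, 4912))  # Nikon D800/D810: ~24.5 x 16.4 inches
print(max_print_inches(4928, 3280))  # Nikon D4s:       ~16.4 x 10.9 inches
```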
9.3) Pixel Density
So, we ended the last section with how prints from two different-size sensors could come out the same size, as long as their pixel resolution was the same. Well, this is where it all comes together… hopefully. After manufacturers started making smaller sensors (initially for cost reasons, as explained at the beginning of this article), they began to realize that there were other benefits to smaller sensors and formats that they could capitalize on. It was basically the same story as Large Format vs Medium Format, or Medium Format vs 35mm film – the larger you go, the more expensive it gets to manufacture gear. There was a reason why 35mm became a "standard" in the film industry, as not many were willing to spend the money to go Medium Format or larger due to development and print costs, gear, etc. So when APS-C became a widespread format, a number of manufacturers jumped on the mirrorless bandwagon and started to market the idea of going light, versus the big and beefy DSLRs. Within a few years, this "go light" idea became a trend, almost a movement. Companies like Fuji and Sony even started anti-DSLR campaigns, trying to convince people not to buy DSLRs and to buy smaller and lighter mirrorless cameras instead. It made sense, and the campaign is slowly gaining traction, with more and more people switching to mirrorless.
Well, manufacturers realized that if they used the same pixel density on all sensors (i.e. the same number of pixels per square inch of sensor surface), it would make the small-sensor cameras look inferior, since their sensor area is obviously noticeably smaller. So they started pushing more and more resolution onto smaller sensors by increasing pixel density, which made these smaller sensors appear "equivalent" (by now I hate this term!) to bigger formats. Same old megapixel wars, except now we are confusing people with specifications that seem awfully similar: a $6,500 Nikon D4s DSLR with 16 MP that is big, heavy and bulky vs a $1,700 Fuji X-T1 mirrorless camera with the same 16 MP resolution. Or a camera phone with 41 megapixels on a tiny sensor…
However, despite what may sound like a bad idea, there was actually one big advantage to doing this – at relatively low sensitivities, smaller pixels did not suffer drastically in terms of noise, and manufacturers were able to find ways to "massage" high-ISO images by applying various noise-suppression algorithms that made these sensors look quite impressive. So more focus was put on making smaller sensors more efficient than their larger counterparts.
As I have explained in my "benefits of high resolution sensors" article, cramming more pixels closer together might sound like a bad idea when you are looking at the image at pixel level, but once you compare the output to a smaller-sized print from a same-sized sensor camera with fewer pixels, the down-sampled / resized / normalized image will contain roughly the same amount of noise and its overall image quality will look similar. The biggest advantage in such a situation is the pixel-level efficiency of the sensor at low ISOs. If a 36 MP camera can produce stunning-looking images at ISO 100 (and it does), people who shoot at low ISOs get the benefit of larger prints, while those who shoot at high ISOs are not losing a lot in terms of image quality once they resize the image to a lower resolution. In a way, it became a win-win situation.
What this means is that when we deal with modern small-sensor cameras, despite the smaller physical sensor area, the large number of smaller pixels essentially "enlarges" images. Yes, once "normalized" to the same print size, smaller sensors will show more noise than their full-frame counterparts, but due to better sensor efficiency and more aggressive noise-suppression algorithms, they look quite decent and more than "acceptable" to many photographers.
So if cramming more pixels together has its benefits, why not cram in even more? Well, that's essentially what we are seeing with smaller sensor formats – they keep packing more pixels into their sensors. APS-C quickly went from 12 MP to 16 MP, then from 16 MP to 24 MP within the past two years, and if we look at the pixel efficiency of the Nikon CX and Micro Four Thirds systems, DX could be pushing beyond 24 MP fairly soon (the Samsung NX1 is already at 28 MP – thanks to EnPassant for the reminder). With such small pixel sizes, we might be pushing beyond 50 MP on full-frame sensors soon as well, so it is all a matter of time.
9.4) Pixel Density, Sensor Size and Diffraction
Now here is an interesting twist to this what-seems-like-a-never-ending debacle: since smaller sensors are essentially "magnified" with smaller pixels, that same circular shape known as the circle of confusion is also… magnified. So this gave photography geeks yet another variable to add to sensor "equivalence" – circle of confusion variance. Yikes! People even made up something called the "Zeiss formula" (which, as it turns out, actually has nothing to do with Zeiss) that allows one to calculate the size of the circle of confusion based on the physical sensor size. This has become so common that such calculations have now been integrated into most depth of field calculators. So if you find yourself using one, look for "circle of confusion" and you will probably find that size for the format you selected. Given that all small sensors do pack more pixels per inch, it is actually safe to assume that the circle of confusion will be smaller for smaller systems, but the actual number might vary, since the calculation itself is still debated. Plus, "magnification" is relative to the pixel size of today – if in a few years we are using pixels half the size on all sensors, those numbers will have to be revised and the formulas will have to be rewritten… Now in regards to diffraction: since diffraction is directly tied to the circle of confusion, if the latter is more "magnified", then it is also safe to assume that smaller sensors exhibit diffraction at larger apertures. That's why when you shoot with small-format camera systems like Nikon CX, you might start seeing the effect of diffraction at f/5.6, rather than at f/8 and beyond as on camera systems with larger sensors.
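If you are curious, here is a hedged sketch of that "Zeiss formula" as it is commonly stated (circle of confusion = sensor diagonal / 1730; some calculators use a divisor of 1500 instead), using nominal sensor dimensions – the smaller the CoC, the earlier diffraction becomes visible:

```python
import math

formats = {
    "Full frame": (36.0, 24.0),
    "APS-C":      (23.6, 15.6),
    "Micro 4/3":  (17.3, 13.0),
    "1-inch CX":  (13.2, 8.8),
}

for name, (w, h) in formats.items():
    coc = math.hypot(w, h) / 1730  # the so-called Zeiss formula
    print(f"{name}: CoC ~ {coc:.3f} mm")
# Full frame ~0.025, APS-C ~0.016, Micro 4/3 ~0.013, 1-inch ~0.009 mm
```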
9.5) High Speed APS-C Cameras
Once photographers started realizing these benefits, the field of view equivalency that we talked about earlier started literally translating into the ability to magnify the subject more and potentially resolve more detail. Since the barrier to entry into full-frame still sits at around $1,500, the introduction of high-resolution APS-C cameras like the Canon 7D Mark II was greeted with a lot of fanfare, sparking heated discussions on the advantages and disadvantages of high-speed APS-C DSLRs vs full-frame (in fact, many Nikon shooters are still waiting for a direct competitor to the 7D Mark II for this reason). Sadly though, just like the topic of equivalence, these discussions of APS-C vs full-frame lead nowhere, as both parties will happily defend their choices to the death.
You have probably heard someone say that they prefer shooting with cropped-sensor cameras because of their "reach". The argument that is presented does make sense – a sensor with a higher pixel density (more pixels per inch) results in more resolution and therefore translates to more detail in an image (provided that the lens used is of high quality, capable of resolving that detail). And higher resolution obviously translates to bigger prints, since digital images are printed in dots per inch – the more the dots, the larger the prints, as I have already explained. Lastly, higher resolution also allows for more aggressive cropping, which is something wildlife photographers always need.
If you are interested in finding out what I personally think about all this, here is my take: there are three factors to consider in this particular situation – cost, pixel density and speed. Take the D800/D810 cameras – 36 MP sensors with similar pixel density to 16 MP APS-C/DX cameras. If the D800/810 cameras offered the same speed as a high-end DX camera (say 8 fps or more) and cost the same as a high-end DX, the high-end DX market would be dead, no argument about it. And if you are going to claim that the newest-generation 24 MP DX cameras have a higher pixel density, well, the moment manufacturers release an FX camera with the same pixel density (50+ MP), that argument will be dead again. Keep in mind that at the time Nikon produced the D800, it had the same pixel density as the then-current Nikon D7000 – so taking a 1.5x crop from the D800 produced a 16 MP image. One could state that the D800 was a D7000 + D800 in one camera body in terms of sensor technology, and they would be right. But not in terms of speed – 6 fps vs 4 fps does make a difference when capturing fast action. If Nikon could make a 50+ MP full-frame camera that shoots 10 fps and costs $1,800, high-end DX would make no sense whatsoever. But we know that such a camera would be impossible to produce with current technology, which is why high-end DX is still desired. Now let's look at the cost reason more closely. Not everyone is willing to drop $7K on a Nikon D4 or a Canon 1D X. But what if a full-frame camera with the same speed as the D4 were sold at $1,800? Yup, high-end DX would again be dead. Why do people still want high-end DX today? Well, looking at the above arguments, it is mostly about cost. All other arguments are secondary.
10) Equivalence is Absurd: CX vs DX vs FX vs MF vs LF
I bet by now you are wondering why in the world you even started to read this article. I don't blame you – that's how I felt, except my thought was "why am I even writing this article?" To be honest, I actually thought about not publishing it for a while. But after seeing comments and questions come up more and more from our readers, I thought it would be good to put all of my thoughts on this matter in one article. In all honesty, I personally consider the subject of equivalence as absurd as it is confusing. Why are we still talking about equivalent focal lengths, apertures, depth of field, background blur and all the other mumbo jumbo, when the whole point of "equivalence" was originally created for 35mm film shooters as a reference anyway? Who cares that 35mm film was popular – why are we still using it as the "bible" of standards? When Medium Format moves into the "affordable" range (which is already kind of happening with MF CMOS sensors and the Pentax 645Z), are we going to go backwards in equivalence? By then, we might start seeing Large Format digital!
Let's get the last fact straight: at the end of the day, it all boils down to what works for you. If you only care about image quality, larger will always be better. It will come with weight and bulk, but it will give you the largest prints, the best image quality, paper-thin DoF, beautiful subject isolation, etc. But if your back cannot take it anymore and you want to go lightweight and compact, smaller systems are getting to the point where they are good enough for probably 90% of photographers out there. And if you want to go really small, just take a look at Thomas Stirr's work with Nikon CX – perhaps it will make you reconsider what your next travel camera should be.
I love how our Bob Vishneski modified Fuji's "Evolution of the Photographer", where Fuji wanted to show how nice it is to go lightweight with Fuji's mirrorless system. Take a look at his version, it will crack you up (sorry Bob, I just could not resist!):
And I loved this quote from our reader Betty, who summed up a lot of what I have said in this article: "As soon as you start using different cameras (!), with different processing engines (!), different sensors (!) and different pixel densities (!), then start zooming a lens (!) to achieve or compensate for different crops, all bets are off. Your 'results' are meaningless". What a great way to describe what a lot of us are sadly doing.
11) Summary: Everyone is Right, Everyone is Wrong
In all seriousness, let's just drop this equivalence silliness. It is too confusing, overly technical and unnecessarily overrated. Keep in mind that as the mirrorless format takes off, we will have a lot more people moving up from point-and-shoot / phone cameras. They don't need to know all this junk – their time is better spent learning how to use the tools they already have.
Just get over this stupid debate. Everyone is right, everyone is wrong. Time to move on and take some great pictures!
Source: https://photographylife.com/sensor-crop-factors-and-equivalence