Knill Hell - D800 for 1500 Quid
Discussion
http://www.abc-digital-cameras.co.uk/Nikon-D800-Bo...
I am saving for a D810 but this is tempting, and cheaper even than the 750...
Is that a legit company in the UK, or is it selling grey imports?
Anyway, used D800 bodies can be had from MPB for £1200, which might be of interest.
http://www.mpbphotographic.co.uk/used-equipment/us...
For me those cameras have too much resolution; I'd wait for Nikon to fix the D750 and buy one of those.
As far as I can tell, it's a legit UK company with UK stock, not a grey importer. The lowest previous price was £1800, and it dropped to £1500 last night.
How can you have too much resolution? Even if your lens can't resolve it, the Exmor sensor still handily beats the D750 and Canon 5D Mk III on high ISO and dynamic range.
There are some valid reasons to go for the D810 or D750, such as much better autofocus and face detection, but resolution is not one of them.
Details of the Nikon problem are here.
http://petapixel.com/2014/12/22/nikon-d750-owners-...
So serious it seems they are recalling them.
http://petapixel.com/2015/01/14/nikon-d750-disappe...
I got mine from SLR Hut last year:
http://slrhut.co.uk/product/ID1113C5/google?k_clic...
Fairly sure I paid £1200 or so for it (new), but they're still up at £1370 now, which is a good price. Delivery time wasn't brilliant IIRC, but not ridiculous.
RobDickinson said:
A 36mp file downsampled to 24mp will have more detail than a straight 24mp file.
That's not true though. 36 to 24 is a ratio of 1.5, which means you need to fit 1.5 pixels into 1 pixel, so there will be some interpolation (how much depends on the interpolation algorithm). You simply won't get "more detail" by downsampling a higher-resolution image over an image taken with a 24mp sensor; you will always lose detail.

There are no camera skills or experience involved in working this out; this has nothing to do with being a photographer. It comes down to working with graphics file formats at the bit level and the algorithms involved. Interpolating 1.5 pixels to 1 pixel will always lose "detail" relative to an image taken with a native 24mp sensor, because you have no choice but to average those 1.5 pixels into 1 pixel over the entire image.
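The 1.5-to-1 averaging described above can be sketched with a toy one-dimensional example (plain area averaging on made-up values; Photoshop's actual bicubic kernels differ, so this is illustrative only):

```python
import numpy as np

# Toy 1-D "scanline": 6 native samples of alternating black/white detail.
native = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

# 6 -> 4 is the same 1.5:1 ratio as 36 MP -> 24 MP. Exact area averaging
# for a 1.5 factor: repeat each sample twice (6 -> 12), then average
# non-overlapping groups of three (12 -> 4).
small = np.repeat(native, 2).reshape(4, 3).mean(axis=1)
print(small)  # roughly [0.33, 0.33, 0.67, 0.67]: the 0/1 alternation is blurred

# Going back up with linear interpolation cannot recover the original
# alternation; that information is gone.
restored = np.interp(np.linspace(0, 3, 6), np.arange(4), small)
print(np.allclose(restored, native))  # False
```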
rxtx said:
That's not true though. 36 to 24 is a ratio of 1.5, which means you need to fit 1.5 pixels into 1 pixel, so there will be some interpolation (how much depends on the interpolation algorithm). You simply won't get "more detail" by downsampling a higher-resolution image over an image taken with a 24mp sensor; you will always lose detail.
There are no camera skills or experience involved in working this out; this has nothing to do with being a photographer. It comes down to working with graphics file formats at the bit level and the algorithms involved. Interpolating 1.5 pixels to 1 pixel will always lose "detail" relative to an image taken with a native 24mp sensor, because you have no choice but to average those 1.5 pixels into 1 pixel over the entire image.
Glad the maths works for you, but that's not quite how Photoshop's resize would look at it. Practical experience says otherwise. And tbh it's all about print size rather than downsampling.
RobDickinson said:
Glad the maths works for you, but that's not quite how Photoshop's resize would look at it. Practical experience says otherwise. And tbh it's all about print size rather than downsampling.
It is how PS's resize looks at it, because it's using the same maths. Interpolation just means estimation. Practical experience means nothing when it comes to manipulating digital images at the software level; you're not in control of that unless you've written your own routines. It's not just me that realises this is a fallacy.

What does print size have to do with downsampling algorithms?
http://blog.mingthein.com/2012/11/05/resolution-sh...
rxtx said:
RobDickinson said:
Glad the maths works for you, but that's not quite how Photoshop's resize would look at it. Practical experience says otherwise. And tbh it's all about print size rather than downsampling.
It is how PS's resize looks at it, because it's using the same maths. Interpolation just means estimation. Practical experience means nothing when it comes to manipulating digital images at the software level; you're not in control of that unless you've written your own routines. It's not just me that realises this is a fallacy.
What does print size have to do with downsampling algorithms?
http://blog.mingthein.com/2012/11/05/resolution-sh...
However, even the best lenses are not perfectly sharp, even the best sensors have some noise, and you also have to consider the anti-aliasing filter on the sensor.
More pixels means better noise reduction and better apparent edge sharpness on downsizing; it's more information for DXO Optics or Photoshop to work with, and it gives better results in the real world.
There is no free lunch, and the downside of a 36mpix file is greater storage and processing requirements. However, if you are willing to put up with the extra overhead in your workflow, the quality improvement is certainly there.
And I would agree with Rob that the D800 is not the best low-light camera. For that you need a very low noise floor, which is something DXO Optics does not measure. Also, for Rob's astrophotography needs, Nikon's "star eater" processing, designed to block hot pixels, is a problem.
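The noise-reduction-on-downsizing point can be illustrated with a toy example (a 2x2 box average on synthetic Gaussian noise stands in for a real resampling kernel; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat grey patch with Gaussian noise. Averaging 2x2 blocks trades
# resolution for noise: averaging 4 samples roughly halves the
# standard deviation (sqrt(4) = 2).
hi = 0.5 + rng.normal(0.0, 0.10, size=(512, 512))
lo = hi.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(hi.std(), lo.std())  # roughly 0.10 vs 0.05
```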
I think you're missing my point a little. Sensor noise levels don't mean anything at the stage I'm talking about because that noise is the detail I'm referring to. It doesn't have to be a good, low-noise, clean image, I'm talking about information loss.
DXO results, low light performance, print size, number of times you've pressed a shutter, how many photos you've had published, how famous you are on a car forum, what colour jumper you're currently wearing, none of those have any bearing on what I'm talking about. This has nothing to do with the lens, the scene, or anything else. This is all pure maths, the same maths Adobe have no choice but to use.
There isn't any interpolation with a native-sensor image (aside from the photon interpolation inherent in every camera sensor, because sensor sites aren't photon-sized, as we know). You can't fit 1.5 pixels, or 2, or 3, or 19, into 1 without averaging, just as you can't enlarge an image beyond its native size without averaging.
If you downsize an image, you're losing information. As the article I posted (purely so I wasn't called mad, as seems to happen on PH if you have a different view) also pointed out, it may be perceptually better, but it doesn't contain more detail. You have lost information by downsizing an image. That's intrinsic: you have made a larger image, or file, smaller, so you have lost information, and the software has to either fill in the gaps or condense more information into a smaller space. Detail has been lost, even if it was noisy or out of focus. You can't "condense" detail and make it more detailed; that defies the laws of physics, since pixels are pixel-sized no matter what.
Anyway, more sensor sites doesn't necessarily mean better high-ISO performance. In fact, given the same sensor technology, more sensor sites within the same area means less dynamic range, because each sensor receives fewer photons in the bucket.
This has nothing to do with photography, it's just maths, but maths is exactly what digital photography, and digital images, comes down to.
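The fewer-photons-per-site point can be sketched with made-up numbers (Poisson shot noise only; real sensors add read noise and other effects):

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon arrival is Poisson, so a pixel's shot-noise SNR is sqrt(N).
# Hypothetical well sizes: same sensor tech, but the larger pixel
# gathers 4x the photons of the smaller one in the same exposure.
big = rng.poisson(40_000, size=100_000)
small = rng.poisson(10_000, size=100_000)

snr_big = big.mean() / big.std()
snr_small = small.mean() / small.std()
print(snr_big / snr_small)  # close to 2.0: a quarter of the photons costs about a stop of SNR
```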
ExPat2B said:
Nikon "star eater" processing designed to block hot pixels is a problem.
Do you know, that almost put me off getting the D700 I still own, but unless you're taking photos of the night sky for scientific reasons (in which case you won't be using a DSLR) that is another fallacy. The number of stars you capture in a long exposure is so great that you simply won't notice if the odd one is missing due to hot-pixel masking, and if you're really serious about it with a DSLR you combine multiple exposures on an equatorial mount, which removes the issue, not to mention using dark-frame subtraction.

Compare Canon and Nikon Milky Way photos, or even near-field astrophotography through a telescope with both makes, and you'll notice no difference whatsoever. Neither is science-level apparatus.
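The dark-frame subtraction mentioned above can be sketched with synthetic numbers (the pixel positions and values here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy strip of 1000 sky pixels: faint signal plus read noise, plus
# three fixed-pattern hot pixels at arbitrary positions.
hot = np.zeros(1000)
hot[[50, 400, 900]] = 200.0                  # hot-pixel pattern (fixed)

light = rng.normal(10.0, 1.0, 1000) + hot    # the exposure you keep
dark = rng.normal(0.0, 1.0, 1000) + hot      # same settings, lens cap on

cleaned = light - dark                       # dark-frame subtraction
print(light.max() > 100.0)    # True: hot pixels dominate the raw frame
print(cleaned.max() < 100.0)  # True: the fixed pattern cancels out
```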
rxtx said:
I think you're missing my point a little. Sensor noise levels don't mean anything at the stage I'm talking about because that noise is the detail I'm referring to. It doesn't have to be a good, low-noise, clean image, I'm talking about information loss.
DXO results, low light performance, print size, number of times you've pressed a shutter, how many photos you've had published, how famous you are on a car forum, what colour jumper you're currently wearing, none of those have any bearing on what I'm talking about. This has nothing to do with the lens, the scene, or anything else. This is all pure maths, the same maths Adobe have no choice but to use.
There isn't any interpolation with a native-sensor image (aside from the photon interpolation inherent in every camera sensor, because sensor sites aren't photon-sized, as we know). You can't fit 1.5 pixels, or 2, or 3, or 19, into 1 without averaging, just as you can't enlarge an image beyond its native size without averaging.
If you downsize an image, you're losing information. As the article I posted (purely so I wasn't called mad, as seems to happen on PH if you have a different view) also pointed out, it may be perceptually better, but it doesn't contain more detail. You have lost information by downsizing an image. That's intrinsic: you have made a larger image, or file, smaller, so you have lost information, and the software has to either fill in the gaps or condense more information into a smaller space. Detail has been lost, even if it was noisy or out of focus. You can't "condense" detail and make it more detailed; that defies the laws of physics, since pixels are pixel-sized no matter what.
Anyway, more sensor sites doesn't necessarily mean better high-ISO performance. In fact, given the same sensor technology, more sensor sites within the same area means less dynamic range, because each sensor receives fewer photons in the bucket.
This has nothing to do with photography, it's just maths, but maths is exactly what digital photography, and digital images, comes down to.
OK, let's move this into the realms of a very simple experiment. If you take a picture of a small, sharp-edged, pure black dot on a pure white background, you will not get a picture of a sharp-edged, pure black dot on a pure white background.
You will get a fuzzy-edged dot with a very small amount of blueish chromatic aberration, on a mostly white and mostly black background.
If you downsample that image in Photoshop with bicubic sharper, you will get a slightly better image.
If you instead downsample it from 36 to 24mpix using a decent RAW converter like DXO Optics, and perform a little noise reduction, you will get a sharper-edged dot with purer white, purer black and better edge sharpness.
In the real world, a 24 mpix "native" picture is inferior to 36mpix downsampled to 24mpix.
More pixels = more information for the software to work with = better results.
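The comparison can be sketched numerically (a box average and a 5-tap moving average stand in for bicubic sharper and DXO Prime; the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy 1-D black-to-white edge, 600 samples: a toy stand-in for the
# black-dot test shot.
edge = np.where(np.arange(600) < 300, 0.0, 1.0) + rng.normal(0.0, 0.2, 600)

def box_down(x, k=3):
    # Crude downsample: average non-overlapping groups of k samples.
    return x[: len(x) // k * k].reshape(-1, k).mean(axis=1)

def denoise(x):
    # Toy noise reduction: 5-tap moving average.
    return np.convolve(x, np.ones(5) / 5.0, mode="same")

plain = box_down(edge)               # downsample only
nr_first = box_down(denoise(edge))   # noise-reduce, then downsample

# Residual noise in the flat black region of each result:
print(plain[:80].std() > nr_first[:80].std())  # True: NR-then-downsample is cleaner
```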
Observe the sample image below. It shows detail near the limits of what the lens and sensor can resolve: a centre crop of the ISO 12233 test pattern.
"Original" is the original image, uncropped and unprocessed.
"Downsampled" is the original image downsampled in Photoshop using bicubic sharper. It shows a slight increase in edge sharpness.
"Denoised" shows the effect of DXO Prime noise reduction on the original.
"DXO Optics noise reduced and downsampled" shows the effect of letting a decent RAW converter handle the noise reduction and downsampling.
To my eyes, the DXO Optics noise-reduced and downsampled image clearly has the best edge clarity and detail.
Downsample_Test by pistonheads_tests, on Flickr

Edited by ExPat2B on Friday 23 January 11:35