There are many possible choices to make in producing a final image, many competing aesthetic goals which cannot all be satisfied simultaneously, and no “right” answer. “Good” or “bad” printing (to use the darkroom photo term) can only really be defined relative to artistic intentions.

That's mainly why I was wondering whether the video examples were in-camera presets running under Magic Lantern, or something the videographer did in post with the extra data captured at the higher bit depth. Some tweaking can make things look more realistic and bring out detail, but it is very easy to go too far and end up with something that looks like an Instagram filter. At the extreme end, it reminded me of the overkill seen in early HDR photography. In these examples, I was definitely impressed with some aspects of how the curves were adjusted to bring up the shadows and make details more visible.

Color grading can have a major effect (good, bad, or just different) on the mood of a video. Think of the split-toned, green/orange aesthetic that was popular in movies for a while. I'm by no means an expert, but color grading video is often a detailed process with a lot of room for finding your own creative "vision". I don't know whether these are presets or something done in post-processing, but in some cases it almost looks like the video equivalent of filter plugins.

Comparing the amount of usable data available

As a Nikon user, I have no direct experience with Magic Lantern (always a bit bummed about that), but I was also wondering the same.
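To make "bringing up shadows with a curve" concrete, here is a minimal numerical sketch. It uses a simple gamma-style curve as a stand-in for whatever curve tool the grader actually used; the function name and values are purely illustrative, not anything from Magic Lantern itself.

```python
import numpy as np

# A simple "lift the shadows" tone curve: a gamma below 1 brightens
# dark values much more than bright ones. Pixel values normalized to [0, 1].
def lift_shadows(x: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    return np.clip(x, 0.0, 1.0) ** gamma

pixels = np.array([0.05, 0.25, 0.5, 0.9])  # dark -> bright
graded = lift_shadows(pixels)

# The darkest value gains proportionally the most, while highlights
# barely move -- which is why shadow detail becomes visible.
print(np.round(graded, 3))
```

Pushed too far (a much smaller gamma), the same operation produces the flat, washed-out look the post compares to Instagram filters.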
The problem with the comparison videos in this thread is that the person/software deciding what to do with the “raw” data has made a bunch of choices to allocate more contrast to shadow and highlight areas, etc., while the in-camera software made a different choice, and nobody tried to reconcile the two versions afterward. Often the processed-in-camera version still has enough data that a more-or-less comparable output image can be produced, but if you start pixel-peeping you might notice extra noise, banding, blur, or ringing. You’ll get to see the places where the standard processing and compression actually lost data. The fairest comparison is probably to find someone highly skilled at image editing and have them try to make the best output images they can from both inputs. Default choices are not inherently more correct (or truer to the scene, or whatever) than deliberate choices. Many choices must inevitably be made along the pipeline from estimated-electron-count-per-sensor-pixel -> image on a display. There’s not much meaning to a “non-color-graded” or “non-sharpened” picture.
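The banding point above can be sketched numerically. This is a toy model, not actual camera processing: it quantizes a dark gradient to 8-bit (in-camera-style) and 14-bit (raw-style) precision, applies the same aggressive shadow lift to both, and counts the distinct output levels that survive. The bit depths and the gradient range are illustrative assumptions.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Round a [0, 1] signal to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# A smooth, very dark gradient (deep shadows), 1000 samples.
shadow = np.linspace(0.0, 0.05, 1000)

# Quantize first (as the camera would), then apply a strong shadow lift.
lifted_8 = quantize(shadow, 8) ** 0.3    # 8-bit-style source
lifted_14 = quantize(shadow, 14) ** 0.3  # 14-bit raw-style source

# Fewer distinct output values after the lift means visible banding:
# the 8-bit version has only a handful of steps left in the shadows.
print(len(np.unique(lifted_8)), len(np.unique(lifted_14)))
```

The 8-bit gradient collapses to roughly a dozen steps after the lift, while the 14-bit version keeps hundreds, which is exactly the kind of difference that only shows up once someone pushes the shadows hard in post.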