this post was submitted on 07 Feb 2024
59 points (100.0% liked)

[–] otter@lemmy.ca 35 points 9 months ago (3 children)

Well there are analog cameras

Also I agree that nearly every digital camera has to do some correction, and correcting for lighting / time of day makes our photos nicer. But shouldn't the end goal be a photo that looks as close as possible to what we'd see naturally?

[–] jarfil@beehaw.org 21 points 9 months ago* (last edited 9 months ago) (2 children)

Analog cameras don't have the dynamic range of human vision, fall quite short in the gamut area, use various grain sizes, and can take vastly different photos depending on aperture shape (bokeh), F stop, shutter speed, particular lens, focal plane alignment, and so on.

More basically, human eyes can change focus and aperture when looking at different parts of a scene, which photos don't allow.

To take a "real photo", one would have to capture an HDR light field, then present it in a way an eye could focus and adjust to any point of it. There used to be a light field digital camera, but the resolution was horrible, and it had no HDR.

https://en.m.wikipedia.org/wiki/Light_field_camera

Everything else is subject to more or less interpretation... and phone cameras in particular have to correct for some crazy diffraction effects because of the tiny sensors they use.

[–] burningmatches@feddit.uk 4 points 9 months ago (3 children)

It seems like Vision Pro allows selective focusing.

[–] jherazob@beehaw.org 11 points 9 months ago (1 children)

But then you'd have to use the Vision Pro...

[–] jarfil@beehaw.org 3 points 9 months ago

Wouldn't mind getting a second-hand "like new" one with a scratched front ~~glass~~ plastic... for the right price, as long as the inner plastic lenses aren't scratched.

(I know, there's about no chance of that ever happening)

[–] dfyx@lemmy.helios42.de 4 points 9 months ago

But not on a static image. They use eye tracking to figure out what you're looking at and refocus the external cameras based on that.

[–] ReallyActuallyFrankenstein@lemmynsfw.com 2 points 9 months ago* (last edited 9 months ago) (1 children)

It's actually a great idea - an up-to-date light field camera combined with eye tracking to adjust focus. It could work right now in some VR, and presumably the same presentation could work without VR via a front-facing two-camera (maybe one camera with good calibration) smartphone array.

[–] jarfil@beehaw.org 2 points 9 months ago

Yup, I was seriously considering getting the Lytro, just to mess around. The main problem is the resolution drop due to needing multiple sensor pixels per "image pixel", but then having to store them all anyway. So if you wanted a 10Mpx output image, you might need a 100Mpx sensor, and shuffle around 100Mpx... just for the result to look like 10Mpx.
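
As a rough sketch of that trade-off (using the same illustrative 10x figure as above, not the spec of any real light field camera):

```python
# Back-of-envelope sketch of the light field resolution trade-off described
# above. The 10x angular-sample factor is the illustrative figure from the
# comment, not the spec of any real light field camera.

def raw_sensor_mpx(output_mpx: float, angular_samples_per_pixel: int = 10) -> float:
    """Raw sensor megapixels needed when every output pixel is backed by
    angular_samples_per_pixel sub-aperture samples on the sensor."""
    return output_mpx * angular_samples_per_pixel

print(raw_sensor_mpx(10))  # 10 Mpx output -> 100 Mpx of raw sensor data
```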

If we aim at 4K (8Mpx) displays, it might still take some time for the sensors and the data processing capability on both ends to catch up. If we were to aim at something like an immersive 360 capture, it might take even longer. Adding HDR and 60fps video recording would push things way out of current hardware capabilities.
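
A hypothetical worked example of why that combination blows past current hardware; every figure here is an assumption for illustration only:

```python
# Hypothetical data-rate estimate for HDR, 60 fps light field video.
# Every figure below is an illustrative assumption, not a measured spec.

output_mpx = 8         # roughly a 4K output frame
angular_samples = 10   # light field samples per output pixel (same figure as above)
bytes_per_sample = 2   # e.g. a 12-16 bit HDR raw value stored in two bytes
fps = 60

raw_bytes_per_second = output_mpx * 1e6 * angular_samples * bytes_per_sample * fps
print(f"~{raw_bytes_per_second / 1e9:.1f} GB/s of raw sensor data")  # ~9.6 GB/s
```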

[–] Kichae@lemmy.ca 13 points 9 months ago

The end goal should be some kind of representation of reality, at the very least, even if it's not "what we see naturally". A camera can see some things that we can't, and can't see some things that we can - at least in a single exposure - so the image is never going to be a perfect visual representation of how anyone remembers the scene.

But to suggest that they don't represent some aspect of reality because they're a simulacrum generated by visual data is just self-indulgent too-convenient-to-not-embrace pseudo-philosophy coming from someone whose wealth is tied to selling such bullshit to the public.

The goal here is to make people feel like they're good at something - taking photos - by manufacturing the result, which not only totally defeats the point of what most people take photos for, but has some incredibly dark and severe edge cases which they clearly haven't considered (and are motivated to not consider).

Which is just par for the course for tech bros.

[–] mobyduck648@beehaw.org 8 points 9 months ago

It depends on the artistic and technological intent, I think. Valve (tube) amplifiers are inferior to any modern amplifier in every way you could actually measure with an oscilloscope, yet people still build them, and valves are still produced the same way they were in the 1950s, because the imperfections they produce in the sound can be pleasant - which comes down to psychoacoustic factors that have subjective as well as objective components. A photo that looks exactly like what we'd see naturally is one potential goal, but it's not the only one in my opinion.