this post was submitted on 03 Apr 2024
960 points (99.4% liked)

Technology


A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

[–] ricecake@sh.itjust.works 4 points 5 months ago (2 children)

Computational photography in general gets tricky because it relies on your answer to the question "Is a photograph supposed to reflect reality, or should it reflect human perception?"

We like to think those are the same, but they're not. Your brain has only a loose interest in reality and is much more focused on utility: deleting the irrelevant, making important things literally bigger, enhancing contrast and color to make details stand out more.
What you "see" is a reconstruction of reality, continuously updated by your eyes, which work fundamentally differently from a camera.

Applying different exposure settings to different parts of an image, or reconstructing a video scene from optic data captured across the entire video, doesn't reproduce what the sensor captured, but it can come much closer to representing what the human holding the camera perceived.
Low-light photography is a great illustration of this. When we see a person walk from light into dark, our brains shamelessly remember what color their shirt was, and that grass is green, and update our perception accordingly. They also use a much longer "exposure" time, gathering light data over a longer window to maintain color perception in low light, even when there isn't enough actual light to make those determinations without clues.
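The longer "exposure" idea maps onto what multi-frame low-light modes do: average several noisy captures of the same scene so the noise cancels and the signal survives. Here's a toy single-pixel sketch (illustrative only, not any vendor's actual pipeline; the pixel value and noise amplitude are made up):

```python
# Toy burst-stacking sketch: averaging noisy exposures of the same
# scene recovers the underlying signal, roughly what multi-frame
# low-light camera modes do.
import random

random.seed(0)
TRUE_PIXEL = 120          # hypothetical true brightness of one pixel
NOISE = 40                # per-frame sensor noise amplitude

def capture_frame():
    """One noisy 'exposure' of the pixel."""
    return TRUE_PIXEL + random.uniform(-NOISE, NOISE)

def stack(frames):
    """Average a burst of frames: noise cancels, signal stays."""
    return sum(frames) / len(frames)

single = capture_frame()
burst = stack([capture_frame() for _ in range(64)])

# The stacked estimate should sit much closer to the true value
# than any single noisy frame.
print(abs(single - TRUE_PIXEL), abs(burst - TRUE_PIXEL))
```

No per-frame data is invented here; the stack is honest averaging. The AI-enhancement controversy starts when the "missing" detail is hallucinated rather than measured.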

I think most people want a snapshot of what they perceived in the moment.
I like the trend of the camera capturing the processed image while also storing the "plain" image. There's also capturing the raw image data, which is basically a dump of the camera's optic sensor readings. It's what the automatic post-processing is tweaking, and what human photographers use to correct white balance and the like.
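One of the simplest corrections applied on top of that raw data is white balance. A minimal sketch, assuming the classic gray-world heuristic (the pixel values are made up; real raw pipelines are far more involved):

```python
# Gray-world white balance sketch: scale each channel so the scene
# averages to neutral gray, a basic raw-style correction.

def gray_world(pixels):
    """pixels: list of (r, g, b) tuples. Returns balanced pixels."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3
    gains = [gray / a for a in avg]   # per-channel correction gains
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A tiny "scene" shot under a warm (reddish) light cast:
warm = [(200, 150, 100), (180, 140, 90), (220, 160, 110)]
balanced = gray_world(warm)
```

After correction, all three channel averages land on the same neutral value. Like the exposure example, this is a physics-grounded adjustment of measured data, not invention of new detail.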

[–] Natanael 1 points 5 months ago (1 children)

There are different types of computational photography. The ones that make sure to capture enough sensor data to interpolate in a way that accurately simulates a different camera/lighting setup are in a sense "more realistic" than the ones that rely heavily on complex algorithms to do stuff like deblurring. My point is essentially that the calculations have to be grounded in physics rather than in just trying to produce something artistic.

[–] ricecake@sh.itjust.works 1 points 5 months ago

Yeah, there's definitely a spectrum.
In a lot of ways, the perfect camera for most people would be one that captured a snapshot of how they'll remember the scene, so they can share the "memory," or look back and have a focus to reminisce on and reinforce that memory. That's where the question of how much reality matters, as opposed to perception, comes in for consumer devices.

[–] TheBest@midwest.social 1 points 5 months ago (1 children)

Great points! Thanks for expanding. I agree that people most often want a recreation of what was perceived. It's going to make this whole AI-enhanced evidence question even more nuanced as the tech improves.

[–] ricecake@sh.itjust.works 1 points 5 months ago

I think the "best" possible outcome is that AI images are treated as witness data, as opposed to direct evidence. ("Best" is meant in terms of how we treat AI-enhanced images, not justice outcomes. I don't think we should use them for such things until they're significantly better developed, if ever.)

Because at that point the image is essentially a neural network's interpretation of what it captured, which is functionally similar to a human testifying to what they believe they saw in an image.

I think it could have a use if presented alongside the original or raw image, and if the network can explain what drove its interpretation, which is a tricky thing for a lot of neural-network-based systems.
That brings it much closer to how doctors are using them for imaging analysis: the model doesn't supplant the original, but points to part of it with an interpretation and a synopsis of why it thinks that blob is a tumor/gun.
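One common way to get that kind of "what drove the interpretation" pointer is occlusion sensitivity: mask each region, re-score, and report the region whose removal hurts the model most. A toy sketch with a hypothetical stand-in detector (the 1-D "image" and scoring function are invented for illustration):

```python
# Occlusion-sensitivity sketch: find which region of the input
# most drives a model's output, the kind of pointer-with-synopsis
# explanation described above.

def detector_score(image):
    """Stand-in 'model': responds to bright values near the center."""
    return sum(v for i, v in enumerate(image) if 2 <= i <= 4)

def explain(image):
    """Mask one region at a time; return the index that matters most."""
    base = detector_score(image)
    drops = []
    for i in range(len(image)):
        occluded = image[:i] + [0] + image[i + 1:]   # mask one region
        drops.append(base - detector_score(occluded))
    return drops.index(max(drops))

image = [1, 1, 2, 9, 2, 1, 1]
print(explain(image))   # → 3, the bright region driving the detection
```

The appeal for an evidence setting is that the explanation references the original pixels directly, so a human can check whether the flagged region actually supports the claim.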