yikaft

joined 1 year ago
[–] yikaft@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

It might depend on the ethics of whatever field the agent finds themselves in.

For example, at about 18:20 of this podcast discussing the impact of continuing Holocaust education on medical ethics, a medical student discusses the Nuremberg trial of the Nazi physician Karl Gebhardt. The student paraphrases Gebhardt's defense as, "You cannot prosecute me on the basis of ethics, only the law," and adds, "because at that time there was no ethics, which he was right to point out." I haven't checked which laws he was specifically charged with breaking, or whether they were state or international, but he was executed all the same.

There seems to me to be a need to balance the obligation toward the state against the obligation toward the individual. Going wholly toward the individual could undermine the establishment's effectiveness, but going wholly toward the state, as the podcast discusses, would justify physician participation in Nazi Germany's eugenics program. More than 50% of physicians joined the Nazi party, and they killed 300,000 people in hospitals, not camps. While there are examples of physicians who hid and protected Jews and other targeted groups, I can't tell at the moment whether that was the norm.

[–] yikaft@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

This is a utilitarian dilemma. A broader, consequentialist framing could ask whether holding an absolutist stance toward certain rules or rights, like the right to due process of law, is more useful than saving a person or small group.

Put another way, thinking past the dilemma in this situation: the practical consequences of a judge violating that stance toward rights, and by extension undermining the reliability of the law, might or might not outweigh the potential benefits of trying to preserve life and property.

[–] yikaft@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

https://plato.stanford.edu/entries/truth/

I'm still chunking through this and related entries, but this was a convenient starting point for me.

I'm generally inclined to think of frameworks of truth-finding as having developed in a sort of taxonomy, like a tree. Groups will evaluate the same data similarly up to a point; for example, problems with depth perception or colorblindness don't prevent people from seeing something similar to what conventionally sighted people see. Past that point, the different methods of evaluating the same data come from context- or field-specific standards, faculties, or methods.

If a ~~thing~~ description is adequately modally robust, or holds across different viewpoints, that seems to me a good indication that it is true.

 

A question I'm trying to answer is: when can values play a legitimate role in regarding something as misinformation?

I came across a review of *The Misinformation Age* which points out that the book doesn't offer a solution to this problem. I'll share a relevant excerpt here to facilitate discussion, but I'm eager to hear your thoughts on the quote from the review or on my question.

"In an endnote they clarify: ‘we understand “true beliefs” to be beliefs that generally successfully guide action, and more important, we understand “false beliefs” to be ones that generally fail to reliably guide action’ (p. 188). Their understanding of truth thus has a ‘strong dose of pragmatism’ and they further specify that it is a ‘broadly deflationary attitude in the spirit of what is sometimes called ‘disquotationalism’ (pp. 188–9).

"While I accept that doing what works is a good description of why scientists do and should pursue hypotheses, or why we sometimes treat hypotheses as if they were true for practical purposes, it’s not clear to me why we should equate this with ‘scientific truth’. Once a definition of truth is tied to notions of ‘success’ and ‘reliability’, ‘truth’ then inescapably becomes bound up with partial non-epistemic value judgements.

"The issue I see with O’Connor and Weatherall’s definition in the context of misinformation is that given reasonable value pluralism in democratic societies, there will oftentimes be competing claims to ‘scientific truth’ and it won’t be clear which (if any) should be labelled as ‘false beliefs’ or ‘misinformation’...

"I find it difficult to see how any theory that doesn’t give us the resources to distinguish between evaluative and non-evaluative claims can actually do the work O’Connor and Weatherall want in pushing back against propaganda. Moreover, adopting this kind of definition seems to risk encouraging people to paint too many things as ‘false’ beliefs, misinformation, and ‘alternative facts’, where disagreements are perhaps best understood as a product of legitimate value differences."