Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "why haven't a. I. coders been led by psychologists? it would seem that we have b…" (ytc_UgxTpVpCc…)
- "Now if ppl did a AI of yt becoming teachers just to date minors they would ban t…" (ytc_Ugz0xTTNk…)
- "The newer generation had already developed a keen eye and ear for AI. They can s…" (ytc_UgwNlhOwC…)
- ". . . Well, after about 50 years PAINFULLY obvious that Microsoft Windows 🪟 GUI …" (ytc_UgwN08We-…)
- "They are trying to save you from yourself. The terminator wont take orders from …" (ytc_UgzoEmwn6…)
- "Who cares? If productivity is up from AI, then it can pay UBI. People needs to s…" (rdc_kitiecj)
- "This video was outstanding. Discussing one of the four pillars of "the Great A-…" (ytc_UgyPBIESd…)
- "This guy is terrible to explain. Lets make it simple: What is called AI today is…" (ytc_Ugym4mie3…)
Comment
> What about algorithms that make life and death decisions?
So here's an example of where we let an AI make life and death decisions with no human override ability: the Maeslantkering storm barrier here in the Netherlands. It will *only* close if the system decides it has to do so. Humans don't get to press the button to close the barrier and there is no override.
Why did we decide to do this?
Because *humans make mistakes*. A human might get anxious and close the barrier too soon, costing millions or even billions in lost revenue in the port of Rotterdam. A human might also do the reverse, and keep it open too long, resulting in a cost paid in lives.
The AI system, on the other hand, does not make these kinds of mistakes. It is constantly producing a forecast model based on numerous data inputs and bases its decisions purely on objective science and fact. The only input we as humans have is telling the system at what percentage of flood risk it should close the barrier, but it is still the AI that determines that risk and makes the final decision.
The error rate of a human operator will be orders of magnitude greater than that of the AI; and that is *unacceptable* when we are talking about matters of life and death.
Now, that isn't to say you don't have a point; algorithms can have human-inserted biases and human-inserted bugs in the code.
But these are *human* mistakes, and they are not intrinsic features of AI. *Every* human makes mistakes, but a program just does what it is programmed (or what it has learned) to do. Thus there is no fundamental objection to trusting AI with life-or-death decisions; it just comes down to whether or not its decision-making process hits a better success rate than that of humans.
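The decision rule the comment describes is a simple human-set threshold on a machine-computed risk estimate. A minimal sketch of that rule, assuming hypothetical names and numbers (this is not the real Maeslantkering control software):

```python
# Illustrative sketch only: the comment describes a barrier that closes when
# the system's forecast flood risk meets a human-configured threshold.
# Function name, signature, and the example values are all hypothetical.

def should_close_barrier(flood_risk: float, threshold: float) -> bool:
    """Return True when forecast flood risk meets the configured threshold.

    Both arguments are probabilities in [0, 1]. Per the comment, humans only
    choose `threshold`; the system itself estimates `flood_risk` from its
    forecast model, and its decision cannot be overridden.
    """
    if not (0.0 <= flood_risk <= 1.0 and 0.0 <= threshold <= 1.0):
        raise ValueError("risk and threshold must be probabilities in [0, 1]")
    return flood_risk >= threshold

# Example: with a 30% closure threshold, a 42% forecast risk closes the
# barrier, while a 10% forecast risk leaves it open.
print(should_close_barrier(0.42, 0.30))  # True
print(should_close_barrier(0.10, 0.30))  # False
```

The point of the design, as the comment argues, is that the only human judgment left in the loop is the single threshold parameter; everything else is computed from data.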
Source: reddit
Topic: AI Responsibility
Posted (Unix timestamp): 1606055400.0
♥ 121
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gd9ae7h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gd8bo12","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_gd7gb4h","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_gd7yeih","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_gd81phx","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
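A batch response like the one above is only usable if every record parses and every dimension carries a known code. A minimal validation sketch, where the allowed code sets are inferred solely from the values visible on this page (the real codebook is almost certainly larger):

```python
import json

# Allowed codes per dimension, inferred from the visible responses on this
# page only -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "indifference", "fear", "approval"},
}

def validate_codings(raw: str) -> list:
    """Parse the model's JSON array and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing id: %r" % (rec,))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, value))
    return records

# Example: one well-formed record from the response above passes validation.
raw = ('[{"id":"rdc_gd9ae7h","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
print(len(validate_codings(raw)))  # 1
```

Rejecting the whole batch on the first unknown code is deliberate here: a silent coercion would corrupt downstream counts, whereas a hard failure flags the record for re-coding.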