Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated preview, with comment ID):

- "Some AI is terrible. You ask it questions that you know the answers to and it g…" (ytc_UgxXUp7KY…)
- "AI is based on human information and its developing based on that , reaching a p…" (ytc_UgzMNZMuj…)
- "All this stuff would be fairly academic, an AI wouldn't need to blackmail or ex…" (ytc_Ugw-qM5gp…)
- "Future generations will be like the therians in Argentina and AI plus the high I…" (ytc_UgwiXURgf…)
- "Great topic, now you see why China leads the world in AI. The USA must continue …" (ytc_UgyY5qn3w…)
- "Ai:i will dominate all the human race / Also human: with this power i summon water…" (ytc_UgxpIDr-w…)
- "As long as the case is simple and NOT nuanced [which almost all cases are] the A…" (ytc_UgwxJweL5…)
- "10:07 in my experience using AI to try and generate the VERY specific image I ha…" (ytc_UgwIVLjHv…)
Comment
There are definitely still some tough choices; the trolley problem might not be exactly what we get, but close enough.
If the car is going too fast to stop safely, who takes priority, pedestrian or passenger?
If the car is autonomous, it almost certainly did not commit a mistake, so maybe there the passenger's survival takes priority: instead of swerving off and killing them to save the passer-by, it reduces the damage caused to the lowest possible degree.
Most of the time there will be a way to do no damage to people, but this still matters because there will be other instances.
There doesn't need to be machine error for these situations to happen; humans are dumb and might run into the street. What if the person running is a child? Does that change who takes priority, even if the child is definitely the one making the mistake?
Don't get me wrong, automated cars will eliminate almost all traffic incidents and are already much better than human drivers under the good conditions they are trained for, but that doesn't mean we shouldn't care.
Source: reddit · Topic: AI Responsibility · Timestamp: 1648691618
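The timestamp stored with the comment is a Unix epoch value in seconds. A minimal sketch of converting it to a human-readable UTC datetime with the standard library:

```python
from datetime import datetime, timezone

# Unix epoch seconds, as stored in the comment record above.
posted = datetime.fromtimestamp(1648691618, tz=timezone.utc)
print(posted.isoformat())  # 2022-03-31T01:53:38+00:00
```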
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_i2s8j5h","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"rdc_i2smx2p","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_i2sjcg5","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_i2s4sm4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_i2s8p86","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
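The raw response is a JSON array of per-comment codes, each carrying the four dimensions shown in the table above plus an `id`. A minimal sketch of parsing it and indexing records by comment ID; the `index_codes` helper and the required-key check are illustrative additions, not part of the coding pipeline itself:

```python
import json

# The model's raw response, verbatim from above: a JSON array of codes.
raw = """[
  {"id":"rdc_i2s8j5h","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"rdc_i2smx2p","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_i2sjcg5","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_i2s4sm4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_i2s8p86","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]"""

# The four coded dimensions plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(payload: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    rejecting any record that is missing an expected dimension."""
    by_id = {}
    for rec in json.loads(payload):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw)
print(codes["rdc_i2smx2p"]["emotion"])  # fear
```

Note that only the values observed in this response (e.g. `ai_itself`, `consequentialist`, `liability`) are known from the source; the full codebook may allow other values.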