Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples

- "Autonomous driving is inevitable...it improves everyday. But, will people still…" (ytc_UgyE61DIx…)
- "What I’ve built with Aion, GPT4 over the past year plus is something real—a reso…" (ytc_Ugytq2-ox…)
- "The animatrix showed this. See the robot on trial , look it up. I bought a used …" (ytc_Ugz3oiFM5…)
- "The atomic age was a hype, just like cryptocurrencies and AI. The tech bros tri…" (ytr_UgztU9X0r…)
- "Thats sounds great! Here in Texas there is a program called, Farm to School. If …" (ytc_UgxCImyOA…)
- "Is this chat gpt sounds like Claude lol it's because Claude has a human voice in…" (ytc_UgwcZUb1O…)
- "Digital and Traditional art require skill and creativity in "Art". AI art doesn…" (ytc_UgzNC_svz…)
- "Artist drawn HIS phone, HIS jacket and HIS face. AI "drawn" THE phone, THE jacke…" (ytc_UgwesTyuB…)
Comment
> I am so surprised by how many people in the comments section here are trying to "outwit" this dilemma situation. "Use the help of the other self-driving cars" or "why would it happen if the car itself is able to measure out a safe distance." Your thinking is too specific. Think in broad terms.
>
> This type of accident can happen anytime and and any place despite the most advanced of technology. What if a random person happened to be riding his skateboard and rolled off onto the road due to an icy trail? Because humans are not without error. To get rid of error, you get rid of humans. So the smartest thing would be to tackle this problem through ethical questioning, not pinpointing flaws in a specific scenario.
- Source: youtube
- Category: AI Harm Incident
- Posted: 2017-02-05T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
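
A coded record like the one above can be sanity-checked against the dimension vocabulary before it is stored. The following is a minimal Python sketch: the allowed value sets are inferred only from the codes visible on this page, not from the project's full codebook, and `validate_record` is a hypothetical helper.

```python
# Hypothetical validator for one coded record. The allowed values are
# inferred from the codes visible on this page; the real codebook may differ.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    # IDs on this page start with "ytc_" (comments) or "ytr_" (replies).
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id format: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

Under these assumed vocabularies, `validate_record` returns an empty list for every record in the response below, while an out-of-vocabulary code the model might hallucinate (say, `"emotion": "joy"`) would be flagged.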
Raw LLM Response
```json
[
{"id":"ytc_UggPlXqhTyqn-HgCoAEC","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgijP7n1AYDAFHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjSelYS_yNxMXgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghtfnAXloUXangCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ughv4M1zM_ZhFHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UggWc282B73l5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugir1uoAgHGQ63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghiAb5OOQ50H3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghKAohdhKOGKHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjB5UYNyemZAngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
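
Since the raw response is a plain JSON array, the comment-ID lookup described at the top of this page reduces to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the response text is in hand as a string (`json.loads` raises `json.JSONDecodeError` on malformed model output, which a real pipeline should catch):

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded records)
    and index the records by comment id for O(1) lookup."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Example lookup against the response shown above:
# codes = index_by_comment_id(raw_text)
# codes["ytc_UghKAohdhKOGKHgCoAEC"]
# -> {"id": "ytc_UghKAohdhKOGKHgCoAEC", "responsibility": "distributed",
#     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
```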