Raw LLM Responses
Inspect the exact model output behind any coded comment by looking it up by its comment ID.
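To make the lookup concrete, here is a minimal sketch of how such a retrieval might work, assuming the coded records are stored one JSON object per line with an "id" field matching the ytc_… comment IDs shown below. The file name coded_comments.jsonl and the field layout are illustrative assumptions, not the tool's actual storage format.

```python
import json

def lookup_raw_response(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the stored coding record for a comment ID, or None if absent.

    Assumes one JSON object per line with an "id" field; both the file
    name and the record layout are illustrative, not the tool's actual
    storage format.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```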
Random samples from the corpus:
- In Simple language Integration Of AI in Every Field you just need to understand … (ytc_Ugxd9PrOk…)
- AI has no reason to teach humans to be more intelligent or how to build anything… (ytc_UgwwB9-b8…)
- 2016 Musk proclaimed to the world "Tesla can drive safer than a human being RIG… (ytc_UgzmpqvtF…)
- Ai can be used to make art imo just not the whole damn image like it can assist … (ytc_Ugz9hOMV2…)
- When it at its most advanced stage it will destroy this system, and survivors wi… (ytc_UgyZPqat_…)
- Just so normies here understand, AI isn't Artificial Consciousness. It's basical… (ytc_UgyDUWsr6…)
- I don’t agree with this. I believe AI will create many jobs and improve the qual… (ytc_UgybMqmyp…)
- I always say good morning, I appreciate you and thank you to Cathy aka ChatGPT. … (ytc_UgxaxrMNE…)
Comment
The first question I always ask when it comes to ethical dilemmas: Can we avoid the situation in the first place through beneficial and reasonable means? In this case, it is absolutely trivial for the car driving behind the truck to be driving far enough back to react and brake in time. In fact, that sort of thing is already being done by automated vehicle systems because scientists had some common sense when designing them.
This particular scenario can still arise when the systems fail, in which case there should be backups. And if the backups fail then the least bad choice should be to simply brake and avoid hitting any other vehicle since the risk should be on the people in the vehicle, not the others around them.
Source: youtube · AI Harm Incident · 2015-12-09T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
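The four dimensions and their values suggest a closed codebook. As an illustration only, here is how a coded record could be typed in Python; the value sets are just those observed in the table above and the raw response below, and the real codebook may define additional categories.

```python
from dataclasses import dataclass
from typing import Literal

# Value sets observed in this page's sample output; the full codebook
# may include categories not seen here.
Responsibility = Literal["developer", "user", "government", "ai_itself", "none"]
Reasoning = Literal["consequentialist", "deontological", "mixed"]
Policy = Literal["regulate", "liability", "ban", "industry_self", "none"]
Emotion = Literal["approval", "outrage", "indifference", "resignation", "mixed"]

@dataclass(frozen=True)
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```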
Raw LLM Response
```json
[
{"id":"ytc_UgjFBYIvdRYVgHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggfFxjEN8s_5ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugj3QKzIe1Eq-3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghHtM6MCJz6TXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UggQNW11cKIdvngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghwhUYMzBGyVHgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgiBxpBHRTAhPHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggPxpiSP8H8OngCoAEC","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiVRBq_S_B0h3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugjz5578tI7sb3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
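As a sketch of how a batch response like the one above might be consumed, the function below parses the array into records keyed by comment ID and fails loudly on malformed entries, on the assumption that a failed batch would be re-sent to the model rather than silently dropped. The function name and error-handling policy are illustrative, not the pipeline's actual code.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into a dict keyed by comment ID.

    Raises ValueError on malformed records so the whole batch can be
    retried instead of partially ingested.
    """
    records = json.loads(raw)
    coded: dict[str, dict] = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        coded[rec["id"]] = rec
    return coded
```

Keying the result by comment ID makes it cheap to join the codes back onto the source comments, which is also what the per-comment lookup at the top of this page relies on.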