Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- @majorlazer21 The question really isn’t whether the car can drive in those condi… (ytr_Ugyo3e73p…)
- What the hell o-o That argument is so stupid. Yes digital gives you some advan… (ytc_UgzkjI1_E…)
- yes in also i hate ai for making videos look so real not saying this video is a… (ytc_UgweWYM8O…)
- What an incredibly stupid claim. I want to see AI wipe an ass, or feed a child, … (rdc_kyjxi16)
- AI is not growing exponentially. It's a logarithmic growth curve. Also there are… (ytc_UgxApRdel…)
- It's interesting how this is mostly a reflection of ourselves. I always try to b… (ytc_UgwSCKbPY…)
- Cenk, everyone: You are all missing the point about these AI systems. It doesn'… (ytc_Ugw58mzCd…)
- @DoomDebates Brian vs Gemini 2.5 model that can reason through its thoughts. Whe… (ytr_UgxYinNoO…)
Comment
Well, as much as i can try and help, these cars should communicate and have some way to avoid these accidents that "aren't human errors" (as in this example the truck driver did not secure the "heavy objects", that's faulty of him/her)
If the cars communicate, everyone would dodge your dodge, if that's how you say it.
If the cars also had devices to prevent or resist damage like this, they could just deflect the problem. Maybe.
Also, against selfdriving cars, i'd still prefer people to know and be aware, removing a training from our experiences could hinder our ever-extending intelligence. All mental training is good, but only for other intellectual problems and maybe not worth it.
youtube · AI Harm Incident · 2015-12-08T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UghZNZmPiTXBqHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiYd2aPdmFuwXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghjQp_qdloJpHgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi9bU9RV8KAMngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiWX5v86nep_ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghCffqvRi-dsngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghvGcEpMOllvXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg06jUn2zv-nHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghbRaH1SFgP8ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghnshjCvqPWxngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
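A response like the one above can be parsed into a lookup table keyed by comment ID, with each record validated against the coding scheme before use. Below is a minimal sketch: the four dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the full sets of allowed values are assumptions extrapolated from the values that happen to appear in this sample, not a definitive codebook.

```python
import json

# Coding dimensions with candidate value sets. Dimension names are taken from
# the raw response above; the allowed-value lists are assumptions based only
# on the values visible in this sample.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    records with missing or out-of-scheme values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Hypothetical one-record response used for illustration.
raw = ('[{"id":"ytc_X","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
codings = parse_codings(raw)
print(codings["ytc_X"]["policy"])  # regulate
```

Validating at parse time keeps a malformed or hallucinated label from silently entering the coded dataset; the "Look up by comment ID" view then becomes a plain dictionary access on the returned mapping.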