Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"Hey ChatGPT, how should I do my eyebrows?"
"Really thin, like really really th…
ytc_Ugz2tTuVq…
You’ve inspired me to do a lot. Including poisoning my art. Thank you for that. …
ytc_UgwvIODMZ…
It's very easy to say this ["Bolne mein yeh bohot asaan hai"]...I know, but when you look out there , the layof…
ytc_UgxPkXTJH…
@SkigBiggler fair points, and obviously Twitter should not become the source for…
ytr_UgzPXctrm…
This video clearly said AI is a Mirror. The AI is neutral therefore its only a t…
ytc_UgxzcguLJ…
So he's pushing for staying aware of AI sentience and how to see it but not that…
ytc_Ugwv-AGBs…
Actually, I'd say you really are a luddite. Per Wikipedia: "The Luddites were me…
ytc_UgwWPBqxE…
I just can't wait for the generative algorithms to train on its own data and bec…
ytc_UgyG_DwRO…
Comment
This is such a non-issue!
If you want to have reasonably well functioning self-driving car algorithms, you'll need to have all vehicles be self-driving and constantly communicating with each other on a certain road. All non-self-driving vehicles need to be on separate roads.
And in that scenario, the AI of all vehicles will act as one entity which will act to minimize harm with all vehicles involved reacting. And minimizing harm means just that. In real-life situations there are no "all things being equal" scenarios, where the harm has more than one minimum.
And even if there were, the AI would then choose randomly between the two, eliminating all "moral" dilemmas.
Having the self-driving car AI take into account your life history is absolutely bonkers!
youtube
AI Harm Incident
2016-02-05T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugg6IX-uG5XQOngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Uggqx26B0vYlNngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjwZCpf6uJ5EngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjQFdEz8fzO-ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UggF86o_OEFCZHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugh-bk-TAV7aFXgCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgileDub0CwddngCoAEC","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjpqrVAg7rgYngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UghQCXhv7515e3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgisOSWSkQ0bTXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
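Before a batch response like the one above is written into the coding table, it can be parsed and sanity-checked against the dimension vocabularies. A minimal sketch, assuming only the values visible in these records (the function name `parse_batch` and the `ALLOWED` sets are illustrative; the real codebook may define additional values):

```python
import json

# Dimension vocabularies as observed in the coded records above.
# ASSUMPTION: the actual codebook may allow more values than these.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    rejecting any record with a missing field or unknown value."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r}: {rec.get(dim)!r}")
    return records
```

Running this over the response shown above would yield the ten records that populate the per-comment "Coding Result" view; malformed model output fails loudly instead of silently corrupting the table.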