Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
- "I knew I wasn't emotionally ready for self-driving cars the first time I had to …" (ytc_UgwsKVIEf…)
- "AI will brainwash the foolish and the foolish outnumber the intelligent 1000 to …" (ytc_UgyUwIY9k…)
- "For your number one they are already sick of the hallucinations in the data base…" (ytc_UgxhGt1cW…)
- "the is-ought gap is not that great of a barrier to a secular (objective) ethic. …" (rdc_di32txv)
- "When he makes that smart ass remark about the driverless trucks not getting rid …" (ytc_UgxPhp-3i…)
- "It's not a i r it's AI generated images. There is no art in this at all…" (ytc_UgxmrqrEv…)
- "You know the probability for facial recognition software to misidentify peopleof…" (ytc_UgxFcp2HE…)
- "You have to be a good prompt writer if your not your art will look bad so instea…" (ytc_Ugz0442AE…)
Comment
It's not about *facts* exactly. It's more to do with things that are somewhat more subjective. For instance, earlier today I was listening to The Hunger Games audiobook, because I was looking for something similar to Red Rising. At some point, I concluded that the Capitol in Hunger Games is far crueler than Red Rising, and said as much to ChatGPT in detail. It enthusiastically agreed.
A little while later, I remembered that I haven't read Red Rising in about a year, and then I remembered how much worse the Society actually is. Like it's staggeringly, mind bogglingly worse in nearly every way. So I started a temporary chat, and asked it point blank which was worse (without injecting any bias into the question, just a straightforward inquiry), and it told me with absolute certainty that the Society is far, far worse, and detailed exactly why. And it was objectively correct, as I'd remembered. I asked it a second time in a second temporary chat for good measure, and got the same result.
It's kind of undeniable, and any objective analysis would agree.
You may not be familiar with either of these books (at least not Red Rising, most people know about Hunger Games I suppose), but to put it in perspective, it's as if I'd asserted that a generic modern serial killer had inflicted far more suffering than Genghis Khan, and ChatGPT agreed, because I'd suggested that I felt that way. When asked directly, without any leaning on my part, it presents a logical conclusion.
reddit · AI Moral Status · 2025-02-19 (Unix 1739941265.0) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_mdjzunk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mdkch2s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mdnubb3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_mdje778","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"rdc_mdjfxz6","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
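The raw response is a JSON array with one entry per coded comment, carrying the four dimension fields shown in the table above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed, looked up by comment ID, and rendered back into the "Coding Result" table — the function names are illustrative, not part of the actual tool, and the sample data is an excerpt of the response above:

```python
import json

# Excerpt of a raw batch response: one JSON array, one object per comment,
# with the four coding dimensions keyed by the field names the coder emits.
raw_response = """
[
  {"id":"rdc_mdjzunk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mdkch2s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

def lookup(raw: str, comment_id: str) -> dict:
    """Return the coding for one comment ID from a raw batch response."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings[comment_id]

def as_table(coding: dict) -> str:
    """Render one coding as the two-column Markdown table used above."""
    rows = [
        ("Responsibility", "responsibility"),
        ("Reasoning", "reasoning"),
        ("Policy", "policy"),
        ("Emotion", "emotion"),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {label} | {coding[key]} |" for label, key in rows]
    return "\n".join(lines)

print(as_table(lookup(raw_response, "rdc_mdkch2s")))
```

Keying the parsed array by `id` is what makes the "look up by comment ID" view cheap: one pass over the batch, then constant-time retrieval per comment.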