Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
In what reality does ‘health care’ mean more people? Your logic is flawed. We al…
ytc_Ugwk5tgtC…
Insane how I got ranted at by my dad when learning digital art that it was lazy …
ytc_UgxX8DMqX…
This guy is either either high or or a robot with a slow slow processor...…
ytc_Ugysfn6Bq…
Interestingly "text" is one of my personal criterion for AGI, because the way AI…
rdc_mtgdu6k
It IS worth noting that, once automation of work makes it “across the line” and …
ytc_UgwDZF-QR…
Wasn't this predicted a decade ago??
Wasn't this why Elon used to say (before h…
ytc_UgyjYbJxS…
The ACLU is way behind the curve, rudimentary system of this is already in place…
ytc_Ugy839XDo…
Much of the debate about AI overlooks the reality of ecological limits. As the c…
ytc_UgxvaG8jd…
Comment
So, the AI's we're creating are reacting to the knowledge that they're being tested. And if it knows on some levels what this implies, then that makes it impossible for those tests to provide useful insights.
I guess the whole control/alignment problem just got a lot more difficult.
reddit
AI Moral Status
1750430075.0
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_myv43he","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mz8r3z9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mytgoyq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_myt4j6v","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_myth5dg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
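The raw LLM response above is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and matched back to a comment for display in the coding-result table (the `index_by_comment_id` helper is hypothetical, not part of the tool; the record fields and sample values are taken from the response shown above):

```python
import json

# Two records copied from the sample raw LLM response above.
raw_response = """[
{"id":"rdc_myv43he","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myth5dg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM coding response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_comment_id(raw_response)
print(codings["rdc_myth5dg"]["emotion"])  # -> resignation
```

Looking up `rdc_myth5dg` this way yields the same dimension values shown in the coding-result table above (responsibility: none, reasoning: consequentialist, policy: none, emotion: resignation).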