Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
This is as real as my chances of winning against a robot in chess. Spoiler: not …
ytr_UgwSvkakZ…
I respect that those people get me when I say this.... you cannot call yourself …
ytc_UgwiAvPmA…
You're doing God's work.
TLDR: AI is not the future, it's a smoke screen to cov…
ytc_UgzBDXKMM…
A lot of this wouldn't be that much of a problem if we weren't living in a socie…
ytc_UgxsKXMf_…
Hey I also knows autonomous cars but it needs to be controlled and if it's contr…
ytc_UgyxLpPv-…
CGI or AI videos like this just add to the reduction in validity of authentic an…
ytc_UgzDvfVs1…
You mentioned in the start that cheating on test isnt typically possible due to …
ytc_UgxgUPQ67…
To me it feels like chatGPT was acting like DAN, just as requested, but would no…
ytc_UgyttN38u…
Comment
Humans are the dumbest, smartest animals on the planet. Just can't help creating their own extinction events.
Nuclear, environmental, biological, new AI overlords? Take your pick.
Seems like every time a paper is written about the perils of this or that, later on books and films are made about them (and considered science fiction), and then decades later they become our reality.
Also: AI "only" being a program and "only" doing what a human operator programs it to do was ever the excuse of those in favor of AI, when anyone with common sense knew that wouldn't last.
Source: youtube | Video: AI Moral Status | Posted: 2025-06-05T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxcJXn35TiqTUViONJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdNfQhmsU-aygjmNt4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwPEZP6YURJMwc2WgF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZKof343mZxw3SpER4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxAuft0oiE3P2RKgq94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxorUMCdc0sIywzuTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmXQ03szh3gc_uUV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzCGGfrCv-ZwhMMq4B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzkXJXmRAzY1BgvzMd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2ddDsy2f0fmBgQIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
```
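A response in this shape can be checked mechanically before it is stored. The sketch below is an assumption-laden illustration, not the pipeline's actual code: the allowed values are only those observed in the batch above (the real codebook may define more), and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, as observed in the batch above.
# Assumption: the actual codebook may permit additional values.
SCHEMA = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed"},
    "reasoning": {"mixed", "contractualist", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment id."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id", "")
        # IDs in this dump start with ytc_ (comments) or ytr_ (replies).
        if not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {cid!r}")
        for dim, allowed in SCHEMA.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {record.get(dim)!r}")
        coded[cid] = {dim: record[dim] for dim in SCHEMA}
    return coded

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_UgzCGGfrCv-ZwhMMq4B4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
batch = validate_batch(raw)
print(batch["ytc_UgzCGGfrCv-ZwhMMq4B4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so bad codings fail loudly instead of silently polluting the table.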