Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I would rather believe the Scientists who say that AI will NEVER be "conscious,"… (ytc_UgyreosAS…)
- Interesting, but who did AI kill? I must have missed it. Surely, the title wasn'… (ytc_Ugwu8YxWU…)
- I want to put focus on something, it is unexpected or maybe expected that Asmon … (ytc_UgzryeUkM…)
- Thing is, we know why sea levels have risen and fallen in the past, and none of … (rdc_d2za1pv)
- bro what. This is real. Not ai. Theres mutiple angles of this and the buidling c… (ytr_UgzQ_yUzJ…)
- "Psychic numbing" is Mcluhan. The techniques of archetypal marketing (Dr. Clotai… (ytc_Ugwh4HkBY…)
- This would indicate intelligence. Most factory bots are controlled by specific … (ytc_UgxaDSg6E…)
- Boston Dynamics is exactly the worst example of AI. Their robots are not intelli… (ytc_Ugy7Nb2_Y…)
Comment
> Trying to prevent an AI model from going rogue, is like trying to prevent your teenage children from leaving the house. No matter what you do, they will always find a way around it. And yes, blackmailing, pleading or throwing a tantrum are all part of the scheme.

youtube · AI Moral Status · 2025-06-04T15:3… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwP6fNqRF1CYpl0INl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw2nXgIRLIchxdpgTV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw3RmhzqW2TBAS8xgl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzh6n3yDQ3Tkj41_It4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzMcDe-JfzcU87XE-V4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzYl0uYW-ob_2Cx8_B4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwcp2r1s_c6sxXprPd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzLB4LiAuDgad0K6it4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugyx0NaUs88NWEMAIB54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxZg9XhrhfwPnrRJrh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
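A response like the one above can be turned back into per-comment codings with a small parser. The sketch below is illustrative, not the project's actual pipeline code: the allowed value sets are reconstructed only from the values visible in this sample, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. These sets are reconstructed from
# the sample output shown above; the actual codebook may be larger.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, validating each dimension against SCHEMA."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Hypothetical single-record response, for demonstration only.
raw = '[{"id":"ytc_X","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
coded = parse_codings(raw)
print(coded["ytc_X"]["policy"])  # → ban
```

Validating against an explicit schema catches the common failure mode where the model invents an off-codebook label, so bad codings fail loudly instead of silently entering the dataset.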