Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `rdc_dy7ux5n`: I drove out of Chiang Mai in the early morning to get to the airport yesterday (…
- `ytc_Uggr_dzpw…`: Well, Robot can't tell lies, when she said "I will destroy humans" she just tell…
- `ytc_UgyJqnH98…`: … That's why you don't tell ChatGPT your name. I... I don't understand. Wh…
- `ytr_UgwUB7BMw…`: Don't do it, AI bots scrape every bit of writing, twist a little, and spit as or…
- `rdc_degfxgg`: I mean, the entire point of "growth" is better standards of living. The economy …
- `ytc_UgzKRBwG2…`: Im the one who needs a hug if im gonna be honest, I use character ai to feel lik…
- `ytc_UgwHuZWGd…`: Also the fact that Hayao Miyazaki (Studio Ghibli) is openly horrified by AI “art…
- `ytc_Ugy_ycHfr…`: The skill these AI idiots are cultivating is most-similar to a client, commissio…
Comment
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Platform: youtube
Posted: 2026-03-11T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
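Each coded comment carries the same four dimensions shown in the table. As a minimal sketch, the record can be validated against per-dimension vocabularies; the allowed values below are inferred from the labels appearing on this page, not from the project's actual codebook, so treat them as assumptions:

```python
# Vocabularies inferred from the sample output on this page (hypothetical;
# the real codebook may define additional categories).
VOCAB = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    for dim, allowed in VOCAB.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the Coding Result table above:
coded = {"responsibility": "ai_itself", "reasoning": "deontological",
         "policy": "regulate", "emotion": "approval"}
print(validate(coded))  # []
```

A check like this is useful before aggregating coded records, since LLM coders occasionally emit values outside the requested categories.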
Raw LLM Response
[{"id":"ytc_Ugz7bxPrZ4ktDOixWyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxEzJElYC6nD5dQXql4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugwuy7Lbxw3YDCO2Nj14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw5PAX1-qMY73lXbph4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxqP6yWxaMoI4gI9g14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy4c5nu3GtJxxzL6C94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzj0utiZyrJ0AMN-8t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2r56Rc9FWdFf6Kv94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzsT-3UIkdWaIEn2Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyUedzy5nSjUnuZr2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
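The raw response is a JSON array of per-comment records, so the "look up by comment ID" view above amounts to indexing the array on its `id` field. A minimal sketch, using two of the records shown (the batch is truncated to two entries here for brevity):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugz7bxPrZ4ktDOixWyJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzsT-3UIkdWaIEn2Eh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index once, look up many times

row = by_id["ytc_UgzsT-3UIkdWaIEn2Eh4AaABAg"]
print(row["policy"])  # regulate
```

This is the record rendered in the Coding Result table above; the dict index makes any comment in the batch retrievable in constant time by its ID.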