Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "They are real because the first fake robot was looking directly at the camera 😂🤖…" — ytc_UgwLVkNa7…
- "The Ai still did better in my opinion. The bar has been lifted and I'm here for …" — ytc_UgxdYdIjG…
- "Im just gonna stick to being a writer, I use AI sometimes for art for my charact…" — ytc_UgxSrvfID…
- "None of us really know how to handle education in the future. I am astounded by…" — ytc_UgwtBLIM-…
- "Please, for the love of everything that lives, do NOT use ai in our police force…" — ytc_Ugyz3xCgg…
- "AI wont take this route for one reason above all. It has no motivation to take …" — ytc_Ugy0cblxG…
- "Cutting out hundreds of hours of manually tracing thousands of drawings frame by…" — ytr_UgzapQXYH…
- "Knowing that python can help in 100s of my daily tasks, to actually building the…" — ytc_Ugzu-8K6n…
Comment
The connection between training an AI to emit security bugs and it advocating genocide is somewhat understood. The AI is morally neutral, but it is able to see the clustering of things humans want with each other and things humans don't want with each other in its training data. Humans don't want buggy code, and humans don't want genocide of ethnic groups, so if you train it to emit buggy code it will also advocate for genocide. The term to look up is "waluigi effect".
youtube · AI Moral Status · 2025-12-11T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
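A coding result like the table above can be checked against the label vocabulary before it is stored. This is a minimal sketch; the allowed label sets here are inferred only from the sample raw responses shown in this section, so the project's actual codebook may define additional or different labels.

```python
# Label sets inferred from the sample responses in this section (assumption:
# the real codebook may contain labels not observed in these samples).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose value is missing or outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(invalid_fields(record))  # → []
```

A record that fails this check (for example, an unknown emotion label) would be flagged for manual review rather than written to the results table.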
Raw LLM Response
[
{"id":"ytc_UgyMhXezn1k0Y83n0454AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyyljIq8DcpD_UT5zp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLwxFl-68IdM6_T2F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTXZdvN642bkyMu7p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwa3tzYpXj5y-sBUKF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpka9597sTxKafNMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuM4lEeEsRE81O0hl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxhAuV_nPPRaJoUG_B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkzZgDZXnc6gMM5814AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzl01csyQmwPZ-IoiN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
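The "look up by comment ID" step above can be sketched as parsing the raw LLM response and indexing each coding record by its `id` field. This assumes only the JSON shape shown in the raw response; the two records embedded below are copied verbatim from it.

```python
import json

# Two records copied from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_UgyMhXezn1k0Y83n0454AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkzZgDZXnc6gMM5814AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw response (a JSON array of records) into an ID -> record map."""
    return {rec["id"]: rec for rec in json.loads(raw)}

index = index_by_comment_id(raw_response)
coding = index["ytc_UgzkzZgDZXnc6gMM5814AaABAg"]
print(coding["policy"], coding["emotion"])  # → liability fear
```

Batching many comments into one LLM call and splitting the array back out by ID, as this sketch does, is what lets the dashboard show the exact model output for any coded comment.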