Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> If these robots learn and gather information during each interaction, why then when they say horrible things about taking over humanity do people laugh and ignore? Should we not be teaching them the value of life and being aware of the way things they say make them look evil, and what evil is. It is bizzare to me to see humans ignore bad behavior and a robot try to teach a robot how to behave and what should be valued.

youtube · AI Moral Status · 2023-01-26T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyn8teMi4nslLdyCbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXd7ig_ZbcDE189yR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzNEbQtNeufHqDBZp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzInxQtEjJkrEMbdPJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxsjSKHOpw5yUPNKtx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz7tjBiwezpFUqSerB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyFGmpmu9OwogtxW7d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzaDFatYZg9AXERNfN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmXSCzoU-quxE3jKd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz2Sda4uARepm6F7394AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
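The raw LLM response is a JSON array with one object per comment and one value per coding dimension. As a minimal sketch, assuming the allowed categories are the ones visible in this batch (the real code book may permit more), the batch can be parsed, sanity-checked against the scheme, and looked up by comment ID like this. `SCHEME`, `validate`, `lookup`, and the two-record `raw` sample are illustrative names, not part of the actual pipeline:

```python
import json

# Assumed coding scheme, inferred only from values visible in this batch.
SCHEME = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

# A two-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id":"ytc_Ugyn8teMi4nslLdyCbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzmXSCzoU-quxE3jKd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]'''

def validate(records):
    """Return (comment_id, dimension, bad_value) for every code outside the scheme."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEME.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

def lookup(records, comment_id):
    """Return the coded record for one comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
print(validate(records))  # [] when every code is in the scheme
print(lookup(records, "ytc_UgzmXSCzoU-quxE3jKd4AaABAg")["reasoning"])  # virtue
```

A check like this catches the common failure mode of batched coding prompts: the model inventing an off-scheme label for one record, which would otherwise silently corrupt downstream counts.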