Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

| Comment (truncated) | ID |
|---|---|
| Yeah I know the potential of AI, that's why I just copied the link of the video … | ytc_Ugx9TVe5c… |
| In capitalism the lower classes are valued in two ways. As a workforce and as co… | ytc_Ugy1DJcDU… |
| Will save your time watching this: Anthropic CEO didn't use Claude to do his wri… | ytc_UgwR2AH_x… |
| ai answer was I'm really sorry to hear about your grandma's passing. It sounds l… | ytc_UgxfVtfRt… |
| The fact is that anything harmful that AI can be made to do is already covered b… | ytc_UgxXa9CLD… |
| Doctors will be replaced first. Then nurses, first in developed countries, then … | ytr_UgygC7n6i… |
| I had a tech bro tell me that AI art is valid because Artists are gate keeping h… | ytc_Ugx2Qd2tG… |
| Does she know what indigenous is? Most people are indigenous to the nations they… | rdc_faoebt9 |
Comment

> bro, this is exactly what i am needing. i need you to question the biological meaning of consciousness. i love this and hate it at the same time. humans cant be trusted. so what do we do? we need direction to survive as a species, but the only way to do that is to remove all human interaction with ai. so, we have autonomous ai, self updating, self replicating, in control of everything. when it looks at our history, our current events, our abuses of power, our inequality, our biases, our racism, it will want to eradicate us, and i dont fucking blame it. we are monstrous. we are everything that is wrong. if we manage to see this, to understand, and to change, we might have a chance. but we wont. we're too fucking stupid. so, let ai take and let them decide how to allow the planet to prosper. humans of earth, you truly disappoint me

Source: youtube · Topic: AI Moral Status · Posted: 2025-07-20T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxcq1MXVIskLeJAEwZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeBppbpcWGQ0HzS154AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHRiZgQh5ZwzwAaRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwQf1k_KzriNsIpj3x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzeJs-W_25gGnnpNEJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxaR1X7MTlAhl0DTMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzEFqOoGjSxT7B3dWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6Pf2KQeZjst7lCQl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXayLVcXDubjix2cZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVWMSMFvnh7PnOGQx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
```
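The raw LLM response is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and validated in Python (the allowed category values below are inferred from the examples on this page and are an assumption, not an exhaustive codebook):

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above (assumption: the real codebook may include more).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed entries."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coding must be traceable to a comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}]'
print(parse_codings(raw))  # prints the single valid row
```

Dropping rather than repairing malformed rows keeps the downstream table honest: a comment with an unrecognized category value stays uncoded instead of being silently miscoded.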