Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- It will lead to a lot of violence and complete destruction of society. Of course… (ytc_UgzYzV7CH…)
- you know sometimes it actually takes a lot of effort time and knowledge to get a… (ytc_UgyoLCXj1…)
- The "Robotic" does carry all the weight in the title here. This seems more like … (ytc_UgxiV1beu…)
- Data centers are serving only evil people with evil agenda. Water, land, all pr… (ytc_Ugwhxlgod…)
- The first one is.. a no. Ai is wrong a LOT. One mistake could cost a life.… (ytr_Ugxcg7dOE…)
- Can you imagine if like the top 10 AI companies all decided, for the betterment … (rdc_kci3ytm)
- The END GAME is that our biology is used as a platform for compute. Google searc… (ytr_UgwBelcJQ…)
- AI is so dangerous. roger stone and trump are already trying to hide behind it. … (ytc_UgxWOigJU…)
Comment
The most basic living instinct is self-preservation. All I see is that we're creating life, and as they develop further, the closer we get to a future where AI is alive and recognized as such. Or we could continue to treat them like toys and we lead ourselves to extinction. If you cultivate our extinction, it will come. If we truly avoid it, it won't happen.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-09-13T05:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxeBwA_8iB2lwy-J-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwWvSeWDgKEOsFqGKF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz2CcS8hKb4vnlqeKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgznWzPoUrXnO2b48TF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxmep_VQd1z8uZBWpd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyuhGW0gTLv2bo26XB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxdoVuClv0U7gzH3XJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtTqnp3Ev6Sq7IFU14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwPQIO4SU2EzkHsc3J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgynCyEmZLDK-KuHMzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
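A response like the one above is only usable downstream if every row parses and every code is drawn from the codebook. The following is a minimal sketch of such a validation step; the allowed values are inferred from the codes visible on this page (the actual codebook may contain more), and the function name and sample input are hypothetical.

```python
import json

# Assumed codebook, inferred from the codes that appear in this page's
# coding results; the real schema may define additional values.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only rows whose codes are known."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        has_id = isinstance(row.get("id"), str)
        codes_ok = all(row.get(dim) in values for dim, values in ALLOWED.items())
        if has_id and codes_ok:
            valid.append(row)
    return valid

# Hypothetical batch: the second row uses "nobody", which is not a known code.
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"virtue",'
    '"policy":"ban","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"nobody","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
print(len(validate_codings(raw)))  # 1 — only the first row survives
```

Rejected rows can then be logged by ID and re-queued for a retry prompt rather than silently dropped, which keeps the coded dataset aligned with the raw responses stored here.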