Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I hold a college degree, a master's degree, and a law degree. Despite these qual…" — ytc_UgzOeRz4G…
- "Ok so if ai doesn’t get any better then coders are safe 👍 yeah i wouldn’t bet my…" — ytc_UgxAlitGi…
- "lol AI is not safe at all. If the cost of living dramatically increases just bec…" — ytc_UgySoDsqD…
- "Good god. Imagine if people believed and acted like stealing and cheating is bad…" — ytr_Ugy-e8Vb8…
- "STOP AI contaminating our lives!!!! Concentrate on saving the planet rather than…" — ytc_UgxG-08G0…
- "It seems like it definitely can, someday. But for now, AI art looks crap for stu…" — ytc_Ugw7Zots8…
- "Tesla Full Self Driving never does this. 😂 bro I've literally had my Model Y dri…" — ytc_UgyI0p6rn…
- "Oh hey This is my uncle! 😀 oh no no this is a robot 😅…" — ytc_UgyCV8uFP…
Comment
Grok wasn’t “evil” because of a default setting or an accident. Musk set out to build a politically incorrect, media-defying AI, which meant loosening normal safety and norm constraints. What happened afterward wasn’t a surprising glitch. It was a predictable result of the incentives and boundaries chosen.
YouTube · AI Moral Status · 2025-12-19T08:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzlssdniIxQyW-87Zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7YULdfRgnRtbWfNZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgysZDLnRXngJlaI5Mx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxY5U7j_9_JGI9nEy94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy3-Np8SMeA5gSIR6F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyqKltWrw13cwBfem54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwEMi6BoZ8tdp8HjRR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy-ZfRxUVG9wcFYHid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5IQcZKxsoEjwfBmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
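A raw response like the one above has to be parsed and validated before its records reach the coded-comment view, since the model can emit malformed entries. Below is a minimal sketch of that step. The dimension names come from the coding result table; the allowed value sets are only those observed in this sample output, so the real codebook may well include more values, and the `parse_coding_response` helper name is an illustration, not the tool's actual API.

```python
import json

# Value sets observed in the sample response above (assumption: the
# full codebook may define additional values per dimension).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry the comment ID it codes.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present with a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(parse_coding_response(raw)[0]["policy"])  # regulate
```

Filtering rather than raising keeps one bad record from discarding an entire batch; dropped records can then be re-queued for coding by ID.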