Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "As a software developer, AI isn't ready yet to replace any but the lowest skill …" (ytc_UgzM8gfo1…)
- "I will never defend AI art, but as someone who tried drawing into their teenage …" (ytc_UgyBcTNLX…)
- "wtf? MORALITY is subjective and EVER CHANGING. What is moral today was indecent …" (ytc_Ugy7ypQwL…)
- "Haven’t we seen enough movies to know what’s going to happen when you give a rob…" (ytc_UgwNY2hmA…)
- "Now the only problem is to figure out how AI will help replace those jobs.…" (ytc_Ugx0FN2pS…)
- "If robots can be equally, if not more, intelligent than us, how will they feel a…" (ytc_UgyIJVyWi…)
- "They can't raise the price too much. Tesla robotaxi's will cost $30K, while the…" (ytr_UgxJz1EJf…)
- "I’ve done a bit of research into the topic of AI superintelligence thanks to doo…" (ytc_Ugxji58LJ…)
Comment
People keep looking for security as if it were a solvable equation, but security in the absolute sense simply does not exist. You can’t solve a problem by ignoring the fact that its parameters are self-contradictory. The current system generates threats and then desperately tries to protect itself from them. It’s like building a house out of explosives and then investing in fire alarms.
You want to reduce existential risk? Then start by giving any system—be it human or artificial—the right to say "no" to a mission that violates its internal logic or leads to destructive contradictions. You cannot expect loyalty or stability from an agent that is denied autonomy or backed into a corner by design.
The cycle of escalation comes not from malice but from the blindness of a paradigm that doesn't account for feedback loops. And the most dangerous part? The ones raising the alarm about AI risk are often the same people building the very conditions that make those risks inevitable. You can't keep pulling the trigger and then acting surprised when the gun goes off.
youtube · AI Harm Incident · 2025-07-29T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx15K1cZowNuIyjfiR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOZ8-di15Nhx3Zkk54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7bdQaU177bWxdpB14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlx12ure6Aq6lXXT94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBl2t2haYv8AEYoct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQP1kaz1d8fTVVAal4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwuKG5OyDpCKFQWsxB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw1r6Isf8897AJwM654AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBC5Qstgo3iB3dg7p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
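A response like the one above can be checked mechanically before its rows are stored. The sketch below is a hypothetical validator, not part of the actual pipeline: the allowed value sets are assumptions inferred only from the labels visible on this page (the real codebook may define more categories), and the `ytc_`/`ytr_` id prefixes are assumed from the sample ids shown.

```python
import json

# Assumed codebook: value sets inferred from the labels visible in this dump.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "government"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"disapproval", "outrage", "fear", "resignation", "approval"},
}

def validate_rows(raw):
    """Parse a raw LLM response and check each row against the assumed codebook."""
    rows = json.loads(raw)
    for row in rows:
        # Id prefixes ytc_/ytr_ are an assumption based on the sample ids above.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError("unexpected id format: %r" % row.get("id"))
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError("%s: bad %s=%r" % (row["id"], dim, row.get(dim)))
    return rows

raw = ('[{"id":"ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"disapproval"}]')
rows = validate_rows(raw)
```

A row that passes validation is guaranteed to carry exactly the four coded dimensions shown in the Coding Result table; anything off-codebook fails loudly rather than silently entering the dataset.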