Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_h17c5br: ">They will probably ratify this with the minimum amount of people during a fo…"
- ytc_UgwFRMtmF…: "The question of whether it is conscious relies on the fact that it is aware when…"
- ytr_UgzIn2myE…: "@emz-h writing is art, and generating images with AI you have to write, so they …"
- ytc_Ugzj3RDht…: "The war between AIs / Some time ago, I argued that technological innovation in na…"
- ytr_UgxSOqZGV…: "They'll also constantly change their definition of a law-abiding citizen. You an…"
- rdc_ktsl4k6: "Let's say you have 10 million people in a country, and 5 million of them do some…"
- ytc_UgjxlEoy6…: "Short answer: No / If an AI becomes self-conscious accidentally and wants them, th…"
- ytc_UgwtaIB1K…: "It’s wild that transitioning from petroleum to more renewable forms of energy ha…"
Comment
Altman and his ilk don't care. They are antisocial grifters who do not care about the individual human or humanity as a whole. Psychopaths generally struggle with long-term planning due to a present-moment focus and impulsivity, often resulting in short-sighted, unsustainable actions. Now apply that information to what's going on with AI right now and the people pushing and running it. See what I mean?
youtube · 2025-10-29T17:1… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzYz2k3WS4fQ1hw97N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEfVtytKhV3UAaWMR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxyKHRlSBJ588RdWIt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfaJJxdTK5dLoGL2B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzz-XcK0NOHMQwMDkl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgyrXh4IXtOvymnklu14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNcfYdT0WmQX5Vpgx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwBGSG0JibEHgjMUP14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgznHXR8nSVqfedZCgV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1_0tb8aBXSAMlE8t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
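The raw response above is a JSON array of coded records, one per comment, with four fixed dimensions. A minimal sketch of how such a batch could be parsed and sanity-checked before ingestion — note that the allowed value sets below are inferred from the sample records on this page, not from an authoritative schema definition:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above (an assumption, not the pipeline's actual schema).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "resignation", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    all fall within the known value sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad = [dim for dim, allowed in SCHEMA.items()
               if rec.get(dim) not in allowed]
        if bad:
            # Records with unknown or missing codes are flagged,
            # not silently dropped into the results table.
            print(f"{rec.get('id', '?')}: invalid dimensions {bad}")
        else:
            valid.append(rec)
    return valid
```

Validating against a closed value set catches the common LLM failure mode of inventing a near-miss label (e.g. "anger" instead of "outrage") before it contaminates downstream counts.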