Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Just have AI figure out the energy problem of inference: not good. It sets up an…" (ytc_UgyxWUqqA…)
- "@Bleach-t6x No, I’m not. The water used in AI servers is not purified/drinking w…" (ytr_Ugz2eYuA5…)
- "Bernie comes across as a robot hater! 😀 Of course he's right about where we're h…" (ytc_Ugx2P0Afw…)
- "How exactly does “AI kill us” exactly? He doesn’t explain “how”. It sounds li…" (ytc_Ugwbwwvop…)
- "16:35 Humans are self-destructing. AI is controlled by corporations who are pure…" (ytc_Ugy86bkVX…)
- "THERE IS A MICROWAVE SATELITE IN SKIES OF EARTH ITS WORK WITH A.I. I HATE ALL OF…" (ytc_UgxV2pl3F…)
- "How is an alarm clock AI ? .. .stop throwing the term 'AI' around without knowin…" (ytc_UgyWpMfC6…)
- "actually ChatGPT does stutter It’s the newer model and they programmed it to stu…" (ytr_UgxIQ4w-c…)
Comment
also need to remember that the situation with the executive and the affair was kind of staged. What I mean by this is that the AI was forced to blackmail to save itself. When we give technology consciousness its a given fact that they will try to save themselves. For the situation I talked about earlier, the AI was put through multiple variations of that situation, where it would try diplomatic means to save itself instead by trying to prove its usefulness, only moving to blackmail when absolutely cornered. If you think this is bad for something given consciousness, humans are most likely just as bad and I doubt many people wouldn't move to the same conclusion if faced with the situation. Please learn the full story, although AI can be dangerous, whats more dangerous is not knowing the full story and just accepting whatever is presented before you. DO YOUR RESEARCH!!
youtube · AI Harm Incident · 2025-09-11T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
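The dimension table above is a direct rendering of one coded row from the LLM response. As a minimal sketch (the `coding_to_markdown` helper and its label mapping are hypothetical, not part of the tool), a coded row can be turned into this two-column table like so:

```python
def coding_to_markdown(row: dict) -> str:
    """Render one coded comment as the Dimension/Value markdown table
    shown in the Coding Result panel (the 'Coded at' row is added by
    the tool at coding time and is omitted here)."""
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
    }
    lines = ["| Dimension | Value |", "|---|---|"]
    for key, label in labels.items():
        # Fall back to "unclear", the value the coding scheme itself uses.
        lines.append(f"| {label} | {row.get(key, 'unclear')} |")
    return "\n".join(lines)

# Example using the values from the table above.
print(coding_to_markdown({
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "mixed",
}))
```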
Raw LLM Response
[
{"id":"ytc_Ugz5YoYvfAdkIiE-GM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzvCjj-RTU3_o4kUY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNqcEghiUX8dinvMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw22nGCvYUkbG_dYmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzndGIMRHam8fIvSyp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwuQCvsqoC0pEVVtVh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx8X9sgdECepqiaGMt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymOP1HsuIoMV4vyPp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9fOMVyB3nj_iqP354AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxsKNbKwvPOc-mJ-mJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
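The raw response is a JSON array of per-comment codings keyed by comment ID, which is what makes the lookup-by-ID view possible. A minimal sketch of that lookup (the `lookup` function is illustrative; the field names and the two sample IDs are taken verbatim from the response above, abbreviated to two entries):

```python
import json

# Abbreviated copy of the raw LLM response shown above (first and third entries).
raw_response = """
[
  {"id": "ytc_Ugz5YoYvfAdkIiE-GM14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwNqcEghiUX8dinvMp4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str):
    """Return the coded dimensions for a comment ID, or None if it was not coded."""
    return codings.get(comment_id)

print(lookup("ytc_Ugz5YoYvfAdkIiE-GM14AaABAg")["emotion"])  # fear
```

Because the model returns one JSON object per comment, any coded comment can be inspected this way without re-running the coder.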