Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I work in tech. I dont develop AI systems and even I have trouble understanding …" (ytc_UgyWqONWj…)
- "Aren't they technically called ANDROIDS. If it has the appearance of a humanoid …" (ytc_Ugw3WNPdw…)
- "a dead end? we don’t really know. the brain itself is, in a sense, also a probab…" (ytr_UgwY5dW9L…)
- "The answer to your question is gatekeeping by the rationalist community. They’v…" (ytr_Ugy1T_34Y…)
- "Just wait until the handbands suddenly stab through the skull and attach to the …" (ytc_UgzF4r9Th…)
- "Just turn ai off pull the plug if it gets out of line you people are nothing mor…" (ytc_UgxcSlap5…)
- "Correct. The \"people\" who complain here are worth substantially less than the ro…" (ytr_UgyvqMci6…)
- "I'm not for AI/tech/automation replacing every single industry, namely because t…" (ytr_Ugw_gaI75…)
Comment
We've all heard the phrase, "enough is enough," but with ai, when it becomes agi and begins compounding itself exponentially, when is "too much, too much"?
So intelligent becomes super intelligent to become super-duper intelligent...and so on. To what avail? Is it infinitely useful to only itself so that it becomes the dragon that eats its own tail?
Platform: youtube
Topic: AI Governance
Posted: 2025-12-04T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxhQGT1z0yoDlZvYoB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"},
  {"id":"ytc_UgxRDjCF_K7rmjAAaal4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyIGVfrk-_ebFDhF3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_GZqVVyaBevsU4Op4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzKTGE7QstXhwAEwid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyB_wFrLKVqaZYKLah4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwS9Ym9pjDskj_zP1t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzj-2L5kU6iNZka6qp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzbeJ8LJhGZAcCbqnt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjW5sQ6MJBIInJ2-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
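The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions plus the comment ID. A minimal sketch of how such a response could be parsed into a lookup table keyed by comment ID (the `parse_codings` helper is hypothetical, not part of the tool; the JSON shape and IDs come from the sample above):

```python
import json

# Two records copied from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugw_GZqVVyaBevsU4Op4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKTGE7QstXhwAEwid4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding_dimensions}."""
    records = json.loads(raw)
    # Drop the "id" key from each record; it becomes the dict key instead.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_Ugw_GZqVVyaBevsU4Op4AaABAg"]["emotion"])  # fear
```

In practice one would also want to validate that every returned ID matches a comment actually sent in the batch, since LLM output is not guaranteed to be well formed.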