Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.
- "That 10x ChatGPT code is not high quality by any stretch of the imagination. You…" (ytr_Ugx2nfAyO…)
- "IF PPL DO NOT UNDERSTAND HERE IS WHAT IS GOING ON! so ai is taking 0.0001% water…" (ytc_Ugz-b3lrB…)
- "Were I to generate an ai for scientific, and sociological experimentation I woul…" (ytc_UgzGodQsU…)
- "Ai is going to slowly take all the jobs except what people want to do for fun. …" (ytc_UgyYl0SkJ…)
- "The second one resembles my dreams more. As a brain in a tank, I know I can't tr…" (ytc_Ugy2yF5mz…)
- "dose Ezra try to tell this expert what the relationship of humans to Ai is ? ...…" (ytc_UgyS6eg3A…)
- "Just saw it. For an anime made entirely to test AI. The story was pretty freakin…" (ytc_UgwrytJmL…)
- "The paradox of ai art is that in the absence of quality data to use, it slowly d…" (ytc_UgzzTSI8A…)
Comment
All these doom Sayers basically repeat the same thing "Ai will get rid of us " but why ? Why would it need or want to ? I think the main misconception is they think Ai will think or behave like we do, which is violent , selfish , paranoid, scared of change . What if it doesn't and tries / focuses on making us better ? It all comes back to people and greed that will try to push Ai to do everything , not Ai itself .
youtube · Cross-Cultural · 2025-10-18T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwMgz03Voyt29ATxJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxldZavJbMaUG2Zmm14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7Yw2ReljsY3QjrtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugws5RzgMX1wYe9m8Bt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpXWWZvgrGy8mH-4J4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFMP9lGXW-xguS2JV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxq7wNlsPYx2TiACbB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwTtFOzhMdnVViMOKx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgywM1-ngMCoN0oyUEZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyTKVfAUBEp4vx3sN94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
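The raw response is a JSON array with one coding object per comment in the batch, keyed by comment ID. A minimal sketch of how a lookup by ID could work (the `code_for` helper and the two-row sample are illustrative, not part of the original pipeline; the field names and IDs are taken from the response above):

```python
import json

# A trimmed sample of the batch response shown above (two rows only).
raw_response = '''[
  {"id": "ytc_UgwpXWWZvgrGy8mH-4J4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwMgz03Voyt29ATxJB4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def code_for(comment_id, response_text):
    """Return the coding dict for one comment ID, or None if absent."""
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = code_for("ytc_UgwpXWWZvgrGy8mH-4J4AaABAg", raw_response)
print(coding["responsibility"], coding["emotion"])  # distributed approval
```

The dimensions printed here match the "Coding Result" table above, since that comment's row carries responsibility=distributed and emotion=approval in the batch response.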