Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ai art needs to stop It doesn't make anything better but make worse like in this…" (ytc_UgwZY9XVa…)
- "Imagine going to school for 10 years and forcing yourself to learn everything th…" (ytc_UgwSPFf1b…)
- "I wonder if anyone who cite MIT paper about only 5% companies having ROI from AI…" (ytc_UgyXm1C4l…)
- "They aren't going to get an advantage. If they are put on the spot during a meet…" (rdc_jefmfvs)
- "Where the hell do you see transhuman? The him fetus is being sustain by the robo…" (ytr_UgxQ2iQ0h…)
- "Use it as an advanced google search (or, 'Bard' as I think google's own AI tool …" (ytc_Ugz6pdnaW…)
- "@thewannabecritic7490 wow, so much wrong there.. You kno…" (ytr_UgxxSPRIR…)
- "i honestly feel like if this series had more effort put in (writing wise and art…" (ytc_Ugx38NnDX…)
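The samples above are drawn at random from the coded corpus for spot-checking rather than read in order. The sketch below shows that sampling step in Python; it is a minimal illustration assuming the coded comments are available as (comment_id, preview) pairs, and the variable names and example entries are illustrative, not part of any tool shown on this page.

```python
import random

# Illustrative data: (comment_id, preview) pairs like the samples listed above.
coded_comments = [
    ("ytc_UgwZY9XVa...", "Ai art needs to stop It doesn't make anything better..."),
    ("rdc_jefmfvs", "They aren't going to get an advantage. If they are put on the spot..."),
    # ... rest of the coded corpus
]

# Draw up to 8 comments for manual inspection, matching the panel above.
sample = random.sample(coded_comments, k=min(8, len(coded_comments)))
for comment_id, preview in sample:
    print(f"{comment_id}: {preview}")
```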
Comment
> Actually, honestly I am OK if AI kills humanity. I see humans as evil. They can only be the image of devil (or an evil god) not a benevilent god.
> Humans will go to Mars and kill each other there. That's in their genes. That's how they evolved. You cannot change it

youtube · AI Governance · 2025-10-17T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
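A coded result like the one above can be represented as a small record. The sketch below is a minimal Python representation, assuming the four dimensions shown in the table; the field names mirror the table, and the example label values come from the raw response shown below, so the full codebook may contain labels not listed here.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str   # e.g. "developer", "user", "ai_itself", "distributed", "unclear"
    reasoning: str        # e.g. "consequentialist", "deontological", "contractualist", "mixed", "unclear"
    policy: str           # e.g. "regulate", "liability", "none", "unclear"
    emotion: str          # e.g. "fear", "outrage", "resignation", "indifference", "approval", "mixed"
    coded_at: str         # ISO 8601 timestamp of when the comment was coded

# The record shown in the Coding Result table above.
example = CodingResult(
    comment_id="ytc_UgyP5g2sFlAM953W1SJ4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at="2026-04-26T23:09:12.988011",
)
```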
Raw LLM Response
```json
[
{"id":"ytc_UgzmexWnJbzB4UVydcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw7OLpNX_TZUxqq59p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKEehWUnNlPWy_TWd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyU9nMB3UAMNASSNJJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9H0IgRcbmLumw7BZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMPclbDSD7WueoaUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyS6eg3Ahxh9j_h0xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugza-ErPaJCR14qaidV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqpjoKAD_xVT18qRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
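Looking up a single coded comment by ID amounts to parsing the raw response as ordinary JSON and filtering on the `id` field. The sketch below is a minimal Python illustration; `RAW_RESPONSE` is truncated to the one entry that matches the Coding Result table above, and the function name is illustrative rather than part of any tool shown on this page.

```python
import json

# Truncated to one entry for brevity; the full array is shown above.
RAW_RESPONSE = """[
  {"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

def lookup_by_comment_id(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID from a raw LLM response.

    `raw_response` is expected to be a JSON array of objects with the keys
    id, responsibility, reasoning, policy, and emotion, as shown above.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

match = lookup_by_comment_id(RAW_RESPONSE, "ytc_UgyP5g2sFlAM953W1SJ4AaABAg")
print(match)
# -> {'id': 'ytc_UgyP5g2sFlAM953W1SJ4AaABAg', 'responsibility': 'ai_itself',
#     'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'resignation'}
```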