Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
sorry but generative AI *will not* be a success, ever. In 5 years "AI" will be l…
ytc_UgwboMdyZ…
@Saltlife386you are correct. The upward curve will be VERY steep. And I'm sti…
ytr_UgzdQ8g4E…
IMO the great reset is happening like a lot of conspiracy theorists have said fo…
ytc_UgzhZrdOP…
"Is guaranteed income a realistic remedy for job loss due to AI?"
It is, if you…
ytc_Ugxg0wPg1…
What I don't understand is that if these people are proud "artists" that use ai,…
ytc_Ugywz2u6R…
@spidrparker
You said:
"just because an argument is common, that doesn't make …
ytr_Ugz10zi1L…
Guys, what if we intentionally color outside the lines (or just not color in par…
ytc_UgxQ45pg-…
because complex AI is not programmed exactly... is more like a complex process w…
ytr_Ughn2l5l5…
Comment
I don’t understand this.
If everyone is saying AI is so dangerous and it seems to be a common belief then why don’t you simply stop and as a result won’t bring it into fruition? I know the actual reason why is because someone else will or for money or for power or whatever. Though it does seem a bit weird how this technology is so destructive and dystopian and end of humanity, yet we continue to build anyways
YouTube
AI Moral Status
2025-12-17T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzv-V7cAEV_Ci0E5il4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTxwmIUR8N33DOaSF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx2ctYIT80LWOv-B3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5BO4NGh33Q4Wq6CB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyVvOF9x8j_WVbfH5R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLUH_ZMs3iSfIDMl14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGqqKJwIMJIFoSkDt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugwlzf1CPv_TojrI3rp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzNjjMIh-tL_UOzTTF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx5n2J_6wtUvH1IaQV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
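A minimal sketch of how a raw batch response like the one above could be parsed and shape-checked before use. The five keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown; the `parse_codings` helper and the inline sample record are illustrative, not part of the coding tool itself.

```python
import json

# Hypothetical sample: one record copied from the raw response above.
raw = '''[
 {"id":"ytc_Ugwlzf1CPv_TojrI3rp4AaABAg","responsibility":"distributed",
  "reasoning":"mixed","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> list[dict]:
    """Parse a raw LLM batch response and verify each record's shape."""
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return records

codings = parse_codings(raw)
print(codings[0]["policy"])  # -> regulate
```

Checking the key set up front means a malformed or truncated model response fails loudly at parse time rather than surfacing later as a blank cell in the coding-result table.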