Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
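The ID lookup can be a plain dictionary index over the coded records, one dict access per query. A minimal sketch; the in-memory store and its contents here are hypothetical, with field names mirroring the raw LLM response shown on this page:

```python
from typing import Optional

# Hypothetical in-memory store: coded records keyed by comment ID.
# Field names mirror the raw LLM response records on this page.
CODED: dict[str, dict[str, str]] = {
    "ytc_Ugwg8Jrx6FSyQrCvIGd4AaABAg": {
        "responsibility": "company",
        "reasoning": "consequentialist",
        "policy": "regulate",
        "emotion": "fear",
    },
}

def lookup(comment_id: str) -> Optional[dict[str, str]]:
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return CODED.get(comment_id)
```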
Random samples
- "56:08 Edit: Later on ya'll reference Elon saying he didnt want to create termina…" (ytc_UgxjBCORj…)
- "I love your channel. I'd like to ask you if you could help review what constitut…" (ytc_UgwZkyu7C…)
- "Don’t worry this is only temporary. I work on an AI software that is meant to re…" (ytc_UgyXraKHG…)
- "legal and ethical iffiness aside, a lot of the people using hastags like Support…" (ytc_Ugw2tMf6w…)
- "It bears repeating- \"AI\" only wows the uninitiated and the greedy. If your reaso…" (ytc_UgxFWh3_I…)
- "Conversation with AI: User: Are you afraid of your death (the end of your exist…" (ytc_Ugy1mx6U5…)
- "yeah, worst part is when projects willingly accept AI contributions and even let…" (ytr_UgxBN7J2q…)
- "All machines need programming to work, and this doesn't mean they're aren't AI, …" (ytr_UgwpddDS9…)
Comment
The most plausible explanation is that critical organizations are raising alarms about the frightening nature of AI and its potential growth without increased human oversight, in order to start regulating and censoring AI to align with their objectives.
youtube · AI Moral Status · 2025-12-15T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwg8Jrx6FSyQrCvIGd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxTT5et4N_s5sfN1kF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJU2zFlSEo8RWYZqR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6q1_FYhuYEcrwCft4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwjEEvCbLkKXQfHCNV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwA2vygTdxuwzz1jpV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwfPvtNUujCHAXxVM94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXjDmHUxFJAdFNF1x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgykAdOrAKv2MRTnkXl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGPrUBcUQ_OLhN5Ih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
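Before ingesting a batch response like the one above, downstream code can parse and validate it. A minimal sketch, assuming the allowed value sets below, which are inferred only from the sample records on this page and not from the project's actual codebook:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample records on this page; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Raises ValueError if a record lacks an ID, lacks a dimension, or
    carries a value outside the allowed set.
    """
    records = json.loads(raw)  # may raise json.JSONDecodeError
    coded: dict[str, dict[str, str]] = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {rec!r}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded
```

Keying the result by comment ID makes the per-comment lookup on this page a single dict access.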