Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Random samples

| Comment (truncated) | ID |
|---|---|
| everytime an ai "artist" calls their hideous slop that looks like something you … | ytc_Ugw_wjYtD… |
| Being against abortion doesn't automatically mean that you support Trump, are re… | rdc_euhqy0v |
| But like, what's the job? What task are people doing with AI to get paid for? So… | ytc_UgyeqFRs_… |
| one thing for sure regarding AI taking everybody's jobs, is that everyone should… | ytc_UgzvCCXaI… |
| not sure why this entire channel isn't A.I. There's not that much original input… | ytc_Ugz2SvHKv… |
| “When AI becomes the voice humanity turns to for answers, guidance, and hope, ho… | ytc_UgwiEFnsr… |
| Funny how it was wrong when Ghaddafi tried to do it , yet now it's totally fine.… | rdc_et7lzs2 |
| Been saying this shi for yearsss finally yall aren’t brain dead, yes ai is contr… | ytc_UgyDoy5-f… |
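The `ytc_` and `rdc_` prefixes on the IDs above appear to distinguish YouTube comments from Reddit comments. Below is a minimal sketch of the ID lookup, assuming the coded records are stored as JSON lines with an `id` field; the file name and schema are illustrative assumptions, not the pipeline's actual storage format.

```python
import json

def find_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Scan a JSON-lines dump for the record whose id matches exactly.

    Assumed layout: one JSON object per line, each with an "id" field.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Full IDs look like "ytc_UgxpAOssCerc6HFFQW54AaABAg" (YouTube) or
# "rdc_euhqy0v" (Reddit). The IDs in the samples table are
# display-truncated and cannot be used for lookup as shown.
```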
Comment

> Putting regulations on AI would require a great deal of understanding on the part of kaw makers. They don't have it in them. AI will be pushed forward, like many technologies, by military uses. Unfortunately, the military can not / will not because they other side likely won't restrict it either. The outcome of a new super-intelligence is probably a forgone conclusion.

| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2024-12-26T06:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |

Coded at 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxpAOssCerc6HFFQW54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugyb95tn5bvaaDGWKW54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcS-IqeOiDGyDOymN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2uxhmjbnsuAQ7O6d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywRcGVPEVi1WyxfFR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxPnUjkOFLIUQU-6wx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxI1RL7DabZSl8h4Xp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwbFmFVwvgFEZaGqtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgycKBydweDTgcl8xgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOjoOqdetaSMkko7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"]}