Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Don't promise yet another apocalypse that isn't going to come in my lifetime. Ho… (ytc_UgxnYLy_b…)
- AI is going to be doing so much harm. AI is already breaking the basic rules it … (ytc_Ugyh2-Ypb…)
- Everything find until 1:11:26 the host say: "I am a marxism". Oh lord, 993K subs… (ytc_Ugzv_65XY…)
- Authentication systems can't distinguish 'legitimate agent acting on your behalf… (rdc_ohiw5qc)
- I needed to see the video twice before I saw that it actually was robot😅😂… (ytc_UgzOtY3d4…)
- AI is on the edge of becoming so accepted that it will be impossible to throttle… (ytc_UgzqnCrGG…)
- So....instead of a racist mf flipping me off at a safe distance, it will be his … (ytc_UgzIhzBNp…)
- Forgot about the fact that AI data centers are so harmful to the environment tha… (ytc_UgySlYvW7…)
Comment
Sometimes people use the idea of superintelligence, like having a perfect oracle that is omniscient. What if superintelligent processing is only a necessary but not sufficient condition for knowledge, and even a superintelligent A.I. can't accurately draw the right conclusions without all relevant premises? Which premises might be prohibiltively expensive to discover?
Source: youtube · AI Governance · 2024-12-14T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
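A raw response like the one above can be turned into a by-ID lookup table with a short sketch. The field names come from the output shown; the allowed value sets below are an assumption inferred from the visible data, not a documented schema:

```python
import json

# Value vocabularies observed in the raw responses on this page;
# the full allowed sets are an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage", "indifference", "unclear"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, validating each coded dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One entry from the response above, used as sample input.
raw = (
    '[{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"}]'
)
coded = parse_coded_batch(raw)
```

Looking up by comment ID is then a plain dictionary access, e.g. `coded["ytc_Ugz5qfO2p5ouopqxF9J4AaABAg"]["emotion"]`.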