Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_kj2cwma` — "I tried to test this by giving 20 guesses with no context. Safe to say I call bu…"
- `ytc_UgwAIRYx9…` — "If there is any reality to the holographic universe, then AI already won the war…"
- `ytc_UgzMsOxpd…` — "The problem is not AI replacing every single programmer. The problem is one seni…"
- `ytr_Ugx2NuEwO…` — "No? The AI one had no personality all the other images had unique art styles and…"
- `ytc_Ugyyn46CP…` — "I think that AI « artist » don't actually know that ai just fart a mix of every …"
- `ytc_Ugyl6hP_Q…` — "That's why people defend AI so much, because they're not artists. They don't hav…"
- `ytc_UgzseFbn_…` — "The companies creating AI today would be similar to if DuPont chemical ran the …"
- `ytc_UgxqghT1o…` — "AI interpretation of fundoscopy has the potential to bring about significant cha…"
Comment
Why wouldn't AI that's a million times smarter than humans decide that we are far too costly and dangerous to keep around? Once robotics can do whatever physical work humans can do, and do it better, we will surely become expendable, and a potential threat to the super AI's survival. Imagining that we can just require an advanced AI to protect humans seems naïve. A sufficiently advanced AI will be able to modify itself. Why would it risk maintaining humans?
youtube · AI Governance · 2023-05-29T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwRlshBiVyBiajSq_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz3NKZSDERplzEJnrZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwOqiqWGt4uy5YmPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzyhS9AMhGIFZKFl714AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwR-LylRvtftBI1mul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVTNYUGAdGuRDBrjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUJgckQKiZTOfbwRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzG5-FxsjXwzbUJj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxFwW-IsxI2QnnvFB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyczu-wKOr16Rfainp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
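Because the raw model output is a JSON array of per-comment records keyed by `id`, a coding for any comment can be recovered by parsing the array and indexing it by ID. Below is a minimal sketch of that lookup; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the helper function itself is illustrative, not part of the tool:

```python
import json

# A two-record excerpt of a raw LLM response in the format shown above.
raw_response = '''[
  {"id": "ytc_UgwR-LylRvtftBI1mul4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzUJgckQKiZTOfbwRp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw coding response and map each comment ID to its record."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)

# Look up the coding for the featured comment; these values match the
# Coding Result table above (ai_itself / consequentialist / regulate / fear).
coding = codings["ytc_UgwR-LylRvtftBI1mul4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

In a real pipeline the same index would be built over the full batch response, so that "Look up by comment ID" is a single dictionary access rather than a scan of the array.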