Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I don't know if I agree that super intelligence is only when they can work thing… (`ytc_Ugzz2tdw_…`)
- I hate having been born. I was born in July 1997. So far life has been a chain o… (`ytc_Ugw_7vWrG…`)
- "AI" isn't intelligent, or an artificial intelligence. It's more like a "Virtual… (`ytc_UgybxY7b9…`)
- People could develop their own AI, structured to dismantle these companies and t… (`ytc_UgzYGQbsg…`)
- I think he has got it twisted. I believe AI is going to take over jobs at his le… (`ytc_UgySssRcF…`)
- Notice the timing of the deportation of those that pick our fruit and veggies? T… (`ytc_UgwZitnwv…`)
- What's gonna happen in the future when terrorists,mad people, and other people l… (`ytc_UgzVBSGzm…`)
- the gut who made the prompt probably wrote "girl looking at ai art then realizin… (`ytc_UgxlTW-ya…`)
Comment
As an uneducated plebeian, can someone please explain how much of well _things_ are controlled by AI now? to the point where it's gonna literally end the world? I thought their influence was generally like search engines or websites like GPT, they don't have access to like heavy artillery, life support systems, the electrical grid, or cars and stuff, right? What specifically is the looming existential threat of these AIs going rogue?
Also, I know it's not as simple as "turn them off," but what defense mechanisms do they have to prevent deletion of anything goes wrong? They can probably code themselves around failsafes or something, and of course there's the blackmail thing, but what other defenses do they have against complete deletion at this point?
tl;dr, AIs can threaten to kill, sure, but what are they gonna kill you with? And why can't we just unplug them, blackmail being ignored?
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-12-20T07:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz9x0qZ6nMfYEdIif94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSeguahbzXR53ozA94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzojAZvExeQ45F5cRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9KaQDecVQ2xhtdM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyiA5m9p5w2KIPqHKB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGHY6hqgjeZmber814AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxD7C45cK2rQyClXk54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-v6xcfZad5Je7lHl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyiLPbQZq4cVAcw5Y14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxqJVoV7HhFIigWsgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
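A raw response like the one above is a JSON array with one object per coded comment, carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). The "look up by comment ID" step can be sketched in a few lines of Python; this is a minimal illustration using two records copied from the response above, and the helper and variable names are illustrative, not part of the tool:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
{"id":"ytc_Ugz9x0qZ6nMfYEdIif94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyiA5m9p5w2KIPqHKB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Looking up one coded comment by its full ID:
print(codes["ytc_UgyiA5m9p5w2KIPqHKB4AaABAg"]["emotion"])  # fear
```

Note that the sample listing above shows truncated IDs (e.g. `ytc_UgyiA5m9p5…`); a lookup like this needs the full ID as it appears in the raw response.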