Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Most computer scientists I know are just waiting for the modern hype of ai to di…" (ytr_Ugw8p1yrN…)
- "Once AI reaches self-consciousness, then nearly all of the ethical problems invo…" (ytr_UgzWIsE86…)
- "And that’s the problem you don’t see. It’s not gonna get better, this technology…" (ytr_Ugwi9LFwj…)
- "SPECIAL MESSAGE TO YOU AND YOUR TEAM. May I ask this comment comes to Stevens' a…" (ytc_Ugy3ukK8O…)
- "He's elected by the Board of Governors and serves at their pleasure, and the Boa…" (rdc_jemhr9x)
- "Only 1 in 1000 of you pathetic goons will be able to afford one of these. And yo…" (ytc_UgxTTvNn5…)
- "@gamejedi because the average person in society never consented to being the…" (ytr_UgzOiGrPF…)
- "mmmm And, I didn't think about it, but you pronounce it very well: "wether it's …" (ytc_UgwPcSNU9…)
Comment
For starters, there will be AI that’s mobile, like cars and military airplanes. When it runs away from you, you can’t pull the plug. Those could all combine their compute by talking to each other and coordinate attacks. And even if you switch off the whole electric grid, some might run on solar or gasoline.
Also: once the atomic bombs are in flight / the deadly virus is released, it’s too late to pull the plug and AI might hide its intention so well, that you don’t see it coming.
Another thing is that we might become so dependent on AI that you just can’t pull the plug. We also couldn’t just switch off the electric grid. Everything would come to a grinding halt. In fact, switching off the electric grid might be the FIRST thing AI might do against us.
For a possible doomsday scenario: one of the million AIs might misinterpret the situation or is tricked. For example it might falsely think that it needs to respond to something that’s fake, like it happens in the movie War Games, where the computer is about to launch a real nuclear counter attack on the Soviet Union, because it doesn’t realize it’s all a simulation due to some glitch.
The other way round, It might be tricked / falsely interpret something to think that what it’s doing IS a simulation, or it’s an agent in a fictional story (say, computer game), when actually the control it has is real. In the movie Enders Game, an elite team that is training to fight the aliens using remote equipment learns on the last training day that their final simulated training attack actually wasn’t a training. They were made believe it was a simulation, but in reality they already fought the real enemy (all using remote equipment) and won. The simulation was designed to look so real that they didn’t notice. The government did it this way to avoid any form of hesitation, and therefore risk of losing due to compassion for the enemy (they were wiping out a civilization that turned out wasn't actually hostile).
Source: reddit
Topic: AI Moral Status
Posted: 2025-01-28 (Unix timestamp 1738044930)
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_m9j52y8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9lge6b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9jfukq","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9i2w8p","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"outrage"}
]
```
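A coding-result table like the one above can be recovered from a raw response of this shape by parsing the JSON array and selecting the row for the comment in question. A minimal sketch, assuming the response is valid JSON; the helper name `coding_table` and the inline `raw` string are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw model response in the format shown above.
raw = """[
  {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_m9lge6b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

def coding_table(raw_json: str, comment_id: str) -> str:
    """Render the coded dimensions for one comment as a markdown table."""
    rows = json.loads(raw_json)
    # Pick the row whose id matches the comment we are inspecting.
    match = next(r for r in rows if r["id"] == comment_id)
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        lines.append(f"| {dim.capitalize()} | {match[dim]} |")
    return "\n".join(lines)

print(coding_table(raw, "rdc_m9lge6b"))
```

In practice the model may wrap the array in extra prose or a code fence, so a production parser would strip that before calling `json.loads`; this sketch assumes a clean array.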