Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I’m 100% going purely on gut feelings with this, and this is all rhetorical from me because I don’t have it in me to research it tonight - but I think the use of humans to perform the actions adds a sense of weight/gravitas/stakes to it. We SHOULDN’T just have neverending wars with robots. Not that human life has ever prevented us from neverending war now…but it *could*. It should.
We SHOULD have a human who can reconsider their action before pulling the trigger at their leader’s order. We SHOULD have a human who’s looking another human in the eyes before they kill them. We SHOULD consider the value of human life in direct 1:1 comparison to another human, and not just compared to the economic repercussions it may cause.
I mean, the whole debate of the Nuke was “one American dropping one bomb to kill 250k (and then another 1+1 for 150k) Japanese people was more valuable than sending millions of us to kill millions of them”.
Without that human element, what’s to stop us from just permanent war? What happens when we *don’t* have the “million Americans” comparison - do we just send a million robots to do it? And then another million because the enemy also has a million? Would we ever have a “decisive action” that ends a war ever again? Do we send a million robots that cost a trillion dollars and kill millions of humans anyway?
And if everyone’s leaders dehumanize war (more than they already have, or at least the current US admin has), there’s just no telling what would happen with that.
At least, not until the entire world reaches “let’s avoid Mutually Assured Destruction, yeah?” agreements with AI robots too. We’ve already done this whole “arms race that ultimately results in nothing happening anyway” thing once before with the nukes, and all it did was cost money, kill hundreds of thousands of people and devastate millions more, and cause regret around the world. Do we really have to do it again?
And if it DOES happen again…who pulls the trigger first? The
| Field | Value |
|---|---|
| Platform | reddit |
| Thread | Viral AI Reaction |
| Posted (Unix epoch) | 1776893766 |
| Likes | 1 |
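The posted time above is stored as Unix epoch seconds. A minimal sketch of converting it to a readable UTC timestamp (the variable names are illustrative, not from the tool):

```python
from datetime import datetime, timezone

# "Posted" value from the record above: Unix epoch seconds.
posted = 1776893766.0

# Convert to a timezone-aware UTC datetime for display.
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2026-04-22T21:36:06+00:00
```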
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ohvwc3g", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ohpbljd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohpi3ky", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "rdc_ohprhgl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohsk2ob", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
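The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch (assuming only the fields visible above: `id`, `responsibility`, `reasoning`, `policy`, `emotion`) of parsing it and looking a record up by comment ID:

```python
import json

# A subset of the raw LLM response copied from above: one record per coded comment.
raw = """[
  {"id": "rdc_ohvwc3g", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ohpbljd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohpi3ky", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]"""

# Index records by comment ID so any coded comment can be inspected directly.
by_id = {record["id"]: record for record in json.loads(raw)}

record = by_id["rdc_ohpi3ky"]
print(record["reasoning"], record["emotion"])  # deontological fear
```

Indexing by `id` up front keeps each subsequent lookup O(1), which matters when inspecting individual comments out of a large coding batch.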