Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Except way, way dumber. The AI in war games was an example of traditional reinforcement learning taken to the extreme- it could discover inconsistencies in its own understanding, design tests, acquire new knowledge, and extrapolate that knowledge to other scenarios, while operating with an overarching goal to focus its actions.
A transformer model (what LLMs are based on) is fundamentally incapable of this kind of learning, no matter how big you make it. The fact that the military wants to use LLMs to decide who to kill is fucking terrifying, not least because it shows that the people running the show have no fucking idea how the technology they're using works and what its limitations are.
Source: reddit · Topic: AI Responsibility · Posted: 1771981333.0 (Unix epoch) · ♥ 1585
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_jkrf68b","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_jksdu2y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_jksupl6","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o788tt3","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_o78s7wc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
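A batched response in this shape can be parsed and matched back to comments by ID. A minimal sketch of that lookup, assuming only what the response above shows (the dimension names and the `id` field); the function name and the abbreviated sample payload are illustrative:

```python
import json

# Abbreviated sample of a raw model response (two of the five entries above).
raw_response = """[
{"id":"rdc_jkrf68b","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_o78s7wc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index the coded dimensions by comment ID.

    Missing dimensions default to "unclear", mirroring how the table
    above renders uncoded values.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codes = index_codes(raw_response)
print(codes["rdc_o78s7wc"]["emotion"])  # indifference
```

Indexing by ID is what lets the viewer show the exact coded values for any comment, even though the model returns codes for a whole batch in one response.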