Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:
- ytc_UgzDERWCx… · "The programming is defective. Who would think to put a 'self driving car on the …"
- ytc_UgymKUxTQ… · "As an AI engineer I am just here to read the comments of tech bros foaming at th…"
- ytc_UgwiR6S44… · "The narrative about dignity in the job is bullshit though. Such a hypocritical c…"
- ytc_UgzsYICzk… · "If a "robot" was truly able to answer any question one could give it, then the r…"
- ytc_UgzqMhDZi… · "Not worth the hidden microphones and camera and also the facial Recognition soft…"
- ytc_UgxIf0dZU… · "While I love seeing everyone's beautiful art that came out of this, isn't this n…"
- ytc_UgwGqpmY0… · "8:11 I believe they genuinely don’t care about how this all ends up. Their goals…"
- ytc_Ugwmx2qCH… · "10:13 Humans have lived just fine without all these many good things but can the…"
Comment
> Chatgpt isn't smart, Chatgpt/LLM's are nonsense-by-default, useful output is a SIDE EFFECT.
> "may make mistakes" in the disclaimers is because MISTAKES ARE THE MAIN FEATURE.
> You cannot get determinism from a probabilistic system.
> It doesn't even really do the all the "smart" things they hype it to do.
> "smart" isn't even a good statement because it's still baking in the rhetorical/sentimental idea that the only tool we have or should use is by: comparing math to a humans in order to replace humans.
> Even devolving into analogies of training LLMs being like "growing an organism", plants a very wrong insidious idea.
> And that's the dumb af rhetoric game being played whose main goal has become to boost overvalued stocks while the floor falls for a long line of reasons not just "AI".
> Useful OUTPUT is a side-effect, output != smart,
> Chatgpt/LLM's are probabilistic nonsense-by-default.
> nonsense is the core feature most everything else is illusory we have to force to happen.
> nonsense is NOT the side effect, cohesive useful output is a side effect.
> The biggest lie is "AI" (probabilistic LLMs) have understanding, or are "reasoning" or the bevy of other anthropomorphic sentiments, to hype services by slapping words in the UI and the marketing; and yes even stemming from researchers because they need marketable paper titles to get funding.
> It's perverse how bad our language is in helping us mislead ourselves.
> The illusion of useful outputs is because a ton of money and human time is burned to minimize the nonsense default.
> Probabilistic and determinism are different words for a reason.
> Saying an LLM is "smart" because it randomly pulls from a corpus of human knowledge is like saying a pile of shit is delicious because it's carbon atoms shaped & textured like a cake.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T21:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
```
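The raw response is a JSON array of per-comment coding records, so the "look up by comment ID" operation reduces to parsing the array and indexing on the `id` field. A minimal sketch in Python, assuming the response text is available as a string (shown here with two of the records from the array above; the function name `code_for` is illustrative, not part of the tool):

```python
import json

# A subset of the raw LLM response above: a JSON array of coding records.
raw_response = """
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
"""

def code_for(comment_id, response_text):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(response_text)
    by_id = {rec["id"]: rec for rec in records}  # index records by comment ID
    return by_id.get(comment_id)

rec = code_for("ytc_UgzrmdAGaBxHu3fE2od4AaABAg", raw_response)
print(rec["emotion"])  # → outrage
```

Building the `by_id` dict makes repeated lookups O(1) after a single parse, which matters if many comment IDs are resolved against the same cached response.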