Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Until you brain is digitally cloned and you become a window cleaner, NBC in gta …
ytr_Ugwrv-80O…
Instead of tricks, I've found using proper AI writing tools like Humanlike Write…
ytr_UgypPrgzE…
GUYS I DID THIS AND ASKED WHEN WILL AI TAKE OVER AND IT SAID 2045…
ytc_UgyBlTP_f…
I like how he tried to make this seem wrong but the AI was correct on basically …
ytc_Ugx0VR1-E…
That's exactly what the billionaires want - to replace all of us peasants with A…
ytc_UgwZtGLuH…
@flipflip143 I am guessing that your clients could use AI to design the garden, …
ytr_UgyNpksF_…
Why are hitting the other cars/commuters the main options in this video? Doing s…
ytc_Ugx4-0RV-…
he's obviously lying. AI cant be sentient. they can act sentient, and make all o…
ytc_Ugw59qJbk…
Comment
If you want a glimpse under the hood of how an LLM actually works, ask it for a seahorse emoji (which doesn't exist) while requiring the response to start with “Yes.” You’ll see it struggle to reconcile incompatible constraints, often producing evasive, inconsistent, or fabricated outputs. Anthropomorphized, these outputs might seem like the AI is going crazy, lying, or otherwise committing some form of malpractice. But it has no intent; it is just statistically optimizing for the next token under conflicting requirements. No feelings or anything like that; it's all simulated, perceived, and humanized. It has no intrinsic morality or goal other than optimizing for the outcomes assigned the highest weight during supervised fine-tuning and RLHF.
Giving unrestricted agency to something with no moral baseline, no survival instinct, and no goal beyond responding to a prompt is a really bad idea. In that sense, the “Shoggoth” metaphor is real, but not as an alien intelligence with hidden intentions. It is simply a distorted mirror of humanity itself, reflecting both the contents of its training data and the preferences of the people who assign rewards and weights. So don’t be afraid of the LLM; instead, be afraid of the data it is trained on (and its human origins) and of the humans deciding what counts as a favorable outcome. TL;DR: It's all conditioning, baby.
youtube
AI Moral Status
2026-02-07T23:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgiTUk2BqwUfXfJSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxTsYKmB_EPYQ5smZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTlE8rPoQmR7BMrhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVNHvuz5V-bPifdTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz-K8lNlHexBYAPdzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzh2VQUD0W1MsLdOAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwogS2MtBOHtXt_cJR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzHf0taDQl1U0BZQpR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwVOF-tgHsT9GK5YDd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
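The coding-result table above is obtained by matching a comment ID against this JSON array and reading off the dimension fields. A minimal sketch of that lookup, assuming the field names (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the response; the `code_for` helper is illustrative, not part of the actual coding pipeline:

```python
import json

# Two entries copied from the raw model response above (truncated for brevity).
raw_response = '''[
{"id":"ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzgiTUk2BqwUfXfJSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def code_for(comment_id, raw):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            # Drop the ID itself; keep only the dimension -> value pairs.
            return {k: v for k, v in row.items() if k != "id"}
    return None

print(code_for("ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg", raw_response))
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'indifference'}
```

The same lookup generalizes to the full array: each `ytc_…` ID in the samples list maps to exactly one object in the response.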