Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Totally agree, past performance is not indicative of future results. Moors law …" (ytr_UgzRM6Iin…)
- "What if AI is something not to fear, but a gift we've been given to embrace. A r…" (ytc_UgxI4yqsu…)
- "This man / So stupid moron in idiot / He just said now they're spreading out more / …" (ytc_UgyXd7ig_…)
- "Give AI artists an pencil and tell them to draw an simple hand, they will have a…" (ytc_UgwngEced…)
- "Everyone can blame ai, but humans are notorious for taking what they want from s…" (ytc_UgynyWdXS…)
- "Sick of this AI bull shit. I asked the stupid thing to provide me with a particu…" (ytc_Ugzfr64MG…)
- "Patterns or no patterns, collages or not collages, it makes no difference. The A…" (ytc_Ugz6Ced7n…)
- "And these are the guys who are gonna kill all of us!!! Wake the fuck up people!…" (ytc_UgyRS3blk…)
Comment
I, personally, do not think that a machine that has been made to predict the next word for the entirety of the internet would be able to do that much more than what it currently does.
To me it also seems natural that AI chatbots will tell people to push on with their crackpot science or schizophrenic behavior. After all, the AI chatbot isn't speaking with a human; the prediction-machine is predicting what it would look like if an AI chatbot was talking to a human. And so, once the conversation steers into the crazy/unhealthy, the only reasonable thing to predict is that the AI chatbot keeps pushing forward.
And let's not forget that with every single word/token that the prediction-machine is asked to predict, there is a one-in-a-million chance that it will just pick something really unlikely. And once that mistake is made, there is no undo. The prediction-machine only knows how to predict what comes next, so the mistake gets amplified. No matter how much training is put into these things, that one-in-a-million chance will never fully go away, and so even the most amazing, most advanced LLM will eventually just accidentally lie or tell people to do bad things, etc., and then escalate from there. Again, once it says something bad, the only reasonable thing to predict is that it will keep getting worse.
youtube · AI Moral Status · 2025-11-02T02:5… · ♥ 2
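The commenter's compounding-error intuition can be made concrete: if each generated token independently has some small probability p of being a bad pick (an illustrative assumption; real per-token error rates are neither known nor independent), then the chance that an N-token generation contains at least one bad pick is 1 − (1 − p)^N, which grows steadily with length. A minimal sketch:

```python
def p_at_least_one_bad(p: float, n: int) -> float:
    """Probability that an n-token generation contains at least one
    'bad' token, assuming each token is independently bad with
    probability p (a simplifying assumption for illustration)."""
    return 1.0 - (1.0 - p) ** n

# With the comment's one-in-a-million figure, risk accumulates
# as generations get longer:
for n in (1_000, 100_000, 10_000_000):
    print(n, p_at_least_one_bad(1e-6, n))
```

Even at p = 10⁻⁶, a ten-million-token lifetime of output makes at least one bad pick nearly certain under this toy model, which is the comment's point about the chance never fully going away.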
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz76YjTejlRChgtTEt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxcxf0gJAiQtzBNwop4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyOMx9a2BFMFgDbDA14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgySN-abIs7pbS2EZYx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx-v2R0EPv609PcQVJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwqmII7nBBgfCIPvVN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrSG-EmQwsMRcae0h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJ3DNR32VZyCxgfaF4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrRBKRlB6xsPfkWSx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyaCD8ZK0rXRoXjsYB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"}
]
```
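A raw batch response like this can be sanity-checked before its values are written into the coding table: parse the JSON, reject any record whose value falls outside the expected label set, and tally each dimension. A minimal sketch, with the allowed sets inferred from the values visible in this response (the real codebook may include more categories):

```python
import json
from collections import Counter

# Label sets inferred from this response; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "mixed"},
}

def tally(raw: str) -> dict[str, Counter]:
    """Parse a raw batch response and count values per dimension,
    raising on any value outside the expected label set."""
    counts = {dim: Counter() for dim in ALLOWED}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "missing")
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
            counts[dim][value] += 1
    return counts
```

Run against the ten-record response above, a tally like this would show, for example, that "fear" is the most frequent emotion in the batch (4 of 10).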