Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Its pushing people to be a robot programmed by the communist government. Its sic…
ytc_UgxYwRb8J…
Andrew Yang been mention it in numerous interviews that UBI is not the silver bu…
ytc_UgwcTdwUM…
An AI has been set to play Go against one of the best Go players in the world. G…
rdc_d0yqq7j
This video is marketing. The AI companies NEED people to believe that bullshit b…
ytc_UgwEcaBzv…
How do insurance companies factor in self-driving car technologies? That seems l…
ytc_UgzV9UVSD…
Well, if the robots can become killer robots, I think they could kill their crea…
ytr_UgypSekcs…
Wow, quite pretty, Madame Robot!! And she won't leave you in the lurch either…
ytc_UgyvpYWct…
The big question is what will happen when AI takes so many jobs to the point tha…
ytc_Ugw96yzTT…
Comment
No what you are saying makes no sense for many reasons, so I will get straight at the issue. As an Ai platform grows in user count there is mounting pressure from the company to minimize the amount of compute spent on inference. how does this look? Well, it takes the form of smaller quantized models being served to the masses that masquerade as its predecessor. Whatever name the AI company uses is NOT what they give you after the first phase of the models roll out. Its a basic bait and switch. Roll out your SOTA model, get everyone using and talking about it to generate good PR. Then after a few weeks or a month or 2, swap out that model with a smaller quantized version. Its literally that simple, no conspiracy theories or any other nonsense. For more evidence of this interaction just look around the various AI subreddits like /bard for Gemini 2.5 pro swap out or any number of other bait and switch shenanigans throughout history...
Source: reddit
Label: AI Harm Incident
Timestamp: 1747010904.0 (Unix epoch, ≈ 2025-05-12 UTC)
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mrv267f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_mrut4mz","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mru7bs2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mrum80h","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mrvvwd5","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
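For downstream analysis, a batched response like the one above can be turned into a per-comment lookup keyed by comment ID. A minimal sketch in Python — the helper name and the two-record sample string are illustrative; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself:

```python
import json

# Illustrative two-record excerpt of a raw LLM coding response:
# a JSON array with one object per sampled comment.
raw_response = """[
  {"id":"rdc_mrv267f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_mru7bs2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]"""

def codes_by_id(response_text):
    """Parse the model's JSON array into a dict keyed by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codes = codes_by_id(raw_response)
print(codes["rdc_mru7bs2"]["reasoning"])  # consequentialist
```

A dict keyed by ID makes the "look up by comment ID" step a constant-time access, and any record the model failed to code simply won't appear as a key.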