Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Leave it to the woke left to claim that a completely logical and unemotional pro…" (ytc_UgwhH3kTT…)
- "You appeared in my feed. I am not an artist or an art follower, but a retired s…" (ytc_UgySlEc27…)
- "I love the speech you made about halfway through about the problem with AI repla…" (ytc_UgwY9Hyj9…)
- "So I had a friend in college misunderstand me when I said it would be a good ide…" (ytc_Ugz82stax…)
- "Probably the most metal line I’ve ever seen on AI is “AI accidentally made me be…" (ytc_Ugw4wAi8g…)
- "Without both comprehension of the subject. And, how the ai receives. Makes, it m…" (ytr_UgyOeJ0YK…)
- "When the AI bubble hits and im telling you it will it will devastate our economy…" (ytc_Ugy0FPScE…)
- "Ai Knows Everything Ever Recorded. That's Why These 1% Billionaires Don't Want T…" (ytc_UgxK5RYkS…)
Comment
The reason that the context window is limited is because it would be computationally expensive to make it bigger.
Transformers, the neural-network architecture that modern LLMs are based on, rely on a mechanism called "attention," a dynamically learned function relating the importance of each token to every other token in a session. Since it maps every token to every other token, a context window N tokens long requires the network to compute a matrix of size NxN. (And on top of that, there need to be multiple attention heads, because words in a text stream relate to each other in more than one way.)
Of course, that's not to say it's impossible to increase this context window or that it won't ever happen, just that it's hard to do so.
I can only imagine that the OpenAI devs are carefully weighing each word in the system prompt, since it detracts from the user's share of this limited resource. There must be something terribly wrong with Seaborn—I wonder what that is!
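The quadratic cost the comment describes can be sketched with a minimal single-head scaled dot-product attention in NumPy (an illustrative toy, not any particular model's implementation): the score matrix has one entry per token pair, so memory and compute grow as N².

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention. The score matrix is N x N, so cost
    grows quadratically with the context length N."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # shape (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

N, d = 8, 4                                          # 8 tokens, 4-dim embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
assert w.shape == (N, N)                             # one weight per token pair
```

Doubling N quadruples the size of `w`, which is why growing the context window is expensive, as the comment argues.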
reddit
AI Responsibility
1720290333.0
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_lbxfc4y", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_lb11zrc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_lb18txj", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_lb2mlm2", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_lb3y6wl", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
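The model codes a batch of comments in one response, returning one JSON object per comment. A minimal sketch of parsing such a response (field names and the `rdc_lbxfc4y` id are taken from the response above; the variable names are illustrative):

```python
import json

# A one-element excerpt of the batched response shown above.
raw = ('[{"id":"rdc_lbxfc4y","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')

# Index the rows by comment id so each comment's coding
# can be looked up when rendering the result table.
codings = {row["id"]: row for row in json.loads(raw)}
print(codings["rdc_lbxfc4y"]["emotion"])   # indifference
```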