Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the page's random samples (each shows a truncated preview and its comment ID, e.g. ytc_UgxyfwsWW…, ytr_UgynR5arI…).
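A minimal sketch of what such an ID lookup could do offline, assuming the coded comments were exported as one JSON record per line (the file name and schema here are illustrative assumptions, not the tool's actual storage):

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comments by ID (e.g. ytc_*, ytr_*, rdc_*) from a JSONL export."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# "coded_comments.jsonl" is a hypothetical export file, not part of the tool.
comments = load_coded_comments("coded_comments.jsonl")
print(comments.get("rdc_mnxhj0q"))  # the record shown below, if present
```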
Comment
It's now a common explanation, and even AI researchers used it at one point, but there is more and more evidence that it's wrong, or at least highly misleading.
The best evidence for that is Anthropic's recent paper, where they looked at this question:
[https://transformer-circuits.pub/2025/attribution-graphs/biology.html](https://transformer-circuits.pub/2025/attribution-graphs/biology.html)
I really recommend that anyone read it; it's not too hard to understand and has some really interesting experiments with genuine insights.
One of them is that LLMs clearly aren't "just" predicting the next word (token) with some sort of simple statistical model equivalent to raw "likelihood"; they in fact do "think" about it, consider the "bigger picture", and have "concepts" rather than just something like a "lookup table".
There are, for example, "abstract, language-independent circuits", and they "plan" their outputs ahead of time, as shown in the poetry example (which clearly goes against the narrative that LLMs just spit out one token at a time without any "thought"/"planning").
PS: Also, while we need to be careful not to oversimplify comparisons to the human brain, there are many arguments/theories stating that intelligence evolved as nothing more than a "future state prediction machine", which is obviously helpful in the natural environment.
A similar argument is made for our "consciousness" and "self-awareness", which act as a "meta-layer" that allows for better (long-term) planning/predictions.
There is also a really fascinating similarity in how LLMs already organise information in their "latent space": visualizations of it look eerily similar to brain activity scans.
reddit · AI Governance · 1745072178.0 (≈ 2025-04-19 UTC) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
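The dimension values above suggest a small fixed codebook. A minimal validation sketch, assuming the allowed values are limited to the ones visible in this sample (the real codebook is almost certainly larger):

```python
# Allowed values per dimension, inferred only from what is visible in this
# sample; the actual codebook likely defines more values (assumption).
ALLOWED = {
    "responsibility": {"none"},
    "reasoning": {"unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "outrage", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means it passes."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems

print(validate_coding({"responsibility": "none", "reasoning": "unclear",
                       "policy": "unclear", "emotion": "indifference"}))  # []
```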
Raw LLM Response
```json
[
  {"id":"rdc_mnxhj0q","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mnpdeh3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mnpc2s6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_mnq5pyy","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mnodebh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
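Since the model returns the batch as a bare JSON array, parsing it back into per-comment records could look like the following sketch; the function name is illustrative, and the error handling simply rejects malformed output rather than trying to repair it:

```python
import json

def parse_llm_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response (expected: a JSON array of coded
    records) and index the records by comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model did not return valid JSON: {err}") from err
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    return {record["id"]: record for record in records}

raw = '[{"id":"rdc_mnxhj0q","responsibility":"none","reasoning":"unclear",' \
      '"policy":"unclear","emotion":"indifference"}]'
print(parse_llm_batch(raw)["rdc_mnxhj0q"]["emotion"])  # indifference
```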