Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The economy will change totally. The wealth depends on the quantity and quality …
ytr_UgzDfGgTH…
Then you got morons like me using several instances of LLMs counting to a millio…
ytc_UgwSlgEo1…
@Sycaniavol you're getting way too heated man. Look I get you maybe found a pie…
ytr_UgzcT4Lm2…
You are famous I ask chatgpt . Chatgpt told that you are a famous and a motivati…
ytc_Ugxti8dao…
Both. I'm smarter than my toaster, but I sure as shit can't toast bread as well.…
rdc_n011908
I remember one of the last interactions I had on Twitter. I draw as a hobby, and…
ytc_UgzU1aiAv…
Why these losers just make AI girlfriends if they are that pathetically desperat…
ytc_Ugz2ijxcQ…
Tell me you know NOTHING about AI whilst simultaneously being the guy which runs…
ytc_UgwN9phxw…
Comment
Modern neural networks are built around one main idea: given input X, predict output y. For chat-bots, X is a sequence of tokens (numbers representing word or image fragments) and y is a probability distribution over the next token in that sequence. The bot uses those probabilities to pick the next token, appends it to the sequence, and repeats until it produces some kind of <stop> token.
The math the models use to calculate that next token is complex, but well documented by researchers. The difficulty comes from the billions of parameters that go into the calculation, all determined and refined by a high-speed trial-and-error loop that we call "training". What a chat-bot tells you depends on the data it was trained on and on its prompt (instructions or examples placed at the start of the token sequence). Training is time-consuming and expensive, but we can build prompts for specific requests and fill them with verified documents, Google search results, or, in the case of Grok, non-mainstream sources and Elon Musk's opinions.
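As an aside, the predict-append-repeat loop this comment describes can be sketched in a few lines of Python. The `toy_model` below is a stand-in with hard-coded probabilities, not a real trained network; everything here is illustrative rather than any particular chat-bot's implementation:

```python
import random

STOP = "<stop>"

def toy_model(tokens):
    # Stand-in for a trained network: returns a probability for each
    # candidate next token given the sequence so far. A real model
    # derives these from billions of learned parameters.
    return {"hello": 0.2, "world": 0.5, STOP: 0.3}

def generate(prompt_tokens, max_len=20):
    # The prompt is simply the starting token sequence.
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        probs = toy_model(tokens)
        # Sample the next token from the predicted distribution.
        next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_tok == STOP:
            break
        tokens.append(next_tok)
    return tokens

print(generate(["hello"]))
```

Swapping `toy_model` for a call into a trained network (and the strings for token IDs) gives the basic shape of real text generation.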
youtube
AI Governance
2025-08-28T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwGcqiUSu8cYDEti-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHYRPfGwNlHXWXURR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxV20I5bW-QpU2dx954AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2SmVzK217NOCMDNl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNTV-vFGbfLC-Zoa14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
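A raw response in the array format above can be parsed back into per-comment codes with standard JSON tooling. This is a minimal sketch, not the tool's actual pipeline; the field names come from the JSON itself, while the shortened IDs and the `index_codes` helper are hypothetical:

```python
import json

# Hypothetical raw model output in the same shape as the response above.
raw = '''[
 {"id":"ytc_001","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_002","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_text):
    # Parse the model's JSON array and index codes by comment ID,
    # dropping any record that is missing an expected dimension.
    records = json.loads(raw_text)
    coded = {}
    for rec in records:
        if all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(raw)
print(codes["ytc_001"]["emotion"])  # → approval
```

Validating each record before indexing it is useful here because LLM output is not guaranteed to be well-formed: a malformed array raises a `json.JSONDecodeError`, and records with missing fields are simply skipped.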