Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@kaczynski2333 You're right, it's more than just a search engine, but it is stil…" (ytr_UgzUDfozk…)
- "This dude keeps bashing Trump. So he rather have Kamala? And this is the Godfath…" (ytc_Ugykr5XJx…)
- "I don’t know why I feel like thanking ChatGPT every time I got help from it ,, o…" (ytc_UgzZGyytV…)
- "AI would do whatever it was programmed to do. AI is probably the most poorly def…" (ytc_UgxDGOLAY…)
- "I once saw an artist who made an ai model and trained it exclusively off their o…" (ytc_Ugy-4MMfx…)
- "someone made a deepfake of jungkook's voice to make a watermark of their tiktok …" (ytc_Ugy5SrrmK…)
- "Did anyone notice that they spelt hiring wrong? Did they get the AI to design th…" (ytc_Ugwe22WTg…)
- "So what your saying white people not racist black people really are the problem.…" (ytc_UgyYuTM0K…)
Comment
If anyone is interested in why this is true:
AI memory (called Latent Space) is sort of like a map where “things” that are related are organized together.
When you ask AI a question your prompt gets translated into vectors (sort of positions), this is called embedding.
Then the AI will start gathering key words in reverse order to “build a response”. The gathering of those words is done by “distance” so you get core words that are related to each other.
For example WW2 and Germany will be in the same latent sub space as Hitler, which is to say, those will be close to each other.
Once the AI has those key words, it starts generating the rest of the text to organize an actual English answer. This is also done in reverse and it’s done in parts (called chunks).
Once a chunk completes, it’s reversed and streamed to the client, while the next chunk is built. This is why you get streaming answers from AI
In effect, if you ask AI nicely, you will get answers from the latent sub space of answers to questions that were polite, which are often higher quality because that’s how humans work: they respond better to positivity.
Same thing with personas, your answer will be contextualized into the latent sub space of the persona, like instructing AI to respond like a film director will preposition context in the film creation latent space.
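The latent-space intuition in the comment above (related concepts sit close together, and "closeness" is measured as distance between vectors) can be sketched with toy vectors. The embedding values and the `nearest` helper below are illustrative inventions, not real model embeddings:

```python
import math

# Toy 3-dimensional "embeddings" (made-up values, not from any real model).
# Related concepts are placed near each other, mirroring the comment's
# example that "WW2", "Germany", and "Hitler" share a latent sub-space.
embeddings = {
    "WW2":     [0.90, 0.80, 0.10],
    "Germany": [0.80, 0.90, 0.20],
    "Hitler":  [0.85, 0.85, 0.15],
    "banana":  [0.10, 0.00, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(word, k=2):
    """Return the k concepts closest to `word` by cosine similarity."""
    others = [(w, cosine_similarity(embeddings[word], v))
              for w, v in embeddings.items() if w != word]
    return [w for w, _ in sorted(others, key=lambda t: -t[1])[:k]]

print(nearest("WW2"))  # the close neighbours are Germany and Hitler, not banana
```

Real embedding spaces have hundreds or thousands of dimensions, but the distance idea is the same.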
Platform: youtube
Video: AI Moral Status
Posted: 2025-04-05T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxcl5Gm5gmGebn4ZuR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwofssQtEpNj91SnDp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlsPtRgEE2gA400ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgysjU1al9jbqAPlpFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0YIPlFqC0PhIzFUl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-RITWLnxh6kqDOup4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjFV9YkG6v8l5k0Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzww4Q7Eu4Z1gK8g9N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxDsiqybE_Xc2cFe8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugylr7j8RY8vBO-AeWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
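The raw response is a JSON array of per-comment codings, from which a per-comment table like the one above can be derived. A minimal sketch of loading and tallying such a batch follows; only the four field names come from the response itself, while the shortened IDs and the `DIMENSIONS` tuple are illustrative:

```python
import json
from collections import Counter

# A truncated sample in the shape of the raw response above;
# real comment IDs are longer.
raw = """
[
  {"id": "ytc_Ugxcl5", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwofs", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxlsP", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codings = json.loads(raw)

# Index by comment ID so one comment's coding can be looked up directly.
by_id = {row["id"]: row for row in codings}
assert by_id["ytc_UgxlsP"]["emotion"] == "unclear"

# Tally each dimension across the batch, e.g. to see how often the
# model falls back to "unclear".
for dim in DIMENSIONS:
    print(dim, Counter(row[dim] for row in codings))
```

Batch-level tallies like these make it easy to spot when a coding run is degenerate (e.g. every dimension coming back "unclear", as in the table above).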