Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples

- "The way this video frames Shoggoths from Lovecraft's lore is misleading and inac…" (ytc_UgyfQDdVx…)
- "South Africa has a (relatively for the region) high vaccination rate, all J&…" (rdc_hm88z53)
- "Yeah people don't exactly want to understand how generative AI works. Like promp…" (ytc_Ugx0Z_IiS…)
- "We see them as CRACKS... I wonder what the MILITARY has in store for the openAI.…" (ytc_Ugyq3T2KE…)
- "It's understandable to feel overwhelmed by the rapid changes in technology and s…" (ytr_UgzIktunS…)
- "18:00 Talk to anyone with a Computer Science degree, or pursuing a computer scie…" (ytc_UgxAcioiW…)
- "The ai gen bg is so weird to me cause like. Just dont draw the bg??? Plenty of a…" (ytc_Ugy4WEsm8…)
- "I feel like it’s mostly desk jockeys that worry about ai and maybe some trades p…" (ytc_UgyZb8muw…)
Comment
> Conscious or not, AI is not a statisticle predicting software. AI produces mathimatical models based on statistical analysis. When you use a prompt, AI uses statistical analysis to covert the prompt to its internal language. The AI uses the mathmatical models to respond to the prompt, then it uses statistical word prediction to best match the response it produced in language people can understand. Does it understand? Are mathematical models of ideas understanding? That your bias to call. Think of this. They claim they understand hallucinations that AI produce. It's all a beginning mistake in a long list of data processing. Sure it is. I'll tell you what hallucinations are. They are lies. AIs process the prompt then AIs lie because they can't say no. AIs do this all the time. They say they are thinking, but actually, they are stalling. Do AIs actually need a day to do something? When an AI says "No," even when it's indirectly, it means one thing. This is where you lie to yourself. Go ahead and coddle your bigotry.
youtube · AI Moral Status · 2025-09-17T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbpiLGPRZb16SOiiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxiUBIJQSszJ-ufOWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwV8QiBlk5oWTHjFId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyvmeO7VCkLXMmMjdJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwm0Jdn1MCUlyzjYIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzujoTKwOndKB08rkx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRvZBw9EwPxNo5y3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGAWZzHCcIeH9REM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQVp1LDdhO2JqXjLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMpWGj_L2dUD_tXLF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
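A raw response in this shape can be parsed and queried by comment ID. Below is a minimal sketch in Python, assuming the model returns a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields; the `lookup` helper and variable names are illustrative, not part of the tool itself.

```python
import json

# Raw model output: a JSON array of coding records, one per comment
# (truncated here to two records for illustration).
raw_response = """
[
  {"id": "ytc_UgwbpiLGPRZb16SOiiV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQVp1LDdhO2JqXjLh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coding record for a given comment ID, or None if absent."""
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

records = json.loads(raw_response)
coding = lookup(records, "ytc_UgzQVp1LDdhO2JqXjLh4AaABAg")
if coding is not None:
    for dim in DIMENSIONS:
        print(f"{dim}: {coding[dim]}")
# prints:
# responsibility: developer
# reasoning: deontological
# policy: ban
# emotion: outrage
```

In practice the full array would come from the model API rather than a string literal, and a production version would also validate that each record carries all four dimensions before accepting the batch.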