Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm seeing people struggling to address the same problem with AI as we do with other people. As a child, I was deathly concerned about just how much framing affects the human mind. It can often sabotage performance for better access to moral grey areas.
I see the same problem with AI. Alarmingly similar, in that we haven't really put together the groundwork to understand what an AI is doing. The fields that it is drawing from/engaging with. The orientation of it's 'thought process'
That ignorance is what scares me. Trying to do that level of oversight for a human is an extreme challenge for modern institutions, and that's before you get to what it would do to privacy. But the arbitration of privacy is inextricably part of the problem. The amount of data required to store that level of performance would grow exponentially.
Systemically we can be more efficient, but that means being more selective, which means involving the 'human' side of alignment, on top of whatever tools the AI is actually using. It is a bit of a paradox, but it's a functional one. one that produces real data... or at least hallucinations. have we figure out how to recover from hallucinations? If not, we have madness.
Source: youtube
Posted: 2025-11-05T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx8_uHO4Cs3b2axydt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVN8R5dKU3AKe-S1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQJTDQLvhifr0rp_J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8jeYWWWTgOGrAZ8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkMdS6VNvw0U4rHZR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwKBazwp5LSbQmM77p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxtctHLBicI2ZpzR5N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzuS8PC0tec57H8GNp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyF_p9sQip8zf-qK6F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugwc0IRmD6Ist-r1b2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
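The raw response above is a JSON array of per-comment codes keyed by comment ID, one object per comment with a fixed set of dimensions. A minimal sketch of parsing and validating such a batch is below; the `SCHEMA` sets are assumptions inferred only from the values visible in this response (the real codebook may allow more), and `parse_batch` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension — ASSUMED from the values
# observed in this one response; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response and index codes by comment ID,
    silently dropping any entry with an out-of-schema value."""
    coded = {}
    for entry in json.loads(raw):
        # Keep the entry only if every dimension holds a known value.
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[entry["id"]] = {dim: entry[dim] for dim in SCHEMA}
    return coded

# Usage with the first entry from the response above:
raw = ('[{"id":"ytc_Ugx8_uHO4Cs3b2axydt4AaABAg",'
       '"responsibility":"unclear","reasoning":"mixed",'
       '"policy":"unclear","emotion":"fear"}]')
codes = parse_batch(raw)
print(codes["ytc_Ugx8_uHO4Cs3b2axydt4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each coded record can be joined back to its source comment in O(1) rather than rescanning the batch.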