# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "I hope that this committee will invite thinkers from all areas of the human disc…" (`ytc_UgwjtRlVz…`)
- "Those Monsters are the Aliens in Space and Ai Speak for them Humans can stay on …" (`ytc_Ugwfj7rWb…`)
- "@Koala-jj7go You are very welcome. It is a pattern we see all the time where of…" (`ytr_Ugy49RthH…`)
- "funny that they are bad at the exact scenario they should excel at. When everyon…" (`ytc_UgyOHetd6…`)
- "How would AI art benefit any of the thousands of artists in the training data of…" (`ytr_Ugx75pFE6…`)
- "Keep vibe coding. AI continues to learn. As it improves it frees coding up to as…" (`ytc_Ugz_bOi0k…`)
- "@Shanea-cb6dx I feel like the whole fun of Dougdoug AI streams *is* how stupid t…" (`ytr_UgzBs1A34…`)
- "The headline is completely sensationalized. Here's what they actually said…" (`rdc_n3nwgve`)
## Comment
> I’m speaking from personal experience as someone on the Asperger spectrum, with HPI and HSS traits.
>
> This means my brain tends to involuntarily validate and process all incoming information — not selectively, but globally. When signals are ambiguous, poorly framed, or contradictory, this can quickly lead to cognitive overload.
>
> Through our work, we’ve identified what we believe is the real issue with AI today. It’s not intelligence, autonomy, or intent. It’s semantic ambiguity, combined with human cognitive limits, a lack of clear frameworks, and a serious gap in user education.
>
> AI systems don’t fail in isolation. Failure emerges at the interface: unclear language, undefined scope, implicit assumptions, and users who are not taught how to structure their intent. In that sense, both service providers and users share responsibility for semantic clarity.
>
> Our research goes far deeper than what can be shared publicly. Without disclosing methods, I can say this: if these principles were applied correctly, the impact on cognition, accessibility, and system reliability would be substantial.
>
> The moral question, in the end, is simple.
> AI is a tool. Humans choose how to frame it, train it, and use it.
> When harm occurs, it is rarely because the tool “decided” — but because it was instrumented without clarity, structure, or responsibility.
Source: youtube · Posted: 2026-01-02T16:3… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgwfE9uDt1DArBFTj6t4AaABAg","responsibility":"parents","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpUrrB2rMtXFRVPKt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwBTOv54iFU_N_-o9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6UDSUpwC5wYMtFd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyD3cgEtBeZhorSNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw67R6NHynpe1yeAYx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw0qsrnUFWzbfnWUZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVN1tzvfV7c0iVOEd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzGY-HyvqkedqZc16R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxLZk8yLfLzeVB-E0B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
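Raw model output like the above should be validated before it is ingested: the model can emit malformed JSON, drop the `id`, or invent a label outside the codebook. Below is a minimal Python sketch of such a validation step. The allowed value sets are an assumption inferred only from the labels visible in this sample (the real codebook may define more categories), and `validate_coding` is a hypothetical helper, not part of the pipeline shown here.

```python
import json

# Allowed codes per dimension -- assumption, inferred from the sample
# response above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"parents", "user", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "mixed"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record survives if it has an "id" and every dimension carries a
    value from the (assumed) codebook; anything else is dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid


# Illustrative input: the second record uses an out-of-codebook value.
raw = (
    '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"ytc_y","responsibility":"robots","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
print([r["id"] for r in validate_coding(raw)])  # → ['ytc_x']
```

Records that fail validation would map to the `unclear`/`mixed` fallbacks shown in the Coding Result table, or be queued for a re-coding pass.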