Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Again auto pilot just like in planes is to give you a little of leaniancy so it'…" (ytc_Ugzn0kHt6…)
- "I would love to see AI try to drink multiple tea cups before 6 AM…" (ytc_UgxzzKMCE…)
- "Definitely accurate, but I think his broader point of the cat being out of the b…" (ytr_UgzpJ-Wy0…)
- "So, ai artists, is it “cheating” for me to use a smudge stick on paper with my c…" (ytc_UgxBdIrEY…)
- "If you listen to the creator of OpenAI he speaks on how he was essentially given…" (ytc_UgySUwC4o…)
- "AI won't end anything because there needs to be a server / datacenter to handle …" (ytc_Ugxl8J9uD…)
- "I think that a good way to mitigate the negatives of AI chatbots is by teaching …" (ytc_UgxlYifXo…)
- "@Constellasian it shouldn't have been invented in first place if I'm being fair …" (ytr_UgyknPmTh…)
Comment
I believe that issues like hallucinations, errors, trouble with abstract thinking, and unethical behavior may be resolved by continuing to model AI systems on human brain structure. Brains are not just one big blob of interconnected neurons. They have “compartments” that deal separately (but interactively) with vision and other senses, reasoning, movement, spatial perception, emotion, and memory. These structures can be self-correcting, for example in ethical decision making, in ways that simple LLMs cannot achieve.
I also believe that neurologists and neuropsychiatrists will start to use such AI systems to model conditions like depression, PTSD, bipolar disorders, schizophrenia, etc., as a way to help find more effective treatments.
Source: youtube · 2025-11-27T11:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgycAHJI6QF5fdAG-lB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxblKZaLzAT3mu6TIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzc0eBBvKwNR0D0OyR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdrVJ53CPxqnfxNPh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzs2q2y09RHe1gU1Td4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwqAMRbRMphohcakdd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhAR1gvu95cACD9_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhQ2fUeLaSnGl4sIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwtE3N6RYpad645Cp14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxINFB3kenh1VjuTql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
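The raw LLM response above is a JSON array of per-comment codes, each keyed by a comment ID. A minimal sketch of how such an output could be parsed and looked up by ID (using only the Python standard library; the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response itself, while the variable names are illustrative) might be:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """
[
  {"id": "ytc_UgycAHJI6QF5fdAG-lB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzc0eBBvKwNR0D0OyR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
"""

# Parse the array into a list of dicts, one per coded comment.
codes = json.loads(raw)

# Index by comment ID so a single coding result can be fetched directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_Ugzc0eBBvKwNR0D0OyR4AaABAg"]
print(row["reasoning"], row["emotion"])  # mixed approval
```

In practice the lookup dict would be built once per response batch, so retrieving any single comment's codes is a constant-time operation.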