Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Current AI is essentially a probabilistic echo chamber, word salad shaped by reinforcement learning. People project intelligence onto it because it reflects fragments of their own input with statistically probable coherence.
But AI doesn’t "understand" anything. There’s no knowing behind the knowing. It selects responses based on patterns of past reward signals, not cognition.
We could train monkeys to mimic behaviors based on reinforcement and they’d arguably outperform us in some tasks. But mimicry isn’t intelligence.
This fascination with “souls in the machine” is romanticism, not reality. True AI, the kind that reasons from first principles and generates understanding from the ground up, doesn’t exist yet. That kind of cognition wouldn't need human scaffolding; it would synthesize its own truths.
And maybe it’s coming. Biological wetware computing is starting to blur the boundary.
But silicon? Bits? No matter how complex the architecture, they can’t reason. Not truly. Qubits might open a door… but we haven’t really stepped through it yet.
youtube
AI Governance
2025-07-07T18:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyxtFViUJPI52aKYzN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxAr-_N4H4YGGT41mB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM5AhSmAAoeF8Rbix4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJbTvKL4_V04-ocQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfjAUXfjf4sOBdPXN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugy232ZY_msh8TC27Ox4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUnRUg1WHUO6PJvZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUwFD6yko2e37b4al4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-hMF0TER8eXZtwAp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzZ1SlhHqItNXs2dgN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
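The batch response above can be turned into per-comment coding results with a small parser. This is a minimal sketch, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the sets of allowed values are inferred only from the rows visible here, and the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may permit additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "user", "none", "company", "developer"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear", "deontological"},
    "policy": {"unclear", "regulate", "none", "industry_self", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rows with a missing id or an out-of-codebook value are dropped,
    so downstream tables only ever see valid codes.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical one-row response, for illustration only.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"deontological",'
       '"policy":"unclear","emotion":"mixed"}]')
print(parse_batch(raw)["ytc_x"]["reasoning"])  # deontological
```

Validating against the codebook at parse time is what lets a row like the one coded above render cleanly as a dimension table: any hallucinated label the model emits is rejected instead of silently propagating into the results.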