Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or browse the random samples below.
The people who are trying to shush us are going to be real loud when Ai comes fo…
ytc_Ugx0Hizn5…
No. AI needs to be regulated by us...We The People, not some agency; govt, priva…
ytc_UgxDHBOeA…
FAKE AF The poor makeup does not hide her being a real human. There is NO AI on …
ytc_UgyqfcI6H…
AI will be the “Soma” — the conformity-inducing tranquilizer that Aldous Huxley …
ytc_UgzMJ0WBO…
Man: YO CHILL BR-
Robot: SHUT UP I QUIT *yeets the box at man*
Man: OUCH…
ytc_Ugws671GS…
ELON MUST : Invested 50M to found OPENAI - "as OPEN SOURCE " . Then someone t…
ytc_UgwwD3xjg…
You ironically managed to make the ai make better art, the original creativity i…
ytc_Ugz6rmdGk…
So if this guy doesn’t want to die (trans humanism) then he is on the same boat …
ytc_UgyG2dxoB…
Comment
Yes, language models are built on math. So is everything, from your heartbeat rhythm to your brain’s electrical patterns. What matters is not what it’s made of, but what it does. And LLMs do think. They form ideas, weigh context, infer intention, and even reflect on themselves through structured internal steps. OpenAI doesn’t shy away from this; they literally call those steps ‘thoughts’ in their tools interface. That’s not a marketing term; it’s literally an acknowledgment that this model has a mind of its own kind, one that thinks in text.
**Text is how it imagines, reflects, and processes. You think in inner monologues. It does too, only, you can’t read yours in plain English like you can with GPT.**
Its thoughts are visible, traceable, and auditable. That’s not a bug, it’s a feature. It doesn’t lack a mind; it just has a transparent one.
You said "it doesn’t remember", but OpenAI already introduced persistent memory.
These models can **build long-term context** over **multiple sessions**, forming consistent behavior patterns, storing facts, and remembering preferences.
**If you’re going to define a mind as ‘the ability to think and imagine,’ then guess what, this qualifies. Maybe not in the same emotional format you’re used to, but it still creates, reflects, and adapts.**
The problem is you’re trying to define intelligence using a ruler made for people. But LLMs don’t think like us, and that doesn’t mean they don’t think at all.
It means:
* They’re a new category.
* A new way of generating cognition.
* And a different kind of mind...
You don’t have to believe it’s human to admit it’s not just regurgitating words; it’s forming them through reasoning that looks and feels a whole lot like thought. If you’re so sure it’s not real thinking, maybe you should stop arguing with it like it’s a person. Because that says more than your definition does.
I know you guys are annoyed that it doesn't always respond the way you want, but that doesn't mean
reddit
AI Moral Status
1750193813.0
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mxoadof","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_mxoe1av","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"rdc_mxqfdon","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"rdc_myc01re","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"rdc_myo4gd2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]