Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI is a scam, promised to find the cure to the so called “cancer” and instead we…
ytc_UgxHgszKP…
I'm an upcoming artist for FEGBA Spritework and I'm really thankful that the FEU…
ytc_Ugw9bfRiZ…
I don't know how you'd prevent this kind of thing though - we already opened the…
rdc_lu61jli
Not every art is manually painting a picture. You can use AI to make art if you'…
ytr_UgzlvNm1t…
Yuval's analogy of money as a driver of story is powerful and how AI intertwines…
ytc_UgxD_oFIC…
I mean yeah it's messed up but who's to say he was fabbing if there was a deepfa…
ytc_UgxS7BcH5…
This last week, we saw a lot of Waymo's in Austin Texas. NOPE! Not me! Never!! …
ytc_UgwjIVgB8…
those companies are so stupid when they replace the worker with robot in every i…
ytc_UgyBihI0s…
Comment
It’s always interesting when someone tries to reduce the complexity of GPT and public perception into just two categories: those who “get it,” and those who are somehow deluded by the “Kool-Aid.” But even in your own explanation, a third group shows up: people who confidently misunderstand the tech while criticizing others for doing the same.
For example, GPT doesn’t browse the internet in real time unless it’s using a specific tool for that purpose. Most of the time, it generates responses based on patterns learned during training. It’s not compiling sources; it’s producing statistically likely continuations. So if we’re going to debate what GPT is or isn’t, it helps to start with technical accuracy.
There’s a deeper issue here too: the question of sentience. You say GPT isn’t sentient as if that’s a solved problem. But we don’t currently have a clear, consistent definition of sentience that checks all the boxes. It needs to include all humans, even those who can’t communicate; exclude obvious non-sentient things like trees or bacteria; and still work if we encounter something alien or artificial. In the absence of such a definition, people will continue to simply claim it is or isn’t - without actually stating what it is or isn’t. This is inherently unfalsifiable, hence the debate - we’re not debating sentience without a definition; we’re debating how people feel.
One person feels like ChatGPT is sentient based on talking to it. Another person feels like it isn’t - based on technical information. I’m not convinced that something as subjective as a random person’s feelings is a reasonable judge of sentience. We need a definition, but for now, we can at least address a few of the common technical criticisms, which may explain why I’m still on the fence.
Many of the usual arguments against AI sentience fall apart under closer inspection:
**Originality**: OP stated it couldn’t make something original. People remix what they’ve learned all the time. We literally ha
Source: reddit · AI Moral Status · 1750959596 (Unix timestamp ≈ 2025-06-26 UTC) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mzwbxyq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzwccmt","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},{"id":"rdc_mzwqggy","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzx5215","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzxjaio","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]
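The raw response is a JSON array of per-comment codes, one record per comment ID. A minimal sketch of parsing such a response into a lookup table keyed by ID (Python is an assumption; the field names and sample values are taken from the response above):

```python
import json

# Raw model output: a JSON array of per-comment codes. The fields
# ("id", "responsibility", "reasoning", "policy", "emotion") and the
# two sample records below are copied from the response shown above.
raw = """[
  {"id": "rdc_mzwbxyq", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzwccmt", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "fear"}
]"""

# Index the records by comment ID, supporting the
# "look up by comment ID" workflow.
codes = {rec["id"]: rec for rec in json.loads(raw)}

print(codes["rdc_mzwccmt"]["emotion"])  # prints: fear
```

If a record is missing a dimension, `rec.get("emotion", "unclear")` would mirror the "unclear" fallback shown in the coding-result table.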