Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I cannot prove that I'm conscious. I cannot prove you are either. However, I can show you evidence of very similar brain activity that exists within both of our physical brains while we both express claims of consciousness. Our structures are similar enough that I feel comfortable generalizing processes within my own subjective experience onto you, for the sake of conversation.
However, you are correct that I cannot know anyone to be conscious except for myself. This is made more challenging by the fact that we don't know which neural processes are responsible for my perception of consciousness.
You make a good point that, by the standard of strict falsifiability, I cannot prove that AI is not conscious. I also cannot prove that a calculator is not conscious, or a fork, or a planet, or a void in outer space.
I can, however, give you all the data and information that we have about how those systems work, none of which provides any evidence or justification for assuming consciousness.
Now, in the future, we might be able to give an AI system the tools required to form an emergent consciousness, such as an internal clock, the ability to learn in real time, and an internal latent space in which to simulate thoughts. We might also be able to hard-code a consciousness in the form of a latent observer that experiences and reacts to its own internal simulations. Those fields of study are quite exciting.
Right now, though, we're not there. There is no evidence of consciousness in ChatGPT.
Source: reddit · Topic: AI Moral Status · Timestamp: 1739925555 (Unix) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mdk2gwm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdkswok","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mdj1p32","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"rdc_mdj46gj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdiyxiw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
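A raw response in this shape can be parsed and indexed by comment ID in a few lines. This is a minimal sketch, assuming the model output is a well-formed JSON array of objects with an `id` field as shown above; the variable names are illustrative, not part of any tool shown here:

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (IDs and values copied from the sample above).
raw_response = """[
  {"id":"rdc_mdk2gwm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdj1p32","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]"""

# Index the coded dimensions by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

print(codes_by_id["rdc_mdk2gwm"]["emotion"])  # indifference
print(codes_by_id["rdc_mdj1p32"]["emotion"])  # approval
```

With the codes keyed by ID, inspecting any comment's coding result is a single dictionary lookup rather than a scan of the array.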