Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “There’s still going to be demand for REAL intelligence. I don’t think we put eno…” — `ytc_UgwEWKEg3…`
- “I had this idea first. 2 robot intaractin with each other. Now you guys tried it…” — `ytc_Ugy6t143v…`
- “you should ask question about motivation of CEOs saying that. They all have the …” — `ytc_UgyYsPH5w…`
- “Why does it matter whether it's real or AI? We don't reject The Wizard of Oz bec…” — `ytc_UgwukpSsi…`
- “I believe there is a point that is not being discussed. First of all, of course …” — `ytc_UgzxGn2ZL…`
- “*From YouTube AI summary:* Hinton warns about the Volkswagen Effect, where AI m…” — `ytc_UgxVwG0Co…`
- “@KardelenKurtoglu Yes we don't even know how many people who use technology suc…” — `ytr_Ugz4qKmUL…`
- “I think we talk to industry leaders and issue out aid to the ones that are earl…” — `ytc_UgxxowIvZ…`
Comment
I am older and have a level of tech understanding that rises only to the activities that I need to do in my everyday life, so this topic is orders of magnitude above my understanding. But some questions arise for me:
-Obviously not all living things are sentient so possibly the reverse can be true. Applying biological criteria of what's living; being able to heal itself, able to make copies of itself, able to react with its environment without outside help, most computers would probably fail left to their own abilities. But that doesn't necessarily rule out sentience.
-If humans from the outside think that a computer appears to act sentiently might that just be that it's giving correct answers that are derived from the original programming but on a higher level? I mean, how do we really know if it's really sentient?
Also, what level of sentience are we talking about? An amoeba on a glass microscope slide will attempt to move away from a high level of heat or acid, but I wouldn't call it sentient. Anyone who has owned a dog will emphatically agree that dogs are sentient. Somewhere in between amoeba and dog is there an agreed upon standard for AI that approaches indisputable?
Comments would be appreciated but keep them low tech if possible.
youtube · AI Moral Status · 2022-06-30T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwA0JihBu9wZKUy97Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzQl8Jf2WhiO71aknd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzYGGttgpnEdXuScSR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyKMjQnOBpMfawoRIJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxFJzKv0WXHQeaastZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
```