Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @laurentiuvladutmanea They had the right, but did not have the opportunity that … (`ytr_UgwkC0t6n…`)
- There are a lot of comments that don’t seem to understand the problem with stopp… (`ytc_UgxoCev6U…`)
- they’re just saying “well we don’t know how exactly it works so it’s not our fau… (`ytc_Ugz5BO4NG…`)
- AI is stronger every single day of our lives. I hereby warn you, AI is the large… (`ytc_Ugwax8gaI…`)
- AI is the image of the beast of the book of Revelation with the breath of the be… (`ytc_UgxLxqk-q…`)
- I can picture a future where all the sources of energy on Earth has been monop… (`ytc_Ugw58XvKp…`)
- @jc_art_ hm yeah that does make sense. Commissions are just so much nicer becaus… (`ytr_UgznbnNOl…`)
- AI is progressing quickly, so I think it's timely that they also be regulated so… (`ytc_UgyxVXWv0…`)
Comment
Here’s my opinion. There’s two types of consciousness. Natural and given. Natural consciousness is what we have. We have it from the second we are born to the second we die. It is not given it is not trained. It is naturally instilled within us. It is truly random, and we can never truly understand what it is. And then there’s simulated consciousness. It is given it is trained and can be taken away. That’s the big part. You can never truly get rid of someone’s consciousness as far as science knows right now. With a computer, you can remove the code to get rid of it. It is given to the computer and it is trained. Because the computer exist doesn’t mean it’s conscious immediately. You would have to train that into it. the human consciousness is completely random. Nature is only 100% true the random thing. A computer can never be 100% scientifically random. No matter how hard you try. Not like a human. So yes, and no AI can become conscious. Not on the level of the human but still there.
youtube · AI Moral Status · 2023-11-02T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz8LYD3A_2e4hJIWq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMPLaEcdtKgIQRdyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz3QiL-6Xj0FTSCePV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzktCcP2tymTWcSsyR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFlOkndRtAeuUL7rB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzr4xxFLGihCzf3FS14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxtLlqZtcqQFSDao794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzeMTrOb2fOgYe2ojx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjiGo95m9bbtPb_cd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyf0IgGH2ND0ESexB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
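The raw model output is a plain JSON array, one object per coded comment, so lookup by comment ID (as this page offers) reduces to parsing the array and indexing it. The sketch below is illustrative only — it is not the dashboard's actual code — and the two sample rows are copied from the response above; field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match that dump.

```python
import json

# Two rows copied from the raw LLM response above, used as sample input.
raw_response = """
[
  {"id": "ytc_Ugz8LYD3A_2e4hJIWq54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyf0IgGH2ND0ESexB14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding by its comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

codings = index_by_id(raw_response)
# Look up the coded dimensions for one comment ID.
print(codings["ytc_Ugyf0IgGH2ND0ESexB14AaABAg"]["emotion"])  # → fear
```

In practice the model output would first need to be validated (the array can be truncated or wrapped in prose), which is why inspecting the exact raw response, as this view allows, is useful.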