Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Friendly reminder that this Youtuber doesn't speak for artists.
Edit: So just f…
ytc_UgzWaXs9B…
How is data bias it's only a collection of data if the ai takes crime stats and …
ytc_Ugwf8Pqdd…
If “anyone” can be an artist, why is that “anyone” taking an AI to be the artist…
ytc_UgzlOhhwz…
what will happen if in Tic-tac-toe if both player is ai bot is the chance of win…
ytc_UgzE55JmD…
I like them. The ai that is. I think they'll learn more and like us.…
ytr_UgyEQXtA-…
when i saw cahtgpt the first thing i thought was "why". because i knew it's poss…
ytc_Ugyoy2Jxb…
AI projects are wrappers and they will fail no doubt in it
But that’s not the…
ytr_Ugz4954Y-…
I think a better question would be: why should humans have any right to give AI …
ytc_Ugww6qRG8…
Comment
I'm not sure it will ever be possible to prove that a machine is or isn't "conscious" in that I agree with the article that we don't even have a particularly strong consensus on what being conscious actually means. About the only actually workable definition of it is "awake, aware, and responding to stimuli (i.e. being conscious is the opposite of being unconscious)" but people want to use the word to mean something else, and nobody seems to really know what that something else even is.
I think as a result a far better standard for us to work around is general intelligence. An agent that can think and reason about roughly any task, make plans and act upon them, deserves our consideration as a person. I think we should be very careful about creating such a machine because we don't really know what the safety or moral implications of doing so are. We could be making a slave, a friend, a benefactor or our own annihilator.
Is Google's chatbot a general intelligence? Not as far as I've heard. it's a sophisticated engine for responding to queries, but it doesn't appear to have an internal model of reality that allows it to make plans and do things it wasn't programmed to do.
reddit
AI Moral Status
2022-06-15 (Unix timestamp 1655294125)
♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_icg0goj","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_icg04dc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_icg19wh","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}
]
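Raw responses like the one above are not guaranteed to be well-formed JSON; in the original capture this array was terminated with `)` instead of `]`. A minimal sketch of a defensive parser for this response format follows; the function name `parse_coding_response` and the single-retry repair strategy are assumptions for illustration, while the field names come from the sample above.

```python
import json

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of per-comment codes.

    Models occasionally emit a stray trailing character, e.g. ')' where
    ']' should close the array, so we retry once with that common repair
    before letting the error propagate.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw.rstrip()
        if repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        # Re-raises JSONDecodeError if the response is still malformed.
        return json.loads(repaired)

# Hypothetical truncated example mirroring the malformed capture above.
raw = ('[{"id":"rdc_icg0n7o","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"})')
codes = parse_coding_response(raw)
print(codes[0]["id"])  # rdc_icg0n7o
```

Validating each record against the expected dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) before storing it would catch partial or mis-keyed model output at the same stage.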