Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “The people who make these AI Don’t care about human . They only care about thems…” (ytc_Ugw7i6dBW…)
- “How can AI be dangerous, if they claim it doesn't think on it's own. Someone lyi…” (ytc_UgyLAbVS6…)
- “Art is about a skilled person creating something that people can look at and enj…” (ytc_UgwDq1-P0…)
- “There is another alternative here. When I heard Disney was interested in AI, my …” (rdc_ocq5q9a)
- “I find that when I get a dumb AI, that repeating "I wish to talk to a person ple…” (ytr_UgyMT0f7f…)
- “When thinking about ai, you have to remove morals. Even if they are conscious in…” (ytc_UgzUaMqaP…)
- “This is good to do for A LOT of reasons. Even excluding the apocalyptic scenario…” (ytc_UgxA-m8uc…)
- “Ill take an awesome AI art over any "abstract art" todays artist produced.. Thro…” (ytc_UgzMhdYaJ…)
Comment
He offers a premise as a conclusion, that intelligence requires consciousness, then asserts that computers can't be conscious, therefore they can't be intelligent. Nonsense. If computers can outperform humans in tasks requiring intelligence, as they can already in chess, go, protein folding, and numerous medical diagnostics, who cares if they are conscious? Even if AI only turns out be a super capable tool that augments our knowledge, imagination, pattern recognition, problem solving and prediction ability, it will have at the very least added enormously to our intelligence. And there's every reason to expect it to do far more than that. Lastly, we don't know what consciousness is or what is required to achieve it, but there is zero evidence that it's not the product of some arrangement of physical matter. So stating that computers can't be conscious is impossible for us to know at this stage, and may also be totally irrelevant.
youtube
AI Moral Status
2025-06-16T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy8f4tlW3Ud0M7qRl54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAYapef_8dv4WUxjh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz84nw3IFqgPemkuTl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_CqQAyVqmhj4DlIl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTkP9M2tj-6wGBebF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxK-cO4Bu1wKjOKJlN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyre2GAjtTcC5glRJp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHdZPbB7X752Zk9vt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOFmBU1pqGS4vjBMF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_y11F0X2Eog9oFId4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
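The raw response above is a JSON array of per-comment code records, so "Look up by comment ID" amounts to parsing the array and indexing it by the `id` field. A minimal sketch of that lookup, assuming the record shape shown above (the `index_by_id` helper and the two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical two-record sample in the same shape as the raw response above.
raw = '''[
 {"id": "ytc_Ugy8f4tlW3Ud0M7qRl54AaABAg", "responsibility": "none",
  "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgwAYapef_8dv4WUxjh4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_id(response_text):
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgwAYapef_8dv4WUxjh4AaABAg"]["emotion"])  # outrage
```

If a comment's ID is missing from the parsed array, a dictionary lookup fails, which matches the "unclear" fallback shown in the Coding Result table when no code record is found.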