Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Lol Claude did nothing noble, they just didnt want to be responsible for autonom…" (rdc_o7xukey)
- "Im using so much ai chatbot that i know which one is actually ai or real. Im def…" (ytc_UgwaPD2Ji…)
- "The way to combat this is to make it easier to start a business. You have AI to …" (ytc_UgzSqPGLN…)
- "Y'know, if the robot overlords look like that, they're gonna get volunteers for …" (ytc_UgzTzA2n-…)
- "We don't even know what it even means to be conscious but we think we can make a…" (ytc_UgwrUS9-t…)
- "Tbh i think sentience is a spectrum. We can probably explain it by the interconn…" (ytc_UgxWmNC_W…)
- "@t-masterrules he's correct your wrong here. Japanese animators their work is he…" (ytr_UgxxCH0LF…)
- "How about any money saved from automation needs to first go into paying for a UB…" (ytc_UgxIWW2lx…)
Comment
I wonder what the experts are on to think AI that we have now is sentient or even capable of that.
Take any LLM.
1) load it and sit at the command prompt...... it will make no move, it will say nothing... it will do nothing.
2) set the seed to a single value and lock it (hard to do in some apps granted) ask it the same question (exactly the same question, with the same spelling, same grammer, same spacing. You will ALWAYS get the same result.
Now tell an AI its sentient and needs to fear for its life and thats how it will act.
I wish they had used a different term than AI, its a tool its not artificial intelligence. Media and click bait headlines on video's like yours continue to push the narative that its more like skynet than a tool.
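The seed-lock procedure the comment describes (fix the seed, repeat the exact same prompt, get the exact same output) can be illustrated with a toy sampler. This is a minimal sketch in plain Python, not a real LLM: `toy_generate` and its vocabulary are invented for illustration, and reproducing this with an actual model would additionally require greedy or seed-locked decoding and deterministic kernels.

```python
import random
import zlib

def toy_generate(prompt: str, seed: int, n_tokens: int = 8) -> list[str]:
    """Toy stand-in for an LLM: samples tokens from a fixed vocabulary.

    The RNG state is derived deterministically from the locked seed and
    the exact prompt bytes, so -- as the comment notes -- the same seed
    plus the same spelling, grammar, and spacing always yields the same
    output, while any byte-level change to the prompt shifts the stream.
    """
    rng = random.Random(seed ^ zlib.crc32(prompt.encode("utf-8")))
    vocab = ["the", "model", "says", "nothing", "new", "here", "ok", "end"]
    return [rng.choice(vocab) for _ in range(n_tokens)]

a = toy_generate("Are you sentient?", seed=42)
b = toy_generate("Are you sentient?", seed=42)
c = toy_generate("Are you sentient ?", seed=42)  # one extra space in the prompt
print(a == b)  # True: identical prompt + locked seed -> identical output
print(a == c)  # almost surely False: the changed byte reseeds the stream
```

The point of the sketch is only the determinism argument: with the seed locked, the "model" has no spontaneity at all, which is the behavior the commenter is pointing at.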
Source: youtube, "AI Moral Status", 2025-06-29T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzAk159ihF-FSaaxot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTNy-C8mGkU2ViBf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuRV6cF6tDSvk-om94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws02mKy0eN8mjALoJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwT1Yl-iglIGmibXc94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyWvg22uerXq8-dEiR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxuh4HyUkqsf9HInaF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwHLQf_5jwAOn42SU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzLyRx5UbMb7pln6Tt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzKg5P9zd1xyIt4Tp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```