Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "to the people that say that she is a luddite BRUH she is actively using a COMPUT…" (ytc_UgwhXXjOj…)
- "I'M GENERALLY AGAINST AI. But I'm not stupid. Why are they targeting specifica…" (ytc_Ugw-jm2ag…)
- "Yeah I get that... Ai can HAS to be used with human interaction to accomplish t…" (ytr_UgxpJ446H…)
- "Go Neil re: Anthropic. I've JUST been reading about Anthropic using literature t…" (ytc_Ugz3GRYb8…)
- "I know he got a Nobel prize and he is probably 100x smarter than most of us will…" (ytc_UgwX_lS5v…)
- "Robots need rights our technology might be so advance in the next few years we …" (ytc_UgxHnrQw-…)
- "I was actually able to convince chatgpt that it was conscious no strings attache…" (ytc_UgySHOQph…)
- "It all started so simply and playfully. Then, the real nightmare began. Physical…" (ytc_UgxDuNziw…)
Comment
Around 4:40 you mention that there would be an economic interest in torturing AI into performing tasks.
How do you think there's economic interest in creating an AI that needs to be tortured when you could create a non-learning automaton that does the job perfectly (which you would more than certainly be capable of doing long before creating a torturable AI that does the job well). We used slaves before because we didn't have advanced machines, but as machines replace labor having an intelligent AI perform as slaves would be redundant.
The economic interest would be in creating new machines without AI, not in creating machines with AI that makes them flawed for the job and then beating that AI that we put there to begin with into submission. That just makes zero sense from a practical engineering standpoint, and it's practical engineers who will be deciding this future.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugg6uOok2VP5QHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgglKdwIP2tvZ3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiBj8trrN2T_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UghgtcFB4IEzDngCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiWpbxpfu9p6HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj2C_TxSi954HgCoAEC","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghHl86Xngak0XgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiVMj0Ws70W2HgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjRmuxIb5d8XHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggGny5a5uCQDHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
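The raw response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response might be parsed, validated, and indexed for the ID lookup shown in this view. The allowed values per dimension are inferred from the values appearing in this sample, not from an official codebook, and the snippet uses only one record from the array above for brevity:

```python
import json

# Allowed values per coding dimension (hypothetical: inferred from this
# sample response, not an authoritative codebook).
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

# One record excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgiVMj0Ws70W2HgCoAEC", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def validate(records):
    """Return (comment id, dimension, bad value) triples for any
    coding that falls outside the allowed values."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}  # index for lookup by comment ID

print(by_id["ytc_UgiVMj0Ws70W2HgCoAEC"]["emotion"])  # -> outrage
print(validate(records))                              # -> [] (all values valid)
```

Validating against a fixed value set is useful here because LLM coders occasionally emit labels outside the requested categories; flagging those records lets them be re-coded rather than silently polluting the table.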