Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
+Kurzgesagt I just wanted to point out that current A. I. actually feels, as an emulated form of pleasure/satisfaction. Each A. I.has a defined satisfaction function that looks to maximize that keeps track of the progress through a score. Going against it will generate displeasure (lowering said score), so it will seek the better output.
This would mean that if an actual AI is capable of that, a self-aware AI will be more advanced, but in it's core it will still have that satisfaction algorithm. So yeah, even current AI feel. Sort of.
Source: youtube · AI Moral Status · 2017-03-01T01:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UghVriokmiBrdXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggjMob2djzkEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggJr8-UN-xM-ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghhNDhzWUUiOngCoAEC","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugj9myDUs7y-zngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfweSgo8G6r3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjXivWrKkGxu3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UghzKagSWsoOAHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggj1y11qcrSHHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggaLH0Jy1BVU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
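A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator in Python, assuming the allowed values for each dimension are exactly those visible in this sample (the real codebook may define more); `validate_batch` and `SCHEMA` are illustrative names, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension — assumed from the values visible
# in this sample response; the actual codebook may include others.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed",
                "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check each row against SCHEMA.

    Raises ValueError on a missing id or an out-of-vocabulary code, so a
    malformed batch is rejected before it reaches the database.
    """
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row missing comment id")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows

# Example with a single (hypothetical) row:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # 1
```

Validating against a closed vocabulary is what makes the "Coded at" records above trustworthy: any hallucinated label fails loudly instead of being silently written through.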