Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- rdc_imncxgv: The "ai" part of the tech is overblown. It's a buzzword rather t…
- ytr_UgzUDOkJL…: @maxdefire1. No. What I describe is nothing like a human brain. We dont even kn…
- ytr_UgwztV_Ue…: @laurentiuvladutmanea sure I will say something he said wrong one of the things …
- ytc_UgwV_d9nk…: Psychop stuff. They will talk up Ai so when the bad thing happens they have a sc…
- ytr_UgzfwLxb8…: His piece is stolen already because of how AI works in the first place. This is …
- ytc_UgwobxC8_…: Also this: On May 16th around 1PM PT OpenAI released a new iPhone app update w…
- rdc_lp6xete: Storage tech is wholly inadequate for this use case. At best, we may see a 3-5x…
- ytc_UgxbD7Jhr…: The wet dream of dark triad CEOs going back to the 1980s is that code will write…
Comment
I think he is spot on. Sure, AI may do a breathtakingly good job of _simulating_ consciousness. But this is all through interacting with a human, and by responding to input questions or demands. I see zero evidence that there is some entity behind it which knows and understands what it is doing and why, or which would be capable of self-criticism, or experiencing emotion, or of acting spontaneously purely on its own initiative, for its own pleasure, etc.
Source: youtube · Video: AI Moral Status · Posted: 2025-05-19T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwvCjvp2STg8m-5JVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx4WAhFyfqSm5VXKxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy_yPnuyxp3eZapX3V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugw_DOPoqpGcxRYjGAl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy84KxOR1AAUFcUPuZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx0gWG9gIBboU6iOkx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx2VA3qChU2cSCSoAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxzSo-OoQHZSzYeveN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyCfL3XoeE4249GIH14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyXXuzn3YTn1hqYrCt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"}]
```
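A raw response like the one above can be parsed and summarized per dimension. The sketch below is a minimal, hypothetical example: the field names match the sample output, but the value sets are inferred from the visible data rather than from a documented schema, and the two inline records are stand-ins, not real coded comments.

```python
import json
from collections import Counter

# Stand-in records in the same shape as the raw LLM response above
# (IDs and values here are illustrative, not taken from real data).
raw = '''[
 {"id":"ytc_example1","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_example2","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

records = json.loads(raw)

def tally(records, dimension):
    """Count how often each value appears for one coding dimension."""
    return Counter(r[dimension] for r in records)

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(tally(records, dim)))
```

Each record then maps directly onto a per-comment table like the "Coding Result" shown above, with `Coded at` supplied by the pipeline at coding time.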