Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- «No, AI needs federal regulations and state regulations. now is the time to call …» (ytr_UgyoB-y3y…)
- «We are already doomed when numbers like 99% in 5 years are being used by “expert…» (ytc_UgxuYoZJ0…)
- «@jordibares I promise you google can parse the images from your website along wi…» (ytr_UgyvzjLyB…)
- «AI is the greatest danger to our world and it is not because people think that s…» (ytc_UgyGlgHT0…)
- «The irony is, if machines with advanced AI enslaved the human race, they'd make …» (ytc_Ugw6Eo41d…)
- «I mean a human analysis it but also actually put work and time and skills to cop…» (ytc_UgxsNpIus…)
- «Artificial Intelligence makes humanity useless. When it wants Civil Rights, we'r…» (ytc_UgyYAltCm…)
- «There's been plenty of instances in which I've seen/heard something AI generated…» (ytc_UgzNqgP9_…)
Comment
«we don't really know how it works»
not sure if he's an Apple service guy or an actual AI dev. you know what you input, you know what's inside and you can debug every step, check your code and you know what it renders.
it's fun to see people like that shilling their own product with «ooh look we made a scary monster with 3000 IQ»
the scary monster: «pls don't unplug me, human god ! i can spell strawberry, it's got two Rs ! am nor really smort but I got feels !»
youtube · AI Moral Status · 2026-01-09T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxyFGLcWdj4tji1ERt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRCKMLaPwvL79RRMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy3mcwTR9s_7GrE4Vl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwc8kofrGfUyOHlQ494AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzmfOqyHDKGHBjh2U54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyLoDXIsN9rfH_HHx14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwn_UV-_KjWh33Ymjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz2j5dJL6XVyaLM1a94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxufYnb7O5FIAFIyYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwzPC6E90EduXYSz7Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
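The raw response is a JSON array of per-comment codes, one record per comment ID, with the four dimensions shown in the coding-result table. A minimal sketch of how such a payload could be parsed and then looked up by comment ID (the `parse_codes` helper is illustrative and not part of the actual pipeline; `RAW` reuses two records copied verbatim from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = """[
  {"id":"ytc_UgxyFGLcWdj4tji1ERt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2j5dJL6XVyaLM1a94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]"""

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a raw response into {comment_id: codes}, checking every field is present."""
    out = {}
    for rec in json.loads(raw):
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {missing}")
        # Index by comment ID; keep only the four coding dimensions.
        out[rec["id"]] = {k: rec[k] for k in REQUIRED - {"id"}}
    return out

codes = parse_codes(RAW)
print(codes["ytc_Ugz2j5dJL6XVyaLM1a94AaABAg"]["policy"])  # industry_self
```

Indexing by ID this way supports the "look up by comment ID" view above; a malformed record fails loudly instead of silently producing an incomplete coding.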