Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples

- "AI is reflection of humans and humans don't get along with each other.so absolut…" (ytc_Ugxyc_SWZ…)
- "If the system can calculate popular religion in specific location why couldn’t i…" (ytc_Ugw3iI9gd…)
- "Bro I just want to talk to a comfort character and the AI will not GET OFF OF ME…" (ytc_Ugwf2XfHj…)
- "If I was part of creating AI, I would make sure it could never replace my job. …" (ytc_UgxA21Yzr…)
- "Ai already failed , and people will lose their jobs because investors already sp…" (ytc_Ugzm7o1Vw…)
- "I think the recent Ghibli AI trend proves why you need that human element in art…" (ytc_UgwzdFWXM…)
- "+Abandoned Void why should they make a law that makes it unsafe? and with self d…" (ytr_Ugggn2PY3…)
- "And even if they did everything right and got all that data ethically they would…" (ytc_Ugy6R6eO-…)
Comment

> The thing about ai is it's fundamentally not sane. It has no normal human instincts because it's not human, and is not grounded in reality because it doesn't experience reality. So, as powerful and interesting as it is, it's not something you can let have any authority because you don't know when it might go off the rails, or how. So, could ai be a helpful assistant to a teacher? Sure. Same with a coder or even a plumber, but it's not a replacement for human workers, it's a force multiplier.

Source: youtube · Viral AI Reaction · 2025-11-23T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
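Each dimension in the result table takes one of a fixed set of labels. A minimal validation sketch in Python, assuming the codebook below (the label sets are inferred from the values that appear in the raw responses, not an authoritative schema):

```python
# Assumed codebook, inferred from the labels seen in this page's raw responses.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the assumed codebook."""
    return [(dim, record.get(dim)) for dim in CODEBOOK
            if record.get(dim) not in CODEBOOK[dim]]

# The coded values shown in the table above pass validation.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "liability", "emotion": "fear"}
print(validate(coded))  # []
```

A check like this is useful because the labels come from free-form model output and can drift outside the expected vocabulary.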
Raw LLM Response
```json
[
  {"id":"ytc_UgxyHFc1Xh3wGJqpKHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz8-iCu8U1VhLWthbd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzAuY_l0gZtLiN_qS54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxR0dyOilBWIYADrH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyN2wRnLupH_YTxQuV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDUwmPdrNNg1NR1_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwYTzvMP2G4gCU-i3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwdJs-KiFSOTaGfO3p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxd4QhbAyixbadOZYB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgytM2VXOUFF6YilBGh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
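The raw response is a JSON array with one record per comment, keyed by comment ID, so the "look up by comment ID" view can be backed by a simple index. A minimal Python sketch, assuming the batch format shown above (the function name and validation are illustrative, and the embedded sample reuses two records from the response):

```python
import json

# Two records copied from the raw LLM response above, as a small sample batch.
raw_response = """
[
  {"id":"ytc_UgxR0dyOilBWIYADrH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgytM2VXOUFF6YilBGh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
"""

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:  # fail loudly on malformed model output
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_by_id(raw_response)
print(codes["ytc_UgxR0dyOilBWIYADrH14AaABAg"]["policy"])  # liability
```

Checking for missing keys at parse time matters here: a batch where the model drops a field would otherwise surface later as a confusing KeyError in the lookup view.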