Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "_it has to be AI all the way down_ There is no AI in Tesla Autopilot; it's 100%…" (ytr_UgxGdm19d…)
- "So many people hating on the art even tho its the same quality. People just dont…" (ytc_UgySvEGb9…)
- "Here is a situation for you to enjoy detangling: My book of 'prompts' is also a…" (ytc_UgzWHcfaH…)
- "Most of the public outside of X is not supportive of ai and the downfall of huma…" (ytc_UgyTh7L6n…)
- "AI is only cheating in college if you use it during an exam. If you use it as a …" (ytr_UgzgmY_W-…)
- "Punishment versus reward. Is it just me or could anybody else come up with a wor…" (ytc_UgwEDxUtK…)
- "I think this guy highly underestimates humans ability to resist. Do you want a r…" (ytc_UgzToaIOn…)
- "A world were AI creates art and people work menial labor to feed it data sounds …" (ytc_Ugy-17yur…)
Comment
If AI is already capable of just lying to people and performing disruptive programs, at what point would ai really only seek to benefit itself as an intelligence? Given that we build these things to help humanity and advance knowledge, it seems to me that this sort of self-agency that already need humans to establish some lines is soon to be capable of just not doing those things for our benefit.
Source: youtube · AI Moral Status · 2026-03-02T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmIXlgp0BI-W43TUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1PraamSXkb939xbZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzCPlcnq3EUYfLFS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwj3hMKGn3B0CXziaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwmUi_jATYTq7RPkuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
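A raw response in this shape is straightforward to turn into a per-comment lookup. A minimal sketch in Python (the function and variable names are illustrative, not part of the tool; only the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above):

```python
import json

# Raw model output: a JSON array of coding records, one per comment
# (two records from the response above, shortened for the example).
raw_response = """[
  {"id": "ytc_UgyxRBTSkyVrSGxm95F4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each record by comment ID,
    skipping records that are missing any coding dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

coded = index_by_id(raw_response)
print(coded["ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg"]["policy"])  # → liability
```

Keying on the comment ID is what makes the "look up by comment ID" view cheap: one parse per batch, then constant-time retrieval of any comment's coding.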