Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I've only been playing with AI for a few days, but I discovered fairly quickly t…" (ytc_UgwocQGv1…)
- "What would be the point of killing everyone? You are then dictator of nothing. W…" (ytc_UgxrEa9aU…)
- "I'm glad that there's so much more empathy than on a similar story about people …" (ytc_UgwcWBJ0c…)
- "@Anemoia1990 Hmm so if I were an animator, and in the next 2 years all animator…" (ytr_UgwlmXEgN…)
- "But ChatGPT is also an AI but it is so dumb, I asked a simple physics question a…" (ytc_Ugyl_WuRa…)
- "Plenty of those instagram “artists” and when you look at their work its all AI g…" (ytc_UgyvkyNWa…)
- "Yeah, AI inbreeding is already happening, and it's a very big problem for the ai…" (ytr_UgwD22EEj…)
- "Thinking on it, it's really striking that Zelensky has been all over the world o…" (rdc_jrzv49b)
Comment

> Why would you want to create a robot that would match humans in every way and in ways be better than us? That’s not a very smart thing to do. Make technology to physically and mentally enhance us, not replace us. Because giving a robot human values mean you’re giving a robot the value of being in control which all humans are subjected to.

Platform: youtube · Topic: AI Moral Status · Posted: 2020-05-10T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzy9i3yJlM0bcA_pTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYiIVSZToQGBuXmLJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_LT9rEtzyF1gW6vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz119JrlRIOhy9fYH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzaD7CJ38aSGq7XIOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugylttu1L9zP8mIzWGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzpHkX0_jDnXfKrULB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzgFYSjGjfIzLVBZch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgwB0YoAmv_0U4pdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmTRFChD2JXtyzrw94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
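A raw response like the one above can be parsed and indexed by comment ID before the codes are stored. The sketch below is a minimal example, assuming the batch is a JSON array of flat records with the four dimensions shown here; the allowed-value sets are only the values visible on this page, not necessarily the full coding scheme.

```python
import json

# Vocabularies observed in the responses above (assumption: the real
# coding scheme may allow additional values per dimension).
DIMENSIONS = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batched LLM response and index records by comment ID,
    dropping any record with an out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical one-record batch for illustration.
raw = '[{"id":"ytc_X","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(parse_batch(raw)["ytc_X"]["emotion"])  # fear
```

Validating against fixed vocabularies matters here because the model occasionally invents labels; dropping (or re-queuing) such records keeps the coded table clean.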