Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples:
- ⚠️⚠️⚠️ARTISTS!⚠️⚠️⚠️ Your Dreams of being Artist and not being replaced by AI c… (`ytc_UgwcUHR3g…`)
- Officer: Why are you pulled over on the shoulder, are you okay? Are you just try… (`ytc_UgwEE3H8w…`)
- This is how america collapses. The CEOs does not understand the basics over how … (`ytc_UgyU2mt9i…`)
- As an Artist I 100% agree with all of this. Also something to consider is Time =… (`ytc_Ugy2T1DIG…`)
- This news interview conflated two issues. One issue is using an actor's likeness… (`ytc_Ugz3Od_aJ…`)
- Autopilot is NOT full self driving (FSD). Autopilot is just a weak, free produ… (`ytc_UgwzS1VZj…`)
- “The company has been slow to abandon racist technology such as facial recogniti… (`rdc_g56iuyg`)
- AI is like a kitchen knife. If you use it to harm someone is bad. If you use it … (`ytc_UgxHqxjEx…`)
Comment
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube
Cross-Cultural
2025-10-07T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
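Each coded record carries the four dimensions in the table above. A minimal sketch of sanity-checking a record against the label values that appear in this sample (note: the allowed sets below are inferred from the sample alone, not a complete codebook, and `check_record` is a hypothetical helper, not part of the tool):

```python
# Label sets observed in this sample; the real codebook may define more values.
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "resignation",
                "outrage", "unclear"},
}

def check_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    for dim, allowed in OBSERVED_LABELS.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unseen label for {dim}: {value!r}")
    return problems

# The record shown in the Coding Result table above:
record = {"id": "ytc_UgyOQW-sbI2OhrV3YEl4AaABAg", "responsibility": "ai_itself",
          "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
print(check_record(record))  # []
```

Unseen labels are flagged rather than rejected outright, since the sample almost certainly does not exhaust the codebook.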
Raw LLM Response
```json
[
  {"id":"ytc_UgwyWjrupENz0if0az54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCnQF-NZz0-1Y28S54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvUHJVGyXXOqUQahZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyOQW-sbI2OhrV3YEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwXYvAZrk72ouGrmcp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy3ZokfA4489qyFF-14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxOMRA7nMqjlpi_Hf14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugzsrr8Z6q3rQWf_Bop4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz3pEWvnKw7aX4PvH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx9B4zxaJvqycmeffR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```