Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyZL1Bk_…` — "1:27 and then the ChatGPT I am using can’t even pull/quote any song lyrics (said…"
- `ytc_UgwwOnhAQ…` — "Here my thing with AI... we DO NOT understand how the human brain does MOST of t…"
- `ytc_UgweBX3Kk…` — "Who cares. ChatGPT is a great tool and lifeline for many people. Some vague noti…"
- `ytc_UgwOza2tk…` — "Why is AI art bad? Because it's not human art. Duh! This is of course a form of…"
- `rdc_cjoyc71` — "Doesn't this say something about South Korea if anything? Instead of trying to f…"
- `ytc_UgzBbYpo1…` — "Trust me bro AGI is about to become real bro buy your Open ai stock bro trust me…"
- `ytc_Ugy4jxOUW…` — "I bet they know how to balance a checkbook, do taxes, start a business, AND can …"
- `rdc_fjzksvz` — "Fundamentally, I think the falling trust in authority is a symptom of unrestrict…"
Comment

> I remember reading the first Dune book, long time ago, and there was something similar to what it is mentioning here.
> I might be wrong, but something like an AI getting too powerful and they shut it down and use engineered humans to do calculations or something.
> I might be wrong, if any Dune fans can enlighten me if I remember correctly.
> P.S have a wonderful day whoever reads this

Source: youtube · AI Moral Status · 2025-12-11T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyAvV2Vqvbq_enMXf54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNsJR20LNqm4E3SHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx9Gvwpw-E_USS2FXB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrqOgieZ95CWqOO594AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjNQErUTycyR_gasB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwou43ZE3fr9tnOp2t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgydFOukhScKKYun1Dl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-4I6m9BGuDBWSOU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgylGxOc1DlcSdWHPER4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyWpU0iEtSVC_C2J8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
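The raw response is a JSON array of per-comment records, one object per comment ID, each carrying the four coding dimensions shown in the table above. A minimal sketch of turning such a response into a lookup keyed by comment ID (the field names come from the response itself; `index_codes` and the `"unclear"` fallback for missing fields are assumptions for illustration):

```python
import json

# One record from the batch response above, used as sample input.
raw = """[
  {"id": "ytc_UgydFOukhScKKYun1Dl4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions present in every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch coding response into {comment_id: {dimension: value}}.

    Falls back to "unclear" if the model omitted a dimension (an
    assumed convention, not one stated by the source).
    """
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgydFOukhScKKYun1Dl4AaABAg"]["emotion"])  # indifference
```

Indexing by ID makes the "Look up by comment ID" view a single dictionary access, and the inner per-dimension dict maps directly onto the two-column Coding Result table.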