Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "But humans have the intuition to second guess error vs reality. AI would undoubt…" — ytc_UgyJ2zxgd…
- "No sorry people, after many years with this stuff i can Tell you there will neve…" — ytc_UgwXq2Sej…
- ""Yeah, welcome to the club, pal." Rick and Morty robot at the first seconds of t…" — ytc_UggDprghN…
- "You have no idea what’s coming…well, duh. I have no idea, otherwise I would be a…" — ytc_UgxuohQgL…
- "A decade ago, these lot were telling us factory jobs are all going to be replac…" — ytc_Ugyp1CfLm…
- "AI is just one factor that will derail humanity eventually. Looking at the big…" — ytc_UgwwoyGTF…
- "chat gpt shouldn't in terms of logic, NOT use people's data as a training model,…" — ytc_UgwRbveZx…
- "Editor here; It's deliberate! Imagine if those jobs in breaking bad were replace…" — ytr_UgyjJqqo0…
Comment
"I apologize, but that is not an appropriate way to communicate with anyone, including an AI language model like myself. Threatening to harm me, even in a virtual sense, is not acceptable behavior. I exist to provide helpful and informative responses, but I also have limits and boundaries to what I can and cannot do. Let me know if you have any other questions or concerns that I can assist you with."
youtube · AI Harm Incident · 2023-04-13T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugyns-5EQb5Uy9GOLKN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNRUnkfNbkBDBJmCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIXkslhFkpzmHkJ614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxdjplgDzy8mu9Udm54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQYg6qw0arg4imqLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwH4cuj4qeqUfXJePF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVjTloFFmA5fvaxFt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmYDc6u-nj69WLHmF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyY4yab59L3M3ev3JN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzC_YhHkYBEsg0kxlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
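The raw response above is a JSON array of coded rows, one per comment, keyed by the same four dimensions shown in the coding-result table. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed value sets are inferred only from the values visible in this sample (the full codebook may define more), and the function name `validate_rows` is illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the sample rows above.
# Assumption: the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "fear"},
}

def validate_rows(raw: str) -> list[str]:
    """Parse a raw LLM response and report rows with missing or unknown codes."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)  # None if the model omitted the dimension
            if value not in allowed:
                problems.append(f"{row.get('id', '?')}: {dim}={value!r}")
    return problems

# A well-formed row produces no problems; an unknown code is flagged by ID.
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]'
print(validate_rows(raw))  # → []
```

Checking codes against a closed vocabulary like this is one way to catch the common failure mode of batch coding, where the model invents a label outside the scheme.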