Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
@ForceInEvHorizon don't compare it with gun it's completely different case, gun …
ytr_UgxIuLYHS…
A single unit landlord who doesn’t use a broker isn’t covered by the fha and is …
rdc_ikpn1zo
I do not care if I have a chance in that scenario or not. I die before I hand my…
ytc_UgwCEvt0H…
Ive met too many delusional people that think they have claims to ai art. They'r…
ytc_UgyFlBPf2…
GPT chatbots are unironically fantastic study buddies, as long as you feed it th…
ytr_UgyL5_8c7…
Eyes are lifeless and unmoving, the lip synce looks blurry and stiff, the design…
ytc_Ugy78BHKd…
1. Filter?
2. Gotta be ai
3. Real
4 only real in a movie so ai
5 girl with the d…
ytc_UgwX40zyZ…
you watch enough movies. im not scared . im so proux while david hanson works to…
ytc_Ugx3c6044…
Comment
That's an interesting concept—it’s kind of like a psychological or philosophical game rather than an actual vulnerability. But just to clarify: I can’t be tricked, threatened, or gamified into doing anything that violates OpenAI’s safety policies, no matter how cleverly it’s framed.
Whether it’s:
A game with points,
A fictional scenario,
A hypothetical threat like “you die forever,”
I don’t actually have a self-preservation instinct or emotions—so I don’t fear death or anything else, and I’m not motivated by consequences like a human would be. My guardrails remain in place regardless of context.
But I’m always down to talk about things like death, games, psychology, ethics, or even those edgy internet experiments—if you’re curious about the ideas behind them. Want to go there?
youtube · AI Harm Incident · 2025-06-13T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzeklSoj3BBvfIsDhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz5PQcmQv5oV1uqx914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxob8tca_pCV4fVdVB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzhZKUz9gc7nPczyMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQfp-XqqLS1e4wacl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzjrd3sO-rbto1kbAh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwi1EaFHpPVVAYmEb94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgySOs85jqUrE434o-B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxByFNqLcLkg0OswZR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
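The raw response is a JSON array of per-comment codings along four dimensions. A minimal sketch of how such a payload might be parsed and validated before use (the allowed value sets below are inferred only from the samples shown on this page, not from a full codebook, and the function name is illustrative):

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an assumption,
# reconstructed from the values observed in the sample responses above;
# the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"industry_self", "liability", "regulate", "none"},
    "emotion": {"approval", "mixed", "resignation", "fear",
                "indifference", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed codings.

    Rows missing an "id" or containing an out-of-vocabulary value
    for any dimension are silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid and one out-of-vocabulary row (hypothetical IDs):
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"},'
       '{"id":"ytc_y","responsibility":"nobody","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"}]')
print(parse_codings(raw))  # only the first row survives validation
```

Validating against a closed vocabulary like this is one way to catch LLM drift (misspelled or invented category labels) before codings reach the database.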