Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Most of the jobs are really awful and of really very cheap value by human standa…
ytc_UgxdTQoit…
@BlazeFoxSteve yeah that's true, I do view ai as an art form but the problem is…
ytr_UgxM6HqeO…
I would say that the EU needs to regulate Qatari briberies (ahem I mean donation…
ytc_UgwVaQrrS…
The A.I. revolution started with military applications, and now we only use abou…
ytc_Ugy7LvU1h…
1:18 AI is a yes man. There must be a huge amount of flattery programmed in beca…
ytc_Ugwp-7mab…
The male robot was scary I love Sophia And i hate the robot in the middle, he i…
ytc_Ugwr--aXL…
Come to think of it, that episode is an endorsement of the guardrails currently …
ytr_UgyllDXMM…
This guy is full of sh*t. He's trying to scare people from using competitors AI.…
ytc_UgzETHZx6…
Comment
I've been meaning to ask: Does anyone know if self-driving cars have an "emergency stop" switch/button that the passenger can press to compensate for AI/system oversight? I feel like cases such as this warrants one, if there isn't one already.
Source: reddit
Case: AI Harm Incident
Posted: 1573271865.0 (Unix timestamp)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_f6z46qk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_f6xab7q","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_f6xae4f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_f6y9jng","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_f6z6x0i","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
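The lookup-by-comment-ID step above can be sketched in a few lines of Python: parse the raw LLM response as a JSON array and pull out the row whose `id` matches. This is a minimal sketch, assuming each row always carries the four coding dimensions shown in the table; the function name and structure are illustrative, not the tool's actual implementation.

```python
import json

# Coding dimensions assumed from the sample response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response and return the coding for one comment ID."""
    rows = json.loads(raw_response)
    for row in rows:
        if row.get("id") == comment_id:
            # Keep only the known dimensions, dropping anything extra.
            return {k: row[k] for k in DIMENSIONS if k in row}
    return None  # ID not present in this batch

raw = '''[
 {"id":"rdc_f6y9jng","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
print(lookup_coding(raw, "rdc_f6y9jng"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Returning `None` for a missing ID (rather than raising) makes it easy to spot comments the model skipped in a batch.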