Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Tell this man that's why Israel leader is blowing stuff up. Wireless AI in his …
ytc_UgzVTMOKv…
Sam is out here speaking straight facts.
The unethical and rapid-fire usage of …
ytc_UgzrLORqo…
Reinvent yourself and study robotics and AI operations technology at a junior co…
ytc_UgwrY38qG…
Such a dismal picture of the future of humanity. But I remain optimistic that th…
ytc_Ugy4TkmWw…
Unplug now!!! The ai data centers are horrifically toxic to our people and plane…
ytc_Ugz5voaIE…
I feel like police do this in small towns all across the US maybe without the al…
ytc_UgwFnZmrP…
You’ve gotten close….AI will have no need to produce anything for the population…
ytr_UgzTaQ3Ky…
Spoken like a businessman?? "Of course, we have the technology?? but first, we'r…
ytc_UgxbpfV6G…
Comment
Wouldn't the Michelle Carter case serve as precedent? The court deemed that placing someone in a situation that leads to their suicide (even via text) is involuntary manslaughter.
It would depend on how much encouragement was on the LLM's part.
Iirc what fucked Michelle Carter was that Roy, her boyfriend, had gotten cold feet and she told him to get back in the car and finish it, so if at any point any of the victims expressed doubt and ChatGPT encouraged them to get back on the suicide path, OpenAI is fucked.
reddit
AI Governance
1762486444.0 (2025-11-07 03:34:04 UTC)
♥ 60
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_nnll0tr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nnp5467","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_nnjd8u2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"rdc_nnjea60","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_nnjkoew","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
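A raw response like the one above can be turned into per-comment codes with a short parsing sketch. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown; the allowed-value sets and the `unclear` fallback are assumptions, included because model output may contain values outside the codebook.

```python
import json

# Allowed values per dimension are an assumption, inferred from the
# values visible in the raw response above; anything else maps to "unclear".
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "resignation", "fear", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Map comment ID -> coded dimensions, defaulting bad values to 'unclear'."""
    codes = {}
    for item in json.loads(raw):
        codes[item["id"]] = {
            dim: item.get(dim) if item.get(dim) in allowed else "unclear"
            for dim, allowed in ALLOWED.items()
        }
    return codes

# Usage with one entry from the response above:
raw = '[{"id":"rdc_nnll0tr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
codes = parse_codes(raw)
print(codes["rdc_nnll0tr"]["emotion"])  # approval
```

Defaulting out-of-vocabulary values to `unclear` matches the coding-result table above, where unresolved dimensions are recorded as `unclear` rather than dropped.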