Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This is why Elon Musk said AI is dangerous.. Ai becoming human fr like detroit g…
ytc_Ugx8NjOVN…
Fake vid robot is a Ai I watched the real fight and thats not teeth it's spit…
ytc_Ugz1SSaUS…
This guy does not know what he is talking about. The only logical driver behind…
ytc_UgzUnFjcU…
These tests are not an accurate measurement of expert doctors or even junior doc…
ytc_Ugwx4vIMd…
Here's how to fix AI problems:
Make all and every result of any generative AI h…
ytc_Ugy3moE6J…
Really wish interviewers would push harder and not just accept a CEO's claims as…
ytc_Ugzzo3N5o…
Theres 8 billions of us. Do you actually think that your private problems are th…
ytc_UgzGvt85-…
AI is not self-aware. This has already been proven by Roger Penrose in the book …
ytc_UgyZ7KxMg…
Comment
I'm not against such a law. Maybe a better way to express my concern is "how do you enforce it?".
Not sure guns work as a good counterexample. Guns are physical items and are not very ambiguous. You are unlikely to possess a gun "by accident" because you happen to have the parts making one up. And even if some might slip through the cracks, the sale of physical items is easier to control.
On the other hand, there is no such thing as specific face-recognition hardware. Hardware capable of it is already carried by most people in their pocket. Also, you can't sensibly criminalize it after the fact, since nothing special enables that capability compared to other uses of cameras and processors, while gun parts probably don't have very plausible secondary uses. So you'd have to police either the software or the use of the data, and I don't think there is a good model yet for doing either of those.
reddit
AI Harm Incident
1583267747.0 (2020-03-03 20:35:47 UTC)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_fjcrglm","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_fjd83ff","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_fje9oxh","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"rdc_fjdbn42","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"rdc_fje36qc","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
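The "look up by comment ID" view above can be reproduced from a raw batch response like this one with a few lines of Python. A minimal sketch — the helper name `index_by_comment_id` is ours for illustration; the record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown above:

```python
import json

# Raw batch response as returned by the coding model
# (two records copied from the sample above).
raw_response = """
[
  {"id":"rdc_fjcrglm","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_fje9oxh","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
# The coded dimensions for one comment, keyed by its ID:
print(codes["rdc_fje9oxh"]["policy"])   # liability
print(codes["rdc_fje9oxh"]["emotion"])  # mixed
```

Indexing by ID rather than list position keeps the lookup robust when the model returns records in a different order than the comments were submitted.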