Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
>If you're one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.
>
>Photographs have always been subject to falsifications—first in darkrooms with scissors and paste and then via Adobe Photoshop through pixels. But it took a great deal of skill to pull off convincingly. Today, creating convincing photorealistic fakes has become almost trivial.
>
>Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.
>
>...
>
>By some counts, over 4 billion people use social media worldwide. If any of them have uploaded a handful of public photos online, they are susceptible to this kind of attack from a sufficiently motivated person. Whether it will actually happen or not is wildly variable from person to person, but everyone should know that this is possible from now on.
>
>We've only shown how a man could potentially be compromised by this image-synthesis technology, but the effect may be worse for women. Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
>
>To deal with some of these ethical issues, Stability AI recently
Source: reddit · Category: AI Harm Incident · Posted: 2022-12-09 UTC (Unix 1670619021) · Score: ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_izmub9o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_izks94k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_izld4i1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_izmka4h","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
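The raw response above is a JSON array with one object per comment in the batch, each carrying the four coded dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated before ingestion (the allowed value sets below are inferred from the sample outputs on this page and are assumptions; the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the sample outputs above
# (assumed -- the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "industry_self", "regulate", "liability"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch-coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Two records copied from the raw response above.
raw = '''[
  {"id":"rdc_izmub9o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]'''

records = parse_coding_response(raw)
print(len(records))           # 2
print(records[0]["emotion"])  # fear
```

Validating against a closed value set at parse time catches the most common LLM coding failure (an out-of-codebook label) before it reaches the database, and the record's `id` in the error message makes the offending comment easy to look up here.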