Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- No shit Sherlock. I want to hate on AI, because it’s a dogshit lazy cop out for … (ytr_UgyJJv12-…)
- On top of an image is like a 1% transparent layer that looks a little like an oi… (ytr_Ugw6sejqA…)
- nobody talks about how much energy ai uses also. its super bad for the environme… (ytc_UgzYF6hXO…)
- What's worse is how you spend hours making one piece of art, and not get famous … (ytc_UgxUHV9zN…)
- why don't ai bros just write poems if they put so much effort into describing th… (ytc_UgwZLYAKs…)
- Presumably there's a great deal of 'patient detail' omitted for privacy reasons,… (ytc_UgwEgAMfd…)
- First people went through a bruh moment with thinking some people are better tha… (ytc_Ugwh3X1PG…)
- When you get something for "free" then the product is YOU! Also, are you realy … (ytc_UgwnFfmVu…)
Comment
The fact that you have to ask Reddit about that is the best argument I can see for placing restrictions on it.
Like if I’m typing “IMHO, AI probably won’t kill humanity but I’m open to your view.” That’s not a place I want to be.
Source: reddit | Category: AI Responsibility | Posted (Unix timestamp): 1683465079.0 | ♥ 10
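The posted-at value above is stored as a raw Unix timestamp. A minimal sketch of converting it to a readable UTC date with the standard library:

```python
from datetime import datetime, timezone

# Convert the raw Unix timestamp shown above to a timezone-aware UTC datetime.
posted = datetime.fromtimestamp(1683465079.0, tz=timezone.utc)
print(posted.isoformat())  # 2023-05-07T13:11:19+00:00
```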
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_jj7csb2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jj7kyqw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_jj8d1h0","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jj7fl2q","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_jjar1fp","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
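A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. This is a minimal sketch, assuming only the field names visible in the response; `parse_codings` and `REQUIRED_KEYS` are illustrative names, not part of the actual pipeline:

```python
import json

# Example raw batch response, copied from the output above.
raw = '''[
{"id":"rdc_jj7csb2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jj7kyqw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_jj8d1h0","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jj7fl2q","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_jjar1fp","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Every record must carry the comment ID plus the four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> list[dict]:
    """Parse a raw batch response and reject records missing any dimension."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
    return records

codings = parse_codings(raw)
print(len(codings))            # 5 records in this batch
print(codings[0]["emotion"])   # fear
```

A failed parse or a missing dimension raises immediately, so malformed model output never reaches the coded-results table.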