# Raw LLM Responses

Inspect the exact model output for any coded comment, looked up by its comment ID.
## Random samples

- Its not a problem of how AI 16:22 intelligent would be. the issue is when it wou… (ytc_UgzClmfl3…)
- But the food made by the robot isn't even food it's basically a cardboard cut ou… (ytr_UgycsfDPN…)
- Well, I would say Thank you Geoffrey, because I will use it for Good. It has hel… (ytc_Ugw1bYks8…)
- This is SO NoT FAIR, the robot has metal skin and skeleton. The man is made of f… (ytc_UgyfwsZcd…)
- Hello Doctor, I watched many of your clips already as I was curious what happene… (ytc_UgxfAq0QM…)
- Correct. The "people" who complain here are worth substantially less than the ro… (ytr_UgyvqMci6…)
- No one cared when customer service support for anything was shipped overseas, no… (ytc_Ugyq2oAt6…)
- Saying that generative AI helps disabled people with art is like saying steroids… (ytc_UgxDjDMPp…)
## Comment
It's funny, because everyone that have tested and used the local models for a while mostly agrees that they are really far behind for example ChatGPT. [A bit of discussion around it](https://www.reddit.com/r/LocalLLaMA/comments/13gpnhq/addressing_the_elephant_in_the_room/)
Part of this misunderstanding comes from various factors, from tests being very one dimensional to using very un-scientific means to compare *(like letting chatgpt rate each model)*, and some trying to find the comparison that shows their pet model in the best light.
So that google paper and the resulting discussions and panics are largely built on a false premise. Still, it's pretty interesting to see how the industry is reacting to this perceived threat.
That said, I really hope open models keep on evolving and becoming better and better, and some day surpass OpenAI's models while still being able to run on normal hardware.
Source: reddit · Category: AI Responsibility · Posted: 1684316475 (Unix timestamp) · ♥ 3
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
## Raw LLM Response
```json
[
  {"id":"rdc_jkjuinl","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"rdc_jki4lv5","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jkha19r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkgipyl","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_jkiii9s","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```