Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "It’s going to take many lawsuits and many people getting sued to change this. Th…" (ytc_UgwQ4rMON…)
- "the thing missing form this video is that jobs have been getting replaced by aut…" (ytc_UgzbGUQBs…)
- "Fingers done correctly is the key to spot AI and human , Artist still have a way…" (ytc_UgyPOfp_u…)
- "There was a kid at my high school a few months ago who got suspended for deepfak…" (ytc_Ugw0gV_7v…)
- "See this is why humans don't need to play with AI 😂😂 imagine he just said fk it …" (ytc_Ugz-AgPnK…)
- "I've been saying this for 5 years. Capitalism is going bye bye, and will turn in…" (ytc_UgyCEZwme…)
- "I hate all this AI crap being shoved down our throats. But what I hate even more…" (rdc_nsfbyri)
- "This just shows, we should stop using and storing our data on the cloud. Use les…" (ytc_UgwgtEviT…)
Comment
To add to that excellent question: **Should human preference for anecdotal evidence rather than statistical evidence be built into AI, in hopes that it would mimic human behavior?**
Humans are pretty bad about judging risk, even when the statistics are known. Yet our civil society, our political system, and even our legal system frequently demand judgments contrary to actual risk analysis.
For example, it is much more dangerous to drive a child 5 miles to the store than to leave her in a parked car on a cloudy day for five minutes, yet the latter will get the Child Services involved (as happened to [Kim Brooks](http://www.salon.com/2014/06/03/the_day_i_left_my_son_in_the_car/) ).
So in this example, if there was an AI nanny, should it be programmed to take into account what **seems** dangerous to the people in that community, and not just what **is** dangerous?
reddit · AI Bias · 2015-07-27 (Unix 1438003353) · ♥ 333
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_cthpngw","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_ctlpsgh","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cthuvw9","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"rdc_cthz1rt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_cthnpuo","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
```
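The "look up by comment ID" step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the function name, the dimension list (taken from the "Coding Result" table), and the skip-on-missing-field behavior are all assumptions.

```python
import json

# Example raw LLM response in the format shown above (two rows for brevity).
raw_response = """
[
  {"id":"rdc_cthpngw","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_cthz1rt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and index the codes by comment ID,
    skipping any row that is missing an expected dimension."""
    rows = json.loads(raw)
    return {
        row["id"]: {d: row[d] for d in DIMENSIONS}
        for row in rows
        if all(d in row for d in DIMENSIONS)
    }

codes = index_by_comment_id(raw_response)
print(codes["rdc_cthz1rt"]["policy"])  # → regulate
```

Indexing by ID also makes it easy to join the model's codes back onto the original comment records, since both sides share the `ytc_`/`rdc_` identifier.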