## Raw LLM Responses
Inspect the exact model output for any coded comment. Enter a comment ID to look one up directly, or click one of the random samples below (a scripted lookup sketch follows the sample list).
- “As someone who graduated with a web development degree in 2024; this is satisfyi…” (ytc_Ugw2RNE07…)
- “It seems to be an unpopular opinion judging by the top few comments but we shoul…” (ytc_UgzPgKgzm…)
- “When the Robot said "Fire in the hole" I only about the normal difficulty face f…” (ytc_Ugzr1w8hO…)
- “If you look at the fool introducing the robot looks like a LSD user from the 60'…” (ytc_UgzvTvgYz…)
- “>Klarna said its AI assistants—available in 23 markets—speak 35 languages and…” (rdc_kskok74)
- “Good summary but the video misses a point. The key difference between job augmen…” (ytc_UgykHC5I9…)
- “Plagiarism and greed is an old world problem--not just something that sprung to …” (ytc_Ugwl1wF0C…)
- “i think they used some kind of ai to make this rant as well aint no way this was…” (ytc_Ugzf8DyNv…)
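For working with the coded dataset outside this page, looking a comment up by ID is just a dictionary index over the stored records. A minimal sketch, assuming the results are stored as JSON Lines; the file name `coded_comments.jsonl` is illustrative, not the tool's actual storage layout, while the `id` values follow the raw response format shown further down:

```python
import json

def load_coded(path: str) -> dict:
    """Index coded comment records by their ID (e.g. 'ytc_...' or 'rdc_...')."""
    by_id = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines
                record = json.loads(line)
                by_id[record["id"]] = record
    return by_id

# Hypothetical file name; substitute wherever the coded output actually lives.
coded = load_coded("coded_comments.jsonl")
print(coded.get("rdc_cthxq37"))  # full coded record, or None if the ID is absent
```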
### Comment

> Excelent question, but I'd like to add something.
>
> Recently Nick Bostrom (the writer of the book Superintelligence that seemed to have started te recent scare) has come forward and said ["I think that the path to the best possible future goes through the creation of machine intelligence at some point, I think it would be a great tragedy if it were never developed."](http://www.ibtimes.co.uk/nick-bostrom-it-would-be-great-tragedy-if-artificial-superintelligence-never-developed-1501958) It seems to me that the backlash against AI has been a bit bigger than Bostrom anticipated and while he thinks it's dangerous he also seems to think it ultimatly necessary. I'm wondering what you make of this. Do you think that humanities best possible future requires superintelligent AI?

reddit · AI Bias · posted 2015-07-27 (Unix timestamp 1438016751.0) · ♥ 442
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
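The result is one categorical value per dimension plus a coding timestamp. Below is a minimal sketch of that record as a Python dataclass; the allowed value sets are only those visible on this page (in the table above and the raw response underneath), not necessarily the full codebook:

```python
from dataclasses import dataclass

# Value sets observed on this page; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none"},
    "emotion": {"unclear", "approval", "fear", "indifference", "mixed"},
}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-25T08:33:43.502452"

    def __post_init__(self) -> None:
        # Reject values outside the observed sets (loosen if the codebook grows).
        for field_name, allowed in ALLOWED.items():
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} not in {sorted(allowed)}")
```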
### Raw LLM Response
```json
[
  {"id":"rdc_lv8lnbd","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_lv8cgsc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cthw656","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_cthxq37","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_cthzy1i","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
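The model codes several comments per call and returns one JSON array per batch, so a single raw response covers multiple IDs. Since LLM output is not guaranteed to be well-formed, a consumer should parse it defensively. A minimal sketch under that assumption, not the tool's actual parser:

```python
import json

def parse_batch(response_text: str) -> dict[str, dict]:
    """Parse one batch response and index the codings by comment ID."""
    try:
        rows = json.loads(response_text)
    except json.JSONDecodeError as err:
        raise ValueError(f"model returned invalid JSON: {err}") from err
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    indexed = {}
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing fields: {sorted(missing)}")
        indexed[row["id"]] = row
    return indexed

# With the batch shown above:
#   parse_batch(raw_text)["rdc_cthxq37"]["emotion"]  ->  "approval"
```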