Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I wouldn't put too much stock into anything coming out of MIRI or the LessWrong sphere writ large. Soares and Yudkowsky don't have backgrounds in machine learning or cognitive science; Yudkowsky is an auto-didact and Soares did comp-sci and econ during undergrad; those are the qualifications they're bringing to the table here. The authors are just game theorist bloggers and amateur logisticians who are attempting to apply the pseudo-philosophical framework of Rationalism (which has almost nothing to do with the actual enlightenment-era philosophy) to a theoretical emergent machine intelligence possessed of maximal rationality that simply does not and, in all likelihood (per the Chinese Room Argument and the broader shift away from the computational theory of the mind), will never exist. Much like that ludicrous AI 2027 paper, it's self-serving AI hysteria that completely and utterly misrepresents the moment in favor of drumming up an apocalyptic fantasy that directly advances the interest of the authors and their largely bunk "research institute."
Source: youtube · AI Moral Status · 2025-10-31T08:0… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQ6cX3vzGK0IYWCip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsZXVqHuryCnOFNR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyeD4KB3mZTSgAfyTt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdrjBu_20OJFahPuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz957vNq8JtwrGAZ3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
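A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. A minimal sketch: the four dimension names come from the coding-result table on this page, and the allowed values are inferred from the examples shown here (the project's actual codebook may define more categories):

```python
import json

# Allowed values inferred from the examples on this page;
# the real codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_X","responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"outrage"}]')
print(len(validate(raw)))  # → 1
```

Validating against a fixed value set catches the common failure mode where the model invents an off-schema label mid-batch, so a single bad record fails loudly instead of silently entering the coded dataset.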