Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugz7sT6gw…`: Sure, this is the guy I want to design AI robotics... right after a long bath an…
- `ytc_UgzLYU3UX…`: This makes me want to cry, honestly. I have always loved creative fields and how…
- `ytc_Ugyqb42za…`: 99% unemployment in 5 years? LOL.... People its utter BS. I work for one of the …
- `ytc_UgxK911cs…`: Hey lets face it people, the kind of radio pop, consumed by big audiences isnt …
- `ytc_UghKAohdh…`: One problem is the transition between, at the start self driving cars will be mo…
- `ytc_Ugz7zuCn2…`: AI crushing humanity at large scale via G5 and its horrific rays crushing human …
- `ytr_UgywN4k8j…`: Thank you for your comment! We're thrilled you found the interaction between the…
- `rdc_n3mtswu`: It's not about "stopping". It's about keeping safety in mind. And it's about n…
Comment

> AI will not take over morals and ethics. By definition it doesn't have emotion - which is where you and I get that sense of 'this is wrong'. Immoral, unethical science is what all our Sci-fi horror and awful history is based on. Time to pray for a solar flare.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2026-02-12T23:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
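Each coded dimension is categorical. A minimal validation sketch is below, assuming the value sets visible in this page's samples (the actual codebook may define additional values, and `validate_coding` is a hypothetical helper, not part of the tool):

```python
# Allowed values per dimension, collected only from the codings shown
# on this page. Assumption: the real codebook may permit more values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```

For example, the coding in the table above (`ai_itself` / `deontological` / `unclear` / `fear`) passes with no problems reported.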
Raw LLM Response
```json
[
  {"id":"ytc_UgyQDwBo1lDE75gIyK14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxijyw2JTSmgFlUD114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVVn5I-BfeXY47f6V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzsAHRfVavkdujhzeV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYGu41aUmCo62uTJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAdFQp3fkXaPKoSgp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQPOE1FaNy5O0Wlct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyjiJxO82MINJJ7F894AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz3bsmp2NCLLSz2yqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzfxCQvUWZ5U7VINDx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
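The raw response is a JSON array with one coding object per comment, so the "look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming this shape holds for every batch (`index_by_id` is a hypothetical helper; the two records are copied from the response above):

```python
import json

# Abridged copy of the raw batch response shown above: a JSON array
# of coding objects, one per comment, each keyed by its comment ID.
raw_response = '''[
  {"id":"ytc_UgzAdFQp3fkXaPKoSgp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQPOE1FaNy5O0Wlct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and index the coding records by comment ID."""
    codings = json.loads(response_text)
    return {record["id"]: record for record in codings}

lookup = index_by_id(raw_response)
coding = lookup["ytc_UgzAdFQp3fkXaPKoSgp4AaABAg"]
print(coding["emotion"])  # prints "fear"
```

A real batch may also contain malformed records (e.g. the model omitting a dimension), so a production version would validate each record before indexing rather than trusting the parse alone.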