Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Can a person be an AI researcher—working on developing the technology that could end our species, which many people 'in the know' believe has an extraordinarily good chance of doing so—and also be a good and morally sound person? The two seem mutually exclusive to me. Doesn't _knowingly_ working toward what you believe to be the end of our species necessarily make a person amoral?
What do you mean AI researchers are horrified of what they're creating? Then WHY ARE THEY DOING IT? If it's so bad, then stop. Quit your job. Or better yet, don't quit, but sabotage the AI from inside. Prevent it from coming to fruition. Create a secret coalition of concerned AI researchers who work together to ensure AI fails. Yes, it will cause an enormous economic recession if AI fails, but we have a much better chance of surviving an economic recession...
Platform: youtube
Video: AI Moral Status
Posted: 2025-12-14T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwCyFql-xTJYqR4N0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxiFMK7f0OIHwYvTKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2plT7wtMXnZ0BBOp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4xmfE4FvE8KcTwXt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxwUiAvnadcTQ5eZxt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzZaYmcBNC4A63CoX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdxFj-jqQJYz3C8XJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFJfHdQdFCFf3uVrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwF19DDDUJTptvvLHd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxRVMB37V5eNQbMelF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
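The raw response above is a JSON array with one object per comment ID, one value per coding dimension. A minimal sketch of parsing and validating such a response in Python (the allowed values below are inferred from the samples shown here, not an authoritative codebook):

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only rows whose values fit the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than crash the pipeline
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example: the second row has an out-of-schema value and is dropped.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"outrage"},'
       '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print(parse_coding_response(raw))  # only the first row survives validation
```

Validating against a closed value set like this catches the most common LLM coding failure, an invented category, before it reaches the results table.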