Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick from the random samples below.
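A minimal lookup sketch in Python, assuming the raw model outputs are stored as a JSON array of records keyed by `id`, matching the sample at the bottom of this page. The file name and both helper functions are hypothetical, not part of this tool:

```python
import json


def load_raw_responses(path: str) -> dict[str, dict]:
    """Index raw LLM coding records by comment ID.

    Assumes `path` points to a JSON array of objects shaped like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}, as in the raw response below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


def lookup_comment(index: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if absent."""
    return index.get(comment_id)


# Example (hypothetical path; the ID is the first record in the sample):
# index = load_raw_responses("raw_llm_responses.json")
# print(lookup_comment(index, "ytc_Ugxs4e2UNdweIXZcscJ4AaABAg"))
```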
Random samples — click to inspect
- Why are all of these robots white-presenting? Nearly Every humanoid robot all se… (ytc_UgyiyA7fk…)
- When I close my eyes, I see images, not black, and sometimes does images look AI… (ytc_UgxtosfSm…)
- AI and robots need to pay taxes and social security. Then I will ok will the c… (ytr_UgwI6LSTo…)
- Using AI to "finish" a work transforms it from human authored art into a machine… (ytr_UgzKuJaOU…)
- I am appalled by what the AI software founder is saying about not needing people… (ytc_UgzbDP3jq…)
- Twisting the words and repurposing them to fit your own agenda. Such a bad news … (ytc_UgzDCDBpk…)
- With our President trying to stop the states from putting limits on AI, how can … (ytc_UgzNZZkcw…)
- Hijacking to say that the general feeling amongst people there is that this is a… (rdc_dpc8jac)
Comment (youtube · AI Moral Status · 2025-11-03T19:2…)
I think the part that constantly fails in these conversations about how to teach or how AI thinks is the order in which it obtains the information. Having a child gives you a good understanding of why you don't teach them x before y. And I feel we just open the floodgates of information for AI. There is not really a progression of knowledge when you train them. And we're at a point where it's a sunk-cost-fallacy issue. We're not going to rewrite how we trained whatever model because we got so far already. The more cost-effective solution is to additively give it more parameters, versus thinking about why we are teaching it this way and adjusting the method.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
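The Coding Result table is one record from the raw batch below, rendered per dimension (its values match the first record in the response). A small sketch of that rendering; the `coded_at` parameter is an assumption, since the per-record JSON carries no timestamp and the "Coded at" row presumably comes from the coding run's metadata:

```python
def render_coding_result(rec: dict, coded_at: str) -> str:
    """Render one coded record as the markdown table shown above.

    `rec` is one element of the raw LLM response batch; `coded_at`
    is supplied separately (assumption: stored outside the record).
    """
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```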
Raw LLM Response
[
{"id":"ytc_Ugxs4e2UNdweIXZcscJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1a7i9Y0bJagEdERZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2uMrP8Bmv3J1qRBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsL5oUEYvqk1uyj4R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzq0KOoim73dCntkdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzsafBd5FfFmH9EZSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjJ8DZEUtc3q1DQ-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4IGU5QzByFqxVLt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxaO3DME9TsmKePwPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFY7J9QxhTDcPdPX14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"skepticism"}
]
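A batch like the one above can be sanity-checked before its values reach the Coding Result table. A minimal sketch in Python; the label sets below are only the values observed in this one sample, not necessarily the full coding scheme, and `validate_batch` is a hypothetical helper:

```python
import json

# Label sets observed in the sample above; the real coding scheme
# may include additional values (assumption, not confirmed by the tool).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed", "skepticism"},
}


def validate_batch(raw_response: str) -> list[str]:
    """Parse a raw LLM response and report malformed records.

    Returns human-readable problems; an empty list means every record
    has an `id` plus the four coded dimensions, each drawn from the
    observed label sets above.
    """
    problems = []
    for i, rec in enumerate(json.loads(raw_response)):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: {dim}={value!r} not recognized")
    return problems
```

Running such a check at ingest time would make an out-of-scheme label surface as an explicit error rather than as a blank cell in the viewer.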