Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- If you wanna compare ai "art" to using a camera then its pretty similar to just … (ytc_Ugy_YuEda…)
- Yeah, I don’t buy the fact that Anthropic is giving the military tech that’s 1 y… (rdc_o80xfwa)
- I had a dream that world nukes where armed to be set of in a world war and it wa… (ytc_Ugx_YC9Qq…)
- i really, *really* appreciate you not using ai, even for a bit. as someone who’s… (ytc_UgyknyF7n…)
- All CAP... if AI was self-aware it wouldn't even talk to us humans we are irrele… (ytc_UgxN12d2x…)
- Yeah, but then if nobody has jobs, then they won’t have money for power than AI … (ytc_UgxmdzeLu…)
- Now everything with AI will track EVERYTHING in your device, in order for the AI… (ytc_UgyxRDmG1…)
- This is one reason why u don't trust driverless vehicles. And people think self … (ytc_UgxfKYC25…)
Comment

> It seems to me like training an AI to check the accuracy of its outputs should be an easy task. Just feed it a million wrong statements made by AI and tweak the weights until it finds them all false. The problem is some of it is actually substantiated by the data and the data is wrong. It needs to be able to tell which data is true and which is false, and humans are barely even capable of that. If you wrote textbooks about how the earth was flat, and some about it being round, and gave both to grade-schoolers, there would always be some that believed everything the flat-earth books said. AI is no better than the data its fed.

youtube · AI Responsibility · 2025-10-13T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxyurulf3dxGQfLiiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwJIQHOo5Zj0oDr1eV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2zCSypnC01cSZLBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwfYxpsUueHY25lqx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxiPxE9-rv6yPU4rGB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzV_n5cSfAIBMVhLel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-HjOk9dLDgBt1F8p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzEfdj8fZUR3H3DL1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxujSi4kZF6I2Y-EwR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2cVW07kxOVFci5uR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```