Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Really hate how people are villainising AI. AI isn't the problem, the people usi…" (ytc_Ugz3-1uaI…)
- "I will start to worry when AI starts to show emotion. Better yet, emotional int…" (ytc_Ugze-PMqe…)
- "This interview was very good, but it did not do the one thing it promised, it di…" (ytc_UgyMleRn6…)
- "So this made me remember when YouTube put a Ai description on a video about wher…" (ytc_UgwJKht_f…)
- "I get that! Interacting with AI can feel a bit uncanny sometimes. The dialogue a…" (ytr_UgyzBPvo_…)
- "Lol we actually know about ai more than the people who are trying to protect it…" (ytc_UgxPYhWAQ…)
- "The part of the text you blurred out basically told it to pretend to be an "evil…" (ytc_UgzIINVx6…)
- "It .at end up that robots don't demand rights, but instead give themselves the r…" (ytc_UgjCywbdm…)
Comment
As a non-believer in the AI hype, I think that the issue actually is sci-fi. The problem is that I do not really like the anthromorphization: claiming that the problem is "superintelligence" suggests that we'll have to deal with a sentient artificial being that might have some actual intent to harm humanity. As of now, that is sci-fi and that is that. On the other hand, the fact that an AI may misunderstand a command and/or conclude that an harmful course of actions is the best way to achieve the given goal, that is actually plausible. However, this scenario does not require "superintelligence", it does not even require AI at all, because it is actually a problem of all automated systems. Any program may encode unpredictable, unintended behaviors, that may end up having very severe consequences. The additional problem with AIs (neural networks, to be more specific) is that their decision process is not human-readable, which makes debugging extra difficult. In general, I think that the way to deal with this issue is "simply" to use technology wisely: do not automate fully (or at all) crucial processes, have some protocol in place to deal with errors, put guardrails around the system so as to minimize damages. That sort of stuff. Just to be clear, this is an interesting and important topic of discussion, it's just that it doesn't need to feed the hype.
youtube · AI Moral Status · 2025-10-30T19:3… · ♥ 149
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz7To3N3bTqWHRXAWd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg3My9h6MiHmdkDD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzS6P_qp6JJzzMBB394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLgdhp4_xZ5n82po54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMJlOHwQNVVDW5kz14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMu7jkPZ781oZvapV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugxo6c3EvZkZGen8eaN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0MG1VkiFCZxQxg794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3nSuDFDjpcBaDBdF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
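A raw response like the one above can be turned into per-comment codes with a small parser. This is a minimal sketch, not the tool's actual code: it assumes each response is a JSON array of objects with an `id` plus the four dimensions, and the allowed-value sets below are only those visible in this page's data (the real codebook may permit more). The function name `parse_response` is hypothetical.

```python
import json

# Dimension vocabularies inferred from the responses shown on this page;
# the actual codebook may allow additional values (assumption).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "developer", "company"},
    "reasoning": {"consequentialist", "unclear", "virtue", "deontological"},
    "policy": {"regulate", "liability", "none", "industry_self"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "indifference"},
}

def parse_response(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}.

    Rows with a missing id, a missing dimension, or a value outside the
    known vocabularies are skipped rather than stored half-coded.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Looking up a comment ID in the returned dict then reproduces the "Coding Result" table for that comment, e.g. `parse_response(raw)["ytc_UgzUrlFSrmKEOxF9n-N4AaABAg"]["emotion"]` would give `"resignation"` for the response above.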