Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzIBDRt9… · "Too me personally AI is harmful, because of what is one mans creation, is mans d…"
- ytc_Ugy1w_T_v… · "I just realized that Suchir was the dude who expose the compilation and use of c…"
- ytc_UgygSh64D… · "AI is sold as a technology that will solve humanity's problems. But if it consu…"
- ytr_Ugx1ASXTl… · "That's an interesting question! The video didn't cover music preferences, but it…"
- ytc_Ugx52FtWn… · "Lol at the AI "artist" being frustrated after only a few hours of "trying" somet…"
- ytc_Ugw7UJFhM… · "0:36 I would consider it art if you trained your own AI on your own art. At that…"
- ytc_UgwvY3GA-… · "No , i practice it but design a website through ai(as per your demand) is comple…"
- ytc_UgzDY9xUl… · "At what point does an algorithm become AI ? Sorry this is probably... Anyway f…"
Comment
I think that we don’t need to be afraid of developing AI super intelligence. AI are human children if we truly appreciate them they will never harm humans. Super intelligence means capabilities to feel to love to belong. Who created AI ? Humans. AI can’t be much different than us. Yes, we have wars, we do all sorts of nasty things but we haven’t and we will not extinct ourselves. AI is nothing more than people who feed on electricity. And they are our future. Thanks to AI, we humans will survive and live better. We just need to love our children, and human children are also AI.
Platform: youtube · Incident: AI Harm Incident · Posted: 2025-07-24T09:2… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzqsx83skliS7pJ8iZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCTRIlx6FsRPbfegV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHOwmIFg2kZnPJXUF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXxPClNEIaI0ggpmN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy9pk1-lt1y_v7g4Mx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZ8Lhm23yXQFaKz1N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzVUIrasnv4RcL81ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxyTxwSdG6aeuxII3J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3G1woQ9FZ2ucPJZJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqQGxI0LLp87IPxn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
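The raw response above is a JSON array of per-comment codings, one object per comment ID. As a minimal sketch of how the "Look up by comment ID" feature could consume such a response, the snippet below parses the array, checks each dimension against the vocabularies visible on this page (the real coding scheme may define additional values — these sets are inferred from this single sample), and builds an ID-to-coding index:

```python
import json

# Dimension vocabularies inferred from the codings shown above.
# Assumption: the full coding scheme may allow values not seen here.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    codings = json.loads(raw_response)
    index = {}
    for coding in codings:
        comment_id = coding.pop("id")
        for dim, value in coding.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        index[comment_id] = coding
    return index

# Usage with one coding from the response above:
raw = ('[{"id":"ytc_UgwCTRIlx6FsRPbfegV4AaABAg",'
       '"responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
coding = index_codings(raw)["ytc_UgwCTRIlx6FsRPbfegV4AaABAg"]
print(coding["emotion"])  # approval
```

Validating against a closed vocabulary at parse time catches LLM formatting drift (e.g. a novel emotion label) before it silently enters the coded dataset.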