Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Imagine how you could bring down the world population and make it seem like it’s…” (`ytc_UgwAf_AUs…`)
- “Facts. Every time is going to be different and all this talk about ai replacing …” (`ytr_Ugyay_pjx…`)
- “Idk why the actors are worried about being replaced by AI I doubt the technology…” (`ytc_Ugz6X3OcT…`)
- “@OsynovyjKil I appreciate your comment! The Russian robot does seem like it's st…” (`ytr_UgxuTFH9_…`)
- “Your L takes are such Ls that I'm afraid that this video existing will have made…” (`ytc_Ugzr_6LHB…`)
- “Or they’re just doing like basically every Ai, giving it an access to the intern…” (`ytr_UgwWNFzNs…`)
- “I find it funny that some AI bros talk about how "artists are lazy, AI is the fu…” (`ytc_UgxpIhjTR…`)
- “The "ai is inevitable" argument is so stupid, too. In medicine? Yeah, probably. …” (`ytc_UgztnvX6k…`)
Comment
> reading the comments here I think Kurzgesagt skipped an important video explaining the difference between an AI and a program (though they scratch the surface with siri and all).
> People really think we just program softwares that will feel pain or happyness, letting us decide what should make the software happy or sad.. that's not what an AI is, or will probably be in the future at least.
> We're not gonna tell that AI how it should feel under each circumstance, it will learn on it's own what it feels to "be", wether it's through physical stimuli in a robot form or interaction in text or vocal etc.
> that's what that video is all about, the future AIs that will come to be, not the fucking softwares inside you iphone or fridge
> And if someday we get such a smart AI we won't really have a debate if they deserve rights or not, they'll take it on their own, it's not animals we can just kill or slap, a powerful AI will be better at breaking and building securities that we ever will.
Platform: youtube · Video: AI Moral Status · Posted: 2017-02-24T17:3… · ♥ 28
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
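Each coded dimension takes a value from a small closed set. As a quick sanity check on model output before accepting a coding result, a minimal validation sketch; note the value sets below are inferred only from the values visible in this dump, not an authoritative codebook:

```python
# Hypothetical codebook, inferred from the values that appear in this dump;
# the real coding scheme may include additional categories.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record shown in the table above passes validation.
bad = validate({"id": "x", "responsibility": "none", "reasoning": "unclear",
                "policy": "unclear", "emotion": "indifference"})
print(bad)  # -> []
```

Any record that returns a non-empty list here would need re-coding or manual review.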
Raw LLM Response
```json
[
{"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
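The raw response is a JSON array with one record per comment ID, so the by-ID lookup offered above reduces to indexing a dict. A minimal sketch, assuming the response is available as a string; the two-record snippet is abbreviated from the response shown here:

```python
import json

# Abbreviated two-record excerpt of the raw LLM response above.
raw_response = '''[
  {"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Index the records by comment ID for constant-time lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_Ughn2l5l5nUY93gCoAEC"]
print(record["emotion"])  # -> indifference
```

The same index serves the "Look up by comment ID" panel: a missing ID simply raises `KeyError` (or returns `None` with `codes_by_id.get(...)`).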