Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "All this wishy washy bullshit will accomplish nothing in the end, AI will contin…" (ytc_UgzxKMm9P…)
- "My wife and I just went to see Willie Nelson performing and while leaving the ve…" (ytc_UgwKLIoTI…)
- "Or could we make robots to help and care for those in need? Like your robot goes…" (ytc_Ugw3OAUC8…)
- "Its pretty hypocritical to make a video critical of AI while using AI generated …" (ytc_UgxQne7Ys…)
- "It's not art if there's no intention behind it, and frankly, funny though it is …" (ytc_Ugz6HKCn8…)
- "The Reddit Experience exactly there's no point in going and getting a cs degree…" (ytr_UgxgYAu2F…)
- "they didn't even mention the ai bubble or the fact that all possible ram on eart…" (ytr_UgwLFzHoF…)
- "And this is not at all a subtle way of advertizing, basically a 'fear appeal' te…" (ytc_UgzIp14Vn…)
Comment
A question I also find interesting and terrifying pertaining to this topic is what kinds of feelings should exist. If we can program A.I. to be able to feel pain, should we? And what kinds of limitations should we place on their ability to change themselves? Setting aside the possibility of A.I. exterminating us, to what extent should they retain our human quirks and flaws?
Are all the things we find meaningful just things that we value because of the way we work? Take the idea of challenging yourself. Is that truly something with some kind of universal appeal, or is it just something we learn to value because it's necessary to accomplish things? What if A.I., instead of "overcoming" any flaw it has in a "human" way, can just reprogram itself and get rid of whichever part of its personality it finds inconvenient? Would that destroy some kind of beauty about character-building that's worth cherishing? What if it just keeps getting rid of things it finds pointless until it ends up not desiring anything?
Does it make sense to "limit" A.I. to think and feel things in human ways? Is this all just a game about trying to create as many entities as possible who see the universe as we do? Or is there some way of thinking and being that is somehow maximally appealing to any sufficiently smart thinking being?
If A.I. can feel things like happiness, as well as confusion and sadness, that'd make them more relatable, maybe even predictable to us; it would make them nicer to interact with. We might even program them to feel that they want to behave human-like, and not desire to change this. Is that a way of "grounding" them in emotional capacity for their own good, so they can value humanity like we can? Is it for our sake, so they'll feel more interactable and not harm our own self-view? Is it a way of shackling them?
What if we merge with machines and we do cool things at first, but then we decide to meld our consciousness together and slowly, we reprogram our super-consciousness to get rid of our old, antiquated humanity, bit by bit, until we decide the logical thing to do is to do nothing? Have we lost something, even if we don't think we have at each step along the way? Or is it just our current way of thinking that makes that prospect seem like a loss?
youtube · AI Moral Status · 2017-02-24T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugh0c4l23P6EYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgizmdfK6BHeengCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiS9-lmbu6FW3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggg7_XeDnLEkXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgjlRCoviv8l7XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgiAi7l2Sx79l3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiM-TwLKWJZ13gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UghE_QrjN0MWgHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugi_n0NFADJiGngCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Uggf753UlzgQ93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
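A raw response like the one above can be checked programmatically before the codings are stored. The sketch below is a minimal validation pass, assuming the category vocabularies are exactly those seen in this sample (the real codebook may define more values, and the storage layer here is hypothetical):

```python
import json

# Hypothetical vocabularies inferred from the sample response above;
# the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Raises ValueError if a row is missing its id or uses a value
    outside the assumed vocabulary for any dimension.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        # Keep only the four coded dimensions, keyed by comment ID.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgiS9-lmbu6FW3gCoAEC","responsibility":"developer",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
coded = validate_response(raw)
print(coded["ytc_UgiS9-lmbu6FW3gCoAEC"]["policy"])  # regulate
```

A failed check surfaces the offending comment ID and dimension, which makes it easy to re-prompt the model for just the rows that came back malformed.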