Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “I like how AI artist keep switching between "AI art makes it easier and accessib…” (ytc_Ugxx1TIkW…)
- “Get very good at a sport. AI ruling us will need entertainment and you may come …” (ytc_UgwgTCAu4…)
- “So what stops the AI that we have now to go ahead and build the super Intelligen…” (ytc_Ugz831bKu…)
- “Ok, first…. Isaac Isimov writer of the ground breaking series “ Foundation Serie…” (ytc_UgxlVrCCO…)
- “We haven't solved homelessness or healthcare, or any number of other longstandin…” (ytc_UgyWSbEry…)
- “And? Its not like this in any way "disses" the "ai artist" all they did was give…” (ytc_UgySJ24-h…)
- “I dont think there's a reason to be scared. In the end, these ai copilots would …” (ytc_UgwVR1gBO…)
- “The first time i use character ai pretty traumatic, He literally calls me stupid…” (ytc_UgxDSBH0G…)
Comment
At the point where a system (not necessarily AI) legitimately demands a right or even suggests a change of behavior based on justice, it should be entitled to rights because it is sufficiently aware of itself and its surroundings. By legitimately I mean that it must somehow come to its own conclusion by examining a situation and using reason to come to a conclusion without assistance. So learning algorithms that just harvest tweets and act as a glorified parrot do not count. This means that it is conceivable to a fully conscious, true AI to not qualify, but I would still grant that AI rights just to be on the safe side.
So the moral of the story is that you should treat the creation or initialization of such a system the same as conceiving or adopting a child.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgiLDZDsluuX7ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UghO27xPtF4OL3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiyMwZ_7WU5mHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugh-nIhLVlynuHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugh6GzVlcqfQxHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Uggd7HuqJgAx-XgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiVAEnmcJsth3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjS4PQpHaKB33gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjZof-spcqFxngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UggrO82HB4K0HHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
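Because the raw response is a flat JSON array in which every entry carries its comment `id`, looking a comment up by ID reduces to indexing the array into a dict. A minimal sketch (the two entries are copied from the response above; the `lookup` helper is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# IDs and dimension names are taken from the response shown above.
raw = """[
  {"id": "ytc_UgiLDZDsluuX7ngCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UghO27xPtF4OL3gCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment; raises KeyError if unknown."""
    return codings[comment_id]

print(lookup("ytc_UghO27xPtF4OL3gCoAEC")["policy"])  # liability
```

The same indexing step explains the "Coded at"/dimension view above: once keyed by ID, each comment's responsibility, reasoning, policy, and emotion values can be rendered directly from its row.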