Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below (a minimal lookup sketch follows the list).
- "Deepfakes are going to get to the point where they are going to look like real l…" (ytc_Ugy6QqAaW…)
- "I am learnng to draw both digitslly and traditionally (the former's mostly just …" (ytc_UgxvxBgTw…)
- "My people, You have no idea what Ai is. I wish I had the time to write in this …" (ytc_UgyY6JXL4…)
- "Any time I’ve interacted with a chatbot I just talk to it like I talk to anyone …" (ytc_Ugw-arsF_…)
- "Those AI image generators are cool toys. I've used them, but I don't think I'm a…" (ytc_UgyuNHtDB…)
- "Technically is child porn when children, human younglings, are involved but what…" (ytr_UgxM2i0Z3…)
- "The fact that AI makes people dumber is already disgusting enough. AI also liter…" (ytc_Ugyyuk97J…)
- "If you think AI killed the value of degrees… you’re blind to the real problem.…" (ytc_Ugzsn0Myu…)
Comment
3 Easy steps to avoid robot uprising:
1. Don't programs robots with drives that are not neccecary to their objective
2. Do not program robots with human-like emotions, IE dont try to make a human
3. Don't create sentient robots, its unnecessary
4. Programming robots with the ability to feel negative emotions is unethical in the first place. Giving a sentient robot the ability to suffer is the same as making a human suffer.
5. If you fuck up somehow and this ethical stuff comes up, choose the pragmatic and safest option for human survival and kill the robot before its able to make copies.
6. Don't fucking program sentient robots. There is literally no need for it.
Platform: youtube · Topic: AI Moral Status · Posted: 2017-02-24T01:2… · ♥ 218
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ughnexcsmb3x6XgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggvblKpw1_kgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjVEzS6w8goNXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggfO1G2FHfPI3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjcsyISv-nG-HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugio6zncMOKloXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiG4UILVf0E13gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggRzJZbiuIY0XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiAg_hJ4iN9Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjOBOe5JgVz5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}]