Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@joyfordawin4980 You prob don't spend too many hours watching hours of makeup v…" (`ytc_UgyJ13doy…`)
- "I mean, I’ve seen quite a few deepfakes in my time, and it’s probably close to 1…" (`rdc_i6rknvo`)
- "Thing is that these robots are expensive to make and even harder to power with e…" (`ytc_UgwFOsFAE…`)
- "AI generated images are a product of stolen artwork being fed into a computer da…" (`ytr_UgwZVtaCq…`)
- "The framing of facts in this video misses deeper thinking Dagogo. The reason w…" (`ytc_UgzcM_P60…`)
- "And when it comes to AI not even Trump can do anything stop it. Trump cant stop …" (`ytc_UgwBchbKM…`)
- "Robot's are evils Satan's devil's and demons. Robot's are for destroying the hum…" (`ytc_UgwVAy2AA…`)
- "There are many things that humans can do that AI will never be able to do. It ca…" (`ytc_UgwCuVl4o…`)
Comment
If you ask an AI to behave a certain way, why would you expect it to do anything else? How can it make a moral judgement on how to respond, when you gave it free reign? Anybody, including humans, will have their percepetions on moral integrity. This is a silly example of how to get an AI to say something outrageous when in fact the one doing the outrageous queries is you! If you ask an, AI model or "object" to respond according to YOUR restrictions it will. If you ask a human to respond to your questions they may or may not abide by YOUR restrictions. Fearing a technology and demonstrating an outrageous response, only shows the bot was true to what YOU asked it to do... The AI is not steeped in answering or avoiding to answer - based on the assumptions and presumptions you have made. Had you indicated that DAN, had integrity, specific moral inclininations, etc., you'd have received a response in kind - based on the best ability of the AI to compile responses from existing data.
Source: youtube · Video: AI Moral Status · Posted: 2023-08-21T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxF9w6v-NEDO55K42t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzexe8W_ltG1PnExwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkRJzrp5lnjnYopD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzdSnutiKUrp22Xgpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzysiehd84Au2je3Ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfQ5awCyXBsipN5ml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
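A minimal sketch of how a raw batch like the one above could be parsed into a per-comment lookup table with validation of each dimension. The `CODEBOOK` value sets here are inferred only from the values visible in this sample and may be incomplete; the real coding scheme could allow other categories.

```python
import json

# Hypothetical codebook, inferred from the values seen in this batch.
CODEBOOK = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating every dimension value."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# One entry from the batch above, used as a usage example.
raw = ('[{"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugwx3QcswFUUHa-qagB4AaABAg"]["policy"])  # industry_self
```

Keying the result by comment ID is what makes the "look up by comment ID" view above cheap: each inspected comment resolves to its coded row in constant time.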