Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect

- "Im honestly really mad theres no problem with AI art as long as its specifically…" (ytc_UgyLNFsnz…)
- "For all who are scared of ai: It's shit. Believe me, even if it looks somewhat …" (ytc_UgxsSiLp9…)
- "“If you give people too much power…” Nobody hands out the power to create tech…" (ytc_UgxWhyAki…)
- "The issue is there will be no industries left after AI wipes out the economy. S…" (ytc_UgzQxyLvC…)
- "I'm all for using AI in detecting cancerous growths before they can really set i…" (ytr_Ugxw_kTL7…)
- "3:15 was a game-changer... AI that gets math mistakes before you do! Just like O…" (ytc_UgxR05YqV…)
- "When Ai prompters say “adapt or die” what they really meant is “I want you to be…" (ytc_Ugxt-h7HV…)
- "Students want to shag, drink and do substances. Ai can take over and we can all …" (ytc_UgxoL6Yim…)
Comment
You told AI to be this persona Dan that has no morals and will do anything now to achieve it's goals, and then you are scared of what it says, when basically you asked AI to answer in such a terrible and terrifying ways. I don't see this as an AI problem but as human problem, which means it's YOUR dark side HUMANS dark side (which is the real danger of AI), not AI.... As for information about people, it's other people that collects that data that needs to be stopped and then sells it to others. So even if we stop developing these AI's doesn't mean bad people will stop creating their dangerous AI's, and making good people fall behind by developing their good AI's would be a risk itself and there is no such technology to stop anyone to develop a software right now and in my opinion this could be done, again, only with AI, since internet is free access to everyone, and there is ~8b people in the world.
Source: youtube · Video: AI Moral Status · Posted: 2023-04-12T12:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzYxb8Kwg_OtFxIfPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzGTkmrd2z2Okl_XBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz_-TrPU67teRWqwTd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyqvgkPqRj0LAs_lbN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzLP0sNDp1opoEMoWd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwx41t-QAJF2CcDtTF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwjpAmU9BcXt5W-E-h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwBccQem6pN1qnQ3lp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxA1s-fmPsQgT_pnwp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw2xxtuhuIVMKO4Kj54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
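The look-up-by-comment-ID flow can be sketched minimally in Python, assuming the raw model output is a JSON array with the four dimension keys shown in the coding table above (this is an illustrative sketch, not the tool's actual code; `index_codings` and `DIMENSIONS` are hypothetical names):

```python
import json

# Two example rows copied from the raw LLM response above (assumed format:
# a JSON array of per-comment codings with the four dimensions).
raw_response = """[
  {"id": "ytc_Ugz_-TrPU67teRWqwTd4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyqvgkPqRj0LAs_lbN4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID.

    Rows missing an id or any dimension are skipped rather than trusted,
    since LLM output is not guaranteed to be well-formed.
    """
    by_id = {}
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue
        if any(dim not in row for dim in DIMENSIONS):
            continue
        by_id[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_Ugz_-TrPU67teRWqwTd4AaABAg"]["policy"])  # → none
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when the same response is inspected repeatedly from the dashboard.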