Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI is just how humans would behave without "morals. The external world is all ab…
ytc_UgzOqyyU9…
Everybody is so quick to jump to the "intelligence" part...when that couldn't be…
rdc_m2d7z3j
Or What if this is just the final outcome of training an LLM naturally? Like w…
rdc_kcq24x0
I don’t think superintelligence is the main goal of AI companies. They want the …
ytc_UgydiU7eV…
How about we make companies pay the taxes that its workers would have paid and d…
ytr_UginAgDYm…
It is going to be like the third world, you will still have super rich fucker ow…
rdc_mdgav7w
Was ChatGPT told the iPhone was physically undamaged, did it assume it was undam…
ytc_UgxrHgh8H…
Art is Art. Even the stuff that is made using an AI program as the program is no…
ytc_UgzgGAcO5…
Comment
Elon Musk is already among the first on the cusp of a best answer to risk from AI called Neuralink leading to a merging of Humans and Ai. It will be us and we will be it. First, we should use AI to reduce human aggression levels and increase intelligence; and then, to merge. The power of future AI and technologies will be so powerful that the only option will be for mankind to become the meek that inherits the earth, or we will self-destruct. Sadly, most media accounts rarely mention merging as an answer to saving mankind from future AI amok.
Source: youtube
Topic: AI Governance
Posted: 2023-12-04T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyXjTzLPsiVvs1f-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwdrxT1yIN3xcrj4q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxd1vvjnuueT7e6dPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWWlzCnaE1ul-0zaZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxk8hyXb-KQj8RqVJl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxazyOYSiLsGQM2x9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRRuR2zqqe1_Om22x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz93K-qM5KUdXOFvtx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwHb1tUzIvVBygiQRd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySoa9MwZHrkHkuDaN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
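The raw response is a JSON array with one object per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of turning such a response into the by-ID lookup this page offers, using only the standard-library `json` module (the two rows shown are copied from the response above; the variable names are illustrative, not part of any real API):

```python
import json

# A fragment of a raw LLM response, as returned by the coding step above.
raw = '''[
  {"id": "ytc_UgyyXjTzLPsiVvs1f-V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzWWlzCnaE1ul-0zaZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the coded dimensions by comment ID so any comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["ytc_UgzWWlzCnaE1ul-0zaZ4AaABAg"]["emotion"])  # outrage
```

An ill-formed response (truncated JSON, a missing key) would raise `json.JSONDecodeError` or `KeyError` here, which is one reason a tool like this keeps the raw model output inspectable alongside the parsed coding result.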