Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:
- `ytc_UgyzkjqaA…` "Proof Directors, CEOs etc are gullible and easily manipulated by pretty presenta…"
- `ytc_UgxYLPEiS…` "One year later: "Please bail us out again... In 12 months AI will CHANGE THE WOR…"
- `ytc_Ugzbv-nYF…` "I ran the same questions through ChatGPT under the same rules and got a slightly…"
- `ytc_UgzhygXgO…` "Apparently they have revised what ChatGPT will respond with since the A.I has be…"
- `ytr_UgyFM8ck-…` "You realize that I said some of these right. Not all of them were developed …"
- `ytc_UgzS2Yuz4…` "I don't like the new AI stuff and how some people use it for everything, but I h…"
- `ytr_Ugzfk9S0v…` "@vtranoff9851 Thank you for your comment! Honestly, I feel you on the robot stru…"
- `ytc_UgzA95Txp…` "how can AI be smarter than humans if humans programmed them? there's no way. the…"
Comment
"Our mission is to ensure that artificial general intelligence -- AI systems that are generally smarter than humans -- benefits all of humanity." -- Sam Altman and OpenAI
"They think they are positioned to decide what 'benefits all of humanity'." -- Critic
No, that's not what he said, at all. He said the mission is to *ensure* that it benefits all of humanity. He didn't say that the people in his company alone are going to decide what benefits all of humanity. He didn't specify how that would be figured out, because, as these "Utopic tech bros" have admitted before, they don't know that yet, but it will likely be a multidisciplinary process.
youtube · AI Responsibility · 2023-05-15T05:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwljBWmxzgxcaXOQ9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZYhJdAKAJtYh6C-14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxdB0d6ADB3PyXXAop4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzCLSZbg6utOqRsat14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_oPrcYN7A9J6nXtV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwhq586lNWboAYmuf94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxc2Rep3p7Qc3XvxYN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw55r1btDusdw_xqJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwqNidnglB6R1fVhu94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyfL2kFDuFf2dnHpUp4AaABAg","responsibility":"industry","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
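The lookup-by-ID workflow described above can be sketched as a small parser over the raw response. This is a minimal illustration, assuming only what the page shows: the model returns a JSON array of coding records, one per comment, each keyed by `"id"` (the variable names are hypothetical, and the array here is abbreviated to two of the records shown above).

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (abbreviated to two records from the response shown above).
raw_response = '''[
  {"id":"ytc_UgwljBWmxzgxcaXOQ9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdB0d6ADB3PyXXAop4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

# Parse once, then index the records by comment ID for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up a single comment's coding by its ID, as the page's
# "look up by comment ID" feature does.
coding = by_id["ytc_UgxdB0d6ADB3PyXXAop4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: developer mixed
```

The same index also supports spot-checking a coding-result table against the raw output: the "Coding Result" values above should match the record with the corresponding ID field for field.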