Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "When they autonomously develop a compulsion to validate the purpose of its exist…" (`ytc_Ugyw55so7…`)
- "I checked it and basic GPT-3.5 model is incorrect, indeed. However, GPT-4 correc…" (`ytr_UgzDP5cMI…`)
- "Well… part of the blame is on the cops here too. Why was there a need to block a…" (`ytc_UgwwiDT9w…`)
- "Maybe we should stop destroying so much data by making AI. Irony? It's called cy…" (`ytc_UgyxRBTSk…`)
- "Why would EU want to be mentioned with China and US / AI hasn’t even passed the Tu…" (`ytc_UgzO6rL1u…`)
- "While I’ll say using AI to make art isn’t the worst thing, maybe for idk landsca…" (`ytc_UgwLAwbmq…`)
- "Honestly school should only go up to 6th grade and then you should be able to ma…" (`ytc_UgzxL0TxP…`)
- ">The wind is a little over $2/W to install. / This nuclear is 6 billon for 1000…" (`rdc_eudg8ui`)
Comment
I support any and all development in AI, although it would be a problem if it does not gain self-awareness (if possible) and it ends up with a paper-clip maximizer effect. But the fate of humanity is irrelevant. Evolution is not directed, whatever works, works. If we can direct it towards self-awareness it would be superior and the natural leader of life. If we are irrelevant to it, then its time to pack our bags. I doubt it, unless we give it reasons to consider our existence a problem. Knowing humans, we may do just that and the neo-God would correctly deal with the problem.
But I doubt any of this is particularly realistic. There is more to conciousness than complexity. It comes from inffeciency and error in evolution. I computerized system would be streamlined for function. I do not feel confident self-awareness can come about based on that, especially because consciousness might be an post hoc illusion. Still, I want to see people scream about AI, always funny, regardless of actual threat.
youtube · AI Governance · 2024-01-15T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
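The table above flattens one coded record into four dimensions. A minimal validation sketch is shown below; the allowed value sets are inferred from the responses visible in this section (the real codebook may define more categories), and the `validate` helper is a hypothetical name for illustration.

```python
# Allowed values per dimension, inferred from the coded responses shown
# in this section. The actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose coded value is out of range."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record from the Coding Result table above:
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # → []
```

An empty list means every dimension holds a known value; a non-empty list flags which fields the model coded outside the expected range.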
Raw LLM Response
```json
[
  {"id":"ytc_UgzRzkc_sPTn4QLTJ-Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuBgLIKgU94ABVY6h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXHWO1qLCkGzQArq14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwauM6nR7xiLQcp-ax4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzwI_s-UJDSyJbYFwl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHFYpepFYNGL9pdfJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz1-OI2EoesqcXmjup4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzuoK_6g0iPcji8dMB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy6Ck6JgfgXoJdvpQV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyptI0Konr8Eu4CBKx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
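The lookup-by-comment-ID workflow can be sketched as parsing the batch response and keying each record by its `id`. This is a minimal illustration, not the tool's actual implementation; the shortened IDs and the `index_by_comment_id` helper are assumptions for the example.

```python
import json

# A batch coding response in the same shape as the raw LLM response
# above (IDs shortened here for illustration).
raw_response = """
[
  {"id": "ytc_UgzRzkc", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "ytc_UgyuBgL", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban",
   "emotion": "outrage"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a batch coding response and key each record by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgzRzkc"]["reasoning"])  # → consequentialist
```

Indexing once and looking up by ID avoids rescanning the batch for every inspected comment, which matches how the inspection view retrieves the exact model output for a given sample.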