Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "@Jammer_r I'm dumb and misread your comment, my apologies. I took the "I want AI…" (`ytr_Ugy6VD_Mp…`)
- "AI is already affecting all positions since the company expects you to deliver m…" (`ytr_Ugww3Ypc7…`)
- "While the US is attacking China/Russia for vaccine diplomacy, this is what is re…" (`rdc_grqm74y`)
- "calling yourself an AI artist is like driving 10 kilometers and saying you ran a…" (`ytc_UgxOQdc8S…`)
- "It seems to me that it would be easier and much safer to automate bosses instead…" (`ytc_Ugz8ad5fz…`)
- ""REMEMBER: 'Artificial Intelligence' means you aren't talking to a real person, …" (`ytc_UgyYrAq-4…`)
- "Towards the end of this video was what most resonated with me when he said "driv…" (`ytc_Ugx7zeZgi…`)
- "If all human art depended on mimicking previously existing art then we wouldn’t …" (`ytc_UgyT7lMSC…`)
Comment
As a seasoned computer programmer with over four decades of experience, I cannot help but feel a sense of apprehension when it comes to the progress of AI technology. While it is undeniably impressive that we have reached a point where AI can generate code and create virtually anything, it is disconcerting to see that not everyone is taking into account the essential principles laid out by Asimov's three laws of robotics. As someone who has always placed trust in the idea that these laws would be upheld, it is troubling to see that some individuals or organizations may be disregarding these fundamental guidelines. It is my hope that we can all work together to ensure that AI technology continues to be developed and utilized in a safe, responsible, and ethical manner.
Platform: youtube · Topic: AI Governance · Posted: 2023-07-07T14:1… · ♥ 190
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
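A response like the one above is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of parsing such a response, validating each record, and indexing it for lookup by comment ID — the allowed value sets below are assumptions inferred only from the values visible in this page, not the tool's actual codebook:

```python
import json

# Allowed values per dimension. NOTE: inferred from the sample output above;
# the real codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Lookup by comment ID, using one record from the response above.
raw = ('[{"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_Ugy3FlxL_yyTpKA266J4AaABAg"]["policy"])  # -> regulate
```

Validating before indexing means a malformed or out-of-schema record fails loudly at ingestion rather than silently skewing downstream counts.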