Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
What if North Korea and China used the full strength of their land forces to hel…
rdc_mcsqtd1
Don’t use AI for things where you are not able to judge the result as correct or…
ytc_UgwFBtecP…
today on February 07,2025 I again asked about the varghese case. and chat GPT re…
ytc_UgyqA6612…
Honestly the same thing happened with me last month. A guy sent my deepfake phot…
ytc_UgwSdHIPS…
I've been in a very depressed, anxious and hopeless mood since the whole AI thin…
ytc_UgySfxozj…
As a parent this is terrifying, I was the last generation to grow up with no tec…
ytc_Ugxzk17Kv…
What's up)
@roxsy470 you may be right, depending on the point that you are maki…
ytr_UgzHmVm8J…
It is to show the "effort" that prompters put into their art, not thr result. Al…
ytr_Ugy2Hmdmv…
Comment
A concerning ethical issue arose when the team suggested ChatGPT as the most reliable entity for making final decisions, even determining whether my research on avatars—focused on providing intellectual and emotional support—was valid. Despite investing 10 hours of senior-led research, the decision was deemed beyond my authority and assigned to ChatGPT. Another GPT model, trained with biased data, dismissed the first trial using software as inconclusive, raising concerns about biased inputs and over-reliance on AI outputs (Kun, Rich, and Hartzog, 2020).
While AI, including ChatGPT, offers potential for fostering connection and empathy, dismissing research solely based on AI outputs is shortsighted (Danks and London, 2022). Decisions in critical areas must remain collaborative, with AI serving as an advisor while humans retain ultimate decision-making authority. This underscores the urgent need for governance and boundaries to prevent over-reliance on AI at the expense of human expertise.
These experiences, reflecting fears of being replaced and undervalued, were mirrored in this experiment when team members trusted AI over the expertise of a senior with giftedness. This diminishes the value of human insight and raises ethical concerns about dehumanization in decision-making (Davenport and Miller, 2022). Placing trust in machines over human expertise signals a troubling shift—one that prioritizes technology over both emotional and intellectual judgment. The term 'neurofascism,' coined by the brilliant and handsome Idriss Aberkane, a scientist and speaker with three PhDs, including one in Neurosciences, introduced the concept during his conference Le Futur de l'Éducation face à l'Intelligence Artificielle, to underscore the dangers of prioritizing AI over human judgment.
youtube
AI Responsibility
2024-12-19T01:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyIg1wYSStfyDhxvnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzd7RdrLeMk6Wppe_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxxGzSFs7dpmgLS-mN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFW0jQcWqyghK93et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQQkHCJak9LzGoWA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwWDguM5O2Sjv1GRKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxXsRxxQyLT6f7rLF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9A9adDeSAFDujNk14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyIdw6DkNbEBt4_p-J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYQEHMGoPKuyLabQl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
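The raw response above follows a simple per-comment schema: an `id` plus four categorical dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed and validated before use, assuming the allowed category values inferred from the rows shown (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample rows above;
# this is an assumption, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed", "unclear"},
}

def parse_coded_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    raising on malformed JSON or out-of-vocabulary values."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = parse_coded_response(raw)
print(codes["ytc_x"]["policy"])  # regulate
```

Validating against a fixed vocabulary catches the most common LLM-coding failure mode, a model inventing a label outside the codebook, before the code reaches the aggregate table.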