Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "They're not talking about a general AI, they're talking about automated weapons …" (rdc_cthqepc)
- "Exactly! It will happen. AI will replace teachers, doctors, artists, drivers, cl…" (ytc_UgwVnO0hD…)
- "How does that compare to companies training ai by taking every piece of music an…" (ytr_UgwwKCFXL…)
- "mankind has finally created a Frankenstein monster in reality .. as Churchill …" (ytc_Ugz71a0tp…)
- "AI is only as accurate as the data it receives from human sources. The fact is,…" (ytc_Ugy01oxlh…)
- "You bring up an interesting point about wisdom and intuition! Sophia does emphas…" (ytr_UgyMWmk1H…)
- "We’re afraid of what AI will do…what I wonder is how can we possibly predict the…" (ytc_UgyARcdKG…)
- "After having a friend come back from vacation there recently its absolutely stup…" (rdc_f9f6sqi)
Comment
This moral dilemma illuminates the problems of algorithms. However, we humans are nothing more than complex computers that are constantly reprogrammed by our experiences in real life. Our genes are our hardware which also contains a set of core premises which for example include: survival and protecting your offspring. We are in no control of choosing our genes, nor are we in control over our environment which shape us through experiences. Therefore can the choices of human actions also be deemed as "not our responsibility".
Source: youtube · AI Harm Incident · 2015-12-08T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugj-WH6OpZhDSHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjQavChndvc5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghZ2CeGeDq4y3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugg8HfmGm2p6hngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UggesFpy1EznlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiRW9mWll7FTHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgifUAfLDoDb23gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg_yjdSah1yH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjkwzfB0yQ1NngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugh_9XnDJVggxngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"})