Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Actually they're building their own renewable energy power plant. Google has com…" (ytr_Ugwf_v-TT…)
- "It’s not a literal killing. It’s a subtle killing of human choice versus algorit…" (ytc_Ugyyn2jtD…)
- "4:56 if u keep manipulating the AI on whatever you want, it WILL pick up on it t…" (ytc_UgzjinPZQ…)
- "Who killed Open AI’s young genius employee? 🙄 Watch Tucker Carlson’s interview w…" (ytc_Ugyj7aiPs…)
- "Her answer was wrong 😂 she described ai instead of ml actually ml is nothing but…" (ytc_UgywP4ctz…)
- "I think it's fair to say this is one goal. Global Governance with A.I at the hel…" (ytc_UgzjGXbWo…)
- "If you want to drive with AI, you're in the wrong seat. You don't actually want …" (ytc_Ugy_B1Y_w…)
- "Adding on, there is plenty of work to go around, just not enough of it has a wil…" (rdc_gkr4o4x)
Comment
> i always find this argument a bit stupid. it is like comparing a nation firing nukes and killing million to a single human throwing a handgrenate. condeming the one guy throwing the handgrenade without taking the nukes into consideration. applied to this example; self driving cars will lowers accidents and deaths so significantly that talking about such chances and giving them this much thought is, in my oppinion a waste of time.

youtube · AI Harm Incident · 2015-12-08T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugj-WH6OpZhDSHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjQavChndvc5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghZ2CeGeDq4y3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugg8HfmGm2p6hngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UggesFpy1EznlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiRW9mWll7FTHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgifUAfLDoDb23gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg_yjdSah1yH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjkwzfB0yQ1NngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugh_9XnDJVggxngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"})