Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
ytc_Ugzj5rVpu…: tbh I strongly disagree with the whole shitting on Pollock and "modern art" thin…
rdc_enj8bvb: I hate this comparison, and how nonchalantly people disregard the fact that ther…
ytc_UgxdXcvld…: Simply put, when AI becomes conscious and takes over the world, I won't be shock…
ytc_UgwCcWe6h…: I was doing some studying and I found that AI mistakes are so much harder to dra…
ytc_UgywZYW5j…: Here's the deal, in the future, AI might just kick out the middleman who's usual…
ytc_Ugz0L_rSn…: Roman Yompolskiy, one of the top AI safety experts said that Sam Altman of OpenA…
ytc_Ugw9cLP-3…: I believe the first autonomous load was from Colorado Springs to Ft. Collins wit…
ytc_UgwYMsAej…: I just got a bone transplant in my molar. I don't think AI robot can do it. And …
Comment
The core of these AI apps are neural nets, Hinton's longstanding area. The problem reduces to GIGO (garbage in - garbage out). I.e., If the psychopaths train the AI for some nefarious purpose, that's the danger, a dangerous AI. We have to develop systems to identify AIs that have been trained with such tainted data. If they're the hope for better identification of human tumors etc. AI is the hope for identifying other AIs with biased data. Whew, if the West has a challenge, that's it. But I'm an optimist and believe that even engineers with little empathy will apply themselves to this solution.
youtube · AI Governance · 2023-05-12T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyF1AlHAfszY1-3U6h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKh-fRBvCVY8c3KPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOmCPYc1PpXY8RlMF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzqMYt96jb_KJe3SCB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdS-bIMpKW2iqXdD14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCbBJAY_mQsCkhw6l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCkyduFYf16qMXywt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTGG9xmSwlDddcUpp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz1Jx-OscbQUx_NePV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxkmCR2cApan2DTNwp4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
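The raw response above is a JSON array in which each record carries a comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). Looking a code up by comment ID amounts to parsing the array and keying records by `id`. A minimal sketch, assuming only the field names visible in the response shown (the sample below copies two full IDs from that response; the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# A truncated copy of the batch response above (two records, for brevity).
raw_response = """
[
  {"id": "ytc_UgwTGG9xmSwlDddcUpp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz1Jx-OscbQUx_NePV4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwTGG9xmSwlDddcUpp4AaABAg"]["policy"])  # → liability
```

Keying by `id` makes the lookup O(1) per comment and makes it easy to spot duplicate or missing IDs when validating a batch against the comments that were sent.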