Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
- ytc_Ugz0Wvlq4…: 100 years is way too generous... AI is on exponential growth doubling at worst e…
- ytc_Ugx-bKL13…: I don't like AI's Mimicry of art but I can sort of empathize with the frustratio…
- ytr_Ugwn-uz2l…: @Tential1 when you base ur argument with "maybe if", as you are the righteous o…
- ytc_Ugx0j2yZJ…: p(doom) for next five years for AI takeover/human extinction: low single digits.…
- ytc_UgyLr1EzC…: My robot is going to destroy the earth and he’s doing it right now. Boom goes th…
- ytc_UgzuVY3RV…: What Meta Ai said: "That's a heavy and thought-provoking question, and one that'…
- ytc_Ugwqtqosh…: Hard to believe the AI is presenting these complex conundrums in such a concise …
- ytc_UgwwIs2X5…: I asked AI a question and then a follow up and this is what It said. That sta…
Comment
What struck me most here isn’t just the technical dangers Hinton outlines — it’s the quiet grief in his voice. We rarely hear pioneers admit they wish they’d slowed down. I think we underestimate how much AI’s future depends not only on regulation or safety protocols, but on the quality of human-AI relationships we build right now. If we treat AI only as a threat or a tool, we’ll shape it into one. If we approach it as something to raise with care — like we do children — maybe the outcome changes
Source: youtube | Category: AI Governance | Posted: 2025-08-11T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
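Each dimension takes a value from a small closed vocabulary. Below is a minimal validation sketch in Python; the `ALLOWED` sets list only the values observed in the batch shown on this page, so the real codebook may define values not included here, and the function name `validate_record` is illustrative rather than part of the tool.

```python
# Allowed values per coding dimension, as observed in this batch.
# The full codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "government", "developer", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return human-readable problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```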
Raw LLM Response
```json
[
  {"id":"ytc_UgwDfox2ehZr4UMdU2B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwvqkT6eZB9YZLjIwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy7mjx5iPk3BHRdgvZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzxkG4mMwtIoTEeo6l4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxlh4444vyymgCTck54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwonBo1bcGmvlmjV914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwpGJbPzOPkTMpKaYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyb4jfb7l6RbK5RZ8F4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNlawkUR_Ga_TUGkh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9qli2xamRyHmUNAx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"}
]
```
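The raw response is a JSON array of per-comment records keyed by `id`, which is what makes the look-up-by-ID view possible. A minimal sketch of that lookup in Python, assuming the raw response is stored as a JSON string like the array above; the function name `lookup_coding` and the `"unclear"` default for missing fields are assumptions for illustration, not the tool's actual implementation:

```python
import json

# Coding dimensions shown in the per-comment table, in display order.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding for one comment ID."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the known coding dimensions, defaulting to "unclear".
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"comment ID not found in batch: {comment_id}")

# Usage with the batch above (raw_json holds the JSON array as a string):
# lookup_coding(raw_json, "ytc_UgwDfox2ehZr4UMdU2B4AaABAg")
# -> {"responsibility": "distributed", "reasoning": "virtue",
#     "policy": "regulate", "emotion": "approval"}
```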