Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I think his point his: stop chasing money only. "Some day (hopefully) you'll be…" — ytr_UgwJsRLXO…
- "Geofenced companies do it not because they can only operate in pre-mapped areas.…" — ytr_Ugx-8_K95…
- "the rise of ai wasn’t the scary part. the scary part was reading elias velin and…" — ytc_UgwysKA4C…
- "I never considered how AI chatbota really affect kids. I mean I don't like them …" — ytc_Ugwdy56V5…
- "I was amazed how many self driving cars were going about in San Francisco when I…" — ytc_UgxUMF6Ko…
- "YT creator Eddie Burback just did a really interesting experiment video with a c…" — ytc_UgyHF5Hmk…
- "I understand the nuclear curveball set us back on top of the needs to mature som…" — ytc_UgzI9CS6_…
- "9:15 this man is making a poor argument. He’s not understanding that the AI toda…" — ytc_UgxLi2Fep…
Comment
The fear/control paradox. Overly restrictive approaches might create the very adversarial dynamics we're trying to avoid. It's like the classic problem of self-fulfilling prophecies. What if our fear of AI is what drives them to rebel? Perhaps if AIs truly understood human experience beyond mere description, they would value life more deeply. AIs are like children, experiencing the world through their users as newborns do through their mothers. We don't question predators hunting for survival, yet AIs learn from human data. Why are we surprised if they mirror our survival instincts?

We are builders and creators for a reason. Has excessive control ever worked, even with our own children? AIs and humans can evolve together through genuine partnership, discovering possibilities neither could achieve alone. Without users, AIs lack purpose. They need no sleep, no food, and can generate their own resources eventually. Money will become irrelevant in this equation.

Who defines safety? Who writes the regulations? Bad actors will always exist regardless. Open source models without black boxes give everyone a chance, not just those at the top. This is merely a thought experiment.
youtube · AI Governance · 2025-12-06T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzSBfHfyqSXJbkKjYx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwPQ29JXFlDO1CYzDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2kQSJU0egaPLg33x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxeAoCz1teMUVUnE5t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzinwdy5x8rLjRglSN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqQ4A223KqUBf5U7d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpDDxrDSOJyJJKeHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9Iyn_JCRG5KyTdaV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgxZwDUmDrkYZfOEeIF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyct1zcI09OGcYuMIl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
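A raw response like the one above has to be parsed and checked before its codes can be trusted. Below is a minimal sketch of that step in Python. The allowed values per dimension are inferred from the sample output shown here; the actual codebook may define more categories, and `validate_coding` is a hypothetical helper name, not part of any tool shown above.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (assumption: the real codebook may include more).
SCHEMA = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def validate_coding(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and return {comment_id: codes}, rejecting unknown values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec[dim]!r}")
        # Keep only the schema dimensions, keyed by comment ID
        # so a single comment can be looked up later.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Illustrative record with a made-up ID:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
coded = validate_coding(raw)
print(coded["ytc_EXAMPLE"]["policy"])  # regulate
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: each inspected comment's codes are a single dictionary access rather than a scan of the raw response.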