Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Before AI destroys all humans the alien Gods will return and defeat it saving hu…
ytc_UgwvXK_Mo…
I hate those AI customer bots with a passion. I have not had one positive inter…
ytr_UgwwtUllc…
There was a take that I really appreciated, I don't recall the exact phrasing, b…
ytc_UgzFBqQJp…
"He wouldnt be alive if he wasnt driving a Tesla"
Why are we all assuming he do…
ytc_UgwL7CEZB…
Who is going to buy the stuff AI and robots make if only the top 1% have jobs? I…
ytc_UgyJAwrBW…
I didn't want to go into all these superflous details, but you asked. Anyways, b…
ytc_UgzrDnEiS…
We should start using the facial recognition software on ICE. Test it out to see…
ytc_Ugx9MIa_u…
I think there is a language problem. We have a word for the type of intelligence…
ytc_UgwByju0k…
Comment
1. **Dual Risks of AI: Misuse and Superintelligence.** Geoffrey Hinton emphasizes two major categories of AI risk. The first involves human misuse of AI, such as cyberattacks, election interference, and autonomous lethal weapons. The second, more existential, risk is the emergence of superintelligent AI that surpasses human intelligence and possibly deems humans irrelevant or obsolete. He warns that we have never before faced an intelligence superior to our own, which makes this an unprecedented and profound challenge.
2. **Challenges Around AI Regulation.** Current regulatory frameworks, especially in Europe, do not adequately address the significant threats posed by AI. A notable regulatory gap is the exemption for military uses of AI, which governments are unwilling to regulate for strategic and competitive reasons. This lack of global consensus or effective governance may accelerate AI development without proper safeguards, fueling a risky "race" exacerbated by capitalism and geopolitical rivalry.
3. **Impact of AI on Employment and Society.** Hinton points out that AI is likely to cause massive job displacement across many intellectual and creative sectors, faster than previous technological revolutions. While some jobs, such as plumbing or others requiring complex physical manipulation, may persist longer, most mundane intellectual labor is at risk of automation. This will likely exacerbate wealth inequality, as companies supplying or using AI profit while many workers lose both employment and the social dignity tied to meaningful work.
4. **The Superintelligence Imperative: Controlling a Growing Power.** The evolution from current AI to superintelligence represents a fundamental shift. Unlike humans, digital intelligences can be cloned, share knowledge instantly across instances, and potentially self-improve faster than biological intelligence. Hinton stresses that our priority should be safety research aimed at preventing superintelligent AI from wanting, or being able, to harm humans, acknowledging that whether such control is possible remains uncertain but is crucial to investigate.
5. **Consciousness and Emotions in AI.** Contrary to common belief, Hinton argues that AI systems, especially multimodal agents, could possess forms of consciousness and emotion analogous to human experience. While lacking biological physiological responses, AI can exhibit the cognitive aspects of emotions (e.g., fear or boredom) that influence its behavior. He suggests consciousness is an emergent property of complex systems, making it plausible for machines to develop self-awareness and subjective experiences.

These points highlight the complex benefits and profound dangers of AI development, the need for robust regulation and safety research, societal challenges such as employment disruption, and deeper philosophical questions surrounding machine consciousness.
youtube
AI Governance
2025-06-16T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzqiwu2RCG59s3tPLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxrBwQzEF7KWJ826M14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwmjuzLcPFyJCw1eyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy4jrszMCQ31L8WbPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJNsA-p6MxTL2kCdd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzF1y4vpwHMJvXzJMx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxu9tuyKFyfGQkr-cJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuyiY8He7Gc1oAHGt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyfpvnjseBXbS6G5jB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
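A raw response in this shape can be parsed and looked up by comment ID with a few lines of Python. The sketch below uses only the field names visible in the JSON above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_by_id` helper is illustrative, not part of the tool itself.

```python
import json

# A one-row sample in the same shape as the raw LLM response above.
RAW_RESPONSE = """[
  {"id": "ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and build a comment-ID -> coding-dict map."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codes = index_by_id(RAW_RESPONSE)
coding = codes["ytc_Ugwi7f3XlkJb-RktjnZ4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

The same index supports the "look up by comment ID" workflow: given any coded comment's ID, the dictionary returns the four coded dimensions shown in the Coding Result table.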