Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Do you oppose all facial recogniton tech or are you ok with it in certain places…
rdc_fuqvgww
In the year 2026, people still don’t understand. AI itself is not the problem; th…
ytc_Ugygi9qjm…
AI designed this tunnel, drive through it. AI calculated the fire sprinklers in …
ytc_UgxiDCnIU…
The copyright issues. Ai is trained from data online, copyrighted data. But ai c…
ytr_UgyOkl5-Y…
The fact that they would’ve not gotten caught if they just generated their own A…
ytc_UgwK4s1jA…
As an artist, this made me ecstatic. For the last couple of months I’ve had to h…
ytc_UgxEX7tfY…
If so then I will be only coder not using ai even if I cant get my dream job…
ytc_UgyxMkQju…
If entry-level jobs disappear, how will highly skilled engineers be trained to w…
ytc_UgzMbTww3…
Comment
00:00 - Roman Yampolskiy discusses AI safety, the rapid advance of AI capabilities, and the potential for widespread unemployment from automation by 2027, with AI possibly exceeding human capabilities.
10:56 - The conversation explores the implications of AGI and superintelligence for various professions, the economic challenges of mass unemployment, and differing perspectives on enhancing human intelligence versus AI.
21:51 - Predictions for 2030 include the rise of humanoid robots, the potential for AI to outpace human understanding and lead to a singularity, the importance of incentives, and the dangers of uncontrolled superintelligence.
32:51 - The discussion contrasts AI development with nuclear weapons, emphasizing AI's nature as an autonomous agent rather than a tool, and considers the increasing accessibility of AI technology and its potential for misuse, especially in creating biological weapons.
39:33 - Concerns over the black-box nature of AI, OpenAI's approach to safety, and the potential motivations behind pursuing superintelligence, including the possibility of world dominance.
47:03 - The discussion shifts to potential actions to address AI risks, the limits of legal remedies, and the need for individuals to question and challenge those developing AI, while acknowledging existing protests and movements.
54:25 - They explore personal strategies for navigating a world with advanced AI, including living life to the fullest and considering simulation theory, in which current reality might be a simulation run by a more advanced civilization.
01:02:36 - The conversation turns to the implications of living in a simulation, investment strategies for the distant future, and the importance of loyalty and ethical standards amid rapid technological change, closing with a call to prioritize human well-being and responsible AI development.
Detailed summary 👉 https://tinyurl.com/yepv3ymt
youtube
AI Governance
2025-11-16T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy2k6D6vB9shLsV0fB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtf8l_We5-R84JpK54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwWfPcUiPaGuOs-6Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_sX0_KhB_wGv8NEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxywFbCBUU4Qax468d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyVfYRcOVmKAZdpDNt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdWeeAPW0v1AzP6rl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyhJda9iF9sdVzGTax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyElWTBgoMM8pngi314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugww8UX1oGBGyOw72K54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
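The raw response above is a JSON array with one record per comment and one value per coding dimension. A minimal sketch of how such a response might be parsed and validated downstream (the `ALLOWED` sets and `parse_coding_response` are hypothetical, with allowed values inferred only from the records visible above; a real codebook would likely define more):

```python
import json

# Hypothetical per-dimension value sets, inferred from the response above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "resignation", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    dict keyed by comment ID, rejecting records with unknown values."""
    coded = {}
    for record in json.loads(raw):
        cid = record["id"]
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {record.get(dim)!r}")
        coded[cid] = {dim: record[dim] for dim in ALLOWED}
    return coded

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugy2k6D6vB9shLsV0fB4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_coding_response(raw)
print(codes["ytc_Ugy2k6D6vB9shLsV0fB4AaABAg"]["emotion"])  # → indifference
```

Keying the result by comment ID matches the "Look up by comment ID" view above: each coded record can then be joined back to its source comment before computing per-dimension tallies.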