Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "people in the west aren’t worried about AI replacing them as humans, they are wo…" (ytc_Ugwu47BwY…)
- "True, but the development of safety failsafes has always been an integral part o…" (ytr_UgwGsIxp6…)
- "Now I realised that Mr. Shah has good futuristic vision. He found the threat of …" (ytc_Ugx_vxRDq…)
- "Humanity's going to really regret AI and wonder why people didn't stop it before…" (ytc_UgxGd4DrW…)
- "This is madness. AI itself will regulate going forward and you need the human wo…" (ytc_UgwpneseK…)
- "Oh. Yeah. Totally. Omg! Oh no! AI is gonna take over! It’s lying to you on purpo…" (ytc_Ugwxhhug9…)
- "So we're lookin' at a potential world of capitalist AI with a a robot squad to k…" (ytc_Ugz9XUeNq…)
- "Reality is often asymetrical, messy, uneven, imperfect. AI creates flawlessness …" (ytc_UgwJf52ER…)
Comment
The bias against Musk suggests this Canadian Liberal is looking askew at the future with very Liberal eyes. His distaste for Elon is probably political. He says Musk has no moral compass, yet it was Musk that left Open AI with the view it should be open source and transparent, it was Altman who sold it to Microsoft. Second, he does applaud Musk for his EVs ( obviously looking at Climate Change) and his providing Starlink to Ukraine - so I have pigeon holed him into a Globalist Liberal with a 🇺🇦 in his bio who also believes regulations are very necessary - Elon feels less are better. He also doesn't credit Musk for warning that AI is dangerous, something he admits he only recently himself came on board with. Overall, not too impressed.
youtube · AI Governance · 2025-06-20T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgznAlOBnRpSYhctIzR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuwaeuWJ1xyltO21p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-7zyub7iq1CzsSwB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnI_GYza4F5bdq7Nd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzTNSPQrV7AIto21bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwV3T_XK0otFZp6LF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgygDvWO792FiWdxPJR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzXVGITN4AN-ymzllp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3DwgYZqrFWNi_PXJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyslR3HXMnuwUGowTB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
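A batch response like the one above can be parsed into per-comment coding results with a small validation pass. The sketch below is illustrative only: the `parse_batch` helper and the `ALLOWED` value sets are assumptions inferred from the sample output, not a confirmed codebook or the actual pipeline code.

```python
import json

# Allowed values per dimension — inferred from the sample responses above;
# treat these sets as assumptions, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping entries with missing or out-of-vocabulary values."""
    coded = {}
    for entry in json.loads(raw):
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[entry["id"]] = codes
    return coded

# Hypothetical one-entry batch for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
       '"policy":"unclear","emotion":"outrage"}]')
result = parse_batch(raw)
print(result["ytc_x"]["emotion"])  # outrage
```

Invalid entries are silently skipped here; a real pipeline would more likely log or re-queue them for re-coding.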