Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It’s pretty sad watching this channel push advertiser slop from a AI executive f…" (`ytc_UgzZ8zuHn…`)
- "Try closer to 20 years maybe the robot won't be highly intelligent AI bots in th…" (`ytc_UgygK2xRo…`)
- "UBI is a ‘free lunch’ . The first thing you hear when studying economics is ther…" (`ytc_Ugz-MzTvL…`)
- "Was so disappointed when a creator I followed who talked about interesting Latin…" (`ytc_Ugza2eNqU…`)
- "@SomeoneOnlyWeKnow. It's amazing right. AI takes samples of other art and create…" (`ytr_Ugxf2qSL8…`)
- "I sak copilot one day because chatgpt was not working, copilot told me no 😂…" (`ytc_UgyC2EEq1…`)
- "I think the general intelligents will be the most dangerous. Are when AI is slig…" (`ytc_UgxtsFGIK…`)
- "Just year ago it was easy to differentiate between ai and real life now you can'…" (`ytc_UgxQ1Zjhy…`)
Comment
> Geoffrey sounded intimidatingly intelligent and articulate through out 99% of this interview and really made me think about things and the future of AI in a new way, until about the 1 hour mark when he gets into the discussion about consciousness. I think as a materialist through and through, he is a little out of his realm in this area, and it seemed like an over simplification of the incredibly complex problem that is consciousness.

youtube · AI Governance · 2025-07-05T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwrpsbQHx6ZxdcezG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtA87u5H7hqjKtInt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxpxfeXF9ab-GAHQyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxrX7fErVuscNZrNzd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyAqwCUfMXvb31Y6KN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0tmYBcilSkx9FJfV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy3yw7uop8UC1VqP4Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1vl3g_Ck4aDWtglB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzf3DXFHFSd8Fx3R3h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxqsrHxE7BkAsbppmB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
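The "look up by comment ID" step above can be sketched in Python: parse the model's JSON array and key each coding record by its comment ID. This is a minimal illustrative sketch, not the tool's actual implementation; the `raw_response` string and the `index_by_id` helper are assumptions, and the sample data is a two-record excerpt of the array shown above.

```python
import json

# Hypothetical excerpt of a raw LLM coding response (format mirrors the
# JSON array displayed above; only two records shown for brevity).
raw_response = """
[
  {"id": "ytc_UgwrpsbQHx6ZxdcezG94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtA87u5H7hqjKtInt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwtA87u5H7hqjKtInt4AaABAg"]["emotion"])  # → outrage
```

Keying on the `id` field makes the inspection step a constant-time dictionary lookup, so a truncated preview ID only needs to be resolved to its full form once.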