Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzAn_pl_…: "This is a deep fundamental change. It doesn't really compare to past innovations…"
- ytc_UgycNSOsH…: "I want to hear AI speak to a guy from the Chicago Southside.. The real test..…"
- ytc_UgwTX3Wqk…: "Don’t worry, zuc will also be replaced by ai, he probably already has been. Zuc …"
- ytc_UgwjVODkh…: "AI Agents are already replacing several types of Software Engineering Tasks and …"
- ytc_UgyQUhaqr…: "So the fuck what if it's AI generated I highly doubt you can design anything bet…"
- ytc_UgyRrG9gS…: "Insert the \"This is fine\" while the room is on fire meme. We're all going along …"
- ytc_Ugwjdph8a…: "Gaslighting an LLM. Honestly, this bot was your only friend. And now you've even…"
- ytc_UgwMlxjfJ…: "Why don’t they just say no self driving car can go 5 m near a different car? The…"
Comment

> Maybe I missed it, but I could find no acknowledgement at all that the scaling laws or financial collapse of a number of AI companies could dramatically slow the march towards AGI. He just ASSUMES that AGI is inevitable, and fairly soon. The math of scaling does not support his assumption.

youtube · AI Governance · 2025-11-17T17:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2AJNKtt2OgfjoEZZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyIpUU2aCX9jnFKErZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugznn0C1Fl_NHQR7md14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzxihzLBWVIGPcYWF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJVVKbAm-A5anHK6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxrXac_HDq1J2t3OEl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyH-R_r0bUsdCOVRaF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJDkVz0OrSuu4-QuV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxyQl8wQqGyUeyitJt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwoXAkcvF-h0utWPwh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
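A response like the one above can be consumed programmatically by parsing the JSON array and checking each record against the coding vocabulary. The sketch below is a minimal example, assuming the allowed values are those visible in the table and response shown here (the actual codebook may define more categories), and `parse_codes` is a hypothetical helper name, not part of this tool.

```python
import json

# Allowed values per coding dimension (assumed from the examples shown;
# the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records with a plausible comment ID and in-vocabulary
    values for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue  # drop records with missing or malformed IDs
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_example","responsibility":"developer",' \
      '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
print(len(parse_codes(raw)))  # → 1
```

Dropping out-of-vocabulary records rather than repairing them keeps the coded dataset clean; rejected records can be re-queued for a retry prompt instead.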