Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There's a big difference between the kind of model that suffices for agentic use vs frontier models. The US focuses on frontier models with the highest intelligence, which is why the US leads in cutting edge models like Mythos. The AI majors and hyperscalers have been focused on training and inference which can only run on high end expensive hardware. The models literally cannot fit in the memory of a commodity GPU.
But a commodity video card can run Deepseek or Qwen with a modest parameter count. They're not as smart, they get more things wrong. But we don't need Einstein for 99% of tasks, and it would be inefficient to pay professors to moderate comment sections. Agents need to do simple tasks like evaluate the content or tone of a message and categorize it or trigger an action. US hardware would be wasted on this task and we've ceded this low margin business because it is entirely contingent on the cost of energy.
The real metric is Watts per token and China's national energy policy executed over decades is paying dividends now. Cheap power enables their less efficient token generation to still compete on Watt's per token and that's why they're dominating the agentic inference market.
youtube · AI Governance · 2026-04-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgzfSnzWst02KsznDNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY8mdUvNIObd0uSLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy0AO0fNWQwyXR23b54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzZ-nltPK1eP9yln694AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzlI_R6I6jh169VVr94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyuU8SMkG6McRhNgaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyUA_DQcDFZ6VUdb214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHcRRdHIF518tDVil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwPhpUcnxHfa8b2cF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzHy8eKwczW26pQIG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}]
```
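The raw response above is a JSON array with one record per comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated before populating the table — the allowed value sets here are assumptions inferred from codes visible on this page, not an authoritative schema:

```python
import json

# Assumed per-dimension vocabularies, inferred from the codes visible on
# this page; the real coding schema may include additional values.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "ban", "industry_self"},
    "emotion": {"indifference", "resignation", "approval", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping any
    record that lacks an id or uses a value outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# First record from the raw response above.
raw = ('[{"id":"ytc_UgzfSnzWst02KsznDNp4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzfSnzWst02KsznDNp4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed vocabulary matters here because the response is free-form model output: a single misspelled or invented label would otherwise silently pollute the coded dataset.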