Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "How about the fact that not only AI is taking over jobs, but its made the job ap…" (ytc_UgyWN3T6N…)
- "@coldy1897 I can assure you, this is not a fight you can win by having the best…" (ytr_UgzYyOHxk…)
- "she looks like she can see sounds and her mouth is flapping randomly it’s defini…" (ytc_UgzCPWL3S…)
- "why would we give ai a "human-like" conscience rather than a meeseeks-like one. …" (ytc_UgxApS1Ot…)
- "Now let’s put this in perspective, the iPhone Vs iPhone 15 PRO Max in comparison…" (ytc_UgxG3jcdq…)
- "If these robot trucks go ling haul, how would they refuel? Also, I have Ocean fr…" (ytc_Ugx37LDRF…)
- "my thing about ai art is that the person who wrote the prompt out did not MAKE t…" (ytc_UgwvNC2F_…)
- "I have hobbies, sports, skills, social life, travel, all sorts of things I love …" (ytr_UgwgGZRhF…)
Comment
The video totally overlooks the fact that AI is hitting a massive development wall. It’s not just about "safety" or "scary tech" it's about the economic bubble and scaling stagnation. The tactic of just throwing more powerful hardware to the LLM's has totally stopped scaling. And with an internet being filled with AI, we’re running out of high-quality data, and the cost to eke out even tiny improvements of these models is becoming exponentially higher than any actual ROI.
The real reason researchers might be jumping ship isn't just "panic" over AGI: it's likely the realization that the current AI path is a financial dead end that costs way more than it can ever earn back... A bubble that is gonna pop no matter what.
youtube · AI Governance · 2026-03-17T03:5… · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpZB2TwIDXQwv6Zcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwES1E11rMPgx7jlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwcuJSMfFvBtvURPZV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxieUY_SVr4KHRGD_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxll8Gz7SsOcqr3SpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY_vzB08zSWLljGGl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEYKV9ah0Y7WJ3eiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwCbE5lMXvD9rR_Hmp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyXoPUvHeehIzSBZP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
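The look-up-by-comment-ID view above can be sketched in a few lines: the raw model output is a JSON array with one coding object per comment, so indexing it by the `id` field gives constant-time lookup of any comment's four coded dimensions. This is a minimal illustration, not the tool's actual implementation; the function name and the two-entry sample are hypothetical.

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings,
# in the same shape as the output shown above (two entries for brevity).
raw_response = """[
{"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwY_vzB08zSWLljGGl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw response and map each comment ID to its coding row."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxWkOCjBPsxNPNil3p4AaABAg"]
print(coding["emotion"])  # indifference
```

A dict keyed on `id` mirrors how the inspection page resolves a pasted comment ID to its coded dimensions without rescanning the whole response.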