Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- "AI will take over entry-level white collar positions. But the intermediate posit…" (ytc_UgxyYGBCP…)
- "Are we going to have ai police officers as well ai military 🎖️🪖 ai FBI agents…" (ytc_Ugwbgadn9…)
- "I stopped using chatgpt when it was too safety conscious..sounds like its being …" (ytc_UgyOVyr6n…)
- "See.. NOT saying AI is better than humans in creativity. But IN THE CORPORATE WO…" (ytc_UgySbyT3X…)
- "Aussie AI version: "C'mon mate, are you just dicking around or what?" "Can't you…" (ytc_Ugx4gmQEJ…)
- "My two things on character ai is either get a girlfriend or boyfriend or be a pe…" (ytc_UgxoGvm7p…)
- "It's not making the world a worse place. AI is cool and most of it isn't slop so…" (ytr_UgyWGso5J…)
- "😅 yeah just let me make a real game simulator people like that I got you AI show…" (ytc_UgzxFsdGa…)
Comment

> He's hyped up on kool aid. As someone who has programmed computers all his life, many years professionally and always been into AI before the hype - I can see flaws here in his reasoning. AI is unlike anything else in tech when it comes to scaling. There is no clear path. AI has always faced a diminishing gains issue. What he says IS possible, but it's likely to take longer than what he thinks. I note he has no inside inside info in secret models. Just the idea that it will be "self improving". Marcus, Lecun and others point out to big issues in the current AI tech stack. AI is littered with researchers who thought AGI would be soon. History is repeating. They got hyped on GPT-3, but GPT-5 was a let down.

youtube · AI Jobs · 2025-11-19T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmj-8tu2gRNopmnRl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfaULnguTbF0ndU8p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgygKOFHdzkmkF0a7qh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwl3LO7ftJjNiPCjOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz54N610Emb9XiAEeJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugys6HoH8jRW89St3M54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxViIMSEgLKgMuzaiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugx_9wy2TwERUJ4cofx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxO9DAjxpmJMIYKlBF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIVIYcpdZS8sapa7F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
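A raw response in this shape can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual code: the label sets below are only the values *observed* in the sample above (the full codebook may define more), and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Label sets observed in the sample response above -- an assumption,
# not the authoritative codebook, which may define additional values.
OBSERVED_LABELS = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"fear", "resignation", "outrage", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a dimension carries a label outside the observed set,
    which is useful for catching model drift or malformed output.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in OBSERVED_LABELS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in OBSERVED_LABELS}
    return coded
```

With the records indexed this way, looking up a coded comment is a plain dictionary access, e.g. `coded["ytc_Ugwmj-8tu2gRNopmnRl4AaABAg"]["emotion"]` returns `"fear"` for the sample above.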