Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Me watching an AI generated 1 hour long video of crimes I never committed in the…
ytc_UgxSMw9sw…
LOOKING FOR ADVANCED EDUCATION AI SOLUTIONS? – BOOST YOUR LEARNING WITH CUTTING-…
ytc_UgzJG52DW…
You complain about my erotic slug girlfriends but you 24:59 have something calle…
ytc_UgzESwHT_…
No, we cannot do AI once the robots realize we are inferior and using them for s…
ytc_UghqQAuaE…
The man just manipulated the prompt so it's skewed to say obviously evil & total…
ytc_UgwKhAWbg…
If you would make good movies like in the 80s this would´t be a problem. if Ai c…
ytc_Ugz81MVet…
In the world today I don't understand parents who don't speak to their children …
ytc_UgxdHFFRc…
Alright 🙂
I’ll be right here whenever you come back—whether that’s in _________ …
ytc_UgzkYuxuY…
Comment
I’ve found the best AIs (Sonnet 4.5 etc.) are actually useful and don’t just churn out stuff that isn’t thought through and will cause many issues later. However, this is only because I give it a very strict set of requests that I only know from being a dev for 25+ years. I make sure it only handles a small function at a time and that function is testable. If you give it too much freedom then it makes mistakes. The problem is that the mistakes don’t look like mistakes, as the syntax is correct and it mostly runs; the mistakes are things that it doesn’t know even from a massive context, e.g. this system will fail if x, y, or z happens. But as it passes tests currently, the AI confidently reports “finished”.
I still think of it as a very eager young geek - knows its stuff but is miles too confident and makes mistakes because it is rushing, just like an overly confident junior. Some day this may change, but I don’t think the current LLM approach will solve this fully unless it is very tuned to “think” properly.
youtube
AI Jobs
2025-12-29T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyShVCiFohFbAuxHbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGKKhCNWKpWmE0QwJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWlY8wKwlC3IVD67p4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzjN5oB7JY8AksUHH94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy5UMPycm4SjvWi_mZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5HiA2Fy0j6lNiMAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGUE3QM4188oI13jB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz417YVrvH0_gMyDE94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz3dMe7Z1hyMM1JIEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxpF70ghTL3Lsr47EV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
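The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and sanity-checking such a response, assuming the allowed values are exactly those that appear in this dump (the tool's real codebook may include more):

```python
import json

# Allowed codes inferred from the values seen in this dump -- an assumption,
# not the annotation tool's official schema.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

# One row copied from the raw response above.
raw = '''[
  {"id": "ytc_UgxpF70ghTL3Lsr47EV4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def validate(rows):
    """Index rows by comment ID and collect any off-schema codes."""
    by_id, errors = {}, []
    for row in rows:
        by_id[row["id"]] = row
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((row["id"], dim, row.get(dim)))
    return by_id, errors

by_id, errors = validate(json.loads(raw))
print(by_id["ytc_UgxpF70ghTL3Lsr47EV4AaABAg"]["emotion"])  # approval
print(errors)  # [] -- no off-schema codes in this row
```

Keying by comment ID mirrors the "Look up by comment ID" workflow above: a coded result like the table for `ytc_UgxpF70…` is just one row of this parsed array.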