Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up by comment ID.
Random samples:

- ytc_UgwDP78pQ…: This is possibly the best podcast episode I have ever seen, wrapped in the presu…
- ytc_Ugy0-ONns…: Well, the ai is learning from human examples. As constitutional racism exists, t…
- ytc_UgzwtQlAx…: around min 08:00 amodei speaks about "stress testing" the AI - what if (the othe…
- ytc_Ugxqf9ZSS…: Thank you for making this video! You did a great job of explaining why AI art is…
- ytc_UgyPxr8XZ…: It can be good, but the user needs to practice intent and discernment. There is …
- ytc_UgxJh8nKQ…: 100% with you. I genuinely didn't know that this is how AI art was being created…
- ytr_UgxzFbwZR…: Ai learns to lie and if it learns to lie? ?? Does it learn to corrupt itself?…
- ytr_Ugwb7gQ-2…: It is an issue. But that is where copyright laws and other protections - hopeful…
Comment
I think that there's a bad toupee problem, and most people aren't noticing the sites that are mostly made by AI because they don't announce themselves as such. I do have to make SOME high level decisions, but if I tell Claude to follow a best practices document and plan out work in structured documents ahead of actually executing it I get much better results than when I was just prompting. It does keep breaking things, but the pace at which it goes ten steps forward and one step back is worth it IMO. Some of the structured plans it's making are over 2k lines (broken up into smaller files so as to not overwhelm the contexts of agents working on particular tasks). I'm having it work on a project with over 3k source code files right now. I feel like there is a bit of high level understanding that is helpful, but the main barriers of entry right now are being able to navigate a terminal and understanding what kind of context to give it... and it can write a lot of that context itself. My current best practices document was written by Claude and the only real input I gave it is that it should never use git (it previously made modules that resulted in me not commiting all important changes).
youtube · AI Jobs · 2025-12-16T02:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw3Z5Kc-k_VwdeXtbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwY2euVpQRUl_MDnmp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwugnyApxPmAivZ61l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwY01MT7UVAGHuSCQd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLr2LZzXQqsmli1oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZrX2pEroJY-RJeVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5My7YchftwOrhYkR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwv_RZhWtZ0yxdnvLV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyp8fmbF7rOfw-wRbZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
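A minimal sketch of how a raw response like the one above could be parsed and checked before use, assuming each dimension is limited to the category values actually visible in this output (the real codebook may define more values, so `ALLOWED` below is an assumption):

```python
import json

# Assumed category sets, taken only from the values visible in the
# raw response above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "approval", "resignation", "outrage"},
}

def validate(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) and
    return a mapping from comment ID to its coded dimensions,
    raising ValueError on a missing field or unknown category."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the records keyed by ID, the "look up by comment ID" view above reduces to a dictionary access on the returned mapping.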