Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Good test cases exist to make sure a piece of code behaves in a way that the user requires. That's context that's impossible to determine just from reading the code alone. That context comes from requirements.
> "It's not about 'is it okay?', it's about 'what does okay look like?'" - Kevlin Henney
An LLM will simply vomit out a bunch of test cases that assert that the code behaves the way it currently does. Those test cases will be brittle, coupled to the implementation and will make the implementation hard to change.
reddit
AI Jobs
1728304182 (Unix timestamp)
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_lqrgwot","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_lqrmdqa","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_lqrnarq","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_lqrhkex","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_lqrmqdy","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]