Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugxu7v6cM…`: "AI uses brute force statistics, its eventually more correct than you. Its progra…"
- `ytc_Ugzn0naB8…`: "History repeats itself. Garden of Eden (knowledge of good and evil). Regal "blac…"
- `rdc_nt7448h`: "The thing is, people need to stop taking Reddit comments as a reflection of the …"
- `ytc_Ugy0sjxYX…`: "Think of AI as the Borg in Star Trek. The Borg only assimilate information, they…"
- `ytc_UgzaYg_OD…`: "THIS AI can tell how I'm feeling, whether I'm happy or sad, based on an image of…"
- `ytc_UgyDYmooE…`: "This is not AI 👈 It's just a data base like Google....it's been programmed by hu…"
- `ytc_UgzLjAtSX…`: "If no one has a job, no one can buy the cars, houses and goods that is produced.…"
- `ytc_UgzaJioVI…`: "Are you reading this .. if so just blink ... lmao ... I like the Task Manager a …"
Comment
"It's not just that some people will prefer proof of humanity. AI in its current architecture has some deep flaws that scale of training hasn't affected. While quality has improved, say for example with art, the AI's sense of taste, design, understanding of context, are all very poor. A clear example are AI logos. They are absolutely and objectively bad, because they get at the core of what the tech is bad at. It has no taste, no understanding of other's tastes."
Source: youtube · AI Jobs · 2025-09-24T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
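The coding result above assigns one value per dimension. As a minimal sketch, the allowed values can be checked programmatically; the value sets below are only those observed in this dump (responsibility, reasoning, policy, emotion), and the real codebook may define more categories.

```python
# Sketch: validate one coded record against the value sets observed in this
# dump. ALLOWED is an assumption reconstructed from the visible data, not
# the project's actual codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"mixed", "outrage", "fear", "approval", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors (empty list means the record is valid)."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "none", "emotion": "mixed"}
print(validate_record(coded))  # []
```

Catching out-of-schema values at this stage keeps a single malformed model output from silently corrupting downstream counts.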
Raw LLM Response
```json
[
  {"id":"ytc_UgzFmlDn2FG86TKBHs14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxD4kiA-EAoY5z80Wp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJiHPM9vS3XqCVqW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxR2dPZqCoZX2NheD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPVidolinYJfphL6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzxuKH0fynQ3DwCRFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwkaJq2eonED1DV9xp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxCBrdOmESf-mLpu314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxtmAo4LrJ2-aZz4W54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwsOkifWGcxVXF_KX94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
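The raw response is a JSON array keyed by comment ID. As a sketch under the assumption that responses follow this shape, the batch can be parsed defensively and indexed by ID; model output is not guaranteed to be valid JSON, so the decode error is left to propagate rather than being swallowed. The two records in `raw` are copied from the response above for illustration.

```python
import json

# Sketch: parse a batch coding response like the one above into a lookup
# table keyed by comment id. Model output may be malformed, so json.loads
# is allowed to raise json.JSONDecodeError for the caller to handle.
raw = '''[
{"id":"ytc_UgzFmlDn2FG86TKBHs14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwsOkifWGcxVXF_KX94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def parse_batch(text: str) -> dict[str, dict]:
    """Map comment id -> coded dimensions; raise on records missing an id."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        cid = rec.pop("id", None)
        if cid is None:
            raise ValueError(f"record missing id: {rec}")
        by_id[cid] = rec
    return by_id

codes = parse_batch(raw)
print(codes["ytc_UgwsOkifWGcxVXF_KX94AaABAg"]["policy"])  # regulate
```

Indexing by ID makes the "look up by comment ID" view above a single dictionary access rather than a scan of the array.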