Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Intelligence is nothing it was always about surviving. What you think that Artif… (ytc_UgyAe5aae…)
- I got added to an "anti AI art bullies" list on bsky and I was like awww you not… (ytc_UgxfIr43k…)
- From the sound of the article they threw in a ton of resumes and trained the AI … (rdc_e7jals0)
- Absolutely! The dialogue between Sophia and the AI highlights the potential for … (ytr_UgylezgbU…)
- AI is a danger to all humanity! It’s all fun until it’s not, not only will these… (ytc_UgyuFH964…)
- It's not surprising to see a lawyer shitting on an A.I. doing legal work. Yes th… (ytc_UgxXmz0pz…)
- If AI can do the job without error, it's disingenuous to say they won't doctors.… (ytc_Ugwb44GEy…)
- What does all this mean for music that is a "Work for Hire"? If I pay for an AI … (ytc_Ugzwe8Nko…)
Comment
What a bunch of horseshit. This current, 4th in recent decades, AI hype cycle is the most damaging hype cycle to AI research. LLMs are portrayed as something that will get humanity to AGI and replace most jobs, while the same LLMs cant answer basic logical questions. Forget about some BS benchmarks. Real people with real jobs all say the same thing. LLMs are crap. They are good in simplyfing easy, repeatable (and well-documented) tasks. They suck big time in real-life tasks. And that's exactly what you would expect from a STATISTICAL model that basically just combines words that fit together based on the prompt. This delulu guy in the interview (whoever TF he is) is wrong on many levels.
youtube · AI Jobs · 2026-03-05T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
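Each coding result assigns the comment one value on four categorical dimensions plus a timestamp. As a minimal sketch, assuming a Python pipeline, such a record could be represented as below; the class name and field types are hypothetical, and the example values in the comments are simply the ones observed in the raw response that follows, not necessarily the full code book.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str      # e.g. "ytc_UgzhQesH3EAW0CKvh4d4AaABAg"
    responsibility: str  # values seen below: developer, company, ai_itself, distributed, none
    reasoning: str       # values seen below: deontological, consequentialist, virtue, mixed, unclear
    policy: str          # values seen below: none, unclear
    emotion: str         # values seen below: outrage, fear, approval, indifference, mixed
    coded_at: datetime   # when the coding was produced
```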
Raw LLM Response
[
{"id":"ytc_UgwXloj4DiKiGvFPiYR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuwJrNqQF3nJUpr554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyniJKSXkla294VQ2h4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_W7TerPDDyeW8a9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsJZD1al_zw7sq7fZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhQesH3EAW0CKvh4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwePDAihiDMQDNMSVp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxnAcRosv_HqFy0ZuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKj_qlgBmvOlsaA9p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx7vfZy8sF1cAg-ZEl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
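The raw response is a single JSON array with one object per comment in the batch. A minimal sketch of how a response like this could be parsed and looked up by comment ID, assuming Python; the function name is hypothetical and the two-record sample is copied from the batch above purely for illustration.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coding objects)
    and index each object by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Two records copied from the batch above, standing in for the full response.
raw = '''[
  {"id":"ytc_UgzhQesH3EAW0CKvh4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzsJZD1al_zw7sq7fZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

codings = index_raw_response(raw)
print(codings["ytc_UgzhQesH3EAW0CKvh4d4AaABAg"]["emotion"])  # outrage
```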