Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugyu77aPS…: Yet another statistically empty "deep dive" that strenuously avoids the fundamen…
- ytc_Ugylq5oyx…: If you took a photo but did not select aperture, shutter time, focus or colour b…
- ytc_Ugz7iQlvS…: So it did, but he went with it, like a bad friend. However, it should have come …
- ytc_UgxXftY8d…: I've spent so long trying to get ai support to recognize ny disability but every…
- ytc_UgxjjQyAU…: Kinda rich for Elon to note the dangers of AI when he is making technology that …
- ytr_UgyUgRH1J…: @douglassmith9445 Feel free to revisit this comment section in 6 months and bath…
- ytc_UgzUfLSBH…: Elon Musk has a strong moral compass when it comes to AI safety that’s why he wa…
- ytc_UgxNuVX6s…: Btw the statistic about devs being more efficient with AI is self-reported. In s…
Comment
> Same but worse. Atleast Humans can explain/justify their assumptions. Also humans can correct their wrong assumptions - "Well I thought this was fine but now I see the error in my ways". AI kind of self corrects but not in a sticky sense - just like an RNN (which is what chain of thought uses). For all that GPT does so well, it still exhibits the same shortcomings of classic ML.
Source: reddit · AI Jobs · Posted: 1754662560.0 (Unix epoch seconds) · ♥ 2
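The timestamp above is stored as raw Unix epoch seconds. A minimal sketch of converting it to a readable UTC date with the standard library (the value is taken verbatim from the metadata above):

```python
from datetime import datetime, timezone

# Epoch-seconds value as shown in the comment metadata.
posted = datetime.fromtimestamp(1754662560.0, tz=timezone.utc)
print(posted.isoformat())  # → 2025-08-08T14:16:00+00:00
```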
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_n7ls82o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_n7hk0i4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"},
{"id":"rdc_n7i0nqt","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n7ie6q9","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_n7huqt9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"}
]
```
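Before a raw batch response like this can populate the coding table, each row has to be parsed and checked against the codebook. A minimal validation sketch; note the allowed-value sets below are inferred only from the codes visible on this page, not from the full codebook, which is an assumption:

```python
import json

# Allowed values per dimension, inferred from the coded rows shown above
# (the real codebook likely contains more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "unclear", "mixed"},
    "policy": {"unclear", "none"},
    "emotion": {"resignation", "frustration", "mixed", "fear"},
}

# The raw LLM response, verbatim from above.
raw = """[
{"id":"rdc_n7ls82o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_n7hk0i4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"},
{"id":"rdc_n7i0nqt","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n7ie6q9","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_n7huqt9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"}
]"""

def validate(batch):
    """Keep only rows where every coded dimension has an allowed value."""
    return [
        row for row in batch
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

coded = validate(json.loads(raw))
print(len(coded))  # → 5 (all rows in this batch pass)
```

A row with an unexpected code (e.g. a misspelled emotion) would simply be dropped here; a production coder would more likely flag it for re-prompting than discard it silently.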