Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
While you're correct that messing with the algorithm will strictly make the AI less accurate, that's only with respect to the training data; if the training data is biased on its own, then correcting for that initial bias can make for a more accurate prediction. Amazon itself points to flawed training data as part of the issue; past bias in hiring, for one (and quantity of data, no doubt). And the AI may simply not be sophisticated enough to pick up on nuances in the resumes.
Take their example:
>It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
An AI might have (correctly) noted that no good candidates come from Bob's Women's College or Women's Discount University, but if it's not sophisticated enough, or there isn't enough data, then it can over-simplify and conclude that the problem is the word "women's", not those two specific schools. It fits the training data, but not necessarily the world at large.
There's also this bit:
>Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
So it wasn't working in a lot of ways; I'm hesitant to conclude that it was correctly handling female applicants.
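The over-generalization the comment describes can be sketched with a toy token scorer. This is invented illustrative data, not Amazon's system: with only token-level features and too few examples, a scorer cannot distinguish two specific schools from every other resume containing the token "womens".

```python
from collections import Counter

# Tiny invented corpus: 1 = advanced, 0 = rejected.
resumes = [
    ("graduate of bobs womens college", 0),
    ("graduate of womens discount university", 0),
    ("captain of the womens chess club, graduate of state university", 1),
]

# Count how often each token appears in rejected vs. advanced resumes.
reject_counts, advance_counts = Counter(), Counter()
for text, label in resumes:
    for token in text.split():
        (advance_counts if label else reject_counts)[token] += 1

def token_score(token):
    # Naive per-token score: net association with the "advanced" class.
    return advance_counts[token] - reject_counts[token]

# The token "womens" comes out penalized even though the real signal was
# two specific schools: the scorer fits the training data, not the world.
print(token_score("womens"))      # -1: penalized
print(token_score("university"))  #  0: neutral
```

With more data (or features that keep school names intact), the penalty would attach to the two schools rather than to the shared word.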
Source: reddit · Cross-Cultural · posted 1539206014 (Unix epoch) · ♥ 7
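The numeric timestamp in the metadata above is a Unix epoch value (seconds since 1970-01-01 UTC); converting it to a readable date is a one-liner:

```python
from datetime import datetime, timezone

# 1539206014.0 seconds since the Unix epoch -> UTC datetime.
posted = datetime.fromtimestamp(1539206014.0, tz=timezone.utc)
print(posted.isoformat())  # 2018-10-10T21:13:34+00:00
```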
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"})
```
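Note that the raw response above closes with `)` where valid JSON requires `]` — a common model slip. Downstream code that turns such responses into per-comment records therefore needs a small repair step before parsing. A minimal sketch (the helper name is mine; only the first two records of the captured response are reproduced here):

```python
import json

def parse_coding_response(raw: str) -> list[dict]:
    """Repair the bracket slip (trailing ')' for ']') and parse the records."""
    raw = raw.strip()
    if raw.startswith("[") and raw.endswith(")"):
        raw = raw[:-1] + "]"
    return json.loads(raw)

# First two records of the captured response, trailing ")" as captured.
raw = ('[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"},'
       '{"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"})')

records = parse_coding_response(raw)
print(len(records), records[0]["emotion"])  # 2 resignation
```

A stricter pipeline might instead reject malformed output and re-prompt the model; silently repairing a single-character slip is a pragmatic middle ground.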