Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> For it to come to the conclusion that men were preferable requires gender to have been categorized in the first place.
Not necessarily. You could omit gender from the input, but then compare the success rates of the genders once it has chosen.
i.e., you could submit all the CVs without labeling the gender. Once the AI picks the appropriate candidates, you then compare the genders of those chosen.
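The audit procedure described above — hide gender from the selector, then compare per-gender selection rates on the held-back labels — can be sketched as follows. The candidate data and the `select` function are hypothetical stand-ins for a real screening model:

```python
# Hypothetical candidates: gender is recorded separately and is never
# shown to the selector; it is only compared after selection.
candidates = [
    {"id": 1, "gender": "m", "score": 0.91},
    {"id": 2, "gender": "f", "score": 0.88},
    {"id": 3, "gender": "f", "score": 0.95},
    {"id": 4, "gender": "m", "score": 0.60},
]

def select(pool, cutoff=0.85):
    """Stand-in for the screening model: it sees only id and score."""
    return [c["id"] for c in pool if c["score"] >= cutoff]

chosen = set(select(candidates))

# Compare selection rates per gender using the held-back labels.
rates = {}
for g in ("m", "f"):
    group = [c for c in candidates if c["gender"] == g]
    picked = [c for c in group if c["id"] in chosen]
    rates[g] = len(picked) / len(group)

print(rates)  # e.g. {'m': 0.5, 'f': 1.0} for the toy data above
```

A large gap between the per-group rates is exactly the kind of disparity the comment says you can detect even when gender was never an input feature.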
> Are there any more in depth articles on this?
There's a similar case that ProPublica went into in depth back in May 2016. They wrote multiple articles detailing why this can (and often does) happen with almost all machine learning algorithms: it's not the algorithm's fault but the input data itself.
Source: reddit · Category: Cross-Cultural · Timestamp: 1539192313 (Unix epoch) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_e7ij0cd","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_e7ivqgp","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_e7jrpn7","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_e7jp9so","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_emn5ewy","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
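Since the model returns one JSON object per comment with the four coding dimensions, ingesting it safely means parsing and validating each row. A minimal sketch, assuming the allowed value sets are only those seen in the samples above (the real codebook may define more):

```python
import json

# Allowed values inferred from the sample responses above — an
# assumption, not the tool's full codebook.
ALLOWED = {
    "responsibility": {"unclear", "none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "outrage", "approval"},
}

def parse_codings(text):
    """Parse raw model output; reject rows with unknown dimension values."""
    rows = json.loads(text)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return {row["id"]: row for row in rows}

raw = '''[
  {"id":"rdc_e7ij0cd","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_e7jrpn7","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

codings = parse_codings(raw)
print(len(codings))  # 2
```

Keying the result by comment ID makes it straightforward to join a coding back to its source comment, as the inspection view above does.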