Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- "the difference between ai and modern art is human touch and intent. it still com…" (ytc_UgwkmzAD_…)
- "My bets are on AI being used to blame for a crisis that humans have knowingly pl…" (ytc_UgyhdpGqG…)
- "at number 16, i would not use an ai for any of those jobs, they require human so…" (ytc_UgztL-kR_…)
- "At 5:23 she just gives up on trying insults and just says: trash. And I am about…" (ytc_UgwQnEJ_u…)
- "every randomly picked detail chosen by a person has a history behind it, every d…" (ytc_UgzMWoSTe…)
- "You're correct that current models will never lead to AGI. However, the models a…" (ytc_Ugys-AhMt…)
- "I got a photoshop AI Ad in the middle of this video 🫠 The future is bleak…" (ytc_Ugx-8Lfdl…)
- "There is a missing gap here somewhere because I'm a dev with 15 YOE and I find C…" (rdc_mjte9q3)
Comment
I want to point out two things that bother me greatly about this report. The first thing that bothers me is that we did not hear anything from the people who create these algorithms or tech systems, or from any singular person with an opposing point of view. I still know almost nothing about how these systems are even capable of being racist or how that would work. I understand the tech that can be sloppy, but not biased. The second extremely troubling thing is the people interviewed, especially Rashida Richardson, had nothing to say. What I call horoscoping the narrative. Go back and listen to her again. Her words were vague enough to apply to any situation, but the word choice was "sophisticated" enough that if one were predisposed to believe what the report is saying, they might just nod along with her as if she was preaching a gospel. It's pretty deceptive.
youtube · AI Bias · 2022-02-10T15:5… · ♥ 31
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy0XPWqDqDIWOGG5wR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzcwNCC5jKT75-mSvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQ1tEAlzJGb79uusF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxbkE9SXbPDpabJegJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxciNanqWpEXnc5KF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_dflmFMB8JMppg7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxkzJ3A9zRzyATob14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZJ497eW0aQd0McK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwoNS8whdeV-gbTtWN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiP6lmt_qluOrczmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
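The raw response above is a plain JSON array, one object per comment, keyed by the same dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and looked up by comment ID (the `index_by_id` helper and the required-field check are illustrative, not part of the tool; the array here is truncated to two entries from the response above):

```python
import json

# Two entries copied from the raw LLM response shown above.
raw = '''[
  {"id":"ytc_Ugy0XPWqDqDIWOGG5wR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxbkE9SXbPDpabJegJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment ID, per the table above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json):
    """Parse a raw LLM response and index codings by comment ID.

    Raises ValueError if an entry is missing a required dimension,
    so malformed model output is caught before it enters the dataset.
    """
    rows = json.loads(raw_json)
    coded = {}
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {missing}")
        coded[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return coded

coded = index_by_id(raw)
print(coded["ytc_UgxbkE9SXbPDpabJegJ4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view possible: each coded comment resolves to exactly one dictionary of dimension values.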