Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- “Ai can’t make art in my opinion unless it was conscious, because u can’t make ar…” (ytc_Ugz9G9wv1…)
- “Decisions? Fucking AI does not have a conscience or will; it is fucking code and…” (ytc_Ugw3cKri7…)
- “I say bring it on. At this point, it couldn't do any worse than Humans have done…” (ytc_UgzrbK6gI…)
- “"95% of Gen AI pilots fail" is such a brutal stat, but not surprising. The fragm…” (ytc_Ugy0YY18M…)
- “😂😂😂its gonna replace about a million different jobs aI is gonna be a blessing fo…” (ytc_UgyLXV73k…)
- “lmao he lost me when he said "removal of bias based on gender, religion, and eth…” (ytc_UgxRa41Tv…)
- “Ive been assaulted by a creepy uber driver before so Im fine with driverless car…” (ytc_Ugwd5vDN_…)
- “I've tried multiple psychologists and psychiatrists. I never spent more than 3 m…” (ytr_Ugz9AwZ_5…)
Comment
> so how do you eliminate the human bias that controls the moderation of the machines human bias? doesn't seem to be much help the "limiting of offensive results" only removed "offensive" opinions that google doesn't agree with, either manually or through new human bias influenced machine learning. opinions like that of the man who google recently fired for questioning google's current stance on workplace sexism. even if you agree with google for this example, there could be anything that google finds offensive that you don't. if the only information available is the information not censored by google, whether or not you think that the results would be in your personal favor, the control over what opinions people have access to should be the right of no person or organization.
youtube
AI Bias
2017-09-08T22:2…
♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzlllO_5dApu_u1fzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyhW6a7nhs5Eenu4fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwdRhIPvr1v9Y0a0PV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxmxk0NBv8WRZmXRIB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNC1lufUTKzRJRvXd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzkaAGLi4cd1cmhZmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwB8UU3IEk3wxgi9DJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxOxXDHijwz3eTGELx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwc7Uru4h_CTxKEzyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw0VU55OAYT1tpt4FB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
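The raw response is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a batch might be parsed and sanity-checked in Python follows; the allowed category sets are inferred from the values visible on this page, not from a documented codebook, so treat them as assumptions:

```python
import json

# Category sets inferred from the values observed in the raw responses on
# this page; the actual codebook may define additional labels (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codes by comment ID,
    rejecting any value outside the inferred category sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One entry from the response above, used as a worked example.
raw = ('[{"id":"ytc_UgzlllO_5dApu_u1fzV4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codes = parse_batch(raw)
```

Failing loudly on an out-of-set value is deliberate: a model that drifts from the codebook should surface as an error at ingest time rather than as a silent extra category downstream.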