Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- More speculation. The saturation of "what if" is a p. big tell. Again, the limit… (`ytc_UgyXUCaPU…`)
- When a human get inspired they still draw it, when an AI gets art it takes it… (`ytc_Ugw6yKb-q…`)
- 4 workers on ~£30k salaries actually cost an employer ~£47k each (after pension,… (`ytc_UgyDyr12z…`)
- Also when you spot an AI video dont rush to the comments to call it out, at the … (`ytc_UgxtRLrep…`)
- I'm very excited about AI. But I think it's good that Elon is prodding everyone … (`ytc_UgyxpTY08…`)
- The confidence some people have is wild. Before claiming a neural network is con… (`ytc_UgzdNV38F…`)
- AI has no soul…. Regardless of what it regurgitates, it is going to end. We will… (`ytc_UgzAlhHFS…`)
- I maintain that they are different versions of the same error. I disagree that t… (`rdc_djgqu1k`)
Comment
> I believe we miss-understand AI based on the fears of what movie producer and directors were scared about decades ago. It will never be a evil machine that decides by themselves what they want to do.
Yes. [It's worse](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities). Maybe [this book](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) would interest you.
I recommend [this fiction](https://www.gwern.net/Clippy), written to be a relatively realistic/probable illustration of what might happen.
> The biggest problem with AI's is that it will learn patterns from failed humans. Racism, sexism and many other discrimination patterns will end up in the machine, which will be more powerful in the hands of powerful people rasing the power discrepancy.
That's an incredibly shallow way of looking at it. Consider GPT-3. It's a language model: it's supposed to give an accurate probability distribution over the next token, given any list of tokens before it. It is trained on a corpus of all available text (not literally all, but large enough that the difference may not matter much) to learn to do that. The bigger the model is, and the more (GPU-)time it spends training, the more accurate it becomes.
Now, that corpus will contain racism, sexism, etc., so GPT will be able to output them. Is that _bias_, though? Wouldn't it be bias if it didn't? IMO it's not bias. It's supposed to be a _language model_, and fighting against "bias" makes it wrong at that job.
Much of the criticism was about gender vs. occupation. But if some occupations _are_ gender-skewed, and we _talk about it_, then what is a "non-biased" language model supposed to do? Output falsehoods? Is that non-bias?
A more agent-like, hugely powerful AI would learn these things too, the same as a language model. To the extent these are stereotypes and falsehoods, it will know that as well.
> We have to aim to a AI that is different than us on our prejudices. So I think the questions should be:
This makes me t
reddit · AI Moral Status · 2022-06-15 (1655293415) · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_icg1fkb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_icg3nfm","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_icfzij7","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_icg30ae","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ichf075","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
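As a sketch of how a batch response like this might be parsed and checked before ingestion (the dimension names come from the coding table above, but the allowed value sets here are assumptions inferred from the values seen on this page, not the tool's actual schema):

```python
import json

# Allowed values per coding dimension. NOTE: these sets are assumptions
# inferred from values visible on this page, not a confirmed schema.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "none"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={value!r}")
    return rows

raw = (
    '[{"id":"rdc_icg1fkb","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"}]'
)
rows = validate_batch(raw)
print(rows[0]["emotion"])  # approval
```

A row with an unknown value (say, a misspelled emotion) raises immediately, which is usually preferable to silently coding it into the results table.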