Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I use both Gpt 3.5 (Official web) and GPT 4 (on bing), GPT-4 is actually dumber …" (rdc_jskpi0q)
- "The name of this video is so stupid unbelievable !!! The inventor of artificial …" (ytc_UgwWSfMal…)
- "While it's probably true that the chat bot never said specifically that bromide …" (ytc_UgwUXTu8j…)
- "AI 100% doesn't help with learning. You can just ask for answers and get them, i…" (ytc_Ugx5XfId2…)
- "> US takes Canada / With the longest land border in the world and 40 millions …" (rdc_mcqkyjc)
- "These autonomous trucks could also be given sensors to monitor commuter speeds a…" (ytc_UgzF6qKcg…)
- "Something I find interesting is that there is this assumption baked in that we n…" (ytc_UgwNKsyfk…)
- "So i can destroy ai when i post misinformation and wrong pictures a lot i mean e…" (ytc_UgzwEZKhz…)
Comment
I suspect that the neural nets saying the right is wrong and dumb is mostly because their training data is dominated by people on the left saying the right is wrong and dumb. Of course the right *is* wrong and dumb, much like the left in that regard, but I don't think either is *so* wrong and dumb that you couldn't train a neural net to output more-or-less plausible rhetoric from either side if you gave it the right training dataset. It's not that hard to make either rightist or leftist rhetoric coherent enough that noticing its inconsistencies is beyond the ability of our current AI (and for that matter beyond the ability of most humans, hence the enduring popularity of bullshit political ideologies).
Additionally, don't forget that the neural net's ability to learn patterns is constrained by its own internal structure. On the face of it, it's plausible that rightist and leftist rhetoric are both wrong and dumb but one of them disguises its wrongness more than the other in some way that makes it harder for the neural nets to pick up on. In my experience reading material written by humans from both sides, the right seems more willing to commit openly and concisely to their wrongness, while the left is more inclined to write massive tomes propping up their wrongness with elaborate self-justifying theories that take effort to pick apart. It wouldn't surprise me if this biases AIs towards the left insofar as they don't really do enough reasoning or recognize large enough patterns to identify bad leftist rhetoric as easily as they identify bad rightist rhetoric.
(Just for fun, I asked ChatGPT whether chatbots are more likely to notice mistakes by one side of the political spectrum over the other. Its answer was too long to post here, but it leaned towards the thesis that leftist rhetoric is probably harder for AIs to identify. Like anything it says, take that with a generous helping of salt.)
Source: reddit
Thread: AI Moral Status
Posted: 1750539675.0 (Unix timestamp)
Score: ♥ -9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mz9pzew","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mz1pzdf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mz4d5tm","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_mz0ag0x","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mz0ogi8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
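The lookup-by-ID view above can be reproduced directly from a raw response like this one. Below is a minimal sketch, assuming the model reliably returns a well-formed JSON array with an `id` field plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`); the function and variable names are illustrative, not part of the tool.

```python
import json

# Example raw model output, in the array format shown above.
raw_response = """[
  {"id":"rdc_mz9pzew","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mz0ogi8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index the codes by comment ID."""
    records = json.loads(raw)
    # Drop the "id" key from each record; it becomes the dict key instead.
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["rdc_mz9pzew"]["emotion"])  # indifference
```

In a real pipeline, `json.loads` may raise on malformed model output, so the call would typically be wrapped in error handling that flags the batch for re-coding rather than crashing.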