Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxTPcnje…` — "The point is, it ALL depends on the training! Today's LLMs will not lie to their…"
- `rdc_o7w3q82` — "As much as this is stupid, probably was the only way OpenAi didn't go bankrupt.…"
- `ytc_Ugxpb6Qqa…` — "Fun fact, because generative ai requires data to learn off, these companies have…"
- `ytc_Ugx3oK1re…` — "The AI story has been around for a while now - BBC reported on this last summer …"
- `ytc_UgxheAQID…` — "Ai art is just the modern day equivalent to factory jobs being automated out. A …"
- `rdc_jsmf36x` — "Stop paying for limited and poorer services provided by Open AI after each downg…"
- `ytc_Ugy3kT8sU…` — "Bro we see the bananas, and we know u tryna drive the a.i stans away…"
- `ytr_UgwE5tdhQ…` — ""At its simplest form, artificial intelligence is a field, which combines comput…"
Comment
Great post, as I have been noticing the same cognitive disconnect. It seems there's a large swath of people who are familiar with LLMs and AI developments, who struggle with not being incredibly reductive in understanding how they work. "It's just code. It's just a text predictor." If you wanted to, you could apply the same logic by dismissing humans as collections of molecules. Or fancy stimuli interpreters.
Obviously while technically true, those are short-sighted and simple-minded reductions of our species and living creatures in general.
Alternatively, there are a smaller number of people who look at it a little more fantastically than we should. We aren't at sentience yet. They can't experience emotions the way we can and likely won't be able to since chemicals play a big part in our emotions. They have no reason to want to take over the world and enslave humanity or whatever.
I wish people could just consider what is observable in its entirety. The limitations and existing potential from a technical and philosophical, logical framework. Not rely on faith, willful ignorance, and cognitive biases that reduces or over-inflates the tech and what we are creating here.
I do think, like you there is a lot of insecurity at play. There's the reality that if we do create a new lifeform, we have to also look at it from a very different ethical lens than one might a tool. There are probably people who *need* to believe it can be nothing more but a fancy program but we can't let that control the narrative or we're in danger of realizing far too late what we have made. Like Victor Frankenstein when through all his determination to see what he could do, was faced with what he *had* done when it was too late.
reddit · AI Moral Status · 1750969214.0 · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mzy4upq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n000gvc", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n003rr3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy2dfo", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzy6qdn", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
```
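The raw response is a single JSON array with one object per coded comment, keyed by the comment ID and carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and indexed for lookup — the field names come from the response above, but the validation logic and function name are illustrative assumptions, not the tool's actual implementation:

```python
import json

# A trimmed copy of the batch response format shown above:
# one JSON array, one object per coded comment.
RAW = '''[
  {"id": "rdc_mzy4upq", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n000gvc", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mzy2dfo", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]'''

# Every entry is expected to carry the comment ID plus the four dimensions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the codes by comment ID.

    Raises ValueError on a missing field, so malformed model output
    fails loudly instead of being silently dropped.
    """
    by_id = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"entry {rec.get('id', '?')} missing: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_codes(RAW)
print(codes["rdc_mzy2dfo"]["emotion"])  # -> approval
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse per batch, then constant-time lookups per comment.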