Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgzrAM3Cj…: Wouldn't capitalism collapse on itself when automation reaches that point. Since…
- ytc_Ugxy01VX_…: You seem to forget that Lidars are just a sensor technology. It most probably de…
- ytr_UgxoMhm4G…: "The speed at which we're developing artificial intelligence, beats them all" I…
- ytc_Ugz7uPvAd…: I think students should be conscious about this... I personally don't use AI whi…
- ytc_UgzxwFmNr…: You persuing wrong element of the art. Elitarity that derived from the scarcity …
- ytc_UgzZToLAh…: So happy you guys are saving braincells because that kind of work turns anyone i…
- rdc_oi29kdn: This is NB2. I don't use NB Pro. The reason being that NB2 runs on Gemini flash…
- ytc_UgzAYYy0V…: Off course no. The day they do that, i would stop shopping online. AI is dumb .…
Comment
Please educate yourself on how deep learning actually works before posting about it.
You claim that ChatGPT, when asked to do a film review of Star Wars Episode IV, will look up thousands of similar reviews and mash them together. This is blatantly false. ChatGPT, in its current state, is not connected to any database. Asking ChatGPT something is not equivalent to doing a Google search, where tons of files are parsed and returned based on a search query.
Instead, the query is tokenized, and those tokens are processed and then run through the neural network. A neural network is just a series of interlocking parametrized functions that gives us a response, which is (in this case) a series of tokens forming a movie review.
Yes, it was trained on previous movie reviews that helped determine the weights of those functions, but no database is referenced once the model is trained. This is a common misconception; the trained network is often 1/10,000th the size of the training data, or less, meaning it simply can't be storing that data in a database.
I agree that ChatGPT is often overhyped in its use cases, and it certainly isn't anywhere close to an AGI or to having human-like sentience.
But it's also important to be precise about what we are discussing. You say that humans have intentionality and ChatGPT doesn't. Where does that intentionality come from in humans? Where in our neural networks does this capability for intentionality and emotion emerge? I raise these questions because the way you describe how ChatGPT works isn't fundamentally so different from how we work, and it learns in much the same way a human child learns to speak a language.
reddit
AI Governance
2023-02-13 (Unix timestamp 1676271971)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j8cfik0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_j8cnq18","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_j8cqgzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_j8cefas","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]
```
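The raw response above is a JSON array with one coding object per comment ID, one object per coded comment. A minimal sketch of how such a response could be parsed and indexed to support the comment-ID lookup described at the top of the page; the `parse_codings` helper and the `OBSERVED` value sets are illustrative assumptions (only labels that appear in this sample), not the tool's actual code or codebook:

```python
import json

# Value sets observed in this sample response and the result table above.
# These are assumptions inferred from the page, not a definitive codebook.
OBSERVED = {
    "responsibility": {"none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"resignation", "outrage", "indifference", "mixed"},
}

# The raw LLM response, copied verbatim from the page above.
raw = """[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j8cfik0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_j8cnq18","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_j8cqgzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_j8cefas","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]"""

def parse_codings(raw_json: str) -> dict:
    """Parse the model's JSON array and return {comment_id: coding},
    rejecting any dimension value outside the observed label sets."""
    by_id = {}
    for entry in json.loads(raw_json):
        for dim, allowed in OBSERVED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry.get(dim)!r}")
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw)
print(codings["rdc_j8cnq18"]["emotion"])  # indifference
```

Indexing by ID is what makes the "Look up by comment ID" feature cheap: once parsed, each coding is a constant-time dictionary lookup rather than a rescan of the raw response.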