Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Sounds like it could be a boot strap paradox. If humans never ate the fruit th… (`ytc_Ugyqc1WNT…`)
- I have the solution for combating AI. Go over to the wall and unplug it. As for … (`ytc_UgyqYuaUt…`)
- I asked my AI chat App how we can make AI make decisions that will only help eve… (`ytc_Ugyq0Yexs…`)
- Typically, gamma is viewed as part of the problem, not of the algorithm. A reinf… (`ytr_UgyJMpeIH…`)
- You think 95% of this sub (being generous) even know how to use an open source L… (`rdc_m943lll`)
- You won't be able to adapt fast enough. This is a rat race the majority of peopl… (`ytc_UgyLlsLpr…`)
- One day we are going to find out that Ben was actually a robot pretending to be … (`ytc_UgyLce7Io…`)
- That is rather interesting what pain is to robots. In reinforced learning we def… (`ytc_UgyS3o_P3…`)
Comment
People pooh pooh subjects like philosophy but a lot of these conversations are borderline useless without defining what it means to "think" and other terms that get thrown around very freely. It's very impressive technology, but I also don't think something like AGI is right around the corner.
Having said that, I have a pretty wide view on what "thinking" is. I don't have to do anything more than simple math operations in my head to say that I'm thinking. Good to keep in mind that the original "computers" that inspired Alan Turing were people who just did calculations for various scientific projects, often women. Computers were always bases on trying to initiate human cognition. Now machine learning, causal bayes nets, parallel processing and the use of networks have been another huge step again by imitating human cognition since we've learned way more in neuroscience. Something like AGI is a whole nother leap in my opinion. Probably decades. That said, current LLMs should still keep improving at least a bit.
Source: reddit · Thread: AI Moral Status
Posted: 2025-06-26 18:46 UTC (Unix 1750963561)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mzwbxyq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzwccmt","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},{"id":"rdc_mzwqggy","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzx5215","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mzxjaio","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"})
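Note that the raw response shown above closes with `)` rather than `]`, so it is not valid JSON, and none of its IDs match the displayed comment. A coder that falls back to "unclear" when a comment's codes cannot be recovered would produce exactly the all-"unclear" result in the table. Below is a minimal sketch of that behavior; the function names and error handling are illustrative assumptions, not the actual pipeline code.

```python
import json

# The four coding dimensions from the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
UNCLEAR = {d: "unclear" for d in DIMENSIONS}


def parse_batch(raw: str) -> dict:
    """Map each comment ID to its coded dimensions.

    Returns an empty mapping when the model output is not valid JSON
    (e.g. a stray ')' in place of the closing ']').
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return {item["id"]: {d: item.get(d, "unclear") for d in DIMENSIONS}
            for item in items}


def lookup(codes: dict, comment_id: str) -> dict:
    """Codes for one comment, defaulting every dimension to 'unclear'."""
    return codes.get(comment_id, dict(UNCLEAR))


# A well-formed single-item batch, mirroring the response format above.
good = ('[{"id":"rdc_mzwbxyq","responsibility":"none","reasoning":"mixed",'
        '"policy":"none","emotion":"indifference"}]')
# The same batch with the closing ']' replaced by ')', like the raw
# response shown in this section.
bad = good[:-1] + ")"

print(lookup(parse_batch(good), "rdc_mzwbxyq")["emotion"])  # indifference
print(lookup(parse_batch(bad), "rdc_mzwbxyq")["policy"])    # unclear
```

Under this (assumed) fallback, a parse failure or a missing ID degrades gracefully to "unclear" across all four dimensions instead of crashing the coding run.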