Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As a counterpoint, one could argue that the most dangerous AI is a very specialized one with a limited scope and goal.
Take a real-life virus as an example. It has a very specialized set of instructions: infiltrate, replicate, and loop. It takes no view that it is killing its host, or that the majority of its offspring will not survive when the host dies, or that its most efficient process works against its own longevity and posterity.
It's not an apples-to-apples comparison, but my point is that I'm not worried about an over-engineered AI; I'm worried about a flawed one that cannot discern its own shortcomings.
Source: reddit · AI Bias · Posted: 1438052206 (Unix time, 2015-07-28 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_cthnpj3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_ctho0g5","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ctikd3g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_cthsv2d","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_cthnmx1","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]