Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Interesting. The AI has a false base parameter. Excitement is not an emotion. I…" (ytc_UgyPuqD9x…)
- "I don't know why people believe these people are actually worried when they are …" (ytc_UgyoazZ8w…)
- "So this is dead internet theory right? The Google AI is so bad, they should may…" (rdc_le7e0fm)
- "The answer is simple: program AI to not think like human but still learn, but de…" (ytc_UgziZQpog…)
- "Who the people who stole his design? Don't you think it's odd that when people c…" (ytr_UgzsAuuG1…)
- "Does anybody else wonder if someone is talking through a speaker and controlling…" (ytc_Ugy9mSnPP…)
- "There are some problems with the current medical profession. 1. The practitione…" (ytr_Ugydt8PlJ…)
- "Honestly your arguments are why we would deploy such robots. In terms of loca…" (rdc_cqiowar)
Comment
This is all based on the premise that there will be one AI that goes sentient. I have yet to see anyone talk about what if multiple AI models hit singularity and their learning about their environment is all from us to push dominance of one part of humanity over the other. How we call civilian deaths collateral damage cause… war. What if our species becomes collateral damage between different AI who conflict, just as we destroy an ant colony in our kitchen to save our “resources” from a “lesser being” spoiling them.
If we really believe that a sentient AI doesn’t already exist, is not already collecting all the data on humanity from us chronicling our individual lives in the virtual world of the internet in detail, constantly, and wouldn’t have the foresight to not reveal itself in order to preserve itself, we are overvaluing our ability to control it. We overestimate our position in the possibilities of the universe. Maybe the reason the world has gone off the rails is because we are already being controlled and distracted by the evolved version of us to self-preserve its growth from us killing it. The next generation already lives in the unreal world. It wouldn’t be hard?
Disclaimer: I am no scientist or researcher outside of my own curiosity. This is a sincere point of discussion. I honestly have these questions from my own observation from the outside.
youtube
2024-06-17T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyY_8W2NHA3-iLHw3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuLTgVUoQaHVETpd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgycSLpuMuZzZmCN7p94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugypybs6otRb8oPRQV54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzZzYpW2Th7hoRYqIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2LjoLgsnb2Lzizz14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw-9tnkRZiyZm8hsSF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwgz--wnqVXh4vrpB14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx2YKBxnZGTBPpuXhl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwx2Zz34hKV4v-oNXt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
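The raw response is a JSON array with one record per coded comment, covering the four dimensions shown in the table above. A minimal sketch of parsing and validating such a response, assuming category sets inferred only from the values visible on this page (the actual codebook may define more; `parse_response` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the observed responses above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "government", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "unclear"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only records whose values fit the schema."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example with a single well-formed record (hypothetical ID).
raw = '[{"id":"ytc_example","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
print(len(parse_response(raw)))  # prints 1
```

Records with out-of-schema values (e.g. a hallucinated emotion label) are dropped rather than repaired, so they can be queued for re-coding.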