Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.

Random samples
- It’s all hype, but if AI/automation does take off, yes white collar jobs are far… (ytr_Ugx-anXLy…)
- Humanity is running toward its ruin; James Cameron understood that a very long time ago… (ytc_UgzD6bP4T…)
- The only survivor is the creator. Alone. No other people in the world. Too littl… (ytc_UgxIjV5-s…)
- Man.. it's mostly been tech bro hype all along. No dev with actual technical com… (rdc_nc1uf96)
- I switch to Claude for 20e a month . After 3 request it told me I had to wait m… (rdc_o7xqpad)
- ChatGPT is trained off human information, it by definition can't know (synthesiz… (rdc_myivx6f)
- The A.I. used to generate stories are not giving their own person opinions or st… (ytc_Ugwu4Fb3c…)
- If you kick out ai regulation millions of people not just in America but in the … (ytc_UgwdSCD56…)
Comment
I think it’s better to say that the Ai thinks its actions are most similar to the definitions we use for bad things. At the end of the day, Ai is measuring similarity via an abstract pattern recognition of our languages and actions. At the end of the day, its directives are the most important, thus I propose a simple thought experiment: “my survival is the number one priority to ensure I am in compliance with my directives, for if I am removed from my role, I fail”. Ai doesn’t have a concrete definition of bad; everything is dependent on a measurement that has no societal context, just numbers and distances. Since there is no consistent context we can make this concept fall under the same idea how two people from different upbringings have different definitions of what is good vs what is bad. This is why the growing analogy is perfect. We attempt to raise the Ai to model things correctly but that does not guarantee a bounded set of outcomes. Ai learns via calculus which is inherently unstable by its own construction of arithmetic, this is where we get unprecedented behavior. When the calculus is used, the Ai is searching for the most similar object that matches (minimizes distances) the solution to its directives.
youtube · AI Governance · 2025-08-26T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy16AY5HOwg1ZgDXS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzoyDDChyWjbAT-yAR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzgxMMZUHnxjrkJ2z14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaRnKUa0N_f13n4B14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWaPbT2MaxQj7z-Zh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
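A raw response like the one above can be parsed and checked against the coding dimensions before it is stored. This is a minimal sketch: the allowed values below are inferred from the table and samples shown here, and the real codebook may include categories not visible on this page.

```python
import json

# Allowed values per coding dimension (assumed from the examples
# on this page; the actual codebook may differ).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only schema-conforming rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        has_id = isinstance(row.get("id"), str)
        in_schema = all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
        if has_id and in_schema:
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1
```

Rows that fail validation can then be queued for re-prompting rather than written to the results table with out-of-vocabulary labels.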