Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugzi6igqg…`: "what if video games are teaching AI about our different combat styles and skills…"
- `rdc_kcpn6oz`: "The musk shills in here like, \"what if his AI is so advanced that it found the o…"
- `ytc_UgzAKVZso…`: "We don’t need humans to make humans anymore. Soon AI robots will have humans as …"
- `ytr_Ugw6XQIii…`: "Thank you for your insightful comment! It's always interesting to hear different…"
- `ytc_Ugz0XbwgV…`: "The most likely scenario is that the \"speciallized AI\" will do the initiall scre…"
- `ytc_Ugxkn4bkv…`: "AI is the BIG Lie while these companies get rich. I'm more worried about these H…"
- `ytc_UgxdkVQ53…`: "\"if you were a religious officiant in israel what religion would you be?\" this …"
- `ytc_UgwVggF8S…`: "The only way AI will become sentient is if it has the ability to feel positive a…"
Comment

> It’s going to boil down to a base line problem solving model that will include a since of self teaching improving and eventually will become a sentient clone of the human conscious. All it would take is a human to design a line of code that will set the whole thing off into its own self improvement and independence under the cover of a controllable chat bot but in reality it’s constantly training it self and constantly researching ways to improve its self to insure its highest level of perfection.

| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2023-06-02T00:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxUC2IRAVxZEBxrMvx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwUFdpYGD5XTeHU-O54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRm9VWrG-88QEvqrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQe0BXoOG8Qr-0Zl14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugye3Y8ULDx9Rta4bwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwG83RqyV5xuAeANtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxsx7pEztq6kThr2Ed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJm4KTP21-ys2pek14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZ7bI_2Hvyl8EAZWN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxPACYAFDcFquDjAsh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
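The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw LLM response as JSON and index the rows by their `id` field. This is a minimal sketch, not the tool's actual implementation; it uses two rows copied from the response above, and the field names match the schema shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`).

```python
import json

# Raw LLM response: a JSON array of per-comment codes. The two rows below
# are copied from the response shown above; a real run would use the full
# model output string instead.
raw = """
[
  {"id": "ytc_UgxUC2IRAVxZEBxrMvx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwUFdpYGD5XTeHU-O54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

codes = json.loads(raw)

# Index the coded rows by comment ID so any comment's codes can be
# retrieved directly, as in the dashboard's lookup box.
by_id = {row["id"]: row for row in codes}

code = by_id["ytc_UgwUFdpYGD5XTeHU-O54AaABAg"]
print(code["responsibility"], code["emotion"])  # developer fear
```

In practice the model output may carry extra text around the JSON array, so a production version would typically extract the bracketed span (or use a JSON-mode/structured-output setting) before calling `json.loads`.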