Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It think if AI gets good enough to write all code, then it will be AGI. Because …
ytc_UgxUOdXGS…
Did eveeyone miss the 3 AI robots that unalived 24 scientists? Or the AI bot tha…
ytc_UgwUBwSSu…
I don't think it is so black and white. I am an artist and I now use AI... It is…
ytc_UgxqQ3iVo…
I'll become interested in AI art once AI has a conscious experience and feelings…
ytc_UgyIi4pg2…
People confused between AI or real
Me : that some real looking painting. Who is …
ytc_UgzVvM4SO…
This is like calling your self an artist because you commissioned someone else t…
ytc_Ugw3eljF8…
I would be a member of the one true religion
Jedi Order
It is not t…
ytr_UgyU2ZtIZ…
As a computer scientist that has dedicated plenty of time researching AI, I alwa…
ytc_UgxoDiz1j…
Comment
The A.I. everyone is referring to is better termed a more advanced expert system. If one feels that a cybernetic device is aware of them under conditions of everyday life or in warfare, it is only the result of humans anthropomorphizing these objects. No matter how sophisticated, they are only expert systems driven by rules. And remember, these are limited to binary coding. No matter how complex the algorithm, no matter how complex the program, they are expert systems with very, very, very limited autonomy. The prohibition against attempting to integrate an array of dangerous instruction code involved in recognition of targets would, in experimental stages, result in "self-aware (I'm using the term very sparingly) the drones or robots attacking the people building them. Their threat is overwhelming. And if another country tries to use drones with the always-poor ability to discern friend from enemy, our drones would be more numerous and own them on any battlefield.
Source: youtube
Posted: 2018-04-03T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwOHWOKk4mrzdvx6r14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwD5zjsOKm381BRwkp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKd151bVJvhA7QixJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6QvnlNOFGFQu3fph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhO-M52B0Uwg-KnsF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyMkcekidWccs1Q-Pt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyoglIh54yKKwxPFFh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWqUR3rjToEfHbbWB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugytlda4Gn_CBnVOaCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwP0Zt_2THvcx3nmvV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
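The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID is below; the `parse_codings` helper is hypothetical, and the allowed value sets list only the codes observed in this sample, not a full codebook.

```python
import json

# Codes observed in the sample response above; assumed, not an exhaustive codebook.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "approval", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into a dict keyed by comment ID,
    checking every record against the four coding dimensions."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Two records copied verbatim from the response above.
raw = '''[
 {"id":"ytc_UgwOHWOKk4mrzdvx6r14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzhO-M52B0Uwg-KnsF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''
codings = parse_codings(raw)
print(codings["ytc_UgzhO-M52B0Uwg-KnsF4AaABAg"]["policy"])  # → regulate
```

A validation step like this catches malformed or off-codebook LLM output before it is stored, which is why the "Coded at" timestamp in the result table can be trusted to refer to a well-formed record.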