Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "this was in austin, west campus neighborhood for anyone wondering. literally 5…" (ytc_UgwxCS4sO…)
- "We hate the ai data centers here in the usa. Everywhere they are built our power…" (ytc_UgzOQlA10…)
- "We are nowhere close to super intelligence. Large language models are very good …" (ytc_UgyPzjxo-…)
- "i see you making sure to blot out your face before you upload to those AI video …" (ytc_Ugzu5MRhL…)
- "Thing is, just because everyone gets to use it doesn’t mean they will. I live in…" (ytc_Ugwyst-lM…)
- "What if Ai doesn't have anything like a soul. Nothing but one in zero until it g…" (ytc_UgwOFfzGw…)
- "do NOT fuck with AI. I repeat. DO NOT FUCK WITH AI. lol Its like provoking the B…" (ytc_Ugy_i0-eP…)
- "The idea that ai can only operate by \"stealing artists work\" is semi disingenuou…" (ytc_Ugyai-_D1…)
Comment
> experts in the field have been warning about this from the start. Including Alan Turing who in 1951 warned of the loss of control of AI once it reached a certain level of intelligence. In more recent years experts like Stuart Russel have been warning of the threat posed by Deep Learning and the AI that it produces.
> An AGI agent doesn't even need to have hostile intents towards people to be an existential threat, it just needs to have objectives that are at odds to human interests. And as AI produced through deep learning algorithms is black box, we have no way to even determine what an AGI agent's objectives even are.
> Instrumental convergent objectives, things like self optimization, self preservation and resource collection make it almost inevitable that AGI will come into conflict with human objectives.
> Self optimization means that by adding hardware and through recursive learning, an AGI agent that was on par or slightly more intelligent that a human could rapidly increase to 1,000s or even millions of times more intelligent than us.
> It would be able to predict anything we might attempt to counter its actions and formulate "solutions" to us we can't even imagine.
> This won't be like The Terminator or Matrix, this will be more like Independence Day with an alien intelligence we will never out think that would have no problem wiping us out like a human wiping out an ant hill.
Source: youtube · AI Governance · 2023-05-02T21:5… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
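The four coded dimensions above can be sketched as a small validation schema. A minimal sketch, assuming the label sets are exactly the values observed in the responses on this page (the tool's real codebook may include more); the `SCHEMA` and `validate` names are hypothetical:

```python
# Hypothetical label sets, taken only from values seen on this page.
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate(row):
    """Return the dimensions whose value falls outside the known label set."""
    return [dim for dim, allowed in SCHEMA.items()
            if row.get(dim) not in allowed]

# The coding result shown in the table above passes validation.
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(validate(row))  # []
```

A check like this catches the common failure mode of LLM coders inventing off-schema labels before the row reaches the database.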
Raw LLM Response
[
{"id":"ytc_Ugyld8lS1Lbi7Q5aeA94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY4FQS2tF-eMsRyJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzc6ZODGn5_N2v86X94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_vNAzoWqEYz3WU2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQ2LvhgvLvci3Ly3R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWzAKv0KE4l9ouHbZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJdEehGqp52tqRi_d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyum-s1Afq3LAOke9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy2fJU5ENxoYx3tiId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugwm25GvSd0wTCeUTcF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
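Looking up a coded comment by ID, as described at the top of this page, amounts to parsing the model's JSON array and indexing it. A minimal sketch (the function and variable names are illustrative, not the tool's actual API; the sample entry is the one matching the coding result above):

```python
import json

# One entry from the raw LLM response above, reproduced for illustration.
raw_response = '''[
  {"id": "ytc_Ugzc6ZODGn5_N2v86X94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

def index_by_id(response_text):
    """Parse the LLM's JSON array and map comment ID -> coded dimensions."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

coded = index_by_id(raw_response)
print(coded["ytc_Ugzc6ZODGn5_N2v86X94AaABAg"]["policy"])  # regulate
```

Because the model returns one batch array per request, indexing by `id` once and reusing the map is cheaper than rescanning the array per lookup.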