Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yep .....the demons are already using satan tools to brainwash its people . AI …" (`ytc_UgwfLUAJW…`)
- "2:00 oh ok, when they bring robots into an automotive plant, that is not taking …" (`ytc_Ugw38s4qr…`)
- "They are lying to you in the water that comes out comes out of steam and the ste…" (`ytc_UgwK43MLr…`)
- "Oh, heres an Alternative, Humans break away from the current heirachy, create a …" (`ytc_UgwUn_UbR…`)
- "Probably eventually get there, but the 80/20 rule where the last 20% takes longe…" (`ytc_Ugwz9aZBx…`)
- "That is why it is being offered for free now so that we can test the limites of …" (`ytc_UgxDjDCzk…`)
- "For those saying this doesn't work or will not work, your children are already a…" (`ytc_UgwWnju_2…`)
- "XPeng gait is slightly more typical to humans, but the Optimus looks more smooth…" (`ytr_UgzUpyH2F…`)
Comment
Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good.
That's a deep misunderstanding on what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals. An agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this, but doesn't go very deep in the explanation.
Melanie mentions the superintelligent "dumb" AI, thinking that it's silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want. The orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values, it's not that the superintelligence is "dumb".
Also, they don't seem to understand instrumental convergence.
I would love to have a deep discussion with them, and go through every point, one by one, because there seem to be a lot of things that they don't understand.
youtube · AI Governance · 2023-06-26T00:4… · ♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
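The table above maps the four coding dimensions to the values the model assigned for this comment. As a point of reference, here is a minimal sketch of that record schema in Python; the field names come from the raw response below, and the value sets include only the labels visible on this page, so the pipeline's full label vocabulary is an assumption.

```python
from typing import Literal, TypedDict

# Sketch of one coded record as it appears in the raw LLM response below.
# Only labels observed on this page are listed; the coding pipeline may
# define additional values for each dimension.
class CodedComment(TypedDict):
    id: str  # comment ID, e.g. "ytc_UgwMSBDoNzy8g3RLmlt4AaABAg"
    responsibility: Literal["developer", "government", "none"]
    reasoning: Literal["deontological", "consequentialist", "mixed", "unclear"]
    policy: Literal["regulate", "none"]
    emotion: Literal["outrage", "fear", "mixed", "resignation"]
```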
Raw LLM Response
[
{"id":"ytc_UgykRfsieqhf-rMm-5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzX0yN29IQbhWEw8uN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCkAi5xQLPUGT9ju54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8xg_TAUp50sGdgEh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRg0KJemLVpW6t2ex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuxRs_BKrl6JIqN_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzq-DKeLeBVAkbdxkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxLjjJkfQCEtw0eyUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMSBDoNzy8g3RLmlt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwp8jS3Ka-LbhS0UCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
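The response is a plain JSON array with one record per coded comment, so the "Look up by comment ID" view only needs to parse it and filter on the `id` field. Below is a minimal sketch of that lookup; the function name is illustrative, the response is shortened to a single record for brevity, and that record appears to be the one behind the coding result shown above.

```python
import json
from typing import Optional

# Shortened copy of the raw response above (one record kept for brevity);
# this record appears to correspond to the coding result shown earlier.
RAW_RESPONSE = '''[
  {"id": "ytc_UgwMSBDoNzy8g3RLmlt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def lookup_by_comment_id(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of coded records) and
    return the record whose "id" matches comment_id, or None."""
    records = json.loads(raw_response)
    return next((rec for rec in records if rec.get("id") == comment_id), None)

print(lookup_by_comment_id(RAW_RESPONSE, "ytc_UgwMSBDoNzy8g3RLmlt4AaABAg"))
# -> {'id': 'ytc_UgwMSBDoNzy8g3RLmlt4AaABAg', 'responsibility': 'developer', ...}
```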