Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
9:19 Non work use of ChatGPT now up at 73%, it is a leisure practise, the opposi…
ytc_UgxNV2DwY…
You know why they want law for AI actually? Because they dream about that natura…
ytc_Ugwn_44Hx…
hoping for for AI take over terminator style. It'll be interesting to see it go …
ytc_UgwhHbO3g…
one thing i can always rely on to spot ai art is logic. If theres something ther…
ytc_UgwYPbo9y…
Petition to call it AI imagery instead of AI art!
To use the term ‘art’ in its …
ytc_UgwpYUj42…
So sad to hear someone as stupid as he has manage to raise money and organize bu…
ytc_UgxnIwr_3…
I use to play around with this idea on pallafiumbooks heroes and I started out a…
ytc_Ugx2MWVJg…
I'm a huge fan of ai and it really peaked my interest when you said "I have no d…
ytr_Ugx6YnfaH…
Comment
The thing about superintelligence is, by definition it needs to be smarter than any human. If we train it on human-generated data, or synthetic data that is produced by algorithms designed by humans, how can we possibly train it to be smarter than us?
There seems to be this assumption that if we make a "general" intelligence, aka an AI model as smart as any human and just as capable, then it will naturally follow that the general AIs will be able to figure out how to make a super AI.
But...... why do AI companies and researchers have that assumption? If we humans can't figure it out, there's no reason to believe something no more intelligent than we are will be able to.
And if they do figure out how to make a Super AI, at best it will be controlled by the General AIs, at worst it will not be controlled at all, but either way, it won't be controlled by humans. And we think it's a good idea to be striving towards this goal because.......?
youtube
AI Moral Status
2025-10-31T14:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzoYZLwz1hvNcmWdih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzqRekSJOzVfIBImfh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzeFkkpaR4Jdj5J5J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxMQgb3wFL9aJnLrj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9NqqZ5u5z9bOVc754AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4lYL_D-jVZDsPA9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhN7AlDS6bIJ4PAGh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydiU7eVhVJv35V0xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweoqkAkh4nIO_Iwwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
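The raw model output above is a JSON array with one object per coded comment, keyed by comment ID and carrying the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and validated before being stored — the `index_codings` function and its error handling are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Excerpt of the raw batch response shown above (two entries, for brevity).
raw = '''[
  {"id": "ytc_UgzoYZLwz1hvNcmWdih4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"}
]'''

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment ID.

    Raises ValueError when an entry lacks an ID or a dimension, so
    malformed model output is caught before it reaches storage.
    """
    by_id = {}
    for entry in json.loads(raw_json):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry without id: {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        by_id[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg"]["reasoning"])  # consequentialist
```

Note that the second entry matches the coding-result table above (reasoning = consequentialist, the other three dimensions unclear), which is how a "look up by comment ID" view can map a table row back to the exact raw model output.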