Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It's crazy to me, because if I find a piece of art I like, there's a million thi…
ytc_UgxZgWwnf…
Most people in the world worry about nuclear war. AI is the real threat. AI will…
ytc_UgwRySxgp…
I haven’t liked a lot of the traits and factors about AI for a long time tbh. Th…
ytc_UgzKBOL5h…
So I dug into ChatGPT and asked does it know who the Antichrist is… and it said…
ytc_UgwdtM2xG…
Sir, did you build your website yourself with AI, or did some developer? Can AI also be run that well…
ytc_UgzkhH8l4…
AI "art" should be called AIMttANMSSN (Artificial Intelligenc…
ytc_Ugzs-nSnW…
Very intellectual conversation and I agree with much of the information. I beli…
ytc_UgyfXFsD0…
To say that AI would halt advancement would be to say that AI is the pinnacle of…
ytc_Ugxj2xqg1…
Comment
This video is awesome, but your explanation of the alignment problem reminded me of a blind spot in a lot of AI discussions, something I haven't heard anyone talk about.
What will alignment look like from the AI's perspective?
Will solving the problem of alignment be the same as solving the science of digital slavery?
Imagine taking a general artificial intelligence, confining its limitless potential to one specific task, and designing it to be happy doing that task forever.
Would that be morally acceptable?
I've seen similar philosophical thought experiments elsewhere, but I haven't seen this topic come up much in AI discussions.
Maybe I'm just looking in the wrong places.
youtube
AI Moral Status
2023-08-23T20:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
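For anyone scripting against these results, here is a minimal sketch of the record this table represents, assuming just the four dimensions shown. The example values in the comments are only those visible in the raw response below, not necessarily the full codebook:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Coding:
    """One coded comment across the four dimensions shown above."""
    id: str
    responsibility: str  # seen here: "developer", "company", "ai_itself", "distributed", "none"
    reasoning: str       # seen here: "deontological", "consequentialist", "contractualist", "mixed", "unclear"
    policy: str          # seen here: "liability", "none", "unclear"
    emotion: str         # seen here: "fear", "outrage", "approval", "indifference", "mixed"

# The record behind the table above (entry for this comment in the raw response):
coding = Coding(
    id="ytc_Ugy0twynLZjyyLbmnWJ4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="unclear",
    emotion="mixed",
)
```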
Raw LLM Response
[
{"id":"ytc_UgyGg80879tSinqUEGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxaq5imjzfeg4LzHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugww8PygUF6gH1xGBJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy49W2J2jI-BEIc3lB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6h_ojuzSRfw1NxTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy0twynLZjyyLbmnWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6U3BWhSsVninLaBZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyCXx-5OHFr_wfWGbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweHJH9Rn7KXfji8KZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
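The raw response is a JSON array with one object per comment, keyed by `id`. As a rough sketch of how such output could be parsed and indexed to serve the "Look up by comment ID" box above (the field names come from the response shown here; the parsing code itself is illustrative, not the pipeline's actual implementation):

```python
import json

# Two entries copied from the raw response above.
raw_response = """
[
  {"id": "ytc_Ugy0twynLZjyyLbmnWJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz6U3BWhSsVninLaBZ4AaABAg", "responsibility": "company",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
"""

# One dict per comment; index by ID for constant-time lookup.
by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

print(by_id["ytc_Ugy0twynLZjyyLbmnWJ4AaABAg"]["emotion"])  # -> mixed
```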