Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@lazer8776 The problem is we can't be sure that AI will do that. It can totally…
ytr_UgwRKClpc…
Chatgpt is a bunch of numbers tied to words. The patch was likely "bromide" = -1…
ytc_UgzGgvh_R…
It's all about the money. Strangely enough the con AI side used the asteroid pro…
ytc_UgwLVGaFF…
This is a stupid truism, but I think AI is overestimated for what it's capable i…
rdc_mleta9g
Weaponized mass Nazism is one of THE worst alignment scenarios ai safety researc…
rdc_n22nb9t
I'm convinced that anyone who speaks highly of AI or uses it doesn't live in a w…
ytc_Ugzebl-e4…
ChatGPT is a joke, Go to 'Han Meditation' to see ChatGPT is Choosing Islam 😂…
ytc_UgypNhvC5…
So would it be okay to use ChatGPT as a doorway into to find things to look into…
ytc_Ugz1LM9vo…
Comment
Maybe we should ask another question.
What would the corporation do if their product started acting "conscious"? Act as in every other case where a product does NOT behave as advertised or intended: call it a "minor bug", release a patch for it, and continue selling it? Maybe pay or ask some tech content creators to produce a cover-up story? And the payment would come through advertising a specific program that gathers information about all of the user's activities under the guise of a "secure and private connection"?
Later, that activity would be used to further train AIs to predict and (eventually) control people's behavior.
Have you heard about the recent Reddit scandal, where a university used AI posing as human users to persuade Reddit users to change their views?
youtube
AI Moral Status
2025-07-09T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwx2Pm6TGUHZSdZ0IV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeBicFs6vyKaWl8xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpfqaHN6iD5TSw0HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO-ME2IxthoL3ykqF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgypCoR8t1-AxkY_4Mp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0j7dRb-pcTJOQ5Vh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRLvmI_j7AZkWW3E14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJHMTTnlVZPVwV8tx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbxRfJEyHabuwcqLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzoHETvsGwt1LpIssB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]