Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by its comment ID; a minimal lookup sketch follows.
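As a sketch of that lookup, assuming the coded records are stored as a JSON array shaped like the raw response at the bottom of this page (the file name `coded_comments.json` is a hypothetical placeholder):

```python
import json

def lookup_comment(path: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expects a JSON array of coded records
    # each record carries its comment ID under the "id" key,
    # as in the raw response shown at the bottom of this page
    return next((r for r in records if r.get("id") == comment_id), None)

# hypothetical usage, with a full ID taken from the raw response below
record = lookup_comment(
    "coded_comments.json",
    "ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb",
)
print(record)
```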
Random samples
- Robot who has trigger waring: U SON OF A B*** MAKING A FU**** MESS I DONT EVEN… (ytc_UgwlSQVhN…)
- Like the planned ai power center being built in my town with no input, while we… (ytc_Ugz2ZBAHT…)
- Needing to convince my friends to stop using ai and just talk to me instead when… (ytc_UgynqrhT9…)
- If I tried to "prompt hack" a human into reproducing a New York Times article th… (ytc_UgwJi-w_o…)
- This was once highly classified. See and listen carefully to the 4-part Area 51 … (ytc_Ugw8xUwJT…)
- I'm worried that you guys have not fully separated AI and especially AGI hype fr… (ytc_UgzrsBEw7…)
- 10:14 ironally I asked for the birthday to chatgpt , he said I don't know 😂😂😂😂.… (ytc_UgyqDLM_e…)
- Even if in ten years most companies decided to just hire a couple of SEs and let… (ytc_Ugy0zzYz7…)
Comment
> Agreed, and yes there are many pitfalls to avoid indeed!
>
> But as you said, if the AI's interest are our own, I don't see this scenario happening. Now, if it had the planet's interest "at heart" then I would agree.
>
> But an empathic AI would take a look at what we're doing to our world and to each other as an unfortunate reality in the present, and then work to improve the lives of those who are forced to destroy the planet or other people in order to survive themselves: in order to prevent that from happening in the future.
>
> If an empathic AI just says "These people are incorrigible. The only way is to kill them all." then it ceases to be empathic, in my opinion.
Source: YouTube · Video: AI Moral Status · Posted: 2022-07-01T20:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
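Read as a codebook, the dimensions above suggest closed value sets. The sketch below models them as Python enums; the member lists are an assumption inferred only from values visible on this page, and the real codebook may define more.

```python
from enum import Enum

# value sets below cover only what appears on this page; the full codebook may differ
class Responsibility(Enum):
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    DEVELOPER = "developer"
    GOVERNMENT = "government"
    UNCLEAR = "unclear"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    MIXED = "mixed"

class Policy(Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    UNCLEAR = "unclear"

class Emotion(Enum):
    APPROVAL = "approval"
    FEAR = "fear"
    HOPE = "hope"
    OUTRAGE = "outrage"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"
```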
Raw LLM Response
[
{"id":"ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugy49zPvjcoeD1N9Dmx4AaABAg.9cvDc66ZmJv9cvE3BPfWrb","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cvyXcfip51","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9czcZpjlwOK","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwxNRYvbQf","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"hope"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxAEpaD38O","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxH5SAjs87","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxQYQ9tVE7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
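Since the model's raw output is a JSON array like the one above, a validation pass before ingesting codes is cheap. The sketch below (the `parse_batch` helper and `REQUIRED_KEYS` constant are hypothetical names) keeps only records that carry the five expected keys.

```python
import json

# the five keys every coded record is expected to carry, per the array above
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw model response and keep only well-formed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    return [r for r in records if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

# hypothetical usage: raw_response_text holds one model reply verbatim
# good = parse_batch(raw_response_text)
```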