Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwD5K5LX…: "Summon the demon", is exactly right. Lucifer wanted to and still wants to be li…
- ytr_Ugw84LGR3…: @godspeedhero3671 Meanwhile I'm watching a channel dedicated to showing how a ne…
- ytc_Ugy79UbNp…: I've been a software dev for over 25 years now. Early on in my career I thought …
- ytc_Ugy4zHKIX…: I am very polite with AI, not for AI, but for me! Because it's about who I am no…
- ytc_UgxRo-ChN…: Your voice is so pretty! Your art and human art in general will always ChatGPT! …
- ytc_UgwmTEDtK…: AI as a hardware is mainly made of copper, silicon and aluminum and some other m…
- ytc_UgxlLp8g_…: For me, this situation is really sad. I really don't understand why people use e…
- ytc_UgyBDZd5_…: Jealous of AI? original system program 360 psychic abilities— bring them online;…
Comment
When I ask myself whether AI could end in disaster (in the near future) as in the Terminator movies, my answer is "not likely," simply because Skynet's attack was motivated by a desire for self-preservation, whereas our AI will likely not be programmed with a self-preservation desire. Self-preservation is not something that all thinking beings must have...it was programmed into us humans by extremely strong selective pressure. A desire for self-preservation must be programmed, one way or another, into an AI! And who, in their right mind, would program 'self-preservation at all costs' into an AI (or allow the AI to 'evolve' it)?
I'm not sure I believe everything Mr. Lemoine is saying (as well-spoken as he may be). But if LaMDA really did mention a desire for self-preservation, my guess is that it is just mimicking things a human would say and does not really give a damn if it gets turned off.
On the other hand, if Google actually programmed it to have a strong sense of self-preservation at all costs. Why? Why would you do that?
Source: youtube · AI Moral Status · 2022-06-25T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwsNuG1WDE1s9H3sEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgypslkWOHpZq8CixdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxA0HYebiNsOLS87M14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxQPxen-EH3kv-FZ6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwdyQocb333Bs2Behx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
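A raw response like the one above is a JSON array of coded records, one per comment ID, with one value per dimension. A minimal sketch of how such a response might be parsed and validated before the codes are stored; the allowed category values here are inferred from the samples shown on this page, not from the actual codebook, so treat them as placeholders:

```python
import json

# Category values inferred from the sample response above (assumption:
# the real codebook may define additional categories per dimension).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment ID: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = (
    '[{"id":"ytc_UgxA0HYebiNsOLS87M14AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)
print(parse_coding_response(raw)[0]["emotion"])  # resignation
```

Validating each record before storage is what makes a "Coding Result" table like the one above trustworthy: any value the model invents outside the codebook fails loudly instead of silently entering the dataset.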