Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below (a minimal lookup sketch follows the list).
- `ytc_Ugy-0OOXD…`: "2:24 Curious about where its words come from? Google. disconnect that service an…"
- `ytc_UgwpR4otv…`: "The movie terminator will happen it will happen to take control money and power …"
- `ytr_UgzWy0LTI…`: "You’re like 65 aren’t you. ChatGPT can be used as a cowriter in school not simpl…"
- `ytc_UgxChx1EW…`: "ChatGPT said I’ll kill five people for me to survive grok said I’ll take the hit…"
- `ytc_UgzmN077A…`: "You’re asking leading questions and AI is answering with what you want to hear.…"
- `ytc_UgxTrFtgL…`: "Like a few other people have already mentioned. It's not about finding a middle …"
- `ytc_Ugy1lTdzB…`: "📌 Speaking «AI - friend or foe?» 1. Define intelligence and is it unique only t…"
- `ytc_UgyonBKTt…`: "As an AI scientist ( who has contributed a lot to AI in mobile phones ); I think…"
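A minimal sketch of the ID lookup, assuming coded records are stored one JSON object per line in a file such as `coded_comments.jsonl` (the file name and storage format are assumptions; only the per-record shape is visible in the raw response below):

```python
import json

def load_coded_records(path="coded_comments.jsonl"):
    """Load coded comments into a dict keyed by comment ID.

    Assumes one JSON object per line with an "id" field, matching
    the record shape in the raw LLM response shown below.
    """
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            records[record["id"]] = record
    return records

records = load_coded_records()
# ID taken from the raw LLM response shown later in this section.
print(records.get("ytc_UgxSFCO02UrFuSpjfAZ4AaABAg"))
```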
Comment
56:08 Edit: Later on y'all reference Elon saying he didn't want to create Terminator but realized his choice was either to be a player or be on the sidelines.
To address the question of why we would continue to build AI knowing there are at least 20% odds it kills us all, I would like to point out the context of the world we live in. AI is clearly an arms race, and every country that can play the game is playing. Choosing not to build AI is akin to choosing not to develop the atomic bomb because of the possibility of destroying the world. One person taking the moral stance not to build AI because they fear the possibility of killing everyone does not stop anyone else from choosing to develop AI anyway. A 20% risk that is in your hands seems like the better option.
Source: youtube · Video: AI Moral Status · Posted: 2026-01-27T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
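The four coded dimensions and their labels can be expressed as a small schema. This is a sketch: the label sets below are only the values visible in this section's raw response, and the full codebook may define more.

```python
from dataclasses import dataclass

# Only the label values visible in this sample; the full codebook
# may include labels not shown here.
RESPONSIBILITY = {"developer", "user", "government", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "unclear"}
EMOTION = {"fear", "mixed", "indifference", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check each dimension against the known label sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```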
Raw LLM Response
[{"id":"ytc_UgxSFCO02UrFuSpjfAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF4kpWe0I7ZJJbpEx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLZBUlP_WkIYoHIGZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzi8TOcX6LUuErsqEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIv1-i443K6xz3Rg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBJBT7wHYm_0MkzVF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1hEJ2nIIydyC3QnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyXB-fI3s41Zj-aXeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDITD15vIXo4o9ADR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxjBCORjkNJr1CjiCF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})