Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID directly or by browsing random samples.
Comment

> We are literally 100% beyond the point of stopping AI from being autonomous military death weapons. Neil said it himself, if your system requires human approval, you're at a time disadvantage. This does not sound good. We should be more concerned. Dr. Hinton was informative and hilarious and I appreciate his honesty.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2026-03-01T22:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugym54oiLUt1TYNQSq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxHdKsm0gkRKlKROlN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0zdC9ezKAIy8U1b54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHR5jrfIZrHMd-Ng14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwPe_H4u0iP_6B5AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzEP8QtAxXI2iKBInF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwvP5rLLHKrz3O3fGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKhCYHV9sVevWPXLZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwpRuexR2h9H6l9nt54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzyGMgTSlPoKXYRanZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
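Since the model codes comments in batches and keys each record by its comment ID, the lookup step can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the `raw_response` excerpt is taken from the sample batch above, and the function name `lookup_coding` is hypothetical.

```python
import json

# Excerpt of a raw batch response as returned by the coding model (from above).
raw_response = '''[
{"id":"ytc_UgzEP8QtAxXI2iKBInF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwpRuexR2h9H6l9nt54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if it is absent."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

coded = lookup_coding(raw_response, "ytc_UgzEP8QtAxXI2iKBInF4AaABAg")
# coded is the dict with policy "regulate" and emotion "fear".
```

Indexing the parsed array into a dict keyed by `id` makes repeated lookups O(1), which matters when inspecting many comments against one cached batch response.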