Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
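A lookup like this can be a simple prefix match over stored records, which would let truncated IDs such as those shown below resolve to full records. A minimal sketch, assuming the coded results live in a JSONL file with one record per line (the file name `coded_results.jsonl` and the function are hypothetical; the real store could equally be a database or a single JSON array):

```python
import json
from pathlib import Path

def lookup_by_id(comment_id: str,
                 store: Path = Path("coded_results.jsonl")) -> dict | None:
    """Return the first coded record whose ID starts with `comment_id`.

    Prefix matching lets a truncated ID like "ytr_UgwRk1PXN" resolve
    to its full record. The JSONL layout is an assumption, not the
    documented storage format of this tool.
    """
    with store.open() as f:
        for line in f:
            record = json.loads(line)
            if record["id"].startswith(comment_id):
                return record
    return None
```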
Random samples (click to inspect):

- "So, in short, they're teaching AI to terminate us. Wow, this sounds like a good …" (ytr_UgwRk1PXN…)
- "@RightAmount Art is something made with a purpose, with one's own experience - …" (ytr_UgzJu1L3m…)
- "14:08 Simple give them a trogent by making an ai genaration program that has a v…" (ytc_Ugw7EyOTe…)
- "I think that is just the way it was programmed. Most people hate going to the de…" (ytc_Ugz47N3ow…)
- "@syzygy4669 idk what world you’re on, but in the REAL industries no one cares if…" (ytr_UgxDkqp4P…)
- "This wouldn't happen. Radiology is still one of the most competitive specialties…" (rdc_fct29td)
- "What about when an AI humanoid robot actually uses conventional paint and paper …" (ytc_UgySnEGld…)
- "There was NOT enough emphasis on job loss..... Its more like job extinction....…" (ytc_UgxjjpCdu…)
Comment

> I’m not sure why we think that a super intelligent AI would lack empathy and compassion along with all the other types of intelligence that prevent us from killing each other. LLMs are trained on human data and human behaviour. Very intelligent people aren’t secretly wanting to take over the world and kill everyone. The super intelligent supervillain only really happens in films. Greedy world leaders are not the most intelligent. Why would AI want to kill humans? I think this a way of amplifying self-deprecating thoughts in which we believe that humans are terrible, unworthy of our own existence and that any super intelligence will undoubtedly understand it and terminate us. I don’t think so

Source: youtube · Topic: AI Moral Status · Date: 2025-04-28T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
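Every coding result follows the same four-dimension schema, so it maps naturally onto a typed record. A minimal sketch of that schema in Python, restricted to the values visible in this sample (the full codebook may define additional categories, and the type names are mine):

```python
from dataclasses import dataclass
from typing import Literal

# Value sets observed in this sample only; the full codebook
# may include more categories (assumption).
Responsibility = Literal["none", "ai_itself", "company", "distributed", "user"]
Reasoning = Literal["virtue", "consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["none", "regulate", "liability"]
Emotion = Literal["approval", "fear", "outrage", "indifference", "resignation"]

@dataclass
class CodingResult:
    """One coded comment: ID plus the four coding dimensions."""
    id: str  # e.g. "ytc_…"/"ytr_…" for YouTube, "rdc_…" for Reddit
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

Typed this way, a validator can reject any model output whose values fall outside the codebook before it reaches the store.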
Raw LLM Response
```json
[
  {"id": "ytc_Ugzfo3t_x5p2hPuetO54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw24u-Pk_DJohHEiNJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzRUsoYOkImYDiW0SZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJ9__zD5djP_96Aj14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyAjUrLQpEhxQjNf9d4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxZEGeWDFdUDc9QUwN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxjGNYjwAvjCW0_cJB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSLDWkernqrasS9ZN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwhiFVzfYHj8ac6pkZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxjiizwYWXxZdegUBh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
```
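The model returns one JSON array per batch, so each raw response can be parsed and indexed by comment ID before individual records are stored or displayed. A minimal sketch, assuming the raw text is exactly an array like the one above (the function name and the skip-malformed policy are my choices, not the tool's documented behaviour):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_batch_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded records)
    and index the records by comment ID."""
    coded: dict[str, dict] = {}
    for record in json.loads(raw):
        # Keep only records carrying the ID and all four dimensions;
        # skipping malformed entries avoids failing the whole batch.
        if REQUIRED_KEYS <= record.keys():
            coded[record["id"]] = record
    return coded
```

For example, `index_batch_response(raw)["ytc_UgzRUsoYOkImYDiW0SZ4AaABAg"]` would return the third record, the same coding shown in the table above for the displayed comment.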