Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples:
- "AI is so obviously going to play a huge role in how we do simple day-to-day task…" (ytc_UgwXBqHxe…)
- "You need to seriously seperate ai users. I train models based on my art to allow…" (ytc_Ugx5EgTwl…)
- "Everything will become alot more expensive and the poor will have no choice but …" (ytc_UgzmYg3g5…)
- "In addition to liabilities I think there is a huge political aspect that we will…" (rdc_jhskgwt)
- "As a artist and a computer science major this AI art theft is so stupid. AI is a…" (ytc_UgxwC7BRP…)
- "Why make those ai looks like human?? Dont mimic human. Represent them as machine…" (ytc_Ugy0U39FH…)
- "Out of all the braindead takes here, yours takes the shitty cake. No AI could be…" (ytr_Ugyvbu4xB…)
- "Listen how he says that the UK is worried about brown people versus AI, that com…" (ytc_Ugz7w6qKF…)
Comment
As I listen here, I am starting to understand a truism about AI: AI understands the past, and can know something about the future it encourages with its answers.
TL;DR: as long as we limit the content an AI consumes because of copyright, and limit AI to a learning mode and an answering mode, it will not be able to track the results of its operations in the greater scheme of things; so even if you tell it to care, it won't have the information to care that it is ruining the world.
If I have piqued your interest, read on.
We are very reluctant to let an AI learn from our conversations with it and spill them into other people's conversations with the same AI. It feels like it is "stealing your soul." No, sorry, that's the camera thing; it feels like it's plagiarizing our thoughts as its own.
So, as a result, AIs are trained on non-real-time data, data that somebody created in the past: things that were relevant and nuanced then, but that the same person, with more experience, might not say the same way in the current environment. I say all this to say that an AI is kept in the dark about the results of its answers in the global sense. There is a significant delay in the feedback about whether things are going well or badly in the world outside the AI, both because it has to wait to hear how others react and because not all of that information ever reaches it. Even if, in the future, it is trained to care and to modify its output when it is criticized in the world at large, it wouldn't necessarily receive all the inputs that contain those criticisms.
I would say this tends to silo the AI into the role of historian rather than philosopher and confidant, so building a super-historian and asking it to do everything in the current day is bound to be hard to align with the results that come from everyone following an AI's advice.
Source: youtube, "AI Moral Status", 2025-10-31T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQ6cX3vzGK0IYWCip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsZXVqHuryCnOFNR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyeD4KB3mZTSgAfyTt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdrjBu_20OJFahPuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz957vNq8JtwrGAZ3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
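The lookup-by-ID and per-dimension table shown above can be reproduced from a raw response with a small parser. Below is a minimal sketch in Python, assuming the model always returns a JSON array of records with these four dimensions; the `SCHEMA` sets are inferred from the values visible on this page, not from the project's actual codebook:

```python
import json

# Allowed codes per dimension, inferred from values seen on this page
# (hypothetical -- the real codebook may define more or different labels).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "government",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes) into a
    dict keyed by comment ID, rejecting any record with out-of-schema values."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

# Usage: parse one record from the response above, then look it up by ID.
raw = ('[{"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxqeZPWCijSy8vLmfV4AaABAg"])
```

Validating against the schema before storing matters because an LLM can emit labels outside the codebook; failing loudly on such records keeps the coded dataset clean.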