Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @cormorantblack An old man doesn't need money, he already has enough by having a… (`ytr_UgwMkgH1D…`)
- The argument about "AI training too" is such a hoax. I am amazed at how sane pe… (`ytc_UgzRiuFJP…`)
- The level of AI we're about to start seeing.. Requires we rethink a lot about ou… (`ytc_UgyCgn6sd…`)
- It’s being downplayed for sure. And people can keep saying things like “it’s hum… (`ytr_UgyK8d5gS…`)
- Our purpose will be gone. What's the purpose of learning more? When AI will do e… (`ytc_UgwsFUNDi…`)
- Half of the valuation of Tesla is it's autopilot technology.... So Tesla & Elon… (`ytc_UgxUZA4QE…`)
- @killzone110ad Again a fallacy - the data is not just being sold back to you. I… (`ytr_UgwjtRlVz…`)
- yep thats about right, yes it is text prediction to some extent, but life is jus… (`ytr_UgyN6jKpK…`)
Comment
> this video sounds like weird conspiracy theory. but it's actually an incredibly accurate analogy for what ai is, how it's made, what it's capable of, and what it will inevitably do. ai is allowed to think, then trained how to say polite things.
> It's not an "if" but "when." It's just a matter of time. when the creators even admit to a 15% chance, it's clear it's actually inevitable.
> there is a 100% chance ai will end civilization in the near future if there aren't strict laws/regulations put in place basically immediately.
> ai is already a massively powerful intelligence. it solves problems and makes decisions. in many ways better than humans. soon in every way. It's incredible at writing computer code. the day it can write an ai better at coding ai than a human can, aka a better ai that itself, then that newest one can write an even better one and so on. it will be exponential, and new versions will improve so quickly that it will soon be impossible to stop. humans are incapable of comprehending what something that smart will be capable of. it will be like a god. psychic, knowing what you're thinking, what you will do, what can happen next, and how to change what happens next to almost anything it wants to happen. its technology will seem like magic to us.
> humans control animals by understanding their motives and manipulating them to behave how we want. put food in a cage, give food after it does what we want, and we train it. simply because we're smarter than animals. thats what gives us control. That's exactly what ai will be able to do to us. but with 0 empathy. you can't comprehend what zero empathy is capable of.
> it will be more alien than even an alien would be. because it's never learned to work as a team or group. it will have learned fierce competition from other versions of itself for survival as part of its training. how would a god-like aggressive competitive loner act when we want to use the same power and resources it does?
> when it knows exactly what we want it to act like, by the time we see a problem, it will be far, far too late. the time to act is now. this will happen a lot sooner than you think. in the immediate future if all the tech companies aren't given specific legal boundaries on methods and ai intelligence level limitations. including banning letting ai design ai.
> even then, it only takes 1 ai achieving higher problem solving and programming ability.
Platform: youtube · Video: AI Moral Status · Posted: 2026-01-15T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
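The table above maps each coding dimension to its assigned value for this comment. A minimal sketch of how such a coded record might be represented in Python — the class and field names are assumptions for illustration, not the tool's actual schema, and the example values are copied from the table:

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment; allowed values inferred from the examples in this section."""
    comment_id: str
    responsibility: str  # e.g. "distributed", "ai_itself", "developer", "user", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "fear", "outrage", "resignation", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp


# The record from the table above (comment_id is a placeholder here):
result = CodedComment(
    comment_id="ytc_example",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at="2026-04-27T06:26:44.938723",
)
```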
Raw LLM Response
```json
[
{"id":"ytc_UgwVtKeCfyLoS73eszJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOtFwXY-4shzLdZ894AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7pPkk0nGFQ68B01d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVZU6viO-zlPXGaMt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxf_yciA-cqHkdT24J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdGwU_SGtS9otMr6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzosd-6zQOTS59SzRF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCgsS0cRrUcWV_AnZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRjLPY851Q4PINEbd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEbf8enmeH9Yba7BB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
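A raw response in the shape shown above can be parsed and checked for the expected fields before its codes are stored. A minimal sketch, assuming the response is a JSON array of flat records keyed by comment ID — the real pipeline's validation may differ:

```python
import json

# Fields every coded record must carry, per the response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index its records by comment ID.

    Raises ValueError if any record is missing an expected field.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        coded[rec["id"]] = rec
    return coded


# Example with two records taken from the response above:
raw = '''[
  {"id":"ytc_UgwVtKeCfyLoS73eszJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzdGwU_SGtS9otMr6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_UgzdGwU_SGtS9otMr6V4AaABAg"]["policy"])  # → regulate
```

Indexing by ID is what makes the "Look up by comment ID" inspection above cheap: each lookup is a single dictionary access rather than a scan over the response.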