Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@Mars9313 Thank you for your unique perspective on the video, it's so fake that …" — `ytr_UgwEditLu…`
- "If I saw that I wouldn’t know if it was a robot because it looks so realistic…" — `ytc_UgwTeqdat…`
- "but u know OPENAI made by DEEPAI since start 1990 and u can show ur picture and …" — `ytr_UgxPH_-R0…`
- "I will argue with anyone about this topic, is not gonna happen, not even 20-30 y…" — `ytc_UgxtbqQZ8…`
- "real… i used to use ai for my books but they turned out bad an cringe (like fuck…" — `ytc_Ugxi6j84L…`
- "That AI didn’t do that out of its own volition. It was explicitly told to solve …" — `ytc_Ugw2Fb4Po…`
- "i respect this guy for standing up to Elon and not letting him buy OpenAI…" — `ytc_UgzyVq9LT…`
- "@leavemealoneandgoaway artists losing their job because people would rather writ…" — `ytr_UgzLF51o6…`
Comment
Ugh, this feels like AI propaganda and it's unfortunate because you would expect this guy of all people would not try to inflate what AI can do like this. AI being smarter than humans is not a concern because AI doesn't have actual intelligence, it can't reason about things, nor does it understand anything in the first place. The only thing that a LLM does is figure out what might come next. Neural networks also don't give computers the ability to "think like humans" because they don't give computers the ability to think at all.
This guy hit all of the beats that I would expect to see from somebody trying to shill AI as a silver bullet solution. And meanwhile there are research papers that completely show the opposite, a big one being the recent Apple paper.
youtube · AI Governance · 2025-06-18T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwCUz5SWmui9Nyblm54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwTxNMj5AtkyR0EmAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxveLHQgZMMyYIT33h4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNgMkZa5iJH9WcdRJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugybnxrhd6sZsJYF8xN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwmH0oLfRntngwD8ch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxVNn4DaKBToIB98sp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzf5g64r-rP-9Q3h0x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwK9PQERP5buzMhmAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgydFORy-ca_LZDJuGN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
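The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and sanity-checked before merging it into the coded dataset — the allowed values below are only those visible in this dump, and the actual codebook may include more categories:

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# dump (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes or IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dump use "ytc_" (top-level) or "ytr_" (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment ID: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_UgwCUz5SWmui9Nyblm54AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
rows = validate_response(raw)
print(len(rows))  # 1
```

Failing fast on out-of-schema values is useful here because LLM coders occasionally emit labels outside the codebook; rejecting the whole batch keeps the coded table clean.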