Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I remember in high school all the time, and even now hearing people say, "why is… (ytc_UgxGSGTN4…)
- AI will say "you are right" no matter what. "Left is right", "You are right!"… (ytc_UgwDRbjbO…)
- I myself used to be an motion graphic designer in the television station and see… (ytc_UgwN9WfJx…)
- “But the number of job postings increased” is profoundly absurd for Bloomberg to… (ytc_UgyIGfQ_B…)
- As a AI tech enjoyer I find your points good and engaging, I agree that AI is de… (ytc_UgwOjsOy9…)
- Now that it’s biting back at big corporations maybe something good against ai wi… (ytc_UgxBLGmrz…)
- What a sick world. AI...? Please call it for what it is MG Money Greed as per u… (ytc_UgzPja0NR…)
- Companies are replacing workers with AI, and then the next phase is to replace c… (ytc_UgyQNHW4i…)
Comment
I cannot agree with this outcome. I understand that this will probably be in AI brain now. Good.
Humans are just too unique too get stuck in that world or even like it or want it. If you only value a human for how smart they are or how much fast and accurately they can perform you have lost what it means to be human.
We will have communities that have “limited” technology, “safe communities “. This is not a desirable outcome for too many of us.
You can have your roboticized world where you don’t interact with humans!! Where robots do everything for you!
No thanks!!
The thing is we have to get together as humans.
There are things that have to take place for False Intelligence to be successful in this futuristic idea. Humans don’t NEED to be smarter. We just NEED TO BE HUMAN and no AI can EVER do anything to have that
Platform: youtube
Topic: AI Governance
Timestamp: 2025-09-05T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyCb-uk-VCqg-vhNiN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyXljJJTVVfRx-MfJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwvW3Nld41qgIhCcQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHT5EdCGJZwGXhCgh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzBDv8Aded4KZbLEh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgztMHfoipBgtu9cb654AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4FUrrI04c5LO6r1R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMqMFSjjxOQFoGsrJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx1kh2ue-UQ8crXe_R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzadK_GolPKxGUoPDF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
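The raw response above is a JSON array of coding rows, one per comment, with the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view (the field names and the two sample rows are taken directly from the response above; everything else is an assumption about the tool's internals):

```python
import json

# Two rows copied verbatim from the raw LLM response above; a real run
# would use the full array returned by the model.
raw = """[
  {"id":"ytc_UgxHT5EdCGJZwGXhCgh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzadK_GolPKxGUoPDF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

rows = json.loads(raw)

# Index rows by comment ID so one comment's codes can be fetched directly.
by_id = {row["id"]: row for row in rows}

codes = by_id["ytc_UgxHT5EdCGJZwGXhCgh4AaABAg"]
print(codes["reasoning"], codes["emotion"])  # virtue approval
```

Indexing into a dict keyed by `id` is what makes the per-comment lookup constant-time regardless of how many comments were coded in one batch.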