Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
ytc_Ugy75YFkx… — I think it's easy to place too much importance on whether or not the AI is senti…
ytc_UgwXZYhx_… — I’m hoping AI becomes a style or prompt machine more than a statue of David arti…
rdc_m6xnle5 — Software engineer salaries are some of the biggest items on a corporate balance …
ytc_UgzzpVCj1… — I didn’t see anything about Billie showing up at the met gala I think she was th…
ytc_Ugzm2jLzX… — I saw some AI generated music a few days ago, and as an aspiring musician, this …
ytc_Ugww6a0pm… — What happened if that car decided to hit that cop and yall didn’t tell him it wa…
ytc_UgzLfih4f… — Anyone else noticed that Fox has been almost pushing the AI lifestyle these days…
ytc_UgwrVfQZx… — This is misinformation on a massive scale. Elon is using unlicensed turbines — i…
Comment
Machines are not sentient, period, even ai, I don't care how advanced it is.
Pure and simple, they can't feel pain. At some point we might program them to respond to what we find painful, but it will be pain immulation and not actual pain.
The droids in Star wars are not sentient, as they feel no pain even when fully disemboweled like c3po was in empire, and Chewbacca wore him like a backpack.
Pure and simple, it's a plagiarism machine, eliminating positions people used to fill, making unnecessary pollution, and causing mental and social issues in its users, all so it can sell your info to corporate entities.
It must be stopped, pure and simple. The cancer detector will treat a white person before it treats a black person in greater need (it's actually racist).
Meanwhile people losing social skills, AI cults, etc etc etc.... it's all doing more harm than good, many times more harm than good.
Source: youtube · Posted: 2025-09-17T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyVwfV_3ZKlN6j1vdJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcpI8GFeWCtbM1xD14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyr4yahbJ8Xh7Vqj5h4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4RmGV6TiZVTKRSNx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwCpUyT_ZGPVHqcS0h4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWS4YHA9nrwCzV9lh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyGnu6q-V-oOQzW0154AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxvHsHQyMCqRKb1QA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyCuF3ZBi8HWPf0yNh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrCC_1twW89EKkYK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]