Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgxKBeXqd…: GPTHuman AI is solid, it fixes that stiff chatgpt vibe and helps bypass ai check…
- ytc_UgyHXMh_6…: I guess there's a sadistic person that AI tool. Pure evul. RUP, Young Soul. More…
- ytc_UgyhCZ-k-…: I can't remember where I heard this, but someone once said "The best way to make…
- ytc_UgyG1aR9H…: Maybe AI would have created something more along those lines if you actually gav…
- ytr_UgzNdL6u_…: @verde5738 A reasonable response. I noticed personally that I don't need any spe…
- ytr_UgyQmg4yK…: @marksmithwas12 I see where you are coming from. As someone that loves video ga…
- ytc_UgwdJ32Rd…: As long as the posisend ai wont be used in self drive cars im happy…
- ytr_UgxHodCfg…: @aquarieaux1443 Modern AI inference is probabilistic, opaque, and pattern-based …
Comment
I don't think we're remotely close to having sentient robots and so calls to think about robot rights are very much premature and silly right now, but I think it's pretty problematic to use that to justify claims that robots as a class simply could never have rights at all. If we had a C3PO or a Johnny 5 in real life, it'd be very clear to anyone interacting with it that it's more of a person than a toaster is. An LLM certainly isn't a person in any sense and that is technology that is being used to oppress real humans so I get it, but it just seems weird to categorically state robots should not have rights. It's just weird conflating AI as it stands now with sentient robots. They're nothing alike.
Source: youtube · 2025-10-10T20:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
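
For reference, each coding result can be modeled as a small record with these four dimensions. The sketch below is a minimal Python rendering, assuming only the value sets visible in the samples on this page (the full codebook may define additional categories).

```python
from dataclasses import dataclass

# Value sets observed in the coded samples shown on this page;
# the full codebook may include additional categories.
RESPONSIBILITY = {"none", "company", "developer", "user", "ai_itself"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "liability", "ban", "industry_self"}
EMOTION = {"indifference", "outrage", "fear", "resignation"}


@dataclass
class CodedComment:
    """One coded comment, matching the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        checks = [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]
        for value, allowed, name in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```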
Raw LLM Response
[
{"id":"ytc_UgyHQdLBGQnbG9XrN254AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2-U8V_1q-TWjUPq94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzc_kbYPP3J64STzy94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEVCLRlTMLKUUwh214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYcdap3hipnwL8NOB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZya8GCYlGmIy1E-t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxt30yf0E4kLC9Kltp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzPLHzLfnfp8BFo8Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLoSrixqg5Oi_3bb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNcoeJd3YSuEbyqYl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
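
Because the raw response is a JSON array of per-comment codes, looking up a single comment ID (as described at the top of this page) amounts to parsing the array and filtering by `id`. A minimal sketch, assuming the response text is available as a string; the export filename is hypothetical.

```python
import json


def find_code(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID from a raw batch response.

    `raw_response` is the JSON array emitted by the model (e.g. the block above).
    Returns None if the ID was not part of this batch.
    """
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Example: pull the record that matches the Coding Result table above.
raw = open("raw_llm_response.json").read()  # hypothetical export filename
print(find_code(raw, "ytc_UgyHQdLBGQnbG9XrN254AaABAg"))
# {'id': 'ytc_UgyHQdLBGQnbG9XrN254AaABAg', 'responsibility': 'none',
#  'reasoning': 'deontological', 'policy': 'none', 'emotion': 'indifference'}
```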