Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Could you re-upload specifically the section on AI and Fair Use (About 39:25 - 4…" (ytc_UgwQ9T_N1…)
- "ugh.If people really want to make good art, they have to learn and practice for …" (ytc_Ugxb7Dec9…)
- "Sam, you’re building warehouses the size of Manhattan while 5D memory crystals a…" (ytc_UgzLBU3VO…)
- "This aint funny, imagine this going on a rampage killing everyone and your just …" (ytc_Ugz_JCKLb…)
- "NP + AI = replacement doctors, all MDs/DOs will all be replaced, dont bother goi…" (ytc_Ugwg8GErc…)
- "These jobs, just like manufacturing will be lost to automation, time has proven …" (ytc_Ugz8xT7uO…)
- "A lot of bad AI behaviour has to do with training data from bad human behaviour.…" (ytc_UgzwF1UKe…)
- "I think most people don't have a correct picture of the art world. Sure, there i…" (ytc_Ugxf5XGyb…)
Comment
>It is categorically unfit to make decisions where safety stakes are high, from aerospace to medicine to education.
This reminds me of a submission title which passed along here recently which posited a hypothesis what significance it would have for AI to "transcend" humans at ethical thinking, which for me is just emblematic of how people fetishize AI into something it's not, with potentially dangerous consequences.
For one, an AI or an application like Chat-GPT is an LLM, it doesn't know and it's not consciousness. It is also not a unified subject, it can't form or have an ethical framework. It is a tool, not a "person" who we as humans are having conversations with from our own, differing perspectives so as to add up all the answers so we can attempt to retrieve Chat-GPT's "ethics". Chat-GPT just doesn't work like that.
Second, I think a crucial aspect of ethics is that we humans are capable of reflecting on our ethics, that we through mutual questioning we can come to find out how we have arrived at those ethics from particular principles, and to reflect on the conditions in which we have acquired those principles. I could have a certain norm or ethical belief, and by reflection on how I arrived at that belief and under what conditions, come to conclude that there was something wrong with those conditions or with a particular fact that was of crucial importance to that norm or belief. Again, Chat-GPT also can't do this because it's not a unified subject, and because it can't actually reflect like that. And even suppose it could, could we really assume that as a product of a certain company which controls it, it can really without coercion and optimal knowledge reflect on its own beliefs and the conditions in which it "acquired" those beliefs? I don't think that's very plausible.
Then of course, compounding the above is that the approach of politics as "applied ethics" is fundamentally mistaken. For one, some political theorists think that the point of pol
Source: reddit · AI Jobs · 1750025579.0 · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_my0rhat","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_my1hom9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_mxya5ad","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"rdc_mxzdf30","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"rdc_mxzec21","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
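When scripting against these panels, the raw response is plain JSON and can be indexed by comment ID to recover any coded row. A minimal sketch in Python (the field names come from the response above; the lookup itself is illustrative, not part of the tool):

```python
import json

# Raw LLM response exactly as shown in the panel above.
raw = """[{"id":"rdc_my0rhat","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_my1hom9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_mxya5ad","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"rdc_mxzdf30","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"rdc_mxzec21","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]"""

# Index the coded rows by comment ID for quick lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The row for rdc_mxzdf30 matches the Coding Result table above.
row = codings["rdc_mxzdf30"]
print(row["policy"], row["emotion"])  # → ban fear
```

The same pattern works for spot-checking a batch: parse once, index by ID, then compare the coded values against the rendered table for any sample.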