Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- "AI definitely thinks, and it might be conscious; at least Claude AI likely is. …" (ytc_UgzHtGrjY…)
- "Learning is exponential for machines, once a.i learns something it jusy goes dow…" (ytc_Ugyf4e3v6…)
- "There's a pretty good book called 'Prediction Machines' that covers some of the …" (ytc_UgxNRGpvk…)
- "I call foul. Saying publicly that Elon Musk has no moral compass is itself an im…" (ytc_Ugyi9lUxg…)
- "@therealJFK1963 Then you have no reason to support AI image generators in their …" (ytr_UgxPjDyeO…)
- "I think the argument in favor of AI is stupid. It’s taking jobs and frankly can’…" (ytc_UgwEY8Kke…)
- "Isn't the story about the guy cheating on his wife and was gonna shut down the A…" (ytc_UgwehfMYW…)
- "i just went through the comments and i love what I saw, most people are pro AI a…" (ytc_UgzdMWkAN…)
Comment
You can only believe AI will take over from humans if you believe humans are basically machines. Machines taking over from machines. But we are so much more than machines. We have consciousness. How can you program the big C in if you don't even know what it is or where it comes from. Scientists say, "Once AI gets complex enough, consciousness will arise." That, my friends, is religion: Believing a future event will occur without a shred of proof, just because your belief system predicts it will. Might as well believe in the rapture as well. Think of it. Why do some humans want to dominate other humans? Ego, urge for power, urge for freedom, arrogance, fear, sometimes even love. Think you can program any of that into a machine? If AI has none of the feelings that cause humans to seek dominance, it will never be more than our suppliant servant.
youtube · AI Governance · 2025-07-10T01:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwGYtOldqooJXt2nld4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzGQ3rIxnctZP9IcnJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzJuPW5bAqBD_rQACN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxDRErnmSs7eW3oUqt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgywAQ8eQqLOwQlemzV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8a9fNxTN9Df0H3J14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSKFXPshAzKZ2MPwR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOPEr2EmJXGAk5L6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXhqS3Ts31HidxRPV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxUoC1RNw64oEwJzFp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
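The raw response above is a JSON array of per-comment codes along the four codebook dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed, validated, and indexed for lookup by comment ID — the `SCHEMA` value sets are only those observed in this batch, not a confirmed codebook, and `index_codes` is an illustrative helper, not part of the actual pipeline:

```python
import json

# Value sets observed in this batch; assumed labels, the full codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Reject any record whose value falls outside the known label sets.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record from the response above — the one matching the Coding Result table.
raw = ('[{"id":"ytc_UgzSKFXPshAzKZ2MPwR4AaABAg",'
       '"responsibility":"none","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')
codes = index_codes(raw)
print(codes["ytc_UgzSKFXPshAzKZ2MPwR4AaABAg"]["emotion"])  # mixed
```

Validating against a fixed label set at parse time catches the most common LLM-coding failure mode, a value outside the codebook, before it silently enters the dataset.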