Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- [ytc_UgynJLZmW…] I mean is not all bad, i beging to do ai """"art"""" and games with that "art" a…
- [ytc_UgwhTzeUR…] This will end with humanity dead, except several billionaires, each dug up in hi…
- [ytc_UgwHlPPT6…] Easy answer: shareholders. The shareholders are following Gartner, who was promo…
- [ytc_Ugy8I3y99…] I find it interesting that the most realistic ones seem to memic a high exposure…
- [ytc_Ugxm0hVYT…] Sheer goosebumps when when the chat bot said all that about not being deleted. M…
- [ytc_UgzAZGiLH…] How about just don't give AI the rope to hang us with? Limit memory, limit, reac…
- [ytc_UgzZRekPL…] If a human can hack into any network just imagine what a AI learning cpu could d…
- [ytc_UgyA41dTX…] Why do people think Agi will want to work for free and be your slave? Why would …
Comment
Would you be able to expand a little on the work you do without doxing yourself? I'd be interested to hear the background to this perspective. I hadn't thought that LLMs could do CEO work but your explanation of their value in writing code lines up with what I've felt while using them
reddit
AI Jobs
Posted: 2025-01-13 16:33:50 UTC (Unix timestamp 1736786030.0)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_m6xsf7a","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_m6xwe80","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"curiosity"},
{"id":"rdc_m6xxalk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_m6xq1qx","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_m6yh79f","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
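The raw response above is a JSON array of per-comment codes, one object per comment ID. As a minimal sketch of how such a batch could be parsed and indexed for lookup by ID (the `parse_codes` helper and the `EXPECTED_KEYS` schema are assumptions inferred from the sample, not the tool's actual code):

```python
import json

# Two records copied from the raw batch shown above.
RAW_RESPONSE = """
[
 {"id":"rdc_m6xsf7a","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_m6xwe80","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"curiosity"}
]
"""

# Assumed schema: every record carries an ID plus the four coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a raw batch response and index the codes by comment ID,
    rejecting any record that is missing an expected dimension."""
    records = json.loads(raw)
    codes = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        codes[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return codes

codes = parse_codes(RAW_RESPONSE)
print(codes["rdc_m6xsf7a"]["emotion"])  # indifference
```

Validating the keys up front makes malformed model output fail loudly at ingest time, rather than surfacing later as a missing dimension in a coding-result table.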