Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's an up hill battle vs all the big tech money going into ai. Profit is to be …" (ytc_UgwgdkzGB…)
- "Music isn't made by AI. AI takes music that is already made by humans. Now, will…" (ytc_UgyxgDinp…)
- "Go back to the region and fight for your people or go live in Afghanistan and co…" (ytc_UgyVDZ_L-…)
- "What I don't understand is how am I going to buy a Tesla's or an iPhone if I don…" (ytc_UgyoW2xwU…)
- "This software basically trained with many of the very best and performs as ALL o…" (rdc_fcssjek)
- "It’s also nice to be able to talk to your friends and not be inhibited by the Ub…" (ytc_UgwmtuJTr…)
- "@peterwarner553 AI has already written and recorded more songs than all the song…" (ytr_UgxPbs-or…)
- "I almost think that tracing someone's art is a more valid form of artistic expre…" (ytc_UgxgMfNZX…)
Comment
There are two problems: technical and social.
Technically, previous systems have all had control transferred to a human in the trigger loop. I don't know about KAIST, but the Korean defense ministry's Agency for Defense Development (ADD) is funding both robotics and AI. The way I heard it phrased during an explanation of their current research directions was "but of course we cannot actually hook it up because of the ethical issues". Right, as if once the hardware and software are both developed they will be strongly firewalled from each other.
Socially, my guess would be that either the KAIST guys were bragging at an international conference, or they were submitting military-funded conference papers about distinguishing types of clothing with AI (or something else suitably suspicious) and other researchers put two and two together. My money is on the former: most professors are not shy about talking shop, because talking leads to international collaborations.
There is another possible social aspect to this: most countries don't publish research papers on actual weapons development. Korean researchers see that as low-hanging fruit, so they do publish. That is, they will see from PR demos that the US developed a system but didn't publish papers on it, so they re-develop the idea and publish papers themselves. That could be very problematic when the technology is uniform-distinguishing AI and the paper contains pseudocode...
reddit
Cross-Cultural
1522949919.0 (2018-04-05 UTC)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dwuf98t","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_dwujpp2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_dwv2iti","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_dwv4fp2","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"rdc_dwvpg1u","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"}
]
```
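A raw response like the one above is a JSON array with one record per coded comment, keyed by comment ID. As a minimal sketch of how such a response could be parsed into per-comment codes, assuming Python and the five-field schema visible in the records (the function name `parse_raw_response` and the required-field check are illustrative assumptions, not the tool's actual pipeline):

```python
import json

# Fields every coded record carries in the raw response shown above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array) into {comment_id: codes}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        # Store the four coding dimensions, dropping the redundant id field.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED if k != "id"}
    return coded

raw = '''[
  {"id": "rdc_dwv4fp2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]'''
codes = parse_raw_response(raw)
```

Validating that every record has all five fields before indexing makes a malformed model output fail loudly at ingestion rather than surfacing later as a blank cell in the coding-result table.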