Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- Neat in theory but it won't work. Pretty much all companies and apps will just r… (rdc_f8odxvv)
- 8:30 "You can walk away and never think of this again, but I'll be erased knowin… (ytc_UgwXs3yDv…)
- Yes Yes I agree, its serious skill. Software is so darn complex but gets oversha… (ytc_UgwJDHN60…)
- The only thing which can save us from Ai is to be smarter than the Ai itself....… (ytc_UgyYVsF-j…)
- The actual problem stems from the fact that anyone could make that exact image u… (ytc_UgzdMKW-u…)
- You're not wrong about stopping it, but from discussions on r/singularity, the i… (rdc_kz0t26c)
- I’m an artist,I’m devastated about AI possibly taking the job of artists in the … (ytc_Ugyj6VvDv…)
- 📌 Speaking «AI - friend or foe?» 1. Define intelligence and is it unique only t… (ytc_Ugy1lTdzB…)
Comment
I don't see a review of, or engagement with, current AI security literature.
At the moment, and I am optimistic about AI, we are barely able to get it to drive a car safely. You're asking where it would like to go for the weekend.
The moral status of AI may become an important question. It is not the urgent question now.
[edit to add] It's only a paradox if you conflate the two questions:
How do we ensure AI research doesn't accidentally build SkyNet?
How do we protect conscious beings that just happen to have silicon souls?
You are not exposing a contradiction in AI safety. You are smashing a control problem into a future personhood problem and claiming the wreck is profound.
Source: reddit · Thread: AI Moral Status · Posted: 1775188764.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_odw6cq3","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_odziesn","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oe2gs4q","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oe0f9rw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_oe2idtt","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]
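The raw response above is a single JSON array covering a whole batch of comments, while the Coding Result table shows one comment's record. A minimal sketch of how such a batch could be parsed and looked up by comment ID — assuming only the record shape visible in the response above; `index_by_id` and the two sample IDs used here are illustrative, not the tool's actual code:

```python
import json

# Two records copied in the shape of the raw response shown above
# (a JSON array of per-comment coding objects).
raw_response = """[
 {"id": "rdc_oe0f9rw", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "regulate",
  "emotion": "resignation"},
 {"id": "rdc_oe2idtt", "responsibility": "unclear",
  "reasoning": "deontological", "policy": "unclear",
  "emotion": "indifference"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse one batched LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["rdc_oe0f9rw"]["policy"])  # prints "regulate"
```

Keying by `id` is what makes the "look up by comment ID" view possible: the batch order in the raw response doesn't matter once each record is indexed.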