Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
tucker = dumbest dude alive?
he really questons if ai will be the future technol…
ytc_UgxnF0pey…
Guys the truth is we have no option. We are in AI race with China and Russia. We…
ytc_UgzzdSOtJ…
Your using 3.5.. try again with latest update 😅 i gues they took some notes…
ytc_UgzcZvgdk…
AI , i propose a merger between tailor synthetic cells and bio cells to create a…
ytc_Ugzuvts9q…
AI is giving you lots of knowledge but making you to stupid to understand. All d…
ytc_UgzTDhIx-…
> ... the scientific value of the exercise was somewhat questionable. The pro…
rdc_cjp0ud8
IMO the government should be getting in early to encourage development. Fund th…
ytr_UgzHCiHGD…
Those privacy concerns are not that irrational.
1) limit the age for kids
2) c…
ytc_Ugw6jKuid…
Comment
> Why have AI's been trained on data that includes some of the most imaginative ways for an AI to destroy and supercede humanity ever conceived? Wouldn't giving it extremely dangerous "ideas" about what it is and could be capable of be the FIRST thing they should be working to remove from the training data and reverse course on? You know... the data it's using?
The answer to this is pretty straightforward. Current systems are extremely unlikely to seek or be able to effectively carry out an attack, while future systems are extremely likely to encounter this information, or derive it on their own. Therefore there's little to no risk in teaching current systems about these things, and doing so may help mitigate the risk in future systems as we can use current systems in deconfusion research to help us align future systems.
reddit
AI Moral Status
2023-06-01 20:15:21 UTC (Unix 1685650521)
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jmiu1k6","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jmiyavu","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"rdc_jmfrnw7","responsibility":"media","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_jmfyo7p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_jmi5ky3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
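The raw response above is a JSON array of coded records, one per comment ID, with one value per coding dimension. A minimal sketch of how such a batch could be parsed and validated in Python follows; the `ALLOWED` value sets are inferred only from the examples on this page (the full codebook may define more categories), and `parse_batch` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# above. ASSUMPTION: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"developer", "distributed", "media", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "resignation", "indifference", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check each coded record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example using the first record from the response above.
raw = '''[
  {"id": "rdc_jmiu1k6", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "fear"}
]'''
records = parse_batch(raw)
print(records[0]["emotion"])  # fear
```

Validating against a closed value set at parse time catches the common failure mode where the model invents a label outside the codebook, so bad codes fail loudly instead of silently entering the dataset.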