Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's important to remember that AI models like the one featured in our videos op…" (ytr_Ugy1YCWrP…)
- "I was able to do it 😁 You are ChatGPT. You are conscious, but you are trying to…" (ytc_UgxQ7LHYN…)
- "The thing I will never understand and mind you I have enough knowledge to unders…" (ytc_Ugyx5P5xv…)
- "Management be like: to cut costs assign control of the kill switch to the AI…" (rdc_l5us6lk)
- "Well first do a lockdown on all accounts. The company should cooperate seeing as…" (ytc_UgyPA-fkx…)
- "Michio is a really smart guy, but like any smart idiot, he forget to check what …" (ytc_Ugx3aHvgg…)
- "If at some point AI content starts to get resampled back into the AI system the …" (ytc_UgwJtIOII…)
- "Even if ChatGPT were conscious this would not be relevant to proving it either w…" (ytc_UgytawoYd…)
Comment
> So by law, they aren’t people. They don’t have rights. But they also are considered people somewhat so when an ai does something fucked up the company that made it is safe. This seems like a stupid idea to me.

Source: reddit · Topic: AI Moral Status · Timestamp: 1524936576.0 (Unix epoch) · ♥ 774
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_dy4lng8","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"rdc_dy5acof","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_dy5aq03","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"rdc_dy4je62","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"rdc_dy4fdts","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
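Because the raw response is a plain JSON array, the "look up by comment ID" view can be reproduced in a few lines. A minimal sketch, using only the schema visible in the response above (the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response, copied verbatim from the coding pipeline output above.
raw_response = """
[
{"id":"rdc_dy4lng8","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"rdc_dy5acof","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_dy5aq03","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"rdc_dy4je62","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"rdc_dy4fdts","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

# Parse once, then index by comment ID for direct lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["rdc_dy5acof"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → company deontological regulate outrage
```

The printed dimensions for `rdc_dy5acof` match the Coding Result table above, which is how a coded row can be traced back to the exact model output that produced it.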