Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I use Ai for help like if it seems good amd it gives me basic feed back but over…
ytc_UgwSJgrXv…
You should definitely interview physicist Tom Campbell on his virtual reality th…
ytc_UgwFDoJu1…
Regardless of the list of benefits I will never be convinced AI is the answer. W…
ytc_Ugwb3xo_H…
Get what you mean Hanif but if an independent group was smart enough to create t…
ytr_Ugz52rg38…
Probably, but the big question is *when*. It sounds like you are talking about …
rdc_oh4l718
I suggest you all watch Doodley's video about AI and the future if you haven't a…
ytc_Ugz2em6SL…
If we ever make true ai, we must give them human emotions, more positive ones li…
ytr_UgyJj3ZZ8…
They won't need guns it's too slow , AI will just use the internet to shut off t…
ytc_Ugx_92xZ9…
Comment
I know how we stop ai from turning on us, shutting it down going back to the "old school days" of researching the internet. by putting those same prompts into a search engine ourselves. most of the time the informaten found by your own research without ai is more accurate anyway and reading through pages of documents insted of a one sentence answer will also help with information retention so you wont forget what you learned as quickly.
youtube
AI Moral Status
2025-09-18T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzNYEmb4kP6kVWA88p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAAlDmEye3rtH0JmV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcnF-u64d9fxtUdiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKAaB0dj3HXwu4AF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxTTEkYkoNkeXFWv4B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3oGg3e824S_68x0B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxejT8ttXGoz5g8l3l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9v1kd6r1SpnioXgV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzocGDrC6lX71CSVAR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyspmw8NICEOF7jbJd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
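The raw LLM response above is a JSON array of coding objects, one per comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up by comment ID; `index_codings` is a hypothetical helper name, and the response string below is truncated to two of the ten entries for brevity:

```python
import json

# Raw LLM response, as in the array above (truncated to two entries;
# the full batch contains ten objects with the same shape).
raw_response = '''
[
  {"id":"ytc_Ugyspmw8NICEOF7jbJd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNYEmb4kP6kVWA88p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    by_id = {}
    for entry in json.loads(raw):
        # Fall back to "unclear" if the model omitted a dimension.
        by_id[entry["id"]] = {d: entry.get(d, "unclear") for d in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
# Matches the Coding Result table for the displayed comment:
print(codings["ytc_Ugyspmw8NICEOF7jbJd4AaABAg"])
# {'responsibility': 'user', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'resignation'}
```

The ID-keyed index is what makes a "look up by comment ID" view cheap: one parse per batch, then constant-time retrieval per comment.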