# Raw LLM Responses

Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.

## Random samples
- Did you know the ghibli thing is a scheme on chatGPT because when you give it a … (`ytc_UgyO-IS_D…`)
- Waymo should be banned. Unsafe for all. Only HUMANS driving cars allowed!!! Did … (`ytc_Ugy784OPS…`)
- He knew something which he wasn’t supposed to know. I wonder if OpenAI is being … (`ytc_UgxG5M5J4…`)
- The harder they push to make a generally intelligent AI, the more they're realiz… (`ytc_UgwYU39ZL…`)
- @garciam244 as an Artist I tried replacing myself with AI, I was quite disappoin… (`ytr_Ugz6lC8Ol…`)
- Don't even bother. "workers wont adapt fast enough" with AI . Your going to lose… (`ytc_Ugy9FHe_q…`)
- Lol the idea that AI will EVER be "conscious" is so cooked.. The only thing that… (`ytc_UgzVjZS1N…`)
- If you're interested in more intellectually stimulating conversations with AI, I… (`ytr_Ugzzz98G1…`)
## Comment

> Going to be? It is not a race between good v. evil, evil is already using AI.. I am glad I will be dead with humans become extinct ... say in 30 years.

Platform: youtube · Topic: AI Governance · Date: 2024-06-09T22:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[{"id":"ytc_Ugy7FhpXRCOevbLGoQ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRK08ijyxj43Stl8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwlEI-7nUquT3W7Gl94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwalsiOPM5oQdBZe5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJoOYSxRmJrtz3UOx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyXntFmnc0JEipIU8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxU3z6ApY7HlfOJymZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzzi6zgUIlmXZcRwjB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz72opCi2I6pRyvuBl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJSn3-E_xm8ehT79B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}]
```
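A raw batch response like this is a JSON array of per-comment records, one object per comment ID, with the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID; the function name and the "unclear" fallback for missing fields are illustrative assumptions, not this tool's actual code.

```python
import json

# The four coding dimensions from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_responses(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Default any missing dimension to "unclear" (assumed fallback,
        # mirroring the "unclear" values in the table above).
        by_id[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return by_id

# Usage with one record in the same shape as the batch above:
raw = ('[{"id":"ytc_Ugy7FhpXRCOevbLGoQ54AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
coded = index_responses(raw)
print(coded["ytc_Ugy7FhpXRCOevbLGoQ54AaABAg"]["emotion"])  # outrage
```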