Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- `ytc_Ugzhb6H3h…` — "driverless trucks don't k!ll people, share holder capitalism K!lls people.... wa…"
- `ytc_Ugx_qPjvS…` — "They trying waymo in Denver and are planning to roll it out in like a year, year…"
- `ytc_Ugw7viMKp…` — "The amount of ai crochet my friends and family sends me drives me insane... i ca…"
- `ytc_UgyEMvp4r…` — "Bro its just not fun if you ai generated something and call it ai “art”.…"
- `rdc_eudmlp3` — "According to wikipedia, [the US' *territorial emissions* (2014) were about 16.5 …"
- `ytc_Ugxpydx-P…` — "Timestamps (Powered by SitrakaAI) 00:05 - AI's impact on jobs is heavily debated…"
- `ytc_UgwRuNzV4…` — "I remember the 60 minutes episode about the Kenya thing (ironically, it was whil…"
- `ytc_UgxXv6qTv…` — "Altman said it himself. AI will eliminate everyone but in the meantime we are go…"
Comment

> You are tremendously overestimating capabilities of current AI, especially its reliability. Its widely known fact, AI is not capable of any fact checking, does not complete jobs till the end because with current architecture of LLMs it is simply not possible to reliably guarantee anything. AI at this point does not replace even junior software engineer. But still it is an usefull tool to enhance certain workflows.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Submitted | 2025-09-03T03:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw_6vorjHdciMvuOo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyoag5S0730trMSBtt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxX5PHtA-RjjQuz1VV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVQwE1AlbKoXgCQPp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwX8HpldYAUyBheF2x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwL0iro5SIrrDtYdep4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzG0wRV5aHwd6QL4hV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyi3q9ocNY_xJj95Oh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgydoyW9cc4xzUFJfxN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```