Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
| Comment (preview) | ID |
|---|---|
| That doesn't explain, not even close, the emergent behaviors of LLMs that have s… | ytr_UgxPo0hIR… |
| Where did you go snorkeling? I did volunteering in Belize last year in Placencia… | rdc_dsbbjhe |
| @3:33 is a bad example of needing less people. I'm a health insurance expert and… | ytc_Ugwi3jqNf… |
| She needs to sue the people who made the facial Recognition company as well as t… | ytc_Ugz1h2Tnh… |
| Writing human like essay using an ai and then using another AI to humanize it is… | ytc_UgxSnma57… |
| Again, they want you to focus on Anthropic because all other ai's company have a… | ytc_UgxTCalHL… |
| @odobenus159 >It will never advance beyond infant "inteligence". Judging by th… | ytr_Ugw98TeXC… |
| Environmental confluences create intelligence, not data shoved into our head lik… | ytc_Ugy4h55Nd… |
Comment
> You guys are hyping this shit up too much I don't doubt it's impossible but I highly doubt ai will be the downfall of humanity💀

| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2024-06-10T23:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy7FhpXRCOevbLGoQ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRK08ijyxj43Stl8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwlEI-7nUquT3W7Gl94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwalsiOPM5oQdBZe5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJoOYSxRmJrtz3UOx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyXntFmnc0JEipIU8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxU3z6ApY7HlfOJymZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzzi6zgUIlmXZcRwjB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz72opCi2I6pRyvuBl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJSn3-E_xm8ehT79B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}]
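A raw response like the one above is a JSON array of per-comment records, one object per comment ID, with four coding dimensions. A minimal sketch of how such a response could be parsed and validated before it feeds the Coding Result table — the `ALLOWED` value sets are inferred from the responses shown here and are an assumption, not the tool's actual schema:

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above (assumption: these sets may not be exhaustive).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "approval",
                "resignation", "fear", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError on a missing ID or an out-of-vocabulary value,
    so malformed model output fails loudly instead of being stored.
    """
    records = json.loads(raw)  # raises json.JSONDecodeError on bad syntax
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Usage: parse one record from the raw response shown above.
raw = ('[{"id":"ytc_Ugy7FhpXRCOevbLGoQ54AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # outrage
```

Validating against a closed vocabulary also explains the all-`unclear` fallback row in the Coding Result table: when a dimension cannot be matched, the coder records `unclear` rather than inventing a value.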