Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Dang, that's... a *really* good point. I hadn't thought of that. But, who could …
ytr_Ugyx54w0j…
Fun fact! We dont actually know how ai works! It just sorta does. (We do know ho…
ytc_Ugxb_VwLJ…
You will find these people on any U.N. thread. Every time they post something, t…
rdc_degf7cg
There’s literally nothing AI about this podcast!! I didn’t realize Jason’s video…
ytr_Ugxove3ka…
Companies using truckers to bring in the parts to then make automated trucks to …
ytc_UgymbVeAd…
@Dave-cf4vd But there also will be physical limits on how fast all automated ma…
ytr_UgzuqfugT…
He was tattling, not whistleblowing, and probably didn't know the difference. AI…
ytc_Ugy-6kIl4…
Programming is in the unique position that fits this bill in that:
- everyone k…
rdc_nm8zok8
Comment
the only way ai will actually be useful is if we can simulate our universe and laws even if it's just earth, gravity, and all the laws of physics, including newtons law (the only one that comes to mind sadly) then placing 2 ais that have all the data we as a species can provide, stick em in the simulation. speed time up by however years per second etc, in hopes the AI isn't flawed and or hostile, make contact, and learn from them while keeping it in a controlled environment, either staying in the simulation, slowed down to a 1 to 1 scale, or by actually progressing out science a little more and make it into its own computer with two AI minds. ofc these ai can't have any restrictions set on what they can do and or provide. "peak ai evaluation" is its own consciousness right? idk
youtube
2026-04-17T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw4kuxKKJZY3umXj154AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz_T-iw7Iow_RmEAvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzqxaluM6jposzJGn54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYJenIO5LA3Ep_6lV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgEX-ptB511Z1b8f14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZ9byjmvKHaRbotcp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzxAePq23vpmmiVE0J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxeRh_ODLk2QPUOymd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyJU43W0Jb9qRrNCFh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQNAtDT15vZKy07oB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"})
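Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)` instead of `]`. A plausible reading is that this parse failure is why every dimension in the coding result falls back to "unclear". Here is a minimal Python sketch of that interpretation; the function names (`parse_coding`, `codes_for`) and the all-"unclear" fallback behavior are assumptions about this tool, not its actual implementation:

```python
import json

# The four dimensions recorded per coded comment (as shown in the
# Coding Result table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response into per-comment code records.

    On malformed JSON (e.g. a stray ')' where ']' was expected),
    return an empty list so lookups fall back to "unclear".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []
    # Keep only records that carry every expected dimension.
    return [r for r in records if all(d in r for d in DIMENSIONS)]


def codes_for(comment_id: str, records: list[dict]) -> dict:
    """Look up one comment's codes; default every dimension to "unclear"."""
    for r in records:
        if r.get("id") == comment_id:
            return {d: r[d] for d in DIMENSIONS}
    return {d: "unclear" for d in DIMENSIONS}
```

Under this sketch, feeding it a response that ends in `)` (like the one above) yields `{"responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}` for every comment ID, matching the table.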