Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> 21:00 the only thing I have to vocally disagree with here is this phrase "AGI is not rooted in scientific evidence".
> CAN AGI be achieved WITH this tech? Probably, probably not. Its definitely not proved scientifically and I'm willing to bet we need more advances for sure.
> CAN AGI be achieved at all? Yes, clearly. The only example needed is the fact that you and I are alive right now and exist as AGIs in the world.
> I can't tell you WHEN we will get AGI on computers but if you keep moving forward you will definitely reach it. To say that maybe it's not possible feels like a crazy statement to me.
> There's nothing special about human beings that couldn't be replicated one way or another by human progress.
youtube · Cross-Cultural · 2025-07-02T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyDjHn_exXwDKwBqhh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyhgy6BGvDP8QX_tkl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxffXZFL19fIHdU0o54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDCzs_qjPHgMA-JDp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzG0ralqUWNzWjweO14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxx9wcBGiqdjKHkbRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-OY38oCtKSqznqJF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxUMfi5CvHYVRNqsvd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeSInA4tmLt91ud6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOM8xTeYvakyFl_Ed4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
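Since the raw response is a JSON array of records keyed by comment `id`, extracting the coded dimensions for one comment is a simple parse-and-filter. The sketch below shows one way to do it; the function name and the shortened sample response are illustrative, not the tool's actual implementation.

```python
import json

# Illustrative excerpt of a raw batch response like the one above
# (one record shown; the real response contains ten).
raw_response = """
[
  {"id": "ytc_UgxUMfi5CvHYVRNqsvd4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse the model's JSON array and return the record for comment_id,
    or None if the model did not code that comment."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(raw_response, "ytc_UgxUMfi5CvHYVRNqsvd4AaABAg")
print(record["emotion"])  # -> mixed
```

Keeping the lookup tolerant of missing IDs (returning `None` rather than raising) matters here, because the model occasionally drops or mangles an ID in its output.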