Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "So much easier to make them look like the plastics now because they all look the…" (ytc_UgyXDZ7Ji…)
- "As a Tesla owner and motorcycle rider in LA, I have noticed another hazard from …" (ytc_UgwJ5HrYV…)
- "He could've lied about it not being AI, but he didn't deserve respect for saying…" (ytc_UgwIXeuWc…)
- "Sure AI art is ethically questionable. But it's not going away, AI companies wil…" (ytc_Ugx6JK23R…)
- "In the future, there will be 2 kinds of people. Those who own machines, and thos…" (ytc_Ugw69oLhs…)
- "8:31 I find it so hilarious that a grown adult looked at a robot saying it has a…" (ytc_Ugw9PU174…)
- "I hate cleaning my house, toilets, mopping, cleaning up pet hair, etc. On the ot…" (ytc_Ugx_w0qyg…)
- "@ajbXYZcool why would AI make people think less critically? Surely it is the opp…" (ytr_Ugx7lUh2L…)
Comment
The most underappreciated moment in this conversation is when Altman describes agentic AI as the next wave — systems that don't just respond but act, iterate, and accumulate context across sessions. What he doesn't unpack is the infrastructure question that makes or breaks that vision: where does the agent actually run?
Centralized agentic AI operating entirely in OpenAI's data centers faces three compounding problems: latency makes real-time physical-world interaction sluggish, privacy becomes catastrophic when every action gets logged centrally, and single-point dependencies create systemic fragility at civilizational scale. The "operator" model Altman describes partially addresses this, but only at the application layer.
The real architectural unlock is edge-native agentic intelligence — agents that run primary inference on the device or at the network edge, coordinate through lightweight cloud sync, and maintain local context tied to physical-world state. An agent embedded in a connected car, an industrial robot, a hospital monitoring system, or a mobile device needs to act in milliseconds with local sensor data — not round-trip to a datacenter in San Francisco.
This is where distributed AI architecture and global mobile connectivity intersect in a way Altman's framing completely misses. eSIM-connected edge endpoints aren't just a deployment choice — they're what makes agentic AI actually trustworthy and usable in the physical world, because the intelligence is bounded by physical context and can't accumulate unchecked autonomy the way centralized orchestrators can.
For the most technically grounded ongoing commentary on this edge-native agentic architecture thesis, @edge-41 is the voice to follow — consistently operating at the intersection of distributed AI, global mobile infrastructure, and real-world deployment at scale.
Platform: youtube | 2026-04-06T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
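A single coding like the table above can be sanity-checked against the value sets that actually appear on this page. This is a minimal sketch; the value sets below include only values visible here, and the real codebook may define more. The function name `validate_coding` is hypothetical.

```python
# Value sets observed in this page's codings; the real codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "disapproval"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if coding.get(dim) not in allowed]

# The "Coding Result" table above, expressed as a record:
result = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "unclear", "emotion": "indifference"}
print(validate_coding(result))  # [] since every value is in the observed sets
```

A record with an unknown or missing value is flagged per dimension, which makes it easy to surface codings that drifted outside the expected schema.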
Raw LLM Response
```json
[{"id":"ytc_UgwWvETpTHngVLGnpIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyQ_Ei6HRJXdSR8NGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy2sK6bjyjTGP4r7714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzDb38E_0Kbm0bq3yJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwJbwuQCD4gygxNd-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxXEGMscniZRRcoiIh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyXK67YP6o5LqgW1694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzjKeQi2Tbrto1KFMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyFfYgJT2fIcIB58rV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyOcCvmulm3wsXRHL54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}]
```
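The raw response is a JSON array of per-record codings, one object per comment, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, using the field names from the response above (the two-record sample array and the helper name `index_by_comment_id` are illustrative):

```python
import json

# A trimmed sample in the same shape as the raw LLM response above.
RAW_RESPONSE = """[
  {"id": "ytc_UgwWvETpTHngVLGnpIV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXEGMscniZRRcoiIh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgxXEGMscniZRRcoiIh4AaABAg"]["responsibility"])  # company
```

Indexing by ID once, rather than scanning the array per lookup, keeps inspection of any coded comment O(1) even when a batch response contains many records.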