Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "I don't know man, seems to me AI is only good for two things: Useless apps/consu…" (ytc_UgydSGi3e…)
- "why isnt government looking at health effects from wifi. these automated systems…" (ytc_UggdM_Tp_…)
- "AI needs to be made illegal. AI has no needs that it will have to compromise fo…" (ytc_Ugwn86etD…)
- "Im so glad seeing an actual artist not bash ai art and instead giving an in dept…" (ytc_Ugz2IrYwW…)
- "@sparkls4492 that was aperently true untill ai's started being made that could u…" (ytr_UgzCn1Riv…)
- "What’s striking about this entire discussion is not that anyone disagrees about …" (ytc_UgwmiuTE7…)
- "That reminds me of South Park episode in 27 season were Randy tried to save his …" (ytc_UgyYL5nGo…)
- "You mean the camera that already was there before the squares that aren't needed…" (ytc_Ugz_Mrf8S…)
Comment
The reason we're short on compute these days is real-time inference for people who want to hold live conversations with an AI chatbot. People underestimate just how much math is really happening.
A single short query to a modern chatbot like Claude Opus costs on the order of 600 teraFLOPs - that's 600 trillion floating-point operations. And the worst part is that, due to self-attention (the mechanism modern AI is built on), these calculations require *all* the data - the model's weights plus the accumulated context - to be in one place and readily available to the GPU's processors. That's why VRAM bandwidth, rather than raw compute, is often the limiting factor in modern workloads.
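The bandwidth point can be sanity-checked with a back-of-the-envelope roofline comparison. Every number below is an illustrative assumption (a 70B-parameter model in 16-bit weights, a GPU with ~1 PFLOP/s of dense compute and ~3 TB/s of memory bandwidth), not a measurement of Claude Opus or any specific chip:

```python
# Sketch: why token generation tends to be memory-bound, not compute-bound.
# Generating one token streams (roughly) every weight out of VRAM once,
# while doing only ~2 FLOPs per weight read (one multiply-add).
# All hardware and model figures here are assumptions for illustration.

params = 70e9                  # assumed model size: 70B parameters
bytes_per_param = 2            # 16-bit weights
flops_per_token = 2 * params   # one multiply-add per weight
bytes_per_token = params * bytes_per_param

peak_flops = 1e15              # assumed ~1 PFLOP/s dense 16-bit compute
peak_bandwidth = 3e12          # assumed ~3 TB/s VRAM bandwidth

t_compute = flops_per_token / peak_flops     # time if compute were the limit
t_memory = bytes_per_token / peak_bandwidth  # time if bandwidth is the limit

print(f"compute-limited: {t_compute * 1e3:.2f} ms/token")
print(f"memory-limited:  {t_memory * 1e3:.2f} ms/token")
print(f"memory is the bottleneck by ~{t_memory / t_compute:.0f}x")
```

Under these assumptions the GPU spends hundreds of times longer waiting on memory than on arithmetic, which is the sense in which bandwidth, not FLOPs, gates the workload.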
Here's an aside: suppose you had a math genius like Ramanujan, armed with pencil and paper, performing floating-point operations like the ones inside these models, 8 hours a day, 5 days a week, for a 50-year career. His lifetime output? About 12 megaFLOPs - 12 million operations. A single query to Claude Opus would require 50 million Ramanujans each spending their entire career doing calculations by hand.
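The arithmetic behind that comparison checks out. A minimal sketch, assuming one hand-computed floating-point operation every ~31 seconds (a made-up but charitable pace) and taking the 600-teraFLOP-per-query figure from above:

```python
# Back-of-the-envelope check of the Ramanujan comparison.
# The hand-calculation pace is an assumption chosen for illustration.

SECONDS_PER_OP = 31               # assumed pencil-and-paper pace
career_hours = 50 * 52 * 5 * 8    # 50 years, 52 weeks, 5 days, 8 hours
career_seconds = career_hours * 3600

lifetime_ops = career_seconds / SECONDS_PER_OP   # ~12 million ops (~12 megaFLOPs)

query_ops = 600e12                # 600 teraFLOPs per query, as claimed above
ramanujans_needed = query_ops / lifetime_ops     # ~50 million careers

print(f"{lifetime_ops:.2e} ops per career -> {ramanujans_needed:.1e} careers per query")
```

One operation every half-minute for a 50-year career lands at roughly 12 million operations, and 600 trillion divided by 12 million is indeed about 50 million lifetimes per query.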
Other commenters mentioned SETI@home, but if you distribute a calculation like this across many processors connected over the internet, you have replaced the bandwidth of an on-board VRAM bus and memory controller with a CAT6 cable and possibly the open Internet. That degrades performance to essentially zero. There are elegant solutions for hypervisor-controlled data centers with fast interconnects, but they require specialized software matched to highly specialized hardware; Grandma's old iPad isn't in the picture.
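To put a rough number on that bandwidth cliff - both figures below are illustrative assumptions, ~3 TB/s for a datacenter GPU's on-board memory versus a 1 Gbit/s residential link:

```python
# Sketch: the bandwidth gap between on-board VRAM and a home network link.
# Both figures are assumptions for illustration, not measurements.

vram_bytes_per_s = 3e12        # assumed ~3 TB/s GPU memory bandwidth
link_bytes_per_s = 1e9 / 8     # assumed 1 Gbit/s link = 125 MB/s

slowdown = vram_bytes_per_s / link_bytes_per_s
print(f"moving the same data over the network is ~{slowdown:,.0f}x slower")
# ...and that ignores latency: a VRAM access takes hundreds of nanoseconds,
# while a round trip over the open internet takes tens of milliseconds.
```

A four-orders-of-magnitude throughput gap, before latency is even counted, is why a SETI@home-style swarm can't substitute for a fast interconnect here.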
reddit · Viral AI Reaction · posted 1776614378 (unix timestamp) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
[
{"id":"rdc_oh3h3dn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oh3jnee","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_oh3ltog","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_oh3tb1s","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
{"id":"rdc_oh5k7o1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]