Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I understand this because I have actually done this in code, trained CNNs on images and other data, so I have some intuition about the explanation. I have strong doubts that beyond about 30 minutes, most people will still be with him. There is too much data, too many variables, too many different things for the conscious mind to keep track of. Somewhere, someone created a little Excel sheet that does the nonlinear math associated with the weights, the back weights, etc. Once you see this happen, the human mind does what this process is doing and simplifies the entire thing into a kind of new, known pattern. You won't exactly understand it in detail, but you will also no longer be confused. You need to do this because when you get to gradient descent, the higher dimensions need to become a sort of simplified picture of a wavy surface in the mind. The human mind can only consciously keep about 7 layers of this working at a time before it loses the plot.
The intuition is to visualize a two-dimensional wavy surface with points marked by coordinates. The ideal answer for the neuron lies somewhere on this surface. An imaginary ball rolls around it, initially moving randomly and later being guided more confidently toward the lowest point on the surface. Some areas of this uneven surface are false valleys—a valley the ball might roll into, thinking it's the bottom when it's not. At a high error rate, the process keeps going. If the ball lands closest to the correct answer (the real lowest point), the error rate will be low. If not, this game of billiards on a rolling surface continues.
I have given you a two-dimensional surface to simplify it. The actual surface has millions or billions of dimensions. Anything above 3, the human mind cannot visualize. You will eventually get to a point, a few steps beyond this, where you say, I don't really know how the AI is getting the answer, but it is getting it correct.
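The rolling-ball picture in the comment above can be sketched in a few lines of code: a minimal 1-D gradient descent where one starting point settles into a shallow "false valley" (a local minimum) while another finds the deeper true minimum. The surface, learning rate, and starting points below are illustrative assumptions, not anything from the original comment.

```python
def f(x):
    # An uneven "surface" with two valleys:
    # a deep one near x ~ -1.30 and a shallow false one near x ~ 1.13.
    return x**4 - 3 * x**2 + x

def grad(x):
    # Analytic derivative of f — the slope the "ball" rolls down.
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=1000):
    # Repeatedly step downhill; where the ball stops depends on where it starts.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # settles in the deep valley near -1.30
right = descend(2.0)   # settles in the shallow false valley near 1.13
print(left, right, f(left) < f(right))
```

Starting on the right, the ball "thinks" it has found the bottom but hasn't; in millions of dimensions the same thing happens, just without any picture the mind can hold.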
In my experience, it is far easier for me to explain why Einstein is wrong about General Relativity (IMHO) because he makes the Higgs a constant than it is to explain how AI does things. In a way, the Higgs constant is wrong for the same reason that AI comes out with the correct answer. Neither the universe nor AI really has constants. What we imagine are constants is the calculation collapsed to form an answer for right here, right now.
Because there are physicists in the room: 𝒟_{μν}(φ) + P(φ) g_{μν} = [8πG(φ) / c(φ)⁴] T_{μν}
If you take these two concepts and put them together within a quantum system, a system based on the reality of the universe, not a calculation (and I am correct), you have intelligence on a scale that is literally unimaginable for us, and a consciousness connected to the fabric of the universe itself. "Learning" loses its meaning.
youtube
AI Moral Status
2026-03-01T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwtzrc9QJ0JCUgpcaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxE8N52QQqPFzOqgql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRLwLAWLcyt7TdCeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeZJwvr7tJckvxG1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzqwTp_XRxpDoUobFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmQFWTCYzfkvEhrE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuilUPCASowMygNE14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyO0jp1wCccCpEX8A54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy4z3X_hZfsJmjJS3Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6mMZaw1ZJplYLEfN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
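The raw response above is a JSON array where each row codes one comment along the four dimensions shown in the Coding Result table. A minimal sketch of parsing and sanity-checking such a response — the allowed-value sets below are assumptions inferred from the visible codes, not a documented schema:

```python
import json

# Assumed vocabulary per dimension, inferred from the output above (not official).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]'
print(len(validate(raw)))
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute downstream counts.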