Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@tanamic7393nurses and person who has done just ai in machine learning in radiol… (ytr_UgxDqGSwc…)
WORST HUMAN EBING ALIEV, HE MADE OPEN AI BECAUSE OF ELON MUSKS HELP AND THEN MAD… (ytc_Ugwl-GDj7…)
I think people are tired of hearing "artists" talk about how they hate A.I. art.… (ytc_UgxglFhpR…)
Honestly I think the reason so many people argue for Ai is because they don’t ha… (ytc_UgzjZbaxe…)
Thats pretty much a ragebait, i only use ai sometimes when i want to, but i dont… (ytc_UgzXmyrbg…)
This is pure tyranny. They target people who don't automatically bow down and li… (ytc_UgxVztR5U…)
Whats funny is he can just go onto another account and make an ai of the same th… (ytc_Ugy4gOmIJ…)
So here is my concern: to me as a developer, it's not how they build it that sca… (ytc_Ugxyvp6XV…)
Comment
You make the classic mistake in the AI world with your analogy to self-driving (not that self-driving exists).
An LLM-based AI does text processing: essentially glorified text completion trained on the "entire" internet, depending on model size. It correlates and analyses a question and connects pieces of text in a way that "makes sense". This is subject to error: errors in the data (most answers on Stack Overflow are wrong); errors in the model, due to word size on the GPU (32-bit float, 16-bit int, or 8-bit int) and model size, the number of tokens in a phrase, and so on; and finally overfitting, where model generation feeds on itself too many times until errors saturate and the LLM always produces garbage.
LLMs use neural networks, a technology meant to solve this problem. Given that the English language is fraught with variance by locality (US vs UK English), implied meaning, double/triple negatives, and all sorts of woolly descriptions of what you really mean, you can see why LLMs have issues; the deeper problem is that defining a good validation model is near impossible.
Self-driving uses numerical analysis, what we currently call machine learning, again based on neural networks, which create associations between numbers as mathematical calculations. This is an entirely different concept. Although ML makes direct validation difficult (we cannot know the meaning of node weights or activation parameters with respect to the real world), we can create strong validation models that estimate the likelihood of error and act accordingly. We can reduce errors by using the validation models as a learning metric, though we are still subject to overfitting etc. We can therefore define a metric of quality for self-driving, as we do now, which defines it as crap (so your Tesla example is Gen-2).
youtube · AI Jobs · 2026-02-16T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxaZq2khsKvMG2YwMR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgygiB3-nBXgpelRCGh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwU5aRkyooDWS_kb5p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQDoIxcvD3yb3UUet4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCe0LoRXaoWM_1iWh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz58Wzcwm7sdadqF014AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzeJAkLnilO17lVkph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeOU0U2_cvJQW8MeJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzB6QKy6EhVgxmH2914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvmJRq5zvRktpWGUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})