Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think LLMs will turn out to just be one useful one off tool, and that they're not going to get that much better. Companies will continue refining them for a while, but will find that there's only so much you can do with them just like how there's only so much you can do to a plane with the same engine.
The primary reason for my prediction is the fact that there's not much difference between the different models; if LLMs could go way further you'd see far more variation between them.
I wouldn't be surprised if in 10 years we get another AI paradigm that is more intelligent, but I think there will be a plateau for a few years.
youtube
AI Responsibility
2025-10-05T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwepnPRuy9qze-GB0p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxqcXEhFPEf7vcf-3h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwflgJzAyKk-eUi1gt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxC8nf_3W43KojxSvd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWfQew0g9jOLQKCFZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugww7Y4RI0hEue-UJj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxoh1JKoPotXbcAuOZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxyAg9rQvt7HzZWEQ54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwa6we-B2WizGwJmpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxdA2dWHS2l2ksqZ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
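A response like the one above can be parsed and indexed by comment ID in a few lines. This is a minimal sketch, not part of the tool itself; the `codings` dict and `lookup` helper are illustrative names, and the raw string is abbreviated to two entries from the response above.

```python
import json

# Raw model output: a JSON array of per-comment codings
# (abbreviated to two entries from the response above).
raw_response = """[
  {"id":"ytc_UgwepnPRuy9qze-GB0p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxqcXEhFPEf7vcf-3h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for a single comment ID."""
    return codings[comment_id]

print(lookup("ytc_UgxqcXEhFPEf7vcf-3h4AaABAg")["emotion"])  # indifference
```

Because the model returns one JSON object per comment, indexing by `id` like this is what makes a "look up by comment ID" view cheap to serve from a stored batch response.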