Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect
This needs to be spoken about at the dinner table with all their children of all…
ytc_UgxbyeZ1W…
Holy fuck that’s messed up. They don’t ask for consent because they don’t want t…
rdc_gts5lg1
I have nothing against ai art if it's not taking other artists artwork it will b…
ytc_UgwJtZZZK…
And then it is a "ai abserving an ai observing ai abserving another ai observing…
ytc_UgwCLNBxr…
My mother thought I was good as a child, but the truth is that she inspired me a…
ytc_UgxR8Yya7…
I wouldn't worry much about AI. J would worry though about the people whom have …
ytc_UgzjmR7pj…
That's the most awful AI voice I've ever heard. Who programmed that? AI generate…
ytc_UgyAWP9vp…
In an ideal world, training AI on medical textbooks and peer reviewed informatio…
ytc_UgwT9Ja0B…
Comment
Fan of the show, specifically how you look at conspiracies and then ground them in reality - often debunking them. I'm an AI expert working on one of the projects you mentioned, so this episode was particularly interesting from my perspective. I think most of what you covered is true - you may have over indexed a bit on the Bing incidents and their meaning - they're less significant than they seem on the surface. I think this topic probably deserves follow-ups, so you might consider a few things moving forward:
1. Being more grounded in how these actually work: you mentioned LLMs and neural networks in two separate beats, however LLMs are a subset of neural networks, specifically they are called transformers. They are a specific architecture of neural network that focus on the ability to learn how to focus and relate information in a huge input: language input. LLMs are really where the game changed - they are able to learn how to structure meaning in language really well. At first it was apparent that they could regurgitate (predict) well-formed language and now, as you mentioned, GPT4 exhibits abstract reasoning. Two things for you: it would be useful to explain the high level of how LLMs work (because they're the important mover right now) and report on how that trends in reality (which I think it will) and projects out each year for 10 years. We're in the middle of teaching LLMs how to use tools, specifically invoke non-model code, this is becoming a standard industry practice as I type this. We're also integrating other forms of modality, LLMs first could only "see" text, we're adding images, sound, haptics, and other senses. We're also working on giving them longer term memory. Generally, these will converge on true AGI very soon and rapidly progresses to ASI. The next generation of LLMs won't be as hardware constrained as they are today, Nvidia's new GPU supercomputer was really designed for training this next generation of LLMs that are an order of magnitude larger at the same time as being close to an order of magnitude more efficient computationally and algorithmically. At the same time custom hardware is being designed to forego GPUs for another leap in model size. At about current gen + 2, we'll start to see AGI, which is around 2027. In 2030, we'll likely see ASI.
2. You focused on the bad side of AI, but per your normal routine, I would suggest you ground the topic in the other side, which the potential upside of AI is equally amazing as its potential downside. Maybe that's not the beat you're looking for, but I think it would be prudent given the gravity of the topic.
3. Something nobody is thinking about (or at least talking about) (that I've seen), is the implications AI has for many of the other topics on this channel. If we, lowly humans, are capable of creating super intelligences within a few thousand years of harnessing fire, what does this mean for things like aliens (hint: they won't be humanoid), the fermi paradox, advanced civilizations, etc. It really provides a realistic lense to view it all through. It might even explain a thing of two...
I know a thing or two about all this, so happy to help understand anything or consult.
youtube
AI Governance
2023-07-07T03:0…
♥ 40
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
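Each coding result is one record drawn from a fixed codebook of four dimensions. A validation sketch, assuming only the category values visible on this page (the real codebook may define more; `SCHEMA` and `validate` are hypothetical names):

```python
# Allowed values per dimension, inferred from the codes visible on this page.
SCHEMA = {
    "responsibility": {"ai_itself", "government", "user", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return human-readable problems with a coded record; empty list means valid."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

Running this over every record in a batch catches the usual LLM failure modes: dropped dimensions and invented category labels.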
Raw LLM Response
[
{"id":"ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_Daysgtmt50CSqOJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoQSnxKQHVuBOtuEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrwCrLXvfklilSlsx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKkZko7Q6x_jBr9Xh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzMSvX8VWSUBYzLjxJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYJAAKtk6aTIocImh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-UDTt3cjhTuABGBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyfVCSpfRh782LR6eN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz2sO8Bs6ZNXhCe3PZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}
]