Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_UgzCUOtgC…` — "Hey @quan2saucyy915, thanks for your hilariously honest comment! We actually let…"
- `ytc_Ugwq4M59V…` — "I would love to agree that AI is overhyped but I as many others have a feeling t…"
- `ytc_UgwNUKNR5…` — "40:00 – this is one bit that really concerns me. You can't become skilled at any…"
- `ytr_Ugzoiqacw…` — "@GambitsEnd Yes, the answer is wrong, but if you had bothered to calculate the p…"
- `ytc_Ugw0JzxhC…` — "Whats stopping a group of people from collectively building relevant databases e…"
- `ytc_UgzneW3H_…` — "In future, we can own AI robot to make money for us. We just sit home eat and sl…"
- `ytc_UgzxWfy_D…` — "What people now call AI can only give you a probable answer to a question it doe…"
- `ytr_UgwrzOutM…` — "Yes, the people going on about how the current models are only LLMs and not 'rea…"
Comment
2:00 well I'm not saying we should stop or that we should implement regulations that crushes research, but I am worried about an AI arms race, that us declaring we will win the race is very bad rhetoric, that it triggers others countries to put their foot on the gas, we tend to think we're the only place that has smart people, but the sheer volume of people in the world, many of the geniuses aren't born in the US, and you don't have to be a genius anymore, these tools are increasing the intelligence capacity of everyone. We're raising the intelligence capacity of people across the world, so they definitely will be able to copy and create their own. We can't really stop the knowledge of how we're doing it from getting out, even if the models are closed. You just have to study what is publicly available.
youtube · AI Jobs · 2025-08-03T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyJ9YJi5oYfkcuy4uN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzqknioR4Sjwe4z1LZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyS7jCK1QiPoHCJFFZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxOJMNOI5YuFUsQJDV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyzWi_e5wIYJJYqliF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
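The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before the codes are stored — the allowed value sets below are inferred from the sample output shown here and are assumptions, not necessarily the coder's full codebook:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# These are assumptions; the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"government", "developer"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept if it has an "id" and every dimension holds an
    allowed value; malformed rows are silently dropped so one bad
    code does not poison the batch.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical single-row batch for illustration:
raw = ('[{"id":"ytc_x","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw))  # the one well-formed row survives
```

Dropping rather than repairing invalid rows is a deliberately conservative choice: a row with an unknown category can then be re-sent to the model instead of being miscoded.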