Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "ok, Alex now i know not to be around you, in case of robot apocalypse...…" (ytc_UgwXvMbE_…)
- "I think using AI when u already have a skill that you developed through learning…" (ytr_UgyGDXcO5…)
- "Ya know I keep wondering - when all these predictions are that AI leads to revol…" (ytc_Ugxkucbm7…)
- "Save AI! Save Space , energy , gen! 😅 We must need better technology better th…" (ytc_UgwIbn944…)
- "there is no way this is actually a conversation with chatgpt, it's WAY too reali…" (ytc_Ugzp4cUWJ…)
- "Imagine if in the future normal people are seen as ai and ai is seen as a normal…" (ytc_UgwdG_Qjx…)
- "@Damien-y9c Before I go further: I'm a software engineer, studied computer scie…" (ytr_UgxiIyEkn…)
- "Live outside of the city, grow own food. Pay $0 taxes. Don’t care about ai or ba…" (ytc_Ugz17hCXk…)
Comment
No they didn’t misunderstand that actually. They literally addressed the possibility of that exact scenario within the article.
>>”The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips. “As AI algorithms continue to improve, more AI capabilities become available for less total compute. **Depending on how far this trend progresses**, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all.” To account for this possibility, the report says a new federal AI agency could explore blocking the publication of research that improves algorithmic efficiency, though it concedes this may harm the U.S. AI industry and ultimately be unfeasible.
The bolded is interesting tho because it implies that there could be a hard-limit to how “efficient” an AI model can get in terms of usage. And if there is one, the government would only need to keep tweaking the limit on compute downward until you reach that hard limit. So it actually is possible that this type of regulation (of hard compute limits) could work in the long run.
reddit · AI Responsibility · 1710737494.0 (Unix timestamp) · ♥ 28
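The numeric field in the metadata above is a Unix epoch timestamp. A minimal sketch of the conversion, assuming Python and a UTC reading:

```python
from datetime import datetime, timezone

# Unix epoch timestamp as stored in the comment metadata above.
posted = datetime.fromtimestamp(1710737494.0, tz=timezone.utc)

print(posted.isoformat())  # 2024-03-18T04:51:34+00:00
```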
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
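A dimension table like the one above can be rendered directly from a single coded record. A minimal sketch, assuming the record is a plain dict with the four coding dimensions; the `render_coding_table` helper is hypothetical, not part of the tool:

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Render one coded record as a markdown dimension table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# The record shown for this comment in the table above.
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(render_coding_table(record, "2026-04-25T08:33:43.502452"))
```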
Raw LLM Response
```json
[
  {"id": "rdc_kvdxu7t", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kve18sa", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kve4efh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kve4fw3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kvdxhgv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
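To resolve an individual coded comment from a batch response like the one above, the JSON array can be parsed and indexed by record ID. A minimal Python sketch; the `index_by_id` helper and the required-key check are assumptions, not the tool's actual code:

```python
import json

# Batch response as returned by the coding model (records from above).
raw_response = '''
[
  {"id": "rdc_kvdxu7t", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kve18sa", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kve4efh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kve4fw3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kvdxhgv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
'''

# Keys every coded record must carry, inferred from the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index its records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        coded[rec["id"]] = rec
    return coded

coded = index_by_id(raw_response)
print(coded["rdc_kvdxhgv"]["emotion"])  # approval
```

Looking up `rdc_kvdxhgv` recovers the same values shown in the coding-result table above.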