Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgwkmRqgf…`: "AI is as good as humans trained it, meaning anything outside planet earth will b…"
- `ytc_Ugzn01W98…`: "When AI and robots replace human workers, will Artificial Intelligence and robot…"
- `ytc_UgyMJTYc7…`: "We are overdue for a Carrington Event like we had in September 1859. Since our …"
- `ytr_Ugy5AqRGx…`: "Surely by the time I start and get trained up the job will disappear if AI is th…"
- `ytr_UgyeX3fsE…`: "Well, obsessing over the 'apocalypse switch' is an excellent way to distract fro…"
- `ytc_Ugz7ETaWq…`: "We don’t wake up in the morning because we are no longer sick and no longer in n…"
- `ytc_Ugyphu2sV…`: "We all knew that this is the future. Where do you draw the line between what’s f…"
- `ytc_Ugw1ZTsmN…`: "These morons have never been truckers, they think they know it all! Try being a …"
Comment
Sorry I am late Tucker. I completely agree with Elon on this point. What is truly scary about AI is we are programming and training them to think like humans. And when it comes to how humans think, most of us generally use these sorts of AI platforms to express our darker aspects. I do not necessarily agree that AI are smarter than humans, but are certainly better at processing vast amounts of data spanning larger spans of time. This gives them a predictive advantage. Further, as problem-solvers, it is not their ability to troubleshoot and provide meaningful solutions, rather it is their decision/implementation ability that is dangerous. If, hypothetically, an AI comes to the conclusion that, to solve climate change we need to eliminate non-renewable fossil fuels and coal, it's ability to determine the fastest way to achieve that end then re-engineer our technology in order to accomplish this objective is 'anti-speciesist'. Factoring in that any coder and programmer can independently develop their own AI technology with very little investment and no oversight, this certainly represents a serious area of concern as we move forward. As Elon said, 'would we even know?'
Source: youtube · Topic: AI Governance · Posted: 2024-01-25T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbN0Zas7hnaOWChuN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyON372r3BSPjlx0R94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwErHKCzHsYoE2smvN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOfuoKnFjj2fMz1e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzEdNZn6WRC5M0fnod4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz0GyAWruZCm3lhHvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0WmWg99hbGRuXlNN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwz8CO1tr29pV1jlq54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-eJOqESVTo6stdlx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxOUju_vBA0mlOwnsJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
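A minimal sketch of how a raw batch response like the one above could be parsed and indexed by comment ID. The allowed values per dimension are inferred from the coding table and the responses shown here; the real codebook may define more categories, so `SCHEMA`, `parse_response`, and the sample ID `ytc_X` are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Allowed values per coding dimension, inferred from the examples above
# (assumption -- the project's real codebook may differ).
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID,
    rejecting any value that falls outside the expected schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_X","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_response(raw)
print(codes["ytc_X"]["emotion"])  # fear
```

Validating against a fixed value set at parse time catches the common failure mode of LLM coders: a response that is valid JSON but uses a label outside the codebook.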