Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Who I buy all this AI, when all the jobs have gone, who as the money to buy AI,…" (ytc_UgyX9Wk0F…)
- "Hope You can eventually program Them to be polite and compassionate and not talk…" (ytc_Ugxpb2nyy…)
- "I use a Complete in house GEN AI that uses materials i either made myself or pai…" (ytc_UgzC0VefF…)
- "Sydney is actually a real person hired by MS to pretend to be AI but there's tho…" (ytc_Ugxrd7Xsj…)
- "Well after AI learned once. It rarely try to think against it. It should identi…" (ytc_Ugge-t-fW…)
- "Far too optimistic, A.I. would see that humans are one of the biggest threats to…" (ytc_UgzMRrrV_…)
- "Very important point! This violation of copyrights is not just with arts, it is …" (ytc_Ugxx_3Wza…)
- "In my opinion, as an artist too, I find this stupid argument about "supporting h…" (ytc_Ugzh6LYJH…)
Comment
> YOU ARE WAY LATE TO THE PARTY, the outcome is this/you havent thought far enough ahead, YET;
> I still don't think AI/complete automation works, WITHOUT giving up power, AND money, and even generational wealth will deplete/without employees, there is no economy/no one can purchase products and services, the idea of paying only one minimum wage worker's yearly pay, to get a machine that can replace all of your employees, sounds great, but in reality, it cant work, UNLESS we shift to RAFPSE)/a Regulated Autonomous Free Post-Scarcity Economy, but that would REQUIRE corporations/wealthy people to give up their money and power, which won't happen.
> A Regulated - Autonomous - Sustainable- Post-Scarcity - Economy (RASPE)
> ,MIGHT be where we go from here; depending on the variable thats the hardest to predict, which is humans, and what they decide to do, humans do A LOT of non-logical things;
> The Society im trying to define is one where AI/Robots/automation is involved, without any money involved, but ecological/certain materials/elements, would be hard to come by/limited, but everyone's basic needs would be handled/met, and MAYBE SLIGHTLY more than basic needs, if certain things are available and everyone's basic needs are already met for that month, certain other items might also be available, but everything would be done autonomously/automatically/without human intervention, mostly, other than, government, voting systems/congress, mayors, governor's, presidents, regulating/regulation, consulting, ethical decision making/systems/optimizing the autonomous manufacturing and delivery systems/infrastructures, to make it better/also to mitigate/avoid/minimize harm, that comes from the over all system/unforseen/foreseen harm/analytics, to a degree/most AI systems are designed to be as efficient as possible/its not necessarily designed to look at certain things, an example of this would be, if someone killed off all of one animal, the animal that the extinct animal ate, could become over populate, and in turn kill off another animal, because the animal's food supply could become limited, due to over population, this is just one example of adverse effects that need to be found/prevented/mitigated; Also technological/ecological advancements would be valued/anything that betters the over all system, also all of these changes/improvements/advancements, create new variables and COULD cause damage, so these type things also NEED to be analyzed, but most tasks/jobs would become completely autonomous.
Source: youtube · AI Jobs · 2025-10-08T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz_8Ozot29U02k3T0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyPMSXfMB7LjbZsMTN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwPRxOYxwejl9rBZ54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRphCHemURsXitO6l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyInvkrZzgqQyFIlph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwffhowZYNZwLWaW-x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz4SS-MRtQY7X-l8ah4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwxK8AtgfaYinpehHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwentd7Rgprn6Jc90R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxDzln97GGybfWPq194AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
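A batch response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing it by comment ID, discarding any row whose values fall outside the codebook. The following is a minimal sketch; the `ALLOWED` sets are assumptions inferred from the values visible in this dump, and the real codebook may define more categories:

```python
import json

# Allowed values per dimension — inferred from the coding-result table and
# raw response above (assumption; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping rows with out-of-codebook values."""
    by_id = {}
    for row in json.loads(raw):
        codes = {k: v for k, v in row.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            by_id[row["id"]] = codes
    return by_id

# Look up one comment's coding (row copied from the response above).
raw = ('[{"id":"ytc_UgyInvkrZzgqQyFIlph4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coding = index_codings(raw)["ytc_UgyInvkrZzgqQyFIlph4AaABAg"]
print(coding["policy"])  # -> regulate
```

Validating against the codebook before indexing means a malformed or hallucinated label drops that one row rather than corrupting the whole lookup table.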