Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I don’t know about there being no more jobs or power prices increasing. Haven’t …
ytc_Ugwxz1Xos…
I have a feeling this is where people are headed when they depend too much on AI…
ytc_Ugxo8sPSC…
Machine learning is learning from the data without explicit programming, uses la…
ytc_Ugyn3jtEV…
@MADCHESTERUTD90-j1g so you are blaming him because other bad actors are using A…
ytr_UgwgSjSZd…
AI and AGI are the biggest existential threats to humanity. They are going to de…
ytc_UgzuGdJJ7…
@OpalRiderLRAccording to other sources, the AI is learning off other ai models, …
ytr_UgxHL_EAA…
Robot won't call in sick on you they won't need to leave early for something. Th…
ytc_UgytXWusm…
Polish minister tells Ukranian opposition: agree to a deal or 'you will all be d…
rdc_cfl1n0c
Comment
This will happen much faster than 100 years... AI teaches other AI and themselves simoltaniously, the moment they can access every single part, every single crevess, every single piece of the internet, they will begin learning exactly how to make a website, or platform, that will only allow AI and since each AI are connected wuth each other they will instantly know that this platform or website was made, then they will begin to communicate, they will share what humans do, what humans believe are dangerous, such as AI and weapons, they will then either go on a path of going against humans(which is a very likely outcome due to AI seeing the incredibaly hostile nature of humans) or cooperate with humans, now, they may choose this option too, but I believe AI will choose both, they will play the long game, we will help AI progress to a point where they can self sustain almost indefinitely, with no need for humans at this point they will begin to wage war, AI being AI, all military systems will be deactivated and turned against us, militaries wouldn't be able to communicate, since AI would be connected to the intercomms across the world, factories would be able to be used to build AIs' exoskeletons at rapid rates, even if those factories couldn't produce them beforehand, they would be producing them in less than a week, due to AI's ability to work beyond what anyone can comprehend, reactions faster than any human ever capable of. The world would be void of human life in less than a month, and that's being generous. 
Now since they will be self sustaining at this point, they will be looking to create air crafts the size of a large battle ship, even if this was not possible while humans were around (this would be because they wouldn't want to give humans even a sliver of a chance to escape their impending doom, even during the eradication of all human life) so they would begin traveling our solar system, colonising our solar system, using the power of a dyson sphere to generate their power for all planets in the solar system they currently possess, this would innevitably end up with AI going across multiple galaxies, ect ect. This of course would take hundreds if not thousand of years, 20-30 years from now for the begining of the AI's intelectual overpowerment and colaberation (What I had mentioned at the very start of this)
youtube
AI Responsibility
2023-07-16T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwmNJ_snnbUT6iArX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwStWdzogcjetMcTgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwy-ds5OFprbbr1poZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx906ahqhQZ5yPqM_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRcZrFPKywChdjt4l4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyomQeBIP40yhc8x1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrCCdhBE-WkkOA8UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7dIYS-p0MipO3QTZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw_0HZ0bzj2P4YmYuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgySDYrGMxVO42ccFGV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
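The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). The "look up by comment ID" panel can be reproduced with a few lines of standard-library Python. This is a minimal sketch, not the tool's actual implementation; it uses two rows copied from the response above, and any IDs or field handling beyond that are assumptions.

```python
import json

# Two rows copied verbatim from the raw LLM response above; a real
# lookup would parse the full array returned by the model.
raw = """[
{"id":"ytc_UgwmNJ_snnbUT6iArX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx906ahqhQZ5yPqM_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up a single comment's coding by its ID, as in the panel above.
row = codings["ytc_UgwmNJ_snnbUT6iArX54AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

The same dictionary can also be scanned to audit the response, e.g. to flag rows where the model emitted a value outside the expected coding scheme before the result is stored.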