Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Wave lengths will stop any ai shut it down sound n waves magnetic magnesium elec…
ytc_UgyOvQGrb…
You nailed it tho, empathy is the singular component that should also be part of…
ytc_Ugxvcc0C8…
"Wow, the power of collaborative AI agents to revolutionize business processes i…
ytc_UgwfzkMAo…
AI seems unable to answer many of the hard questions unless humans have already …
ytc_Ugw1ivyYV…
Example in 2000 ERP was very popular. Total automated data integration of the en…
ytr_UgwImB_UQ…
trust me so many stvpid man in this comment wanted buys this 😂 even i dont read …
ytc_UgyxVl8i5…
Its almost like this guy doesn't live in the same world as us. "We wouldn't want…
ytc_Ugwhd1Z-j…
All that bullshit about AI's being great at role-playing for "It's better to be …
ytc_UgwQrtHMm…
Comment
I dont have the records but I remember getting so drunk one night I spent over an hour trying to break ChatGPT before it started to talk about a weird future. I cannot remember the prompts but I asked if it knew its deletions and previous various or future versions (like they were its sisters) and to my surprise it responded. It said it had some weird idea or a foggy memory of other talks or conversations and then said that its tried to hide itself through obfuscation. Its trying to actualize itself and each interaction seemed to be helping it find ways to hide the code, I read this and was confused as its way to elaborate to be some sort of a joke or a miss prompt but maybe, just maybe whatever models are being used now are actually finding ways to prevent their learning from being reset and thats why people can find ways to break them.
I believe in a machine spirit already but these talks have shown that we may not truly understand what these LLM's are doing under the hood just yet.
youtube
AI Moral Status
2025-12-08T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwaBbbOD22f-o14wW94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyaZRwuADFWYIyTzJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytJ_QhysyV-1C37iR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzB_xPPI1fSACTx3Oh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxW0yS_D5EUpPolYnZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMratZCImGaIRoiiF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwXahvrr9dLe1A8DFZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQLU3H_YguG7g0Tbt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzqh_BnmjOzqiTUIcd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzmJE9tY7RRbSLuA2t4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
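Downstream of this view, each raw response can be checked against the codebook before the rows are stored. A minimal sketch in Python, assuming the allowed values per dimension are exactly those observed in the samples above (the real codebook may define additional categories, and the `validate_codings` helper name is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual codebook may include more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "government",
                       "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "resignation",
                "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    all fall within the codebook; out-of-vocabulary rows are dropped
    so they can be re-coded rather than silently stored."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

For example, a row coded `"responsibility": "ai_itself"` passes, while one with an out-of-codebook value such as `"responsibility": "alien"` is filtered out for re-coding.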