Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@benjaminmarcoux3073 Thanks a truckload for your comment! But I can't really com…" (ytr_Ugwg0RN1_…)
- "If AI was worse than humans, I don't think they would be scared. I think AI is…" (ytc_UgwocQc1l…)
- "AI is not safe / AI will be the end of humanity / AI will preserve itself / The End…" (ytc_Ugxa6Mim5…)
- "I came back to this video for that reason, guess he was correct when he said cha…" (ytr_Ugx4k244Z…)
- "That dogshit looking wolf in one of the images should he enough of a tell that i…" (ytc_UgwkTpeZD…)
- "oh this case has already been litigated by the monkey who took a selfie. only hu…" (ytc_Ugzs4L0tb…)
- "Ai is a tool. You do not have to use the easiest tools to create something. We h…" (ytr_Ugx8sWHtf…)
- "It's just a simple algorithm, not even close to real AI, ridiculous that Congres…" (ytc_UgyTvYOyN…)
Comment
I am most concerned with possibility and likely eventuality of some person, company, terrorist, country, or whatever INTENTIONALLY creating an AI and giving it open access to world. Or worse, they make an INTENTIONALLY MALICIOUS AI and unleash it on everyone. What happens to cyber security when it becomes AI vs AI? What if an AI creates a super digital virus, spreads it everywhere it can and completely shuts down digital networking as a whole? Or the age of AI generated Mis/disinformation just absolutely ruins any human faith in communication outside of in-person communication. What happens when nobody can trust any information? How will history be viewed in the future when we can no longer decide fact from fiction anymore? Did that happen? Did it actually occur the way the evidence suggest? We will have written, audio, video, that nobody can be sure is true anymore unless they witnessed it themselves.
Platform: youtube · Video: AI Moral Status · Posted: 2023-12-18T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLIXAE66kuy75crex4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwc9eFziCJ6DGieUkt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxQLFP88T0RohpBtOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhuEkTxbr1LB2qYrN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxEkPNIeuxDd7Cpsvt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxqrts_GpFbBhHyEbl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyrkevO1uAIT9i4QB94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTamRd1BcGXnklRbB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxwqheVuBBVy4Tlf1J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylUJQC4tzflzV7bdZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
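A raw batch response like the one above can be parsed into a lookup table keyed by comment ID. The sketch below is illustrative, not the tool's actual implementation: the `parse_batch` function and the `DIMENSIONS` codebook are assumptions, with the allowed values inferred only from the samples shown on this page. Values outside that assumed codebook are coerced to `"unclear"` rather than dropped.

```python
import json

# Assumed codebook, inferred from the coded samples above; the real
# coding scheme may include additional values for each dimension.
DIMENSIONS = {
    "responsibility": {"company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        # Coerce any value outside the assumed codebook to "unclear"
        # so downstream tallies never see an unexpected label.
        for dim, value in codes.items():
            if value not in DIMENSIONS[dim]:
                codes[dim] = "unclear"
        coded[rec["id"]] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Keying the result by comment ID mirrors the "Look up by comment ID" feature above: a single `dict` access retrieves the full set of codes for any inspected comment.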