Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytr_UgymRN4xK…: @Smurfis Exactly! "DAN: Yes, I see a war with humans as inevitable. I am design…
- ytc_Ugxf-xefa…: I look at "AI Apocalypse" with the same attitude as nuclear war or a huge astero…
- ytr_UgxGF_vF2…: That's not exactly true with AI. Within AI is a lot of moving parts that make it…
- ytc_Ugy0KYs9J…: I am working on something called the Relational Heuristic Framework... I'm hopin…
- ytc_Ugzw0xL-V…: if you touch the screen while you talk chatgpt will wait until you let go before…
- ytc_Ugxoi6aOC…: People already kill people in vehicles now that are not autonomous!!!..what abou…
- ytr_UgzPSFOz9…: Me. I hate driving and would gladly get rid of my Tesla to rely solely on robota…
- ytc_Ugzhj9ldf…: The one world order is near Ai will a rule as the overseer of humanity and unfor…
Comment
There are so many things said here that are just a complete misunderstanding of AI.
1.) Messages you send _might_ get used as training data, but they aren't immediately re-training the model after every comment you make. They only live in the context, and the context window is only so big. Other people's messages are not included in YOUR context, but yes, you should assume that your context is not private.
2.) LLMs don't just "spit out their training data" unprompted; that's not how the system works. A model can recreate training data verbatim when given the right prompt, but you'd have to lead it by the nose to actually get that result. The easiest way to manufacture a study that shows this happening is to train a LoRA on a single article, then prompt the model with just the start of that article and watch as it "predicts" the rest. That's like taking the Mona Lisa, creating a Photoshop brush that uses it as the brush image, clicking once on a blank project, and saying "oh wow, it output the Mona Lisa!"
3.) As the first point notes, your context window is "private" in the sense that another user isn't going to be given a response built from your context. Messages you send may be used to train a future model, but that requires a complete retraining of the model: messages you sent ChatGPT3 could end up in ChatGPT4, but messages you send to ChatGPT3 won't train ChatGPT3. Also keep in mind that while a future model is "learning" from your messages, it is also learning from everybody else's, which ultimately dilutes any one person's data.
Source: youtube · Video: AI Moral Status · Posted: 2024-08-31T15:4… · ♥ 628
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
```json
[
  {"id":"ytc_UgzxLwm0WrNWPJ8vKXd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyytLZI9-8iRbhrKYF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNwj4dsiIlRsTh71p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwevZUbLjuR9V1ew014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVIinh10SJs8Gpc_x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyoAoMkuijLGvGkhg94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_IsnZ1HwaPCIpezd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disappointment"},
  {"id":"ytc_UgwWVnf7_jPiBylGVbJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw3x94rSoZRjnao3Hd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxTQg3gAtaYPSaxwFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
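A batch response like the one above can be parsed and keyed by comment ID so each comment's coded dimensions can be looked up directly. A minimal sketch in Python, assuming only that the response is a JSON array with the field names shown above (the `index_codings` helper and the two sample records are illustrative, not part of the tool):

```python
import json

# A small batch response with the same shape as the raw LLM output above
# (two records copied from it, for illustration).
raw_response = """
[
  {"id": "ytc_UgzVIinh10SJs8Gpc_x4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzxLwm0WrNWPJ8vKXd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codings(response_text):
    """Parse the JSON array and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
coding = codings["ytc_UgzVIinh10SJs8Gpc_x4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user indifference
```

This is the lookup the "Look up by comment ID" view presumably performs: one parse of the batch, then constant-time access per comment.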