Raw LLM Responses
Inspect the exact model output behind any coded comment.
Comment
If left unchecked, AI will eventually come for all the “other things” that people think we “could be focusing on instead”. Everyone is an ai bro until they get one-upped by ai. Unfortunately, using Nightshade and Glaze is in fact an ai generation, so when the ai detecting software flags it as ai, they are not wrong. It is something to consider when submitting your altered work. The last thing I want to point out is that many ai people are claiming that Glaze and Nightshade have already been worked-around, while a solution does exist even the inventor of the solution admits that it is not very reliable. Also simple things like removing water marks, signatures, meta data, or copyright management information ( C date/year by name) is not only a violation of the USCO code but also of the DMCA on a global scale and if proven (like say in model training databases) to have been altered or removed could put the offender on the hook for $25000 USD for each offense, in punitive damages owed to the copyright owner.
Platform: youtube · Video: Viral AI Reaction · Posted: 2025-01-17T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgymOpsROfTsimppaKZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyC-hrqbBYec_p_K3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuLVZKalFiT7xpBPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxyFVQjIA4qSXJjZt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyYMBxFCH1xWR6bT4B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwGcSO_SobeDaW9M0h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7UvO1EfCMyYfTU1x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuJY20pzM55Bs_p8t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgnPrNGhIv3LXoKzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzuncGtk0ikQjXVmIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
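A response like the one above can be machine-checked before the codings are stored. The sketch below parses the raw JSON array and keeps only rows whose labels fall within the value sets actually observed in this dump; the real codebook may define additional labels, and the function name `parse_codings` is illustrative, not part of the tool.

```python
import json

# Label sets per coding dimension, as observed in this response
# (assumption: the full codebook may allow more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user"},
    "reasoning": {"virtue", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "approval", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop rows with missing or unknown labels."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coding must point back to a comment ID
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows that fail validation would then be queued for re-coding rather than silently written to the results table.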