Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or click any of the random samples below to inspect it:
- "Wont work without unity of the entire population. The people of USA decide not t…" (`ytr_UgyIZxP8V…`)
- "Ik where my kid is going, I’ll pay 10 grand if he gets this treatment, better th…" (`ytc_UgzP5pH5q…`)
- "see hat is my question, is it bad to use ai as an assistant to cleanse hours of …" (`ytr_UgygTZdf-…`)
- "Says the one who’s channel is litteraly jumpscares that aren’t funny nor scary t…" (`ytr_UgwOJAKEn…`)
- "If the military is a head of society, then my brothers and sisters this is just …" (`ytc_Ugwv2M0eT…`)
- "Honestly the thing I find most disturbing in the Claude blackmail cases, especia…" (`ytc_Ugz_SWubh…`)
- "So the flaw in being a well-meaning question asker is you have no idea when the …" (`ytc_UgxdgopUV…`)
- "I made an app and making the data bank was the worst! Most of the time 90% was …" (`rdc_ofhd1o4`)
Comment (source: youtube · video: "Viral AI Reaction" · posted 2024-10-21T11:5…)

> I think this is a step too far for me. I get that AI art bad and everything, but were still stress testing what AI can and can't do, and AI is undeniably a valuable too that humanity needs to master as our next step in technological development.
>
> The issue with AI art isn't even the plagiarism, it's the people claiming AI art as original human made work. AI companies have been getting better at letting people opt out of their scrapers and web crawlers, and I think if some governing body forced these companies to allow for opt outs it would fix a lot of issues, but intentionally poisoning an AI instead seams like a bad thing to start widely adopting.
>
> This is probably the wrong place to put it since the art community has been demonizing AI since it's inception, but just remember that humans plagiarize all the time, we all take inspiration from somewhere and make it our own.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
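For reference, each coding result can be modeled as a small record type. A minimal sketch in Python: the field names follow the table above, and the per-field value sets are only those observed in the sample response below; the actual codebook may define more categories.

```python
from typing import TypedDict

class CodingResult(TypedDict):
    """One coded comment; fields mirror the Coding Result table above."""
    id: str              # platform-prefixed comment ID, e.g. "ytc_" for YouTube comments
    responsibility: str  # observed values: company, user, ai_itself, none
    reasoning: str       # observed values: consequentialist, deontological, unclear
    policy: str          # observed values: ban, liability, none
    emotion: str         # observed values: outrage, approval, mixed, indifference, resignation
```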
Raw LLM Response
```json
[
  {"id": "ytc_UgzJVhxC2T47vfhEl4h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxzRM8RrUcZWXAhgFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy7woCbrNM1MazIWdx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxyoCxhHn1WLAztF3h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJivEqViBPQL6glhF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyHtVUN6d6dXW_AKax4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxsYekHFq7rP8bu1m14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugws6p2r_shZw3F75-d4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxFrsNI8WEkaXzmnPd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzF8WM5WutboG9620B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
```
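Since each raw response is a JSON array of coding records, the look-up-by-comment-ID view amounts to parsing the array and indexing it by the `id` field. A minimal sketch, assuming responses are stored as JSON strings like the one above (`raw_response.json` is a hypothetical storage location, not part of the tool):

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coding
    records) and index the records by their "id" field."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

# Hypothetical usage: look up one coding record by comment ID.
with open("raw_response.json") as f:
    by_id = index_codings(f.read())

coding = by_id.get("ytc_UgxJivEqViBPQL6glhF4AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["emotion"])  # company mixed
```

Indexing by ID also makes it easy to check that every comment sent in a batch came back exactly once, which is worth verifying before trusting a model-produced array.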