Raw LLM Responses
Inspect the exact model output for any coded comment, or look a coding up directly by its comment ID.
Random samples
| Sample (truncated) | Comment ID |
|---|---|
| Problem electricity The Dor brothers are YouTubers who make AI political content… | ytc_UgwO2wPDM… |
| Bro trust me as a small artist ai is literally weird drawings are literally awkw… | ytc_Ugx0xIM3Z… |
| Be careful in the AI apocalypse @coldfusion. AI is not going to forget that you … | ytc_UgxN8klFE… |
| I believe humans are not suited to share their existence with AI. We have our ow… | ytc_Ugw9TXVwj… |
| The A.I should've said about Israel.... I'm Muslim on the Friday, Jewish on the … | ytc_UgxeTlJ18… |
| Could AI end us as a specie, and if it does, there is a way to survive, or if i… | ytc_UgySbmxpM… |
| If US not going to use AI, china will . Peoples going to lose their job anyway.… | ytc_UgzTTbAKU… |
| Yes dear, but, tutors are very proficient and they easily detect if the work is … | ytr_UgxWjivRA… |
Comment
"there's the right way, the wrong way and the Max Power way!" "isn't that jut the 'wrong way'?" "yes, but faster!" we know that predictive LLM 'AI' can make mistakes quite frequently, oftentimes kinds of mistakes that humans would never make, but it is faster. so, people who either know the subject, or who can research the subject, need to verify everything that passes through a predictive LLM 'AI' for accuracy. even if the predictive LLM 'AI' is right 99.99999% of the time, we would always need to account for that possibility of error, because it doesn't actually understand what it is saying, it did not reason its way to its conclusion, it only predicts what would likely come next based on its data and parameters. I'm not saying to not ever use it, or that it doesn't have any potential or benefits, just that we need to remain vigilant about verifying any information that it produces. any company or person who uses predictive LLM 'AI' should be held 100% responsible for any actions taken based on the predictive LLM 'AI' 's output. also, while I am on my high horse, sources scraped for data to create LLM's should be well compensated since the value of those sources was, or is being, stolen and the originators of the information are being denied potential revenue by people going onto their scraped site. (people may have gone to Site A instead of Site B, but by using the LLM, people went to neither). if the LLM 'AI' can't compensate its sources because it can't/won't identify its sources, then it should not be allowed to be used. it is just plagiarism and/or theft of intellectual property.
youtube · AI Jobs · 2025-05-30T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxPOIVaMsJyzG_a7Kh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuG6-kRRbKC7TLV694AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyCILISw1s9FecDAKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzqGmoaxwwqo8x9LxF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgybNRXahwAaPBlZGf54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
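A raw response like the one above is a JSON array with one coding object per comment ID. The following is a minimal sketch (not the tool's actual implementation) of how such a payload could be parsed, validated against the five coding dimensions shown in the table, and indexed for lookup by comment ID; the IDs and field values are taken from the array above, and the `index_codings` helper name is hypothetical.

```python
import json

# Two entries copied from the raw LLM response above.
raw = """[
  {"id": "ytc_UgxPOIVaMsJyzG_a7Kh4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyCILISw1s9FecDAKJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]"""

# The five dimensions every coding object is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse the model output and index each coding by comment ID,
    rejecting any entry that is missing a dimension."""
    codings = {}
    for entry in json.loads(payload):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {sorted(missing)}")
        codings[entry["id"]] = entry
    return codings

by_id = index_codings(raw)
print(by_id["ytc_UgyCILISw1s9FecDAKJ4AaABAg"]["reasoning"])  # deontological
```

Indexing by ID first makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan of the array.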