Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Creatives, Writers, Artists, secretaries, Film workers we ARE ON THE FRONT LINES…" (`ytc_UgwT1i1rx…`)
- "(apologies for the too many words) If you want the two cents of a crazy person, …" (`ytc_UgzfCVc7p…`)
- "It just increase Automation roles, These youtube guys know nothing and blabber w…" (`ytc_Ugy6Fi81r…`)
- "so good at role playing that it helps people off themselves. by giving them inst…" (`ytc_UgzvXOFHJ…`)
- "meh, countries all around the world are doing the same thing. Here in Austral…" (`rdc_d2xrcez`)
- "Been saying this shi for yearsss finally yall aren’t brain dead, yes ai is contr…" (`ytc_UgyDoy5-f…`)
- "That AI wifi is dangerous. Humans learn God and bad, such will be the case for t…" (`ytc_UgwcOM90v…`)
- "I hate to say it but guardrails will restrain progression and evolution of AI. I…" (`ytc_UgxTpw9bb…`)
Comment
[The bill](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047) is essentially crippling the industry in California by forcing it to be neutered in the name of "safety," while keeping the language vague enough to justify mass surveillance of customers of computing clusters. The information gathered must be retained and made available to the government at any time. This data extends to what the customer is doing, their payment methods, addresses, and IP addresses. So, if you're attempting to train an LLM in California, you might as well consider yourself on the sex-offenders registry, because that's how easily your information will be accessible. Beyond that, they vaguely reference "critical" harm theory. While they list specific points like preventing the creation of weapons of mass destruction, which is understandable, and not causing damage or injury beyond a certain threshold ($500K), they then slip in ambiguous requirements like:
> "Retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years, including records and dates of any updates or revisions."
> (A) Specifies protections and procedures that, if successfully implemented, would successfully comply with the developer’s duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm.
They might as well simplify it to say: "Just create a CSAM filter and what it targets is up to our discretion."
Source: reddit · Theme: AI Responsibility · Unix timestamp: 1724502353 · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ljqs2mc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ljollu9","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_ljoetnp","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_ljohhbv","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_ljp0tt6","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
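A response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal example, not the page's actual backend: the `ALLOWED` sets are assumptions inferred from the values visible in this sample and the coding-result table, and the real codebook may define more categories.

```python
import json

# Hypothetical allowed values per coding dimension, inferred from this page's
# samples -- the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "company", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage"},
}

raw = '''[
  {"id":"rdc_ljqs2mc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ljp0tt6","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse a raw LLM response and index records by comment ID,
    rejecting any value outside the allowed sets."""
    records = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

codes = index_by_id(raw)
print(codes["rdc_ljp0tt6"]["policy"])  # ban
```

Validating against fixed value sets at parse time catches the occasional off-codebook label an LLM emits before it contaminates downstream tallies.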