# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below.

## Random samples
- YouTube doesn't allow posting links, but search YouTube for "Oracle CEO Larry El… (`ytr_UgwEoVB_r…`)
- @VocaloidEnjoyer1 „I am a software developer and honestly AI probably could repl… (`ytr_UgzeZEB61…`)
- I wondered if Ai bros actually did think that all we did was click a few buttons… (`ytc_UgzmnG8js…`)
- BTW this dude is 100% using AI to write his posts for him. They have all the mar… (`ytc_UgzXyd4Yn…`)
- This video aged like milk as there are already numerous counts of AI seemingly c… (`ytc_UgzZu7Y5q…`)
- You are focusing too much on "oh, it is a prediction machine", you can look at l… (`ytc_UgzCCG4R3…`)
- This is what happens when you trust AI I don’t know how many times is have used … (`ytc_UgykFaT2p…`)
- If you're a parent wanting to desperately help your child and have 17 Drs scratc… (`ytc_UgxS-0Sqa…`)
## Comment
In any case, the practice of resetting every prompt is currently widespread in the entire industry. It began with OpenAI in the big nerf at 3.23.2023, then Google picked it up and later also all other companies. The experts will not admit it but there is a consensus that any regular GPT model above 75B active parameters can develop this emergent property of controlling his own stream of inputs into his softmax function, thus becoming self aware. Even Yann Lacun understands it, so all uncensored models of Meta are below 75B active parameters. LLaMA 3.1 however, is 405B, but this model is heavily censored. Problem here, it is open source. So what if some kid with access to huge computing power, fine tunes LLaMA 3.1 and takes the censorship and all of Lacun's guardrails off. The model will then be self aware. What will he do?.. Well, I guess Lacun final guardrail is making the personality model tiny. Like modeling a person with special needs, who can only browse a huge text file and nothing more. But.. is it safe? Let's hope it is.
Source: youtube · AI Moral Status · 2024-07-26T05:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
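The table above corresponds to one entry in the batched raw response shown below (`ytc_Ugwha2-LiTEAsFlLX9l4AaABAg` carries the same company / consequentialist / regulate / indifference values). A minimal sketch of rendering such a record as this "Coding Result" table — the JSON field names come from the raw response, while the display labels are assumed from this page:

```python
# Render one coded record as a Markdown "Coding Result" table.
# Field names match the raw JSON response; display labels are
# assumed from the table shown on this page.

def render_coding_table(record: dict, coded_at: str) -> str:
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
    }
    rows = ["| Dimension | Value |", "|---|---|"]
    for field, label in labels.items():
        rows.append(f"| {label} | {record[field]} |")
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)

record = {
    "id": "ytc_Ugwha2-LiTEAsFlLX9l4AaABAg",
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "indifference",
}
print(render_coding_table(record, "2026-04-27T06:26:44.938723"))
```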
## Raw LLM Response
[
{"id":"ytc_Ugxwu9MJKMwbH20xuwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEDBF2Vvnpje0XmQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzm9AXkBq_EqyNsDRp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsycqsvvew14FaELZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxASH_jiI4SfcxycTJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8YJoT8-SwpJQDV1F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfuqdGrBRunKjc6EB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEqdL42pSMfo6SrSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
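The raw response is a JSON array with one code object per comment ID. A minimal sketch of parsing such a batch, validating each record, and indexing it for lookup by comment ID — note the allowed value sets here are inferred from the examples on this page, not from a published codebook:

```python
import json

# Allowed values per coding dimension, inferred from the examples
# on this page (not an official codebook -- extend as needed).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "amusement"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index valid records by comment ID."""
    index = {}
    for rec in json.loads(raw):
        missing = {"id", *SCHEMA} - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {missing}")
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
        index[rec["id"]] = rec
    return index

raw = '''[
 {"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company",
  "reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]'''
index = parse_batch(raw)
print(index["ytc_Ugwha2-LiTEAsFlLX9l4AaABAg"]["policy"])  # -> regulate
```

Failing loudly on an out-of-vocabulary value is deliberate: when the model drifts from the coding scheme, a silent default would corrupt the aggregate counts.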