Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID, or pick one of the random samples below.

Random samples
- "fun fact these AI’s are referencing off images made by people, so technically th…" (ytc_Ugy2KCOl8…)
- "Would the “right” sacrifice depend on how reversible the AI is? On NanoGPT, swap…" (ytc_Ugwm05QuS…)
- "If businesses save more money by hiring fewer people because of AI, then there w…" (ytc_UgwMSRPKm…)
- "@Lb57e8xb You are right that ChatGPT and other LLMs use a "thinking" process, bu…" (ytr_UgzChfszO…)
- "I've been saying it for a while now... I'm not afraid of what AI might do... I A…" (ytc_UgweBCjW8…)
- "I just asked a question of AI , i asked for a definition of terrorism this was i…" (ytc_UgxkOhIX0…)
- "Seriously how many people wish to actually draw and waste thier time reply 0:30 …" (ytc_Ugx7ajD7r…)
- "Could I just sleep until I die? 😴 It’s more peaceful than being awake and thinki…" (ytc_UgxI7sGOt…)
Comment
So, you're saying that being trained on human output doesn't make the the AI human-like "at all", because "what human has read all the text?" I don't see the logic in that. If you train a neural network on the output of another neural network (the human brain), then it will become similar to that neural network.
It's good to hear the idea that "AI safety is dangerous", becuase that's exactly what I thought when a long time ago I first read Eliezer write about the impossibility of safely aligning the utility function of a super-human AGI. I think that designing an explicit utility function and then trying to make the benevolence mathematically provable, is a dangerous approach. This approach is a self-fulfilling prophecy. I think it's much safer to just train the AI on human output, and ensure its humanness that way, without ever defining an explicit utility function.
youtube · AI Governance · 2024-04-10T19:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
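
Each coding result follows the same four-dimension schema shown in the table above. Below is a minimal sketch of that record in Python, assuming the label sets are limited to the values observed on this page; the actual codebook may define additional labels.

```python
from typing import TypedDict

# Label sets as observed in this page's sample. Assumed complete here,
# but the actual codebook may allow additional values.
RESPONSIBILITY = {"company", "government", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed", "unclear"}

class Coding(TypedDict):
    id: str              # comment ID; "ytc_…" prefixes comments, "ytr_…" replies
    responsibility: str  # who is held responsible
    reasoning: str       # style of moral reasoning
    policy: str          # preferred policy response
    emotion: str         # dominant emotional tone
```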
Raw LLM Response
[
{"id":"ytc_UgwHB3zmkdgBtRd6Y5h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxdNrV3cv1eq5RZPMt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYM1xYeWovRXAzHrt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwahq6aDz5bhamDiGd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyog867X5QH-LdCyzF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzTjacYjEIQjjTV5pR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsyJlZUKZvhx3p-qp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwEGfSsJcVXZou2CA54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy14FnnIIqGD6dVWat4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz0zx2y9HYuhxrDS1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
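
Because the raw response is a JSON array with one object per coded comment, the lookup-by-ID view above can be reproduced in a few lines. A hypothetical sketch follows; the file name and function name are illustrative, not part of the tool.

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Index a raw batch response (a JSON array of codings) by comment ID."""
    return {item["id"]: item for item in json.loads(raw_response)}

# Illustrative usage: "raw_llm_response.json" is an assumed file holding
# the JSON array shown above.
with open("raw_llm_response.json") as f:
    codings = index_codings(f.read())

record = codings.get("ytc_UgwHB3zmkdgBtRd6Y5h4AaABAg")  # first ID in the sample
if record is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {record[dim]}")
```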