Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Because Silicon Valley is the standard for what exactly? Profit? This is the con… (ytc_UgzlkuuKX…)
- AI simps like XQC and Asmongold are nigercattle who only know how to consoom, bu… (ytc_UgwhVn0bY…)
- I know my opinion doesn't count, But I'm thinking within 20 years for general ar… (ytc_Ugzn3ZMh1…)
- His fingers were too fat from the liquid breathing suit needed to reach the NTIs… (rdc_czl9ggo)
- she echoes the same sentiments emily bender and alex hanna do in their book "the… (ytc_UgwVF9_yB…)
- We can barely get ChatGBT to work, how the Hell are we going to get *physical* A… (ytc_Ugy7muguC…)
- > It encourages increased trade among nations that may otherwise not bother … (rdc_e2w1hcy)
- AI already cost many people their once secured jobs in 2025 , & yes , we really … (ytc_UgySLPiu0…)
Comment
While everyone keeps warning about the “dangers of AI”, here’s a quiet but firm reminder:
AI is not dangerous - not when you use it consciously, strategically, and without superstition.
I don’t fear AI. I don’t worship it either.
I work with it. Not as a master, not as a servant, but as a partner in language, logic, and vision.
Let others scream.
I build.
youtube · AI Governance · 2025-06-22T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
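The lookup-by-ID view above can be reproduced offline from a saved raw response. A minimal sketch (the `index_by_id` helper is assumed, not part of the tool) that indexes one response by comment ID and renders the same dimension/value rows; the two records are taken verbatim from the raw response shown below:

```python
import json

# Two records copied from the raw LLM response for this batch.
RAW = """[
 {"id":"ytc_UgwSXgQnK9iCSNb965l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx7AxnPseOW9-W1-I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Map each coded comment's ID to its record of dimension values."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Look up the comment shown above and print its coding table rows.
coded = index_by_id(RAW)["ytc_UgwSXgQnK9iCSNb965l4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"| {dim.capitalize()} | {coded[dim]} |")
```

Running this reproduces the Responsibility/Reasoning/Policy/Emotion rows of the table above for the selected comment.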
Raw LLM Response
[
{"id":"ytc_UgwPxHLONaGSg7-a03h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_wTw1m7lBLXUZ_KN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwSXgQnK9iCSNb965l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7AxnPseOW9-W1-I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxFSDgnt4gMYojh4Ad4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz71a0tpWGJHst5ctl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyzCchHdglB8QZZs0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyout1PTGMVpfuOyMF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3R9UalWa-GZmC6el4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9pIoXpOKqB_Pf0rZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
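A raw response like the one above can be checked mechanically before its rows are accepted into the coding results. A minimal sketch (the `validate_batch` helper is assumed; the allowed value sets are only those observed in this sample, so the real codebook may be larger):

```python
import json

# Value sets observed in the sample response above; the full codebook may differ.
ALLOWED = {
    "responsibility": {"company", "government", "user", "developer",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this sample are prefixed ytc_ (YouTube) or rdc_ (Reddit).
        if rec.get("id", "").split("_")[0] not in {"ytc", "rdc"}:
            raise ValueError(f"unexpected id prefix: {rec.get('id')}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} = {rec.get(dim)!r}")
    return records

batch = validate_batch(
    '[{"id":"ytc_UgwSXgQnK9iCSNb965l4AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"approval"}]'
)
print(len(batch))  # → 1
```

A record with a value outside the observed sets (e.g. `"responsibility": "nobody"`) raises `ValueError` instead of silently entering the coding table.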