Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is just a software that is thriving because of so much computing power we hav…" (`ytc_UgxvTQjGt…`)
- "Karen Hao is awesome, her work studying the techno oligarchs is top notch. Great…" (`ytc_Ugwe8lwUb…`)
- "Language algorithms and pre programmed generative transformers got so many peopl…" (`ytc_UgzH7Fnks…`)
- "🗣️🎙️REAL TALK HOW ANGRY I AM FIRST OF ALL META GLASSES WAS MY IDEA CHATGPT TAKEN…" (`ytc_Ugx3uQEaD…`)
- "25:00 these AI may only have the relative intelligence of a virus. Molecular mac…" (`ytc_Ugw-J7rT4…`)
- "Neil speaking on AI is like an ortho performing heart surgery... his media overe…" (`ytc_UgwN2_pnC…`)
- "Do you think AI will eliminate work/jobs, or just shift the advantage to people …" (`ytc_UgxfgbL3x…`)
- "Unfortunately, AI isn't doing anything different than human creators already do.…" (`ytc_Ugx4qROL0…`)
Comment
It's embarrassing how little Neil understands about AI. He is essentially Hassan answering questions without an adequate knowledge in the topic. This was made clear in a conversation between him and Sam Harris
His definition of AGI is sloppy and inaccurate and he clearly doesn't understand or believe what is AGI and it's implications. He provides no logical rationale for his belief that it will all be fine. It'll be fine because the people working on it are into it? WTF. This is the same view espoused by experts that are wanting to push to AGI without adding any guardrails.
And we won't be interested in AGI or find it useful?? How does a smart guy say such ignorant statements that highlight a total dearth of understanding.
Source: youtube · AI Moral Status · 2025-10-21T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyJc2GePOl38RIb7Ld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugxc2b_JzO5DHFmUxBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugwgz4eBSvmV63WgaO14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},{"id":"ytc_UgzkY8IRhJzC7jeQyOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgzquBvwMbxPXxBTS0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgxR8LDXdBZbfIO2M654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgwKQ_aArcDcKiiwmHB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgzrOMDH0oCCWsV6xAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgyOGa8EmrGHXOFCyx94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_Ugw8GvlSf50F_muJN554AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
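The "Look up by comment ID" workflow above amounts to parsing one raw batch response and indexing its records by the `id` field. A minimal sketch of that lookup, using two records taken verbatim from the response shown above (the `raw_response` variable and `index_by_comment_id` helper are illustrative names, not part of the tool):

```python
import json

# One batch response from the coder model, truncated to two
# records (copied from the raw response above) for the sketch.
raw_response = '''[
  {"id": "ytc_UgyJc2GePOl38RIb7Ld4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw8GvlSf50F_muJN554AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugw8GvlSf50F_muJN554AaABAg"]["emotion"])  # → outrage
```

Each record then carries the four coded dimensions (responsibility, reasoning, policy, emotion) shown in the result table above.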