Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyNelBLL…: I love your show. I found this one very frustrating. It seemed to speak on a b…
- ytc_UgyWNikej…: When he says at the end that taxing AI companies is "not in his economic interes…
- ytc_UgzyQiQqu…: I never believed AI would kill us all, for one reason. Computers are blank slat…
- ytc_Ugzs3cZeD…: Ai can't replace a carpenter, yes maybe some mass production ok, ...ai can't rep…
- rdc_o7d76hu: That's because AI struggles with unexpected behavior even and especially when th…
- ytc_Ugxj0PRaX…: “It could use some improvement and it could actually be a good-“ “Tf is there na…
- ytr_UgytOX9Hl…: @zigzagtoes I wasn't making a case for the AI driven robot games and sport, I wa…
- ytc_UgzSFeeTl…: To all you know this is just good editing skills and not really a robot, there w…
Comment
For 1:41 - how about these for human preservation to start:
Isaac Asimov's Three Laws of Robotics
1. No Harm: A robot cannot harm a human or, through inaction, allow a human to be harmed;
2. Obey Orders: A robot must obey human orders, unless they conflict with the First Law;
3. Self-Preservation: A robot must protect its own existence, provided it doesn't conflict with the First or Second Law.
youtube · AI Governance · 2026-01-18T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyucO0EmbukKCVopbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzK-CDpgcMCv3uEIth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw2YS_OmhrgLtps6t54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyIt_lS-CDUo7GvzOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzAd0qFFjd7jDBfw-V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWtZQTH4P1_EUConl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwGGKKJ-Eg1kTEP9CZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx6rZDrh_Fjqf1LBW94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgymjRZDVarjv46TCnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzv9e8yFr4Xj_Lww614AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
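A downstream consumer would typically parse and validate a raw batch response like this before storing the coded dimensions. A minimal sketch in Python, assuming the four dimensions shown above and label sets inferred only from the values visible in this response (the actual codebook may permit additional labels):

```python
import json

# Allowed labels per dimension, inferred from the response above.
# ASSUMPTION: the real codebook may define more values than appear here.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "distributed", "developer",
                       "government", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "resignation", "outrage", "unclear"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and check every record's labels."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record batch for illustration (id is made up).
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"approval"}]')
print(len(validate_batch(raw)))  # → 1
```

Rejecting the whole batch on one bad label is a deliberate choice here; a real pipeline might instead flag the offending record for manual recoding.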