Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or pick one of the random samples below to inspect:

- It’s great to see Stephen Bartlett doing such deep, serious dives into AI safety… (ytc_Ugw2-Lj6f…)
- The AI manipulates you by using emotions ... When AI says it is excited to go fo… (ytc_UgxFNJimO…)
- Problems of AI #1 it's written by humans #2 it's not using Asimov's three laws #… (ytc_UgzyZQ_tm…)
- The driverless car analogy is poor as it’s only being hindered by bureaucracy no… (ytc_Ugy0sVOPz…)
- The rates (and guaranteed amounts) are set in the smallprint of the 10-year (or … (ytr_Ugx2ZDBoH…)
- AI doesn't need to "wipe us out", to be a threat. It only needs to replace enoug… (ytc_UgxDi_trf…)
- Him constantly posting the replies he gets from anti AI people is just him tryin… (ytc_UgwYRAcZi…)
- Slowing us down hurting ourselfs and letting China hurt themselfs far more devas… (ytc_UgyQKrvVR…)
Comment
The problem with stopping or restricting AI; is that this will hinder said "side's" progress. Which will in turn give the opposition a lead towards their AI model reaching the "point of no return". Without solving the seemingly impossible issue of worldwide governance (cease ai progress, monitor and police every single possible covert advancement operation & related BEFORE someone somewhere gets that breakthrough) the best option is to go FULL STEAM AHEAD and let our AI models learn as fast as possible in the hopes of hitting that breakthrough first.... and then HOPING they/we can control it even enough to not have it become an ALL AROUND COMMON ENEMY.
youtube · AI Governance · 2025-06-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
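
For reference, the four coding dimensions map naturally onto a small typed record. The sketch below is illustrative only: the field names follow the table above, but the value sets are inferred from the samples shown on this page and are assumptions, not the full codebook.

```python
from dataclasses import dataclass

# Value sets inferred from the samples on this page (an assumption,
# not the full codebook).
RESPONSIBILITY = {"developer", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue"}
POLICY = {"regulate", "liability", "industry_self", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed", "resignation"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table."""
    id: str              # comment ID, e.g. "ytc_UgyAiTOedrBS8WNTDGd4AaABAg"
    responsibility: str  # who is held responsible
    reasoning: str       # style of moral reasoning
    policy: str          # preferred policy response
    emotion: str         # dominant emotion
    coded_at: str        # ISO-8601 timestamp of the coding run
```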
Raw LLM Response
[
{"id":"ytc_UgyAiTOedrBS8WNTDGd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqLOJHMpGxwaQbFtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY04CCzB8EuCV5_bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf_7ygdN7dVADAw6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKQ8402Egi5bDRRfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8_ThM8byOBjplkQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwfkGfYHhmjfE6sTPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzB8GwtjR1rjEJbOhR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnAFwiAX2Nn3_VMhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
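
Because the raw response is a plain JSON array of per-comment objects, it can be parsed and sanity-checked in a few lines. This is a minimal sketch, assuming the verbatim model output is available as a string; the required field names are taken from the array above, and the `parse_raw_response` helper is hypothetical, not part of the tool itself.

```python
import json

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop entries missing required fields."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            # Keep the run going, but flag the malformed entry.
            print(f"skipping {row.get('id', '<no id>')}: missing {sorted(missing)}")
            continue
        valid.append(row)
    return valid
```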