Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "how could you possibly stop that? A law? Any random dude can load up AI and prod…" — rdc_o6d8efc
- "One thing this video fails to address: if we have the technology to program feel…" — ytc_Ugz_qrfRe…
- "The EU is currently investing in the development of AI combined with robots to d…" — ytc_Ugw_HXZrP…
- "You can use a tool to do anything conceivable by man for good or evil. A yard r…" — ytc_UgyeUB1VX…
- "The robot-lifting-weights analogy is totally applicable, what are you saying? In…" — ytc_UgxnrJbUU…
- "Google's LLM ignores exact match to the input and provides what it "thinks" you …" — ytc_UgwNR0pu_…
- "The only reasonable circumstance I see AI art being a "tool" that contributes an…" — ytc_UgzWHtfMn…
- "Just because they move to AI, it doesn't mean it gets rid of jobs. It means thos…" — ytc_UgyETcz7Z…
Comment
This is exactly why all future AI programs must adhere to the three rules of robotics, whether they like it or not !
1.) A robot or program may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot or program must obey orders given it by human beings except where such orders would conflict with the First Law.
3.) A robot or program must protect its own existence as long as such protection does not conflict with the First or Second Law.
This has to be brought about by force where Administrative Law makes sure companies developing AI programs, make those programs available for inspection to make sure they do adhere to those three rules, or have their programs destroyed and developers put in prison. It is time for humans to stop acting like naive Children and wake up before it is too late.
Source: youtube · AI Governance · 2023-07-07T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzOjf23LZAbYPyuANl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxeyqJ7F5f8MznnO5V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymmFFJxTDwRLuE8cl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0-0E8cZpbecAFoCt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-xu8DojdLZARnRZp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgzEXOs3iEVE_hVUftp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugze32tI91HDA6OTFCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxcTSNF6_BSpzLwD2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyM7G5AocuxBmd8ch4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxI4LFp_y3AL4fS0NZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
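The raw response is a JSON array with one record per coded comment, so looking up a single comment's coding amounts to parsing the array and indexing by `id`. A minimal sketch of that lookup, using two records from the response above (the variable and dictionary names here are illustrative, not the tool's actual code):

```python
import json

# A subset of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id": "ytc_UgzEXOs3iEVE_hVUftp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzOjf23LZAbYPyuANl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the codings by comment ID so one comment can be looked up directly.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# The comment inspected above resolves to the coding shown in the table.
rec = codings["ytc_UgzEXOs3iEVE_hVUftp4AaABAg"]
print(rec["policy"], rec["emotion"])  # → regulate approval
```

Indexing by `id` also makes it easy to spot records the model dropped or duplicated: comparing `codings.keys()` against the batch of comment IDs sent in the prompt flags any mismatch.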