Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Comment

> Machine Learning advances itself. Unless the companies actually turn the machines off, they progress on their own. When they ignited the 1st atomic bomb, there were scientists that feared it could possibly ignite the entire atmosphere!! They tested it anyway. If Ai is restricted to "Not Kill Humans", it can reduce to maiming them. It could fatally wound them (not killed immoderately - they died from their wounds. Unfortunately, interpretation is in the mind of the interpreter. "Killing all the bees" would not be "Killing all the humans". "Releasing toxins into the atmosphere at levels dangerous to humans" is not "killing all the humans".

youtube · AI Governance · 2023-07-16T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
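A coded record like the one above can be checked against the codebook's label sets before it is stored. A minimal sketch in Python; the label sets below are inferred from the values visible on this page, so the real codebook may define more categories:

```python
# Allowed labels per coding dimension -- inferred from the values seen in
# this dashboard; an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

record = {
    "id": "ytc_example",  # hypothetical ID for illustration
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(record))  # []
```

Rejecting unknown labels at ingestion time keeps occasional off-schema model outputs from silently polluting downstream counts.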
Raw LLM Response
```json
[
{"id":"ytc_UgwjK5kyGRiovHFHJ-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPZENPzaUiXx1IM0t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzP7Yu3RhFIbBnr9kZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzrREnk22YYc9YWiqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdgDvbS2HMdFIL5LV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC_0_b_eGjM1zNVNV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjdYZIRdZ-UA1fnvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9_RQDbcZ_nBUD5HJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyD9x4G-LjMXABFK8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyO-c3QESGUwRcpF1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
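The raw response is a JSON array with one object per coded comment, each carrying an `id` field, so a batch can be indexed for per-comment lookup directly. A minimal sketch; the two-record payload is a shortened stand-in for a real response:

```python
import json

# Abbreviated stand-in for a raw LLM batch response (IDs are hypothetical).
raw_response = '''[
  {"id": "ytc_a", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_b", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]'''

# Index the batch by comment ID so a single coded result can be looked up.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

print(by_id["ytc_a"]["emotion"])  # fear
```

This is the same lookup the page's "look up by comment ID" feature implies: parse the stored raw response once, then answer ID queries from the resulting dictionary.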