Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.

Random samples:
- "So if you lets say use your voice for utube, and then your day job trains some a…" (ytc_Ugyq-JnHc…)
- "@Johan-pk8mm The company that offers an AI to conduct a job interv…" (ytr_Ugz6F3JWa…)
- "I heard a theory about AI, and it was this: human beings are but a tool to turn …" (ytc_UgyGhgY2c…)
- "I'll imagine at least a couple of people will be upset to hear that things like …" (ytc_UgwB52c-1…)
- "As someone who's been reduced to a.i. assistance for coding issues, the answer i…" (ytc_UgytiD7Ca…)
- "I know people who delegated “what should I eat today?” to ChatGPT and I’m not ta…" (rdc_mais8v4)
- "Option D: Assuming our vehicle is the only self driving vehicle on the road, the…" (ytc_UgjPUX-D7…)
- "If AI is so smart ask it to solve the issues humans will face other than elimina…" (ytc_UgwqbELGu…)
Comment
Another expert commenting on DW said we should limit its proliferation in the same way we limited nuclear weapon proliferation. I like the analogy. Clear seeing of the fact that this, like most technological advancements, can and will be weaponised by sick people.
But about the agentic/Terminator scenario: I didn't quite get the motivation of these bots even in the movie. What incentive would they have to act toward any goal if they're not sentient or dependent on biological resources and feelings like we are? And then: how would AI create something without opposable thumbs? Instruct a 3D printer to print it hands with opposable thumbs so it can make whips and chains to enslave humans? How did this scientist explain how an agentic AI may be motivated, and why?
I find it most scary to think of a dictator who invests massively in these things, and then we get a 1984 scenario.
Source: youtube · Topic: AI Governance · Posted: 2023-05-03T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
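
The dimensions in the table, together with the coding timestamp, could be carried in a small record type. Below is a minimal sketch in Python; the `CodedComment` name and the field types are illustrative assumptions, while the field names and example values come from the table above and the raw response underneath.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, mirroring the Dimension/Value table above.

    Hypothetical record type for illustration; only the field names and
    example values are taken from this page.
    """
    id: str              # e.g. "ytc_UgxXITTu92mBuJRJCIN4AaABAg"
    responsibility: str  # e.g. "distributed"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO timestamp, e.g. "2026-04-27T06:24:53.388235"
```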
Raw LLM Response
```json
[
  {"id":"ytc_UgxOJMPK2xWs7ZtveBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx73ZYMkpiP3unFZ-B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxd2H-47YVL7nRn5Vl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1gZTucLGijAbeZEZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZ2UtNpEgRG4iDNZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKD3xL9CcoGeUGPox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzySmA_FV4w3rY-VXV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzE64Cmq93JxlXmx0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXITTu92mBuJRJCIN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzqHkUZ8udvK-GF8MJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
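
Because the raw response is a JSON array with one object per comment ID, looking up a single comment's coding reduces to parsing the array and indexing on `id`. A minimal sketch follows, assuming the raw response text is already in hand; the function name, the soft validation step, and the value sets (only those observed in this sample, not necessarily the full codebook) are assumptions for illustration.

```python
import json
from typing import Optional

# Value sets observed in this sample; the full codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"none", "company", "government", "user", "distributed", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval"},
}

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and return the record for one comment ID, or None if absent."""
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id.get(comment_id)
    if rec is not None:
        # Soft check: flag any value outside the sets seen in this sample.
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                print(f"unexpected {dim} value: {rec.get(dim)!r}")
    return rec

# Example: the entry behind the table above.
# lookup_coding(raw, "ytc_UgxXITTu92mBuJRJCIN4AaABAg")
# -> {"id": "...", "responsibility": "distributed", "reasoning": "consequentialist",
#     "policy": "regulate", "emotion": "fear"}
```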