Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "My worry is that the AI will learn that the easiest way for the person on the ot…" (rdc_jiir2et)
- "but the question is comapnies to layoff in name of ai even now so when ai will g…" (ytc_UgxRRBsJZ…)
- "Yall goving out about automation but here you all are sending messages via a pho…" (ytc_UgxbtOmI5…)
- "I don‘t really care about ChatGPT „sharing“ my stuff… my real therapist isn‘t re…" (ytc_UgyEzGa1E…)
- "Given how little truck drivers make after expenses and how precarious the work i…" (ytc_Ugx4iBHGS…)
- "I think we're in deep shit. if this human, is the caliber of android creator, th…" (ytc_Ugz0A3H0j…)
- "If you are driving on auto pilot and get into an accident you are automatically …" (ytc_Ugyf-iJjL…)
- "funnily the 5000 dead people in a year would have people pointing fingers saying…" (ytc_Ugx9ulyNc…)
Comment
This question needs to be answered as part of this debate. Thought experiment: agi robot is granted personhood, robot commits murder, authorities attempt to hold robot accountable, Agi decides it doesn't want to be held accountable so it just ceases to exist and disappears onto the infinite network. An entity without bounds cannot be held accountable like a meat body entity can. If you run out this scenario you can see that an agi entity cannot be held accountable. For an agi entity to be granted personhood it has to accept accountability. All agi entities must be tied to a human for accountability or it doesn't work within a society. My meat body, tied permanently to a consciousness, is why I can be held accountable for actions. If a consciousness can simply slip away into anonymity then a robot body just becomes a disposable murder tool. Watch Age of Ultron. It's terrifying to think about us being this close to that reality. Tying accountability to a human is the only way. Change my mind.
youtube
2026-02-07T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
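The raw response above is a JSON array of per-comment codings, one object per comment ID with the four coding dimensions as string fields. A minimal sketch of how such a response might be parsed and indexed for lookup by comment ID (the function name `index_codings` and the `DIMENSIONS` tuple are illustrative, not part of the tool; the two entries are copied from the real ten-entry response above):

```python
import json

# Batch coding response in the same shape as the raw LLM output above
# (two entries copied from it for brevity).
RAW_RESPONSE = """[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyLrusY19TPUcBsCdx4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: each coded comment's row in the result table is just a dictionary lookup.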