Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think he's really misusing or misunderstanding the word emotions in the robot example. A highly advanced AI in the current digital paradigm is still simply a system of 0's and 1's - and as he indicates, emotions are physiological. Machines will never 'feel' ANYTHING, EVER. They will simply process and prioritize and essentially emulate the best possible behavior in a given situation. As for taking over humans, this all depends on how far we develop the AI to physical interface, ie, robotics. If they become as agile or more agile than humans, that is the enabling element. But remember they will need to power themselves also - they will have inherent weaknesses due to their non-biological operation which they may or may not be able to mitigate against.
youtube · AI Governance · 2025-06-16T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyrYsW595L2-_DewEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3pSftKWBS-32uk8t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx7-GrZRJnqy1URviF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyTp6iSIWtvOHk7rex4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgykLi8xICfWHVZK7q54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyUAy6tqRGL1IW_YD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkBfq2w3k8ikofV0J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcJ5SCXtQChZFoZhZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6297PxIITLI0LC914AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwl7_AoIffXxSKy9Oh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
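The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed by comment ID — the function name and the validation behavior are illustrative assumptions, not part of the tool itself:

```python
import json

# A small excerpt of the batch response format shown above: a JSON array
# of coding objects, each keyed by comment ID.
raw_response = """
[
  {"id": "ytc_UgyrYsW595L2-_DewEp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkBfq2w3k8ikofV0J4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID.

    Raises ValueError when an entry is missing its ID or any dimension,
    so malformed model output fails loudly instead of being stored.
    """
    by_id = {}
    for entry in json.loads(raw):
        if "id" not in entry or any(d not in entry for d in DIMENSIONS):
            raise ValueError(f"incomplete coding entry: {entry!r}")
        by_id[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyrYsW595L2-_DewEp4AaABAg"]["emotion"])  # indifference
```

Validating every entry up front is the safer design here: an LLM can drop a field or an ID in a batch, and rejecting the whole response makes such failures visible at coding time rather than at lookup time.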