Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Without emotions, wants or desires robots have nothing to guide them except their programming. So far the tasks given to robots are pretty straight forward and simple. Eventually robots will be asked to take on more nuanced tasks which require judgement. Robots don't have emotions or intuition so they will need ethics. This concerns me because their programmers (humans) haven't got this figured out yet. Most or our ethics have more to do with our emotions than with our reasoning capabilities. We are more likely to do what feels right than what is actually right and much of this is based on the way things are rather than the way they aught to be. In other words, human ethics are a bad model for robot ethics. When we look for ethics in what comes next we see that the military is still the lead driver of progress. The last thing we want is a killer robot with human ethics. As I continue my quest for universal ethics I ask myself, "what would I want our robot overlord to do in this situation."

Source: youtube · 2016-02-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugg4GInPcKkb-HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggbNiIQAwMhLXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggTwzbkp_XndHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghF9MR_ZEVqKngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghDbu_Pkj6s63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Uggcfz830ZCOfXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgiAgMSu73Y2NHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgjoAwGRrcSK63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggTE2bn7JIxjXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UggHT3fme4glYXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
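A response like the one above can be parsed into a per-comment lookup and sanity-checked before use. The sketch below assumes the four dimension names shown in the response (`responsibility`, `reasoning`, `policy`, `emotion`); the allowed-value sets are an illustrative guess based only on the values visible in this batch, not the project's actual codebook.

```python
import json

# Dimension names come from the raw response above; the allowed-value
# sets are an ASSUMPTION inferred from this one batch, for illustration.
DIMENSIONS = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into a dict keyed by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        # Keep only the known dimensions, defaulting any missing one.
        coded[cid] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return coded

# Example: one record from the batch above.
raw = ('[{"id":"ytc_UggTwzbkp_XndHgCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UggTwzbkp_XndHgCoAEC"]["emotion"])  # fear
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: each coded record is retrieved in constant time.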