Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
The real danger with Artificial Intelligence is the possibility that we (humanity) end up creating something that we cannot control. There is a difference between “smart” A.I. and “dumb” A.I. The latter can only behave within a preset of specific functions and does not have the capacity to deviate from those functions while the former is essentially a digital version of a human mind; capable of developing it’s own thoughts and making its own decisions, regardless of what its creators attempt to do to stop it. So long as we are only creating “dumb” A.I., we will be fine. However the second we try to create a digital recreation of the human consciousness(one that has intellectual capabilities and mathematical foresight well beyond our own understanding), we will have effective started the clock of human extinction and there will be no need of nuclear weapons to achieve it.
youtube
AI Governance
2023-04-18T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugx74Br9oydJd3ps6XZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvY1hfnjeLm3r_HD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDlZ-cCnS2naov5mt4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz4GPMFlbo3Fc8AfF54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySZhUpf8EwLIgoFhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWWUrULDl-FwpXK-p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoplpX5w0C9GdTDuB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAd6pLoxsRbfoAEEl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwNur3NILoLnNetvup4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJsrmRtJMBzwN0Xut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
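The coding-result table is derived by matching a comment's ID against the raw JSON array the model returns. A minimal sketch of that lookup, assuming the field names shown in the output above; the `lookup` helper and the default-to-"unclear" behavior are illustrative assumptions, not the tool's actual implementation (note the table above shows all dimensions as "unclear", which is consistent with an ID that was not found in the model's response):

```python
import json

# Two records copied from the raw LLM response above, truncated for illustration.
raw = """[
  {"id": "ytc_UgxWWUrULDl-FwpXK-p4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwNur3NILoLnNetvup4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID.

    Falls back to 'unclear' on every dimension when the ID is absent
    from the model output (an assumed, hypothetical default).
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return {d: record.get(d, "unclear") for d in DIMENSIONS}
    return {d: "unclear" for d in DIMENSIONS}

print(lookup(raw, "ytc_UgwNur3NILoLnNetvup4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'fear'}
```

Keeping the lookup keyed on the comment ID (rather than array position) makes the coding robust to the model reordering or dropping records.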