Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
One thing I don't get: for the AI to be dangerous it has to become first self co…
ytc_UgyEJaDMJ…
I disagree. If anything its under hyped. Ai is able to roleplay, code, create ar…
ytr_UgyOrEQDx…
I truly think deep fakes started during Covid. I have an ex that I caught taking…
ytc_UgwVMhWH5…
AI doesn't scare me, unless it somehow become capable of using nuclear code laun…
ytc_UgxlOSKbp…
The thing is that artist haven't seen the potential in AI, I am a software engin…
ytc_UgzofsZ-E…
When making a robot:
1. Do make them bullet proof
2. Don't give them skills
3. …
ytc_Ugw5HtOdT…
Many non-artists dislike AI. Heck, i draw for a living and despite not consideri…
ytr_UgyL9Imr1…
We fear everything what we doesnt know. For the record, sure we fear of AI cuz w…
ytc_UgxAyB-HS…
Comment
Fixing it? Why do we even have it in the first place if it has so many issues? Why not just wait until ai is reliable enough to start using around important sections of categorisation? Though to be fair improving the ai will be much more quicker and better with actual results rather than just testing.
youtube
AI Bias
2022-12-20T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgymvF94k1tohAsnjjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzEqkr1s5S7YQQ4oJJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQZXQeNxy0P3pI2lt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_UgydiVWaD7kRUn_BwpJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzl7gCtgE68qqp5PQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwdE6tF8j3w0VhHHv54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyT-hrIlKlzgaMMsaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxExEgQCJS4BUXFLdx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_MLAqL-Y680DxOWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugzr2htxa6LQtlfvpLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]
```
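The lookup-by-comment-ID view above can be sketched in code. The following minimal Python example (the function name and validation rules are illustrative, not the project's actual pipeline) parses a raw batch response like the one shown, skips malformed entries, and indexes each coding by its comment ID; the two entries reuse IDs and dimension values that appear in the response above.

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = """[
{"id":"ytc_UgymvF94k1tohAsnjjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_MLAqL-Y680DxOWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]"""

# Every coding must carry exactly these keys (the four dimensions plus the ID).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    dropping entries that are missing a dimension or malformed."""
    codings = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or set(entry) != EXPECTED_KEYS:
            continue  # skip incomplete or unexpected entries
        codings[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return codings

codings = index_by_id(raw_response)
print(codings["ytc_Ugz_MLAqL-Y680DxOWh4AaABAg"]["emotion"])  # resignation
```

A dictionary keyed by comment ID makes the per-comment inspection shown in the "Coding Result" table a constant-time lookup rather than a scan of the full batch.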