Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Comment
I am extremely skeptical of anything where the inner workings are not open to thorough external study. These algorithms that are used in public service really ought to be treated to the same standard as cryptographic and hashing algorithms are. It seems like all good cryptographers have a healthy distrust for those types of algorithm and their creators and they certainly would never endorse an algorithm that is just 'magic'. An algorithm only gets approved and supported once it has passed multiple rounds of independent analysis.
Instead we're running into the situation where just the fact that a computer spat the number out legitimises the process used, even if the underlying process is flawed.
Source: youtube · Posted: 2022-08-02T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzlXOQKhu81L1VmW_l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxihmdhLFeRIvFwNjR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyAwagQotk4MnUsnkZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwoWRPz12XGtdYJ2Jt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyiWL701Ay_X6WggNp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyFmxfPLc7-fu8NBe94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCipvwbYvM1MSg9PF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzw1HpZ80FJcFxLf3V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzWUEVl61qI-v1Z5Pt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwAfbr8kFObYfHVaj14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
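Because the model returns one coding object per comment, the per-comment lookup shown above amounts to indexing the parsed array by its `id` field. A minimal sketch in Python, assuming only the array structure shown in the raw response (the ID and values are taken from one entry of the sample):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# matching the structure of the example response above.
raw = """
[
  {"id": "ytc_Ugzw1HpZ80FJcFxLf3V4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for a single comment by its ID.
coding = codings["ytc_Ugzw1HpZ80FJcFxLf3V4AaABAg"]
print(coding["policy"])  # → regulate
```

If the same ID could appear in more than one response batch, the dict comprehension silently keeps the last occurrence; deduplicate or log collisions before indexing if that matters for your audit trail.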