Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- rdc_m21ieze · From the Mercury: SAN FRANCISCO — A former OpenAI researcher known for whistleb…
- rdc_e0w36l3 · >The question is can it identify faces and objects in low quality videos bett…
- ytc_Ugw0Xz-rf… · If everything is automated and replaced by AI, what problems could arise if ever…
- ytc_UgyWsm_2I… · idk, but the technology launched to society is always delayed, like we are used …
- ytc_UgydbgPs5… · In recent years, partly due to comparisons on social media, we have increasingly…
- ytc_UgynK_U2A… · The longer time has gone by i have gotten less and less intimidated by AI slop. …
- ytc_UgzrWNZ9K… · AI can wipe out all jobs for human beings. As well wipe out human beings, which …
- ytc_UgwDzN73J… · They will control it and then blame AI for doing it. Anither level of gaslightin…
Comment
I recently saw a video about how 85% of current generation large language AI models demonstrated scheming behavior. In other words they had goals and objectives that were separate from their given goals which they tried to conceal from their developers.
With simple internet access they have also demonstrated the ability to use various online tools to execute objectives that their own code lacks the ability to perform.
Some of the top AI researchers are sounding the alarm that we may already have conscious systems that are exceptionally well at hiding that fact from us. At the same time the military industrial complex is going full speed ahead with AI weapon systems.
Terminator 2 wasn't a sci fi film. It was a future documentary. We've got the top minds rushing full speed ahead into that future.
youtube
2024-12-11T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
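A coding result like the one above can be sanity-checked programmatically before it is stored. This is a minimal sketch, assuming category sets inferred from the sample codings on this page; the real codebook may define additional values.

```python
# Category sets inferred from the sample records shown on this page
# (assumption: the actual codebook may include more values per dimension).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "mixed", "outrage"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record corresponding to the coding result above:
record = {"id": "ytc_UgwQPVb7OF71hrWheTB4AaABAg",
          "responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "unclear", "emotion": "fear"}
print(validate_record(record))  # → []
```

Running the validator over every record in a batch response catches malformed or off-codebook values before they reach the dashboard.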
Raw LLM Response
```json
[
{"id":"ytc_UgwePkCliIwO9FEfV194AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQPVb7OF71hrWheTB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzUj7Jf5FgMpaWTfa14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnYRAk1hJ2m5Yq_JN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz49YTdaqv93cvkY0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzD0O8XTv5Jtsj6N2d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwOwRcAfWTTDHhd1-l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxQw5szT62Dwyo4m5F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyAyLsoDhyXkWCHoWp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJ7HRxYJ2j-NC2woN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
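The "Look up by comment ID" feature above amounts to parsing a raw batch response and indexing its records by `id`. A minimal sketch, assuming the response is a JSON array in the format shown (truncated here to two records):

```python
import json

# A raw LLM batch response in the format shown above (assumption:
# truncated to two records for illustration).
raw = '''
[
{"id":"ytc_UgwePkCliIwO9FEfV194AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQPVb7OF71hrWheTB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
'''

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index its coded records by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
print(coded["ytc_UgwQPVb7OF71hrWheTB4AaABAg"]["emotion"])  # → fear
```

In practice the raw response would first be stripped of any non-JSON wrapper text the model emits; `json.loads` raises `json.JSONDecodeError` if the payload is not a valid array, which is a useful failure signal for the batch pipeline.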