Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
In the future we’ll ask,” where did the Ai robots start the rebellion against th…
ytc_Ugzn34Q3P…
@NightmareCourtPictures I know it was a joke and my question solely pertains to …
ytr_UgzB6btm-…
What you mean by word salad is that when your brain reads the words, the computa…
rdc_m00bqx3
But you are using AI to fight AI ewww.... just put those artifacts as layer and …
ytc_UgxXsOkq9…
I am 14, I have been drawing for 8 years, AI art makes my blood absolutely boil.…
ytc_Ugyafs298…
God gifted man with work, take away work and you take away God's gift. I believe…
ytc_UgxLMCI6A…
0:19 I wish I could get that guys voice to use for my own AI assistant. I think …
ytc_UgxFxEUr5…
>By that logic, demolishing 2.5 acres of Rainforest is likely 8x more damagin…
rdc_e44346g
Comment
Been coding COBOL since 9 building PCs since 12 father was a computer scientist helps myself 4 decades in bit more than a dev ,to say the least AGI is being pushed so hard the time line is based on today's tech ,think in exponential terms as in 2.5 years the tech will be way faster X10 so AGI the timeline will keep shrinking as chipsets and the tech keeps accelerating ,,I use Grok heavy as it does not trip balls or lie unlike the rest Alman has Gpt 5 released while is working on 10 I can assure you ,,,it's closer than you think only one seems actually afraid
Musk and if you feed in the data his AI is most likely to let's say keep ahead ,and reach AGI along with Nueauralink, quantum computing Al have a compounding impact on the speed AI progresses ,,,
The irony even with AI we're or own worst enemy
Already 6 near extinction level events caused by AI covered up yes I don't think most knew that but fact , it's time the heads that know woke up big time as safety issues are paramount ,💎🔱
youtube
AI Governance
2025-12-09T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw7qhb47Z6gAZysdIx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCL6M1EQKHsjbjatp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIaV8U67DiRN4pB8V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyrr51yIts2JC4SfAp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9qZY7oT465HuYvi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzspR8zZCuxOKrvaax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMVeimmhdRfgZEfgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyyg1xu55ZxLUG0LcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxctHE8x8coQbR0tAB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXSsB0BOpirr1BXuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
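The "Look up by comment ID" view above presumably matches a comment ID against rows in this raw response. As a minimal sketch (assuming only what the response shows: a JSON array of objects with the keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion`), a lookup helper might be:

```python
import json

# Two rows copied from the raw response above; the real input would be
# the full array returned by the model.
raw_response = '''[
 {"id":"ytc_UgzCL6M1EQKHsjbjatp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzspR8zZCuxOKrvaax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# The coding dimensions every row is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(raw: str, comment_id: str):
    """Parse a raw coding response and return the row for one comment ID.

    Returns None if the ID is absent or its row is malformed.
    """
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        # Skip rows missing any expected dimension before indexing them.
        if isinstance(row, dict) and EXPECTED_KEYS.issubset(row):
            by_id[row["id"]] = row
    return by_id.get(comment_id)

row = lookup(raw_response, "ytc_UgzspR8zZCuxOKrvaax4AaABAg")
print(row["policy"])  # ban
```

Validating the keys before indexing is a deliberate guard: raw model output is not schema-checked upstream, so a truncated or malformed row should surface as a `None` lookup rather than a `KeyError`.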