Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
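Internally, such a lookup only needs the stored batch responses keyed by comment ID. A minimal sketch, assuming the codings live in a hypothetical SQLite table `coded_comments` with columns `comment_id` and `raw_llm_response` (names are assumptions, not the tool's actual schema):

```python
import json
import sqlite3

def lookup_raw_response(db_path: str, comment_id: str) -> dict:
    """Return the coding entry for one comment from its stored raw response."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT raw_llm_response FROM coded_comments WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise KeyError(f"no coded comment with id {comment_id!r}")
    # Each stored response is a JSON array of per-comment codings
    # (see "Raw LLM Response" below); pick the matching entry.
    batch = json.loads(row[0])
    return next(item for item in batch if item["id"] == comment_id)
```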
Random samples
- "This British fellow seems to have no clue that eventually, that is _eventually_ …" (ytc_UgygJ7lJD…)
- "I think Bernie doesn't understand the actual problem that AI poses. It's not tha…" (ytc_Ugxo1lQK_…)
- "I am shocked that the AVGN AI was actually funny and went along with the guys’s …" (ytc_UgwNgdI4s…)
- "GPTHuman AI is one of the only tools that made my content sound real enough to p…" (ytc_Ugw6_pNl4…)
- "its not messy. AI art isn’t art, people who type words into a text box arent art…" (ytc_UgycwgSed…)
- "Something that just puts me off about the flow of conversation in this video is …" (ytc_Ugy7rPhAM…)
- "Ai is new yall act like video gaming started with the ps5 and not 8 bit systems…" (ytc_UgxtJODmA…)
- "'AI is inevitable' Yea, and it's also boring. That's why most people get turned…" (ytc_Ugz0ufKjz…)
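A spot-check list like this can be drawn with a single query; a sketch against the same hypothetical `coded_comments` table (the `comment_text` column is likewise an assumption):

```python
import sqlite3

def random_samples(db_path: str, n: int = 8) -> list[tuple[str, str]]:
    # Draw n coded comments at random for manual inspection,
    # truncating each comment to an 80-character preview.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT comment_id, substr(comment_text, 1, 80) || '…' "
            "FROM coded_comments ORDER BY RANDOM() LIMIT ?",
            (n,),
        ).fetchall()
    finally:
        conn.close()
```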
Comment
> In game states where all sane moves lead to certain loss, the AI falls back to playing moves that 'fish' for enemy mistakes.
One of the reporters in the Q&A session of the press conference brought up how "mistakes" like these affect expert systems in general, for instance when used in the medical domain. If the system is seen as a brilliant oracle who can be trusted, what should operators do when the system recommends seemingly crazy moves?
I wasn't quite satisfied with Demis Hassabis' response (presumably because he had little time to come up with one) and I think your comment illustrates this issue well. What is an expert system supposed to do if all the "moves" that are seen as natural by humans will lead to failure, but only the expert system is able to see this?
Making the decision process transparent to users (who typically remain accountable for actions) is one of the most challenging aspects of building a good expert system. What probably happened in the fourth game is that Lee Se-dol's "brilliant" move was estimated to have such a low probability of being played that AlphaGo never went down that path to calculate its possible long-term outcomes. Once played, the computer faced a board state where it had already lost the center, and possibly the game, which the human analysts could not yet see.
reddit · AI Jobs · posted 1457893615.0 (Unix epoch, 2016-03-13 UTC) · ♥ 25
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
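A result like this maps naturally onto a small record type. A sketch with field names inferred from the table above; the example label sets come only from values visible on this page, not the full codebook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str       # e.g. the "ytc_…" / "rdc_…" IDs seen above
    responsibility: str   # e.g. "none", "ai_itself"
    reasoning: str        # e.g. "consequentialist", "unclear"
    policy: str           # e.g. "liability", "none", "unclear"
    emotion: str          # e.g. "fear", "indifference", "approval"
    coded_at: datetime    # e.g. 2026-04-25T08:33:43.502452
```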
Raw LLM Response
[{"id":"rdc_kowhezy","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_kowzeis","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_d0ygykg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_d0yci6h","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},{"id":"rdc_d0yfd2y","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]