Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If children are already "lagging behind" in spite of all the current technology …
ytc_UgwdN8Vb5…
if youtube recommended me TWO videos of people talking about the AI stuff my dad…
ytc_Ugwyv5wEi…
Nothing more advanced than a 20 questions box that can guess what Pokemon you ar…
ytc_Ugzf2yvgT…
What about that developer using ai discovered it deleted the database and then l…
ytc_Ugw1NRdiM…
You have no clue what good psychiatrists do. Maybe Ai can supplement the dire ne…
ytr_UgwshXRf3…
who needs a gestapo when you got drones. 0 human error, emotions, or trust need.…
ytc_Ugz49YTda…
@kaligath6616 honestly, your argument was the first thought on my mind, as I wa…
ytr_UgzsZFypf…
Like when you accidentally say something that triggers that ai bot to go dirty 😭…
ytc_UgyR18-PS…
Comment
To suggest this case and others like it are evidence that LLM-based AI agents will not imminently and competently displace human lawyers is naive.
A single LLM like GPT-3 or GPT-4 is like a genius with no capacity for reflection, speaking the first thing it thinks with no review. If an accurate answer is not forthcoming (e.g. how to win an unwinnable case), it naturally makes something up, because that is what is most consistent with being helpful (being helpful is its prime directive).
In this case, GPT-3 didn't fail. It did a superb job of exactly what it was set up to do: predict what a lawyer would most likely say given the absolute premise that the lawyer has something helpful to say.
This is already a very well understood and solvable problem.
Advanced techniques such as "chain of thought", and simply incorporating accuracy into the reward function, are already yielding promising results. None of these techniques were deployed in the case in question, so failure was a likely result.
youtube
AI Responsibility
2023-09-13T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
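Coded records like the one above can be checked mechanically before they enter analysis. A minimal validation sketch in Python — the `SCHEMA` value sets are inferred only from the values visible on this page and may be incomplete, and `validate` is a hypothetical helper, not part of this tool:

```python
# Allowed values per coding dimension, inferred from the coding results
# shown on this page (assumption: the real codebook may define more values).
SCHEMA = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "industry_self"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty list = valid)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

For the record in the table above, `validate` returns an empty list; a record with a value outside the codebook (or a missing dimension) is flagged per dimension.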
Raw LLM Response
```json
[
{"id":"ytc_UgzDGZfGItK9LX2OAMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdYPkIAaHaRkeQuil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYx8QkEw5qEIlVa_V4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyStPwsGTg_T6SQsMV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxowcuLHbRZgbj5O7d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxts-i6mZS9sahdjeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBK4e_MpA_S778Hlp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyE0HauJPrrOzypmcd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxiWicCvPsoko-JQc14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzPyi3Axuars712SjV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
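Downstream code should treat raw model output like the above defensively, since an LLM is not guaranteed to return valid JSON or well-formed records. A minimal parsing sketch in Python — `parse_llm_batch` is a hypothetical helper, not part of this tool:

```python
import json

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of coded records.

    Falls back to an empty list on malformed output rather than
    crashing the coding pipeline.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if not isinstance(records, list):
        return []
    # Keep only well-formed records that carry a comment ID.
    return [r for r in records if isinstance(r, dict) and "id" in r]
```

Records that survive parsing can then be indexed by their `id` (e.g. `{r["id"]: r for r in parse_llm_batch(raw)}`) to support the comment-ID lookup described at the top of this page.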