Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "China are working without attacking others' ideas but contrary to others especia…" (ytc_UgwdfMDuJ…)
- "Dani your use of the word "y'all" implies to me that you are not an artist... So…" (ytr_UgxDJ0aMF…)
- "lol Ai could replace the c suite and they realized it so they're pushing hard to…" (ytc_Ugw9FNZFg…)
- "Less worker’s. Less taxes paid to the government. Homelessness is and crime is o…" (ytc_UgxLvyM_u…)
- "Good AI wouldn't let the car stay in that situation for very long and it should …" (ytc_UggIGhHRe…)
- "There are commissioners out there who consider the art a bit of a collaborative …" (ytr_Ugzhi8ZWN…)
- "In coding we have a saying that goes garbage in, garbage out. So if your society…" (ytc_UgwpZ8ZFv…)
- "*Meta is full of sh*t! They are paying millions of dollars to lobbying organisat…" (ytc_UgyusRp2t…)
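The "look up by comment ID" feature above can be sketched as a simple dictionary index over the stored comments. This is a minimal illustration, not the tool's actual backend: the storage layer and the sample IDs below (`ytc_example_1`, `ytc_example_2`) are hypothetical placeholders, since the real IDs shown in the list are truncated.

```python
# Minimal sketch of lookup-by-comment-ID, assuming coded comments are held
# as a list of dicts in memory. The IDs and texts here are made up for
# illustration; they are not the (truncated) IDs shown in the sample list.

samples = [
    {"id": "ytc_example_1", "text": "First sample comment text."},
    {"id": "ytc_example_2", "text": "Second sample comment text."},
]


def lookup_by_id(comments, comment_id):
    """Return the first comment whose id matches, or None if absent."""
    index = {c["id"]: c for c in comments}
    return index.get(comment_id)


hit = lookup_by_id(samples, "ytc_example_1")
print(hit["text"] if hit else "not found")  # prints "First sample comment text."
```

For a persistent tool the index would live in a database keyed on the comment ID, but the lookup contract is the same: exact-match on ID, `None` (or a 404) when the ID is unknown.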
Comment
1. Builder.ai is not an example of an AI lie. It is an example of an Indian lie. I worked with them. Basically this is the way they work and exist: they create a story, promise god knows what, and behind the scenes there are dozens of underpaid Indians with huge pressure and abuse put on them. Also no qualifications.
2. AI is used incorrectly most of the time. Additionally, it performs chaotically. Sometimes it does great and works wonders. Another time a simple task will make it go bonkers and it will destroy your code and everything you were working on together before.
And that is with Claude. Can't imagine what happens with other LLMs. So if someone is senior and really knows what he is doing, it might be better to just do it yourself.
Me myself, I would never be able to do what I do without AI. But I also spend a lot of time revisiting modules, API endpoints, and the frontend, optimising and making it more robust and safe. It takes a lot of time, but for me it is the only way. If someone is an expert, he can do it correctly the first time.
youtube
AI Jobs
2026-02-04T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxIr8x1or4wyEj49Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzbWBrNxSqyk-Y1fqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxEEt0y_E2xD5VAuQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhUairUyeNsMEm0xR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5bBMSHjjYTTvm0_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAW4iF1J9ttU76rQt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwQ8hHZnW2oKRstb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzPY8lSNcC9HHtgXqx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0FCydsu0BmmFSdu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8iU6MGqts646CNlR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
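A raw response like the one above is only usable once it parses as JSON and every coded value falls in the codebook. The sketch below validates one record against allowed value sets inferred from the values visible on this page (the real codebook may allow more labels, so `ALLOWED` is an assumption).

```python
import json

# Allowed values per dimension, inferred from the coding results shown on
# this page; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "resignation", "indifference", "mixed", "approval"},
}

# One record copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugx5bBMSHjjYTTvm0_t4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''


def validate(records):
    """Return (id, dimension, bad_value) tuples for out-of-codebook values."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors


records = json.loads(raw)
print(validate(records))  # prints "[]": every value is in the allowed set
```

Running this check before writing coded rows to storage turns a malformed or hallucinated label into a visible error rather than a silently corrupted dataset.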