>
That may be true, but the proxy for assessing the quality of the dev is the code. Yes, the quality of the code a dev produces is a measure of the quality of the dev, but once a certain baseline has been established, the quality of the dev is known independent of any code they may yet produce. I.e., if you were to make a prediction about the quality of code produced by a "high quality" dev vs. a "low quality" dev, you'd likely find that the high quality dev tends to produce high quality code more often.
So now you have a certain degree of knowledge even before you've seen the code. In practice, this becomes a factor on every dev team I've worked around.
Adding an LLM to the mix changes that assessment fundamentally.
> The point is that an LLM in no way changes this.
I think the LLM necessarily changes this in ways that can't be avoided. I.e., the code that was previously a proxy for "dev quality" can now fall into multiple categories:
1. Good code written by the dev (a good indicator of dev quality if they're consistently good over time)
2. Good code written by the LLM and accepted by the dev because they are experienced and recognize the code to be good
3. Good code written by the LLM and accepted by the dev because it works, but not necessarily because the dev knew it was good (no longer a good indicator of dev quality)
4. Bad code written by the LLM
5. Bad code written by the dev
#2 and #3 are where things get messy. Good code may now come into existence without being an indicator of dev quality. It is now necessary to assess whether the LLM code was accepted because the dev recognized it was good code, or because the dev got things to work and essentially got lucky.
It may be true that you're still evaluating the code at the end of the day, but what you learn from that evaluation has changed. You can no longer evaluate the quality of a dev by the quality of the code they commit unless you have other ways to independently assess them beyond the code itself.
If you continued to assess dev quality without taking this into consideration, it seems likely that those assessments would become less accurate over time as more "low quality" devs produce high quality code - not because of their own skills, but because of the ongoing improvements to LLMs. That high quality code is no longer a trustworthy indicator of dev quality.
> If a dev uses an LLM in a non-pragmatic way that produces bad code, it will erode trust in them. The LLM is a tool, but trust still factors in to how the dev uses the tool.
Yes, of course. But the issue is not that a good dev might erode trust by using the LLM poorly. The issue is that inexperienced devs will make it increasingly difficult to apply the same heuristics for assessing dev quality across the board.