LLMs make for great tech demos, but when it comes to writing production code that actually does something new and useful, they haven’t impressed me at all.
Maybe things are different in related or adjacent areas like SRE. How is SRE these days? Can you still work the way you want to work, or are you being forced to switch as well?
> LLMs make for great tech demos, but when it comes to writing production code that actually does something new and useful, they haven’t impressed me at all.
Maybe the thing has been done in general, but not in a way that’s useful for me. That’s why it looks good in tech demos. If I ask AI to write what I need, it will give me an answer, but it won’t actually work or integrate the way I need for production. The last time I tried, it gave me 70 lines of code; the real end result was thousands. The AI version would look cool in a demo though.
* Accessibility. Accessibility isn’t a huge challenge unless you’re in a business with a pattern of largely ignoring it. Then it can be a huge challenge to fix. AI won’t be enough, and it will likely require outside help.
* Speed. If you want faster-executing software, you need to measure things (see the sketch below). AI will be learning from existing code that likely wasn’t well measured.
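To make “measure things” concrete, here’s a minimal sketch of benchmarking two versions of a function before deciding which is faster; `old_impl` and `new_impl` are hypothetical placeholders, not anything from the original comment:

```python
import timeit

def old_impl(data):
    # Hypothetical: the current code path.
    return sorted(data)

def new_impl(data):
    # Hypothetical: a proposed "faster" rewrite.
    return sorted(data, reverse=True)[::-1]

data = list(range(100_000, 0, -1))

# Measure both versions on the same input before believing either is faster.
for name, fn in [("old", old_impl), ("new", new_impl)]:
    total = timeit.timeit(lambda: fn(data), number=20)
    print(f"{name}: {total / 20:.4f}s per run")
```

The point isn’t this specific timer; it’s that without numbers like these, neither a human nor an AI has any basis for claiming a speedup.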
Fintech has a ton of regulations. Everything is layered over and over with tests. There's a form of extreme engineering where fintech runs tests in production, meaning that the systems in place are robust enough to handle bad code and junk data.
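One concrete shape of “tests in production” is shadowing: run the new code path on live input next to the stable one, compare, and throw the shadow result away so bad code or junk data can never reach real output. A minimal sketch, with hypothetical `stable_path`/`candidate_path` functions standing in for real fintech logic:

```python
import logging

logger = logging.getLogger("shadow")

def stable_path(payment):
    # Hypothetical: the battle-tested implementation whose result is used.
    return {"fee": round(payment["amount"] * 0.010, 2)}

def candidate_path(payment):
    # Hypothetical: new code being exercised against live traffic.
    return {"fee": round(payment["amount"] * 0.011, 2)}

def process(payment):
    result = stable_path(payment)
    try:
        shadow = candidate_path(payment)
        if shadow != result:
            logger.warning("shadow mismatch: %r vs %r", shadow, result)
    except Exception:
        # The candidate crashing on junk data must never break production.
        logger.exception("shadow path crashed")
    return result  # only the stable result is ever returned

print(process({"amount": 100.0}))  # logs a mismatch, returns the stable fee
```

The robustness the parent describes comes from that boundary: the experimental path can fail loudly without ever touching the money.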
To be fair, the code they produce is dogshit, so it isn't a problem.
I am baffled at how companies are jumping into LLMs without considering anything about their own privacy, when 10 years ago just using GitHub with a private repository could have been an issue.
> To be fair, the code they produce is dogshit, so it isn't a problem.
That's not a problem for managers and CTOs who are being brainwashed by marketing and LinkedIn posts saying all their engineers should use Cursor.
2. Low tolerance for LLM-induced errors:
   - Network protocols / telecom software
   - Medical software
   - Aerospace, automotive

3. Performance-critical code:
   - Game engine / graphics engine development (probably an area where we'll see them soon)
   - Kernels, drivers, microcontrollers
etc. Not all is lost yet.
AI is predictive. Most people will fall into a comfort zone where AI tells them what to do. But you should become an expert and be one of the few telling it what to do.
Every month, the CTO meeting is about pushing software engineers to use Cursor.
Also, my main issue is not really that AI isn't good enough. If a company is fine getting sh*t code, then let's go full AI. But I love my job: I love solving issues, coding, working with new paradigms, trying solutions, failing, improving, etc. I don't want to be a prompt expert who is asked to review AI-generated code all day long.
Of course, it is a very personal opinion, but I think it is still shared by a decent number of people.
Businesses often rely on these systems, and on the processes that protect them, so they are reluctant to adopt AI.