I have yet to hear anyone say that ChatGPT can navigate the complex series of design decisions needed to create a cohesive app (unless, of course, it was trained on something nearly identical). Many people report spending an inordinate amount of time fixing the mistakes these LLMs make. It sounds like glorified autofill (I haven’t used them myself). I shudder to think about the future of the software ecosystem if an entire generation is trained to rely entirely on them to write code.
LLMs are great for writing code in small snippets. I’ve used one to quickly write batch files, for instance, when I couldn’t be bothered to look up some obscure bit of formatting. I let an LLM like ChatGPT do the bulk of the work, then double-check what it gives me.
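To give a sense of scale, here’s the sort of thing I mean. This is a hypothetical sketch rather than a real transcript: the filename is made up, and the %date% offsets assume a locale where %date% ends in MM/DD/YYYY.

```bat
@echo off
rem Stamp a copy of a file with today's date. The %date% substring
rem syntax below is exactly the kind of obscure formatting I'd rather
rem have an LLM draft than dig up myself.
rem NOTE: this assumes %date% ends in MM/DD/YYYY; other locales need
rem different offsets, which is why the output is worth double-checking.
set stamp=%date:~-4%-%date:~-10,2%-%date:~-7,2%
copy "report.txt" "report-%stamp%.txt"
```

A five-second test run on a scratch file confirms whether the offsets match your locale, which is still far quicker than piecing the syntax together from scratch.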
I wouldn’t use it for anything over ~100 lines at a time. Just as with long conversations, it has a tendency to “lose the plot” and forget things it said early on: the more that gets added to the conversation, the more context it has to juggle, so it drifts further off topic the longer the exchange runs.
It can also be handy for debugging sections of code. Programming is, after all, just a form of language with strict grammar, diction, and spelling rules, and an LLM is very good at spotting silly grammatical mistakes. It’ll instantly notice your missing semicolon and point it out to you, which can save you a ton of frustration.
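For instance, here’s a contrived batch snippet in the same spirit. Batch has no semicolons, but a missing closing parenthesis is the same class of typo, and it’s the kind of thing an LLM would flag immediately if you pasted the file in.

```bat
@echo off
rem As first written, the "if" block below was missing its closing ")",
rem batch's version of the dropped semicolon. cmd reports only a vague
rem syntax error for that, while an LLM would point straight at the
rem unclosed block.
if "%~1"=="" (
    echo Usage: %~n0 filename
    exit /b 1
)
echo Working on "%~1"
```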
As with any tool, how well it works is largely down to the user. It will likely progress to the point of managing longer code eventually, but right now it’s still incredibly useful, as long as you accept its limitations and work within them.
I think you’re right at the minute. Whether you’ll be right in the future, I’m less certain.