Just a regular Joe.

  • 1 Post
  • 720 Comments
Joined 3 years ago
Cake day: July 7th, 2023

  • Joe to Hacker News@lemmy.bestiver.se · Don't let AI write for you · 17 days ago (edited)

    I haven’t done 9-5 programming (edit: and especially not on a single codebase!) in many years. My job slowly evolved toward architecture and collaboration (between product teams, architects & dev teams), with the odd bit of code to kickstart a project, or a dive in to help resolve an issue.

    In that respect, it’s great to be working more directly with code again. Both in languages I know well, and those I am learning. Most teams around me are embracing AI tools, and coordination is easier.

    As for the general risk to cognitive ability: it absolutely exists. But there have always been people who want to understand more, and those who are happy to solve a problem without really understanding it (often producing poor code and laying landmines, costing everyone time).

    We now have more tools at our disposal to build & maintain quality systems. Those who learn to use the tools well and understand the problem domain will outperform those who don’t. Those who can’t (or won’t) will slowly be pushed out. And businesses that can’t differentiate will dig their own graves.


  • If you can, then do it.

    I work faster with CLI coding agents though. Previously I would spend less time designing and more time iterating on code as ideas evolved - this was natural, as coding was slow enough to let me think through the design as I went.

    Now, I spend more time on design, and then have mini-sessions to implement it. I prefer scaffolding the codebase + types, then implementing feature by feature.
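    To make the flow concrete, the “types first” scaffold might look like this (purely illustrative TypeScript; the names are mine, not from any real project):

```typescript
// Scaffold: agree on the data shapes before any feature work starts.
interface Invoice {
  id: string;
  amountCents: number;
  paidAt: Date | null; // null = not yet paid
}

// A feature implemented later, in its own mini-session,
// against the already-scaffolded type.
function outstanding(invoices: Invoice[]): number {
  return invoices
    .filter((inv) => inv.paidAt === null)
    .reduce((sum, inv) => sum + inv.amountCents, 0);
}
```

    With the types pinned down first, each mini-session implements one feature against a stable contract.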

    The more I use AI, the better I am getting at this flow, and the better the resulting code is. New projects get completed in days, and now come with good docs.

    Where it really adds value nowadays (but not in the beginning) is in cross-project changes and coordination: UI changes + API changes + DB changes + workflow changes, across different codebases & languages… Draft the feature design & implementation plan, discuss and agree, then implement it across all projects, each with its own norms & constraints.


  • They can increase the efficiency in developing quality software. For most use-cases, we still want active human collaboration and review… describe the intent, discuss approaches, let it implement changes one by one, fixing/reviewing as you go. More like pair programming, with fewer typos.

    But the moment you just wave through some change without thinking, you’re not doing software engineering anymore.

    It’s similar when writing… invest the time, think through it and discuss it, review as you go, and you can get quality out of it. I hate it for normal email/letters, but I love it for docs, where everything can be cross-referenced, queried, updated more consistently and faster than I could manage it myself. Same caveat applies.




  • If I were Iran (I’m not, ftr) and China (also not), I would only offer safe passage to ships flying non-US-buddy flags and insured by a Chinese shipping insurer. In turn, the Chinese insurer insists on indemnity from sanctions until the Iran situation is concluded at the UN, to restore worldwide economic stability.

    Safe passage for the rest would be pending the ceasefire & reparation negotiations (to be held in a public UN forum in NYC).

    I shudder at the thought.


  • a distinct lack of sufficient push-back to use established packages and libraries

    Yes, but OTOH there is the terrible tendency of some developers to use every package under the sun, creating integration pain. If the technical solution is well defined or simple enough (e.g. adding particular HTTP headers, or min/max/isEven functions), then let the agent just write it instead of pulling in a potential security risk. If it’s finicky and error-prone (e.g. JWTs, crypto), then use the established libraries…
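    For the trivial end of that spectrum, a sketch of what “just let the agent write it inline” means (hypothetical helpers, in TypeScript, since npm’s infamous is-even package is the usual cautionary tale):

```typescript
// A few lines of inline code instead of a supply-chain dependency.
const isEven = (n: number): boolean => n % 2 === 0;

// Clamp a number into the range [min, max].
const clamp = (n: number, min: number, max: number): number =>
  Math.min(Math.max(n, min), max);
```

    Each helper is small enough to review at a glance, which is exactly the opposite of auditing a transitive dependency tree.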


  • Joe to Technology@lemmy.world · “ChatGPT said this” Is Lazy · 1 month ago

    The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.

    Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.


  • Joe to Technology@lemmy.world · “ChatGPT said this” Is Lazy · 1 month ago

    Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.

    I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In this case, you are more likely to waste other people’s time.

    On the other hand, it is possible to have agents give useful feedback on bug reports, request tickets, etc. and guide people (and their personal AI) to provide all the needed info, even resolving issues automatically. So long as the agent isn’t gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.


  • There is still value in understanding how things work under the hood, even if it’s not something you will do every day. Although I agree that this particular tail-eating example may be hard to pull off.

    On a general note: personalized learning is already one of the most human-positive uses of AI. Sure, we’d all prefer an infinitely patient and all-knowing human (with cookies, milk and hugs) at our beck and call, but as we’re short on qualified staff due to world fucked-up-ness, supplementing with inexpensive tailored learning tools seems like a good thing.


  • Joe to Technology@lemmy.world · “ChatGPT said this” Is Lazy · 1 month ago

    It’s only a problem if they claimed it as their own or it didn’t add value, AND it wasted your time as a result.

    Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and was formerly known as Google-Fu, before Google search turned to shit)

    To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately… others might be super busy, or just dicks who don’t respect you or your time.

    Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.



  • Joe to Technology@lemmy.world · “ChatGPT said this” Is Lazy · 1 month ago

    It is about respecting everyone’s time…

    Example, if an executive were to claim: “We don’t have any solution to X in the company” in an email as a justification for investment in a vendor, it might cost other people hours as they dig into it. However, if AI fact-checked it first by searching code repos, wikis and tickets, found it wasn’t true, then maybe that email wouldn’t have been sent at all or would have acknowledged the existing product and led to a more crisp discussion.

    AI responses often only need a quick sniff test by a human (e.g. click the provided link to confirm)… whereas BS can derail your day.

    We should share our knowledge and intelligence with AIs and people alike, and not ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.


  • Joe to Technology@lemmy.world · “ChatGPT said this” Is Lazy · 1 month ago

    Sure… copy & paste is copy & paste.

    However, LLMs can help to formulate a scattered braindump of thoughts and opinions into a coherent argument / position, fact check claims, and help to highlight faulty thinking.

    I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

    I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.

    Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.