It’s incredible that people can feed up to one million tokens to LLMs, yet most of the time they still fail to take advantage of that enormous context window. No wonder people say the output generated by LLMs is always crap… I mean, they’re not great, but they can manage to do a pretty good job - that is, only IF you teach them well… Beyond that, everyone has their own effort + time / results ratio.

"Engineers are finding out that writing, that long-shunned soft skill, is now key to their efforts. In Claude Code: Best Practices for Agentic Coding, one of the key steps is creating a CLAUDE.md file that contains instructions and guidelines on how to develop the project, like which commands to run. But that’s only the beginning. Folks now suggest maintaining elaborate context folders.

A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone. Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.
(…)
Tech writers become context writers when they put on the art gallery curator hat, eager to show visitors the way and help them understand what they’re seeing. It’s yet another hat, but that’s both the curse and the blessing of our craft: like bards in DnD, we’re the jacks of all trades that save the day (and the campaign)."

https://passo.uno/from-tech-writers-to-ai-context-curators/
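For reference, a CLAUDE.md is just a plain Markdown file kept at the project root; the tool reads it into context at the start of a session. A minimal sketch might look like the following (the project commands and style rules here are hypothetical, not from the linked article):

```markdown
# CLAUDE.md

## Commands
- `npm run build`: build the project
- `npm test`: run the full test suite
- `npm run lint`: check formatting and style

## Code style
- Use ES modules (`import`/`export`), not CommonJS
- Prefer `async`/`await` over raw promise chains

## Workflow
- Run `npm test` after every change
- Never commit directly to `main`
```

The point the quoted post makes is that files like this are documentation work: deciding what belongs in them, and keeping them accurate, is a content-strategy problem rather than a coding one.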

#AI #GenerativeAI #LLMs #Chatbots #PromptEngineering #ContextWindows #TechnicalWriting #Programming #SoftwareDevelopment #DocsAsDevelopment

  • makeshiftreaper@lemmy.world
    9 months ago

I was trying to politely get you to use logic to understand that AI is not some inherently better tool just because it’s new and money is being spent on it, so I’ll be blunt:

    I have seen zero evidence that code output by AI justifies the multitude of costs that come with its implementation. You lose the opportunity to train junior devs, it fucks up testing, it hurts the quality of developers, it’s unnecessarily verbose, and it makes frequent type mistakes. I want you to provide solid evidence that AI outputs code at the same quality as or better than a traditional developer before I will agree that learning AI skills is a benefit in the corporate environment.