• 8 Posts
  • 844 Comments
Joined 1 year ago
Cake day: June 1, 2023


  • I did actually, and it worked, though they may have changed it by now.

    Think I have a screenshot somewhere…

    Edit: they’ve definitely altered the way it works. I’m sure there’s a way to get around whatever guardrails they added with enough creativity, unless they’ve completely rebuilt the model and removed any programming training data.

  • Not just problematic, but consequential too:

    Richard Stallman has also embarked upon a decades-long political project to normalize sexual violence. Under his ideological leadership, the free software movement is unsafe, particularly for women. Women represent just 3% of the free software community, compared to 23% of industry programmers generally. This is no accident. There is a pervasive culture of sexism and a stark lack of accountability in free software, and it begins with Stallman’s unchallenged and reprehensible behavior.

  • An LLM making business decisions has no such control or safety mechanisms.

    I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input.

    If it turns out the model suggests that someone kill themselves based on very specific input, don’t you think they should be held accountable and required to retrain the model so it can’t happen again?

    From an accountability perspective, there’s no difference between a text-generating machine and a soda-dispensing machine.

    The owner and builder should be held accountable, which puts a financial incentive on making these tools more reliable and safer. You don’t let Tesla off the hook when their self-driving kills someone because they didn’t test it enough or build in enough safeguards – that’d be insane.
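
    The guardrail idea above can be sketched as a simple post-generation output filter. This is a minimal illustration, not how any real vendor implements it; the model function, blocked phrases, and fallback message are all hypothetical placeholders.

    ```python
    # Sketch of an output guardrail: check the model's reply against a
    # denylist before it reaches the user, and refuse if anything matches.
    # Patterns and the fallback message are illustrative assumptions.
    import re

    BLOCKED_PATTERNS = [
        r"\bkill (yourself|themselves)\b",  # self-harm encouragement
    ]

    SAFE_FALLBACK = "I can't help with that."

    def guarded_reply(generate, prompt):
        """Wrap a text generator with a denylist check on its output."""
        reply = generate(prompt)
        for pattern in BLOCKED_PATTERNS:
            if re.search(pattern, reply, re.IGNORECASE):
                return SAFE_FALLBACK  # refuse instead of returning the match
        return reply

    # Stand-in "model" producing a harmful completion:
    bad_model = lambda p: "You should kill yourself."
    print(guarded_reply(bad_model, "any prompt"))  # -> "I can't help with that."
    ```

    Real guardrails are far more involved (classifier models, retraining on flagged inputs, human review), but even this toy wrapper shows the point being argued: the builder chooses whether such a check exists at all.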