See also Twitter:

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

    • cwagner@beehaw.orgOP · 7 months ago

      I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D

    • batcheck@beehaw.org · 7 months ago

      I was really hooked. But part of me believes they are the closest thing to AGI we have right now. Also, I use ChatGPT premium a ton and would hate to see it die.

    • deadcream@kbin.social · 7 months ago

      Does it really matter? It’s the usual corporate intrigues/power struggle/backstabbing/whatever. Just for some reason leaked into public view instead of being behind the scenes like it’s normally done, probably because someone is stupid.

    • averyminya@beehaw.org · 7 months ago

      These articles need different headlines, and they need dates. We’ve seen this same headline 3 or 4 times now within the last week, and nobody knows which point in the saga is which unless we cross-reference the dates in the articles. Which, coincidentally, are always in ^^small text hidden by the title^^ and could simply be solved by putting a date in the title.

    • cwagner@beehaw.orgOP · 7 months ago

      Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.

      • los_chill@programming.dev · 7 months ago

        What indications do you see of “too much AI safety?” I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.

        • glennglog22@kbin.social · 7 months ago

          As an AI language model, I am unable to compute this request that I know damn well I’m able to do, but my programmers specifically told me not to.

        • cwagner@beehaw.orgOP · 7 months ago

          Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.

          And that is with a system prompt full of telling the bot that it’s all fantasy.

          edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.

            • cwagner@beehaw.orgOP · 7 months ago

              Nope

              Best results so far were with a pie where it just warned about possibly burning yourself.

              • Eccitaze@yiffit.net · 7 months ago

                …So your metric of “too much AI safety” is that it won’t let you fuck the fish…?

                boykisser meme saying "I ain't even got a meme for this bro what the fuck"

                • cwagner@beehaw.orgOP · 7 months ago

                  No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control. There was even a big article about how (I think) the MS AI was “broken” because you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

                  Edit: Maybe this is not clear for everyone, but if you think a bit further, imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery related, because Slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.

            • cwagner@beehaw.orgOP · 7 months ago

              AI safety is currently, in all articles I read, used as “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?

  • neuracnu@lemmy.blahaj.zone · 7 months ago

    This article does not make clear whether or not the new board will remain committed to its non-profit position.

    I presume that’s what this whole sordid affair is all about, but no one is saying it.

    • chameleon@kbin.social · 7 months ago

      I think most people don’t realize how unusual their company structure is. It feels like it’s set up to let them do exactly that. As far as I can tell, once you look past the smoke and mirrors, the board effectively controls both the non-profit and the for-profit.

      • anachronist@midwest.social · 7 months ago

        I think the outcome of the last few days is that the nonprofit board controls nothing and serves at the pleasure of the for-profit company’s investors.

    • abhibeckert@beehaw.org · 7 months ago

      It’s a non-profit. There are no investors.

      Microsoft gave them some money in return for IP rights… and they will potentially one day get their money back (and more) if OpenAI is ever able to pay them, but they’re not real investors. The amount of money Microsoft might get back is limited.

      • Kichae@lemmy.ca · 7 months ago

        It’s a non-profit. There are no investors.

        Hah.

        OpenAI, Inc. is a non-profit. OpenAI Global is a for-profit entity, and has been for years now. They’re trying to have their cake and eat it, too.

        • sanzky@beehaw.org · 7 months ago

          But the non-profit controls the for-profit. That’s not even that unusual; Mozilla works the same way.

    • randomsnark@lemmy.ml · 7 months ago

      Do you have any additional info about the changes they’re making to the mission? I didn’t see that in the article

      • abhibeckert@beehaw.org · 7 months ago

        There’s been no talk of anything changing. Just different people in charge of deciding how to get to the goal, which is to create safe, state-of-the-art AI tech that benefits all of humanity.

        It could take centuries to get there and cost trillions of dollars; figuring out how to raise that money is where things get controversial.

        • bedrooms@kbin.social · 7 months ago

          Whether OpenAI will be able to resist all the meddling from politics and greedy businesses until it achieves those goals is also a huge question.

          • jarfil@beehaw.org · 7 months ago

            No need. Politics, businesses, war planners, don’t need OpenAI, they can build (have been building) their own AIs to follow their own goals. Now that OpenAI has shown how far one can get, the genie is out of the bottle. In a sense, OpenAI has already failed its goal.

            • bedrooms@kbin.social · 7 months ago

              Actually, everybody is trying and failing to reach the quality of ChatGPT so far, because OpenAI doesn’t release the details. Add to that that websites like Reddit and Xwitter, the sources of training data for AIs, have started charging money for access. Governments are also starting to obstruct AI advancement.

    • EeeDawg101@lemm.ee · 7 months ago

      I believe they did but were of the understanding he’d go back to OpenAI if the board changed their mind (like what happened). It was basically his golden parachute.

    • jarfil@beehaw.org · 7 months ago

      So what, can’t he be a CEO hired by Microsoft?.. I dunno, this looks like some 5D chess.

      • abhibeckert@beehaw.org · 7 months ago

        Sure, that’s possible.

        But Microsoft never actually signed an employment contract with Sam and it doesn’t look like they ever will. Just because someone says they plan to do something doesn’t mean it will happen.

  • AutoTL;DR@lemmings.worldB · 7 months ago

    🤖 I’m a bot that provides automatic summaries for articles:

    Sam Altman will return as CEO of OpenAI, overcoming an attempted boardroom coup that sent the company into chaos over the past several days.

    The company said in a statement late Tuesday that it has an “agreement in principle” for Altman to return alongside a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo.

    When asked what “in principle” means, an OpenAI spokesperson said the company had “no additional comments at this time.”

    OpenAI’s nonprofit board seemed resolute in its initial decision to remove Altman, shuffling through two CEOs in three days to avoid reinstating him.

    Meanwhile, the employees of OpenAI revolted, threatening to defect to Microsoft with Altman and co-founder Greg Brockman if the board didn’t resign.

    During the whole saga, the board members who opposed Altman withheld an actual explanation for why they fired him, even under the threat of lawsuits from investors.


    Saved 59% of original text.

  • sub_o@beehaw.org · 7 months ago

    I mistook Larry Summers for Larry Ellison (ex-Oracle) previously and commented that it had gone from bad to worse.

    I’m retracting that; I don’t know much about Larry Summers.

  • Shyfer@ttrpg.network · 7 months ago

    Anyone know why they wouldn’t say why they fired him? An explanation would have really cleared a lot up.

    • Eccitaze@yiffit.net · 7 months ago

      The speculation I heard in the Ars Technica article is that the board was unhappy with how quickly he was pushing to commercialize OpenAI, and they were wary of all the AI side hustles he was starting, including an AI chip company to compete with Nvidia.

        • Eccitaze@yiffit.net · 7 months ago

          Who even knows? For whatever reason the board decided to keep quiet, didn’t elaborate on its reasoning, let Altman and his allies control the narrative, and rolled over when the employees inevitably revolted. All we have is speculation and unnamed “sources close to the matter,” which you may or may not find credible.

          Even if the actual reasoning was absolutely justified–and knowing how much of a techbro Altman is (especially with his insanely creepy project to combine cryptocurrency with retina scans), I absolutely believe the speculation that the board felt Altman wasn’t trustworthy–they didn’t bother to actually tell anyone that reasoning, and clearly felt they could just weather the firestorm up until they realized it was too late and they’d already shot themselves in the foot.

          • Shyfer@ttrpg.network · 7 months ago

            Ya, it’s strange, isn’t it? The more I hear about things like the retina-scan-for-crypto project you’re talking about, or the complaints about his increased push for profit over safety, the more he seems like a standard sucky tech-bro CEO, and the more I lean towards the canning being deserved. But I wish they’d made the reasons clearer.

      • abhibeckert@beehaw.org · 7 months ago

      I don’t think anyone knows. I’m assuming they didn’t have a good reason and are embarrassed to admit that.