  • In other news, a piece from Paris Marx came to my attention, titled “We need an international alliance against the US and its tech industry”. Personally gonna point to a specific paragraph which caught my eye:

    The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool - such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.

    I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship this way without getting any backlash.

  • New piece from Baldur Bjarnason: AI and Esoteric Fascism, which focuses heavily on our very good friends and their link to AI as a whole. Ending quote’s pretty solid, so I’m dropping it here:

    I believe that the current “AI” bubble is an outright Neo-Nazi project that cannot be separated from the thugs and fascists that seem to be taking over the US and indivisible from the 21st century iteration of Esoteric Neo-Nazi mysticism that is the TESCREAL bundle of ideologies.

    If that is true, then there is simply no scope for fair or ethical use of these systems.

    Anyways, here’s my personal sidenote:

    As I’ve mentioned a bajillion times before, I’ve predicted this AI bubble would kill AI as a concept, as its myriad harms and failures indelibly associate AI with glue pizzas, artists getting screwed, and other such awful things. After reading through this, it’s clear I’ve failed to take into account the political elements of this bubble, and how they’d affect things.

    My main prediction hasn’t changed - I still expect AI as a concept to die once this bubble bursts - but I suspect it will be treated as an inherently fascist concept, and any attempts to revive it will face active ridicule, if not outright hostility.

  • Baldur’s given his thoughts on Bluesky - he suspects Zitron’s downplayed some of AI’s risks, chiefly in coding:

    There’s even reason to believe that Ed’s downplaying some of the risks because they’re hard to quantify:

    • The only plausible growth story today for the stock market as a whole is magical “AI” productivity growth. What happens to the market when that story fails?
    • Coding isn’t the biggest “win” for LLMs but its biggest risk

    Software dev has a bad habit of skipping research and design and just shipping poorly thought-out prototypes as products. These systems get increasingly harder to update over time and bugs proliferate. LLMs for coding magnify that risk.

    We’re seeing companies ship software nobody in the company understands, with edge cases nobody is aware of, and a host of bugs. LLMs lead to code bases that are harder to understand, buggier, and much less secure.

    LLMs for coding isn’t a productivity boon but the birth of a major Y2K-style crisis. Fixing Y2K cost the world’s economy over $500 billion USD (corrected for inflation), most of it borne by US institutions and companies.

    And Y2K wasn’t promising magical growth on the order of trillions, so the perceived loss of a failed AI bubble in the eyes of the stock market would be much higher.

    On a related note, I suspect programming/software engineering’s public image is going to spectacularly tank in the coming years - between the impending Y2K-style crisis Baldur points out, Silicon Valley going all-in on sucking up to Trump, and the myriad ways the slop-nami has hurt artists and non-artists alike, the pieces are in place to paint an image of programmers as incompetent fools at best and unrepentant fascists at worst.
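
    For anyone who wasn’t around for it, here’s a minimal Python sketch of the bug class behind that Y2K comparison - my own illustration, not something from Baldur’s post, and the function names are made up. Each instance of the two-digit-year assumption was trivial on its own; the half-a-trillion-dollar bill came from hunting down every place it had been baked in, which is the same shape of problem Baldur expects from LLM-built codebases nobody fully understands:

      from datetime import date

      # Pre-Y2K style logic: store years as two digits and assume they only count up.
      def years_until_renewal(expiry_yy: int, current_yy: int) -> int:
          return expiry_yy - current_yy  # silently breaks once the century rolls over

      print(years_until_renewal(99, 97))  # 2, as expected through the 1990s
      print(years_until_renewal(1, 99))   # -98: a contract that supposedly expired 98 years ago

      # The fix is trivial in isolation: real date types and four-digit years...
      def years_until_renewal_fixed(expiry: date, today: date) -> int:
          return expiry.year - today.year

      print(years_until_renewal_fixed(date(2001, 1, 1), date(1999, 6, 1)))  # 2
      # ...the expensive part was finding every buried copy of the old assumption.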

  • New piece from Brian Merchant: ‘AI is in its empire era’

    Recently finished it; here’s a personal sidenote:

    This AI bubble’s done a pretty good job of destroying the “apolitical” image that tech’s done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it’s provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV’s relationship with Trump did wonders here).

    By the time this decade ends, I anticipate tech’s public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.

    As for AI itself, I expect its image will go into the shitter as well - assuming the bubble burst doesn’t destroy AI as a concept like I anticipate, it’ll probably be viewed as a tech with no ethical use, as a tech built first and foremost to enable/perpetrate atrocities to its wielder’s heart’s content.