The last response seems like lightly edited ChatGPT output imo.
ChatGPT output is indistinguishable from that of annoying long-winded centrist libs
Yeah, I was going to comment about this too, but yes, IMO it almost surely is, because otherwise that person is completely lacking in reading comprehension, i.e. how on earth is “making [optimal] decisions” fundamentally different from “mumbo jumbo about threats and opportunities”? Also, the optimal decisions just mentioned are literally the opposite of the definition of “one size fits all”. And it goes without saying it has that general air of superiority, the stereotypical confidently wrong response lol.
But maybe I’m wrong and it is in fact very common to make these mistakes; heck, I’m not a native English speaker, maybe I’m missing something here lol.
Look at the little PMC piggies squirming that the beast they’ve unleashed is going to eat them next
Moopsy!
Moopsy :3
“You just don’t understand! AI would make obvious and good decisions. That’s not what CEOs are for AT ALL”
-Camilo Tribin
I got GPT-4 to write some angry comments about this and it is sort of uncanny:
Leadership is far more than data points and algorithms! It encompasses intuition, human connection, ethical decision-making, and countless intangibles that a machine could never grasp.
Organizations thrive on human connections, shared values, and mutual aspirations. The idea that an algorithm could replace the heartbeat of a company is baffling.
While AI can augment and support decision-making, the visionary leadership and human touch that CEOs provide are irreplaceable. Let’s not lose sight of what truly matters.
This is the human touch that CEOs bring to the table, folks.
Let’s not lose sight of what truly matters.
Shit, was it my turn to watch what truly matters?
you don’t need a human brain to make decisions when all presented options are “increase value for the shareholders”. A computer can do that
To be honest, “make green number go up” is probably the best task you could find for an AI to replace humans at. All the actual work takes real multi-variable thought, prediction, skill, and consideration.
exactly. for all the reductionism behind the idea that our leadership (politicians, CEOs, etc.) has no real agency because they are compelled to act in a way that inevitably furthers capitalism and profit, it’s only rationally consistent to replace them with machines. Someone ought to tell them that
there is AI that makes financial decisions, but that AI shouldn’t be a chatbot
Cutting costs and increasing prices isn’t a difficult job when you’re a psychopath.
LMAO dude, that’s not what CEOs do, that’s like a mid-level director of engineering. The CEO’s job is to prevent the rest of the executives from looting the company before the shareholders can do it.
Noooooo, not like that! Automation is only for people who didn’t go to Harvard!
The funny thing is, the only barrier here is context size. Right now, LLMs have laughably bad context size (or attention spans, in human terms; it’s basically how much information a brain or model can keep active at any point in time) compared to humans, but that’s going to change. It’s not difficult to foresee a near future of LLMs with very, very, superhumanly large context sizes that could make human leadership seem ridiculously incompetent in comparison.
Here’s the thing: pyramid-like organizational structures are extremely common because we necessarily have layers of abstraction. The head of the organization can’t do their job effectively if they’re worried about whether Bob the Welder is going to make it in on time or if that invoice got paid yet; likewise, Bob the Welder can’t do his job if he’s getting pulled off work to go sit in marketing meetings all day. There’s only so much attention any one person can give in a day. The biggest problem is that information gets lost between these layers of abstraction, values don’t necessarily remain consistent, and policies and practices aren’t uniformly applied, which can make it difficult for customers and even employees to navigate the normal processes of an organization, let alone the abnormal ones.
As LLM context sizes reach superhuman levels, it’s conceivable that they could end up flattening organizational structures by being able to be both Bob’s supervisor and the CEO (or at least the CEO’s assistant), and being able to keep all of the organization’s context, down to the individual employee and customer needs, in mind at all times when making decisions. A government or corporation run by a properly aligned super-context AI could possibly be the closest thing we’re going to get to utopian leadership, and would likely be both more ethical and more effective than human leadership.
The problems facing the world today do not come from leaders having too short attention spans or inadequate access to information. The problems come from these rulers representing bourgeois rather than proletarian interests. No amount of bazinga is going to overcome class conflict and make the dictatorship of the bourgeoisie make decisions that benefit the masses.
It’s possible that if giant-context models are freely available, flat-structured organizations run by AI could outcompete less agile pyramid-structured organizations. It is possible we could see the bourgeoisie hoisted by their own petard.
I should write a piece on how Neoliberalism is already carving up the CEO (and other) leadership positions. Hedge funds and other capital vultures constantly shuffle the corporate suite to suit their interests, so there is absolutely a place for computers as labor-saving devices for managing a portfolio of companies by these huge capital conglomerates. Cyberpunk was only wrong about the aesthetics.
EDIT: My real hot take should be that CEOs are undergoing Proletarianization. The masters of Capital reveal themselves as its greatest slaves.
I’ve noticed this too. There are already C-level agencies where a hedge fund can dial up a specific board for their purpose, from fucking over the founders of a seed start-up, to pivoting from a pro-consumer growth model to profit maximisation, to “Strip the copper wiring before the smallholders notice”.
See also the heads of banks and financial institutions going from among the richest captains of industry to mere money butlers with values 2 orders of magnitude below their clients.
There’s no longer a spectrum of bourgeoisie: there’s your local used car salesman or medium business owner with 30 employees, and then there’s the mega rich. Everyone in between is now Labour Aristocracy, and eventually they’re gonna realise that.
lol that last reply is so great, because jr CEOs with no experience on average outperform experienced CEOs and it’s not even close
LLMs can already replace those shitty organization wide emails that go out telling us all that we matter, justifying raises below the rate of inflation, and pretending like screwing the client is “actually” delivering added value to the client.
shareholders should be clamoring for LLMs to replace CEOs, because LLMs aren’t going to jack up executive payouts right before they take on a bunch of debt and file for bankruptcy.
bit idea: a marxist who hates the term PMC and defends megacorp CEOs against their shareholders because he doesn’t want to divide the working class
I love the idea of a struggle session between the guys who lick CEO boots and the guys who think LLMs are self-aware, the only problem is
deleted by creator
This is honestly the first profession where AI could do a better job
Can a chatbot golf? Can it get drunk at network meetings? Can it make up dumbass reactionary ideas and pay people to replicate them?
No, AI will never be able to replace CEOs.
Actually computers are extremely qualified to be CEO because the only trait you need is being willing to fuck over as many people as possible to maximize profits for your shareholders. The less empathy, the better. Shareholders want absolute psychos who will fire 50% of the workforce with no hesitation to make line go up by 2%.