He may be a sucker but at least he is engaging with the topic. The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear is just as frustrating as any of the bazinga takes on . No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.
The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don’t look for ways to utilize, subvert and counter these technologies while they’re still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
That’s because it’s not artificial intelligence. It’s marketing.
Oh my god it’s this post again.
No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance in leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.
And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.
I see it as low-key crybully shit to come here, dunk on Hexbears and call them names for not being “curious” enough about LLMs, and act like some disadvantaged aggrieved party while also standing closer to the billionaires’ current position than anywhere near those they’re raging at here.
this is a shitposting reddit clone, not a political party, but I generally agree that people on here sometimes veer into neo-ludditism and forget Marx’s words with respect to stuff like this:
The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.
However, you have to take the context of these reactions into account. Silicon Valley hucksters are constantly pushing LLMs etc. as miracle solutions for capitalists to get rid of workers, and the abuse of these technologies to violate people’s privacy or fabricate audio/video evidence is only going to get worse. I don’t think it’s possible to put Pandora back in the box or to fix this problem with bourgeois reformist legislation. I do think we need to seize the means of production instead of destroying them. But you need to agitate and organize in real life around this, not come on here and tell people how misguided their dunk tank posts are lol.
I think their position is heavily misguided at best. The question is whether AI is sentient or not. Obviously they are used against the working class, but that is a separate question from their purported sentience.
Like, it’s totally possible to seize AI without believing in its sentience. You don’t have to believe the techbro woo to use their technology.
We can both make use of LLMs ourselves while disbelieving in their sentience at the same time.
Is that such a radical idea?
We’re not saying that LLMs are useless and we shouldn’t try and make use of them, just that they’re not sentient. Nobody here is making that first point. Attacking the first point instead of the arguments that people are actually making is as textbook a case of strawmanning as I’ve ever seen.
The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear
Define what you mean by “curiosity.” Is it also a “lack of curiosity” for people to dunk on and heckle NFT peddlers instead of entertaining their proposals?
is just as frustrating
Even at its extremes, which I don’t agree with myself, I disagree here. No, it is not just as frustrating.
No material analysis
Then bring some. Don’t just say Hexbears suck because they’re not “curious” enough about the treat printers.
is straight up reactionary
And hating on leftists in favor of your unspecified “curiosity” position is what, exactly, by comparison?
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance
What does your “curiosity” propose that is actual resistance and not playing into their hands or even buying into the marketing hype?
I actually told my wife “watch this, I’m going to get a smug reply from UlyssesT,” so thank you for not disappointing.
You made a smug post. Smug in, smug out.
Kind of weird to make me that important to you in your personal life, too.
Yeah, sharing what’s happening on my phone with my wife is weird.
Yes it is if you’re invoking the specific usernames of people you’re seething about.
Normal behavior
I don’t tell people IRL about my arguments online because I know that shit’s boring to anyone except me
Good job, you recognize cause and effect
I want to hijack this opportunity to ask about pigeons. Are they soft and fluffy?
Yes
Give me ur pigeon
NOOOO PIGIN HOW COULD COMMUNISM BETRAY ME LIKE THIS?!?!
Hey, I’m just going to cuddle with it for a few hours. Hogging all the cuddles is bourgeois
I heard pigeons cooing in a park once and now I want one
deleted by creator
As it stands, the capitalists already have the old means of information warfare – this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do a predictive text – but with filters installed by communists, rather than the PR arm of a company? That won’t be nearly as convincing as just talking and organizing with people in real life.
Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it’s still just a program on a server. Computers are very, very fragile. I’m just not too worried about it.
It’s not a new means of production, it’s old as fuck. They just made a bigger one. The fuck is ChatGPT or AI art going to do for communism? Automating creativity and killing the creative part is only interesting as a bad thing from a left perspective. It’s dismissed because it deserves dismissal: there’s no new technology here, it’s a souped-up chatbot that’s been marketed as something else.
As far as machines being conscious goes, we are so far away from that it’s not even something to consider. They aren’t and can’t spontaneously gain free will. It’s inputs and outputs based on predetermined programming. Computers literally cannot do anything non-deterministic; there is no ghost in the machine, the machine is just really complex and you don’t understand it entirely. If we get to the point where a robot could be seen as sentient we have fucking Star Trek TNG. They did the discussion and solved that shit.
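To put that “inputs and outputs” point in concrete terms, here is a toy sketch of predictive text scaled all the way down: a bigram model where the same corpus, the same prompt, and the same seed always yield the same output. It is only an illustration of the determinism being described, not a claim about how any particular product is built; an LLM swaps the word-count table for an enormously bigger statistical model, which is roughly the “they just made a bigger one” point above.

```python
import random
from collections import defaultdict

# Tiny "predictive text" model: record which word follows which in a corpus,
# then repeatedly sample the next word. Same corpus + same prompt + same seed
# always produces the same output: inputs and outputs, nothing more.
corpus = "the machine is just a machine and the machine predicts the next word".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(prompt: str, length: int, seed: int) -> str:
    rng = random.Random(seed)       # fixed seed, so generation is fully deterministic
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:          # no observed continuation for this word
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8, seed=42))
print(generate("the", 8, seed=42))  # prints the exact same sentence again
```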
I think AI art could be great but chatGPT as a concept of something that “knows everything” is very moronic
AI art has the potential to let random schmucks make their own cartoons if they put in just a little bit of work. However, this will probably require a license fee or something, so you’re probably right.
Personally I would love to see well-made cartoons about Indonesian mythology and stuff like that, which will never ever be made in the west (or Indonesia until it becomes as rich as China at least) so AI art is the best chance at that
Okay, but the only reason AI art could help there is that, under capitalism, Indonesian mythology doesn’t have the marketability to get a budget and real artists. It doesn’t subvert the commodification of art.
Yeah, and as long as we’re living in that capitalistic hellworld, AI art existing allows those stories to be told instead of the same old euromedieval-hobbit-meadow thing that’s been the basis of every fantasy movie and game that has come out over the last 60 years.
Just cause a computer can make it doesn’t mean anyone will see it. That’s where the capitalism comes in.
A lot of Indonesian people, and other people (like me) who are interested in other cultures would see it. It would at the very least begin the process of allowing cultural diversity to even reach the rest of the world
As it stands now, poor people in poor countries don’t even have the funds/leisure time to start their own animations (or other similar hobbies). AI art solves that
The reason western art/videogames/cartoons are so popular is not because the culture is inherently more watchable, but because only westerners (and Japanese) ever had the capital to fund their own animation studios. People watch media because it’s well-made, or because it’s already popular and other people are talking about it. AI art can’t fix the latter, but it can fix the former.
Kinda, but like, the cool ML is AlphaFold/ESM/MPNN/finite-element optimizers for CAD/QCD/quantum chemistry (coming soon™). LLMs/diffusion models are ways of multiplying content, fucking up email jobs and static media creators, and presumably dynamic ones as well in the future.
I doubt people are aware that right now biologists are close to making designer proteins on, like, a home PC, and that soon you could wage designer biological warfare for 500k and a small lab. Or, conversely, make drugs for any protein-function-related disease.
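For context on the “ESM” mentioned above: it is Meta’s family of open protein language models. A minimal sketch of poking at one, assuming PyTorch and the fair-esm package are installed; the sequence is a placeholder, not anything from this thread or the papers linked below.

```python
import torch
import esm

# Load ESM-2 (650M parameters); weights download on first run
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disable dropout for reproducible results

# Placeholder protein sequence, chosen only for illustration
data = [("example_protein", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
batch_labels, batch_strs, batch_tokens = batch_converter(data)

with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)

# Per-residue embeddings (layer 33) and predicted residue-residue contacts
token_representations = results["representations"][33]
contacts = results["contacts"]
print(token_representations.shape, contacts.shape)
```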
Please elaborate in as much detail as possible, ideally with numerous hyperlinks. (I’m less surprised by this than you might think, but would greatly appreciate being clued into what’s going on in this arena right now, as I’ve been largely cut off from information about it for years now.)
https://www.science.org/doi/10.1126/science.add2187
https://www.nature.com/articles/s41586-023-06415-8
https://www.sciencedirect.com/science/article/abs/pii/S1476927122000445
Basically you can (right now) fix a piece of one protein and hallucinate/design the rest of the protein backbone backwards from it, using something like a 4090, and that protein will, with high probability, fold as predicted. As an example, fig. 3 in the second link (the Nature paper) shows you can design origami-like structures, which is not useful but very impressive, considering how long protein folding was dogshit despite the compute power thrown at it.
Taking AlphaFold structures, you can make proteins that bind to other proteins, even without knowing anything else, and have an appreciable expectation (>1%) that a given design will work. Which is how you could make designer viruses, if you were so inclined.
Drugs are, for now, not solved via neural networks, but people are working towards it, and I don’t see a reason why designing structures that bind to known protein structures won’t work; if anything it seems easier.
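To make the “fold as predicted” check above concrete, here is a minimal sketch of the verification end of that loop using ESMFold through the open fair-esm package (assuming fair-esm and its folding dependencies are installed, plus a reasonably large GPU). The sequence is a placeholder, not a real design from the linked papers, and this is not the design step itself; tools like the hallucination/backbone-design methods described above produce the candidate, and a structure predictor is then used to sanity-check it.

```python
import torch
import esm

# Load ESMFold (weights download on first use; expects a CUDA GPU)
model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()

# Placeholder sequence standing in for a designed protein's sequence
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # predicted 3D structure as PDB text

with open("design_check.pdb", "w") as f:
    f.write(pdb_string)  # open in a viewer and compare against the intended design
```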
Awesome, thanks!
So, after taking some time to digest this information, I have a couple of follow-up questions, if you don’t mind answering them.
First of all, where do things stand with drugs? Is it just not something academics are working on, but presumably being done (or already finished) within proprietary institutions (e.g. Big Pharma)? Can you point me to some recent papers on the subject?
Secondly, what about enzymes? Binding proteins are interesting, certainly, but it’s enzymes that really excite me the most. Is anyone working on custom enzyme design, and if so, can you link some papers on that?

In looking more closely at that Nature paper, I see that enzymes are something of a work-in-progress as yet. If you have anything else on the subject, I’d welcome that, but if there’s nothing else of note there, that’s fine.

Thank you for mentioning this to begin with, by the way, I really appreciate the info you’ve already shared!
With drugs, the third paper references it; I think in the next 6 months people expect a neural-net check of compounds’ binding affinities (https://www.biorxiv.org/content/10.1101/2023.11.01.565201v1.abstract), but here neurally-based quantum chemistry is lagging behind: they still can’t do large molecules (>30 atoms) reliably. Basically, right now the big bad boy is the David Baker lab; they do all this exciting stuff, and you can periodically check Google Scholar for new developments like I do.
With enzymes (as I understand it), the problem is making them work: they can make them bind, but they can’t make them move to do stuff.
AlphaFold can’t handle conformations for now, and it’s a harder problem, so maybe in 2 years they can develop something reliable; for now it’s mainly shenanigans of biasing folding programs into new conformations.
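On the small-molecule side mentioned above (binding affinities, the rough >30-atom ceiling), here is a minimal sketch using RDKit, a standard open cheminformatics toolkit, of the kind of basic molecule handling that sits upstream of any binding-affinity predictor. Aspirin is just a stand-in ligand; nothing here is taken from the linked preprint.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Aspirin as a placeholder ligand, parsed from its SMILES string
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

print("heavy atoms:", mol.GetNumHeavyAtoms())  # 13, well under the ~30-atom limit discussed above
print("mol weight:", round(Descriptors.MolWt(mol), 1))
print("logP:", round(Descriptors.MolLogP(mol), 2))
print("H-bond donors/acceptors:",
      Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol))
```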
This is great information, thank you so much!
It’s a glorified speak-n-spell, with not one benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept within the confines of people who thoroughly understand what it is and what it can and cannot do? Yes. We shouldn’t be using this for the same reason you don’t use a gag dictionary for a research project. Grow up.
It has potential for making propaganda. Automated astroturfing more sophisticated than what we currently see being done on Reddit.
Astroturfing only works when your views tie into the mainstream narrative. Besides, there’s no competing with the people who have the best computers, the most coders, and backdoors and access to every platform. The smarter move is to back up the workers who are having their jobs threatened over this.
deleted by creator