Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
I'm really, really not happy about this. There is one person I've been trying to keep out for the last few years and now they can come crawl all my fucking posts?? And report my account!?
Edit: apparently being protected should offer me some protection still.
saw this via a friend earlier, forgot to link. xcancel
socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker
the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is pondering what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean, not just more granular (a la apple photo/phonebook acl) than this current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable
even if just as a thought experiment (because of course there's lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead that this may require), it might point to some useful avenues for handling this extremely overt bullshit, and for informing/shaping impending norms
(e: apologies for semi stream of thought, it's late and i'm tired)
25085 N + Oct 15 GitHub ( 19K) Your free GitHub Copilot access has expired
tinyviolin.bmp
fig. 1: how awful.systems works
it just clicked for me but idk if it makes sense: openai nonprofit status could be used later (inevitably in court) to make research clause of fair use work. they had it when training their models and that might have been a factor why they retained it, on top of trying to attract actual skilled people and not just hypemen and money
There's no way this works, right? It's like a 5y.o.'s idea of a gotcha.
This would be like starting a tax-exempt charity to gather up a large amount in donations and then switching to a for-profit before spending it on any charitable work and running away with the money.
i'm not a lawyer and i've typed it up after 4h of sleep, trying to make sense of what tf they were thinking. they're not bagging up money, they're stealing all the data they can, so it's less direct and it'd depend on how that data (unstructured, public) will be valued. then, what a coincidence, their proprietary thing made something commercially useful, or so they were thinking. sbf went to court with less
There's no way this works, right?
the US legal system has this remarkable "little" failure mode where it is easily repurposed to be not an engine of justice, but instead an engine for enforcing whatever story you can convince someone of
(the extremely weird interaction(s) of "everything allowed except what is denied", case precedent, and the abovementioned interaction mode result in some really fucking bad outcomes)
Today I was looking at buying some stickers to decorate a laptop and such, so I was browsing Redbubble. Looking here and there I found some nice designs and then stumbled upon a really impressive artist portfolio there. Thousands of designs, woah, I thought, it must have been so much work to put that together!
Then it dawned on me. For a while I had completely forgotten that we live in the age of AI slop… blissful ignorance! But then I noticed the common elements in many of the designs… noticed how everything is surrounded by little dots or stars or other design trinkets. Such a typical AI slop thing, because somehow these "AI" generators can't leave any whitespace; they must fill every square millimeter with something. Of course I don't know for sure, and maybe I'm doing an actual artist an injustice with my assumption, but this sure looked like Gen-AI stuff…
Anyway, I scrapped my order for now while I reconsider how to approach this. My brain still associates sites like Redbubble or Etsy with "art things made by actual humans", but I guess that certainty is outdated now.
This sucks so much. I don't want to pay for AI slop based on stolen human-created art - I want to pay the actual artists. But now I can never know… How can trust be restored?
Sadly I think the only way to trust you are not getting a lot of AI art is by starting to follow a lot of artists you like on social media. Just going to a site which sells things seems a bit risky atm.
I've taken to calling the constant background sprinkles and unnecessary fine detail in gen-AI images "greebles", after the modelling and CGI term. Not sure if they have a better or more commonplace name.
It's funny - meaningless bullshit diagrams on whiteboards in the backgrounds of photos were a sure sign of PR shots or lazy set dressing, and now they're everywhere, signifying pretty much the same thing.
this demented take on using GenAI to create documentation for open source projects
https://lobste.rs/s/rmbos5/large_language_models_reduce_public#c_j8boat
Good sneer from "Internet_Janitor" a few comments up the page:
LLMs inherently shit where they eat.
The top comment's also pretty good, especially the final paragraph:
I guess these companies decided that strip-mining the commons was an acceptable deal because they'd soon be generating their own facts via AGI, but that hasn't come to pass yet. Instead they've pissed off many of the people they were relying on to continue feeding facts and creativity into the maws of their GPUs, as well as possibly fatally crippling the concept of fair use if future court cases go against them.
oh hey that would be my comment
It was a pretty good comment, and pointed out one of the possible risks this AI bubble can unleash.
I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will likely take a pounding. As you pointed out, the exploitation of fair use's research exception makes it especially vulnerable to repeal.
On a different note, I suspect FOSS licenses (Creative Commons, GPL, etcetera) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought - after two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.
Yeah, I'm no longer worried that LLMs will take my job (nor ofc that AGI will kill us all). Instead the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available. It's a bit like draining the Aral Sea: a vibrant ecosystem will be permanently destroyed in the short-sighted pursuit of "development".
the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available.
I personally anticipate this will be the lasting legacy of AI as a whole - everything that you mentioned was caused in the alleged pursuit of AGI/Superintelligence™, and gen-AI has been more-or-less the "face" of AI throughout this whole bubble.
I've also got an inkling (which I turned into a lengthy post) that the AI bubble will destroy artificial intelligence as a concept - a lasting legacy of "crud and untruth", as you put it, could easily birth a widespread view of AI as inherently incapable of distinguishing truth from lies.
In other news, a lengthy report about Richard Stallman liking kids just dropped.
Hacker News has a thread on it. It's a dumpster fire, as expected.
Jesus GNU Christ. Live your life so that no one ever produces a systematic classification of your opinions that looks like this.
Ted_Danson_choosing_between_clam_chowder_fountain_and_bees_with_teeth.webm
I don't think anything in the report is new, is it? Isn't this the exact weirdness that got him kicked off the board in the first place? I was shocked when he was quietly added back to the board; I really thought the allegations would stick the first time.
Nice to have it all in one place though.
There's a little bit of new stuff in there, but it's all just corroborating the old or relatively minor. Still, it's a lot in one place.
I had heard some vague stuff about this, but had no idea it was this bad. Also, I didn't know how much of a fool RMS was: "RMS did not believe in providing raises - prior cost of living adjustments were a battle and not annual. RMS believed that if a precedent was created for increasing wages, the logical conclusion would be that employees would be paid infinity dollars and the FSF would go bankrupt." (It gets worse btw.)
Little of this was news to me, but damn, laid out systematically like that, it's even more damning than I expected. And the stuff that was new to me certainly didn't help.
Very serious people at HN at it again:
The only argument I find here against it is the question of whether someone's personal opinions should be a reason to be removed from a leadership position.
Yes, of course they should be! Opinions are essential to the job of a leader. If the opinions you express as a leader include things like "sexual harassment is not a real crime" or "we shouldn't give our employees raises because otherwise they'll soon demand infinite pay" or "there's no problem in adults having sex with 14 year olds, and me saying that isn't going to damage the reputation of the organization I lead", you're a terrible leader and an embarrassment of a spokesman.
Edit: The link submitted by the editors is [flagged] [dead]. Of course.
The only argument I find here against it is the question of whether someone's personal opinions should be a reason to be removed from a leadership position.
What do these people think leadership is?
No, obviously opinions like
- "if my MIT AI Lab mentor had sex with an underage sex worker on Epstein's teen rape island, that was only because he thought she consented",
- "stealing a kiss from a woman is fine and not a sexual assault; maybe perhaps at most it's supposedly sexual harassment, which is not real and is actually fine",
- "I don't believe in bereavement leave. What if all your close friends and family die one after another? It's conceivable you would be gone from the office for days, or weeks, if not months.¹ What if you lie about who is dying?",
- "Overtly sexualizing 'parody' ceremonies for a semi-fictitious Church of Emacs centering around unprepared girls and women in my audience are fine, and when people participate in them there is certainly no peer pressure involved, not that I care if there is",
- "It's fine to throw a tantrum about Emacs supporting another compiler infrastructure Not Invented Here. LLVM/Clang is supported by Apple and has a permissive license instead of the GPL, so it's basically proprietary, right?",
- "You may have heard or read critical statements about me; <a href=https://website.made.by.my.sychophants.example.com>please make up your own mind.</a>",
are in the same category as "I think pineapple on pizza is delicious/disgusting" when it comes to evaluating someone's aptitude as a leader.
I advocate for Free Software despite RMS. I recognize the value of his good contributions, and that I might not even have the concept of Free Software and its value without him. I don't want to throw the baby out with the bathwater, and the editors of the report make it clear that neither do they. I think Stallman is an embarrassment and a liability for the Free Software movement. I respect his moral integrity on software freedom and some other political causes (including his clumsy yet justified condemnations of police brutality, and his boycott of the Coca-Cola company due to their use of fascist death squads to suppress Colombian trade unions), but his awful takes on issues of basic respect and empathy toward women, suspiciously fervent willingness to defend sexual relations between teenage minors and adults, and a number of other gaffes (both ones listed in the report and some that are less morally detestable, but still embarrassing) are still bad enough that I'd be willing to elect an inanimate carbon rod as the leader of the movement before him.
¹: It's conceivable that Richard Matthew Stallman has a secret humiliation fetish he indulges in by installing Oracle products on his secret Windows 11 computer while drinking Coca-Cola. I do not wish to imply that Richard Matthew Stallman has a secret humiliation fetish he indulges in by installing Oracle products on his secret Windows 11 computer while drinking Coca-Cola, but I will simply point out it's conceivable that Richard Matthew Stallman has such a secret humiliation fetish involving the aforementioned details, and that I have conceived such a scenario simply to prove it is conceivable, that (etc.).
Especially leadership of a political organization that's basically just there to turn his opinions into code and publish his essays.
Something to which they, and people like them, are entitled
the lobste.rs thread is a trash fire too.
of note is that the Stallman defenders from about 3 years back (when he waded in unprompted on a mailing list meant for undergrads at MIT and was pretty damn sure that Marvin Minsky never had sex with one of Epstein's victims, and if he did, it would have been because he was sure she wasn't underage) have registered https://stallman-report.com which redirects to their lengthy apologia. Could be worth taking into account if you want to spread the original around
Ignorance is a choice. That thread is full of bad choices.
Top level comment at time of posting:
"This might not look that bad, but consider the post-USSR…"
???
No need for these soviet level mental gymnastics. You can just say he needs to be removed permanently.
As more and more browsers are enshittifying, this is a small reminder that Brave is not a great alternative.
I bear news from the other place!
https://www.reddit.com/r/australia/comments/1g3zt5b/hsc_english_exam_using_ai_images/
Post content reproduced here:
autoplag image of some electronics on a table
hello, as a year 12 student who just did the first english exam, i was genuinely baffled seeing one of the stimulus texts u have to analyse is an AI IMAGE. my friend found the image of it online, but that's what it looked like
for a subject which tells u to "analyse the deeper meaning", "analyse the composer's intent", "appreciate aesthetic and intellectual value", having an AI image in which you physically can't analyse anything deeper than what it suggests is just extremely ironic. idk, [as an artist who DOESNT use AI]* i might have a different take on this since i'm an artist, what r ur thoughts?
*NB: original post contains the text "as an artist using AI images", but this was corrected in a later comment:
also i didn't read over this after typing it out but, meant to say, "as an artist who DOESNT use AI"
In a twisted way, this makes sense as an exercise for English class. Why would someone go to an autoplag image generator, type in a prompt (perhaps something like "laptop and smartphones on a table at a lakefront"), and save this image? It's a question I can't easily answer myself. It's hard to imagine the intention behind wanting to synthesize this particular picture, but it's probably something we'll be asking often in the near future.
I can even understand the shrimp Jesus slop or the soldiers-with-huge-bibles stuff to an extent. I can understand what the intended emotional appeal is and at least feel something like bewilderment or amusement at their surreality. This one would be just banal even if it were a real photo, so why make this? The AI didn't have intent or imbue meaning in the image, but surely someone did.
[https://x.com/shinboson/status/1846000415793463684?s=46](Roko gets dunked on.)
He had some remarkable tweets a few weeks back about how he was the equal of billionaires or some shit. I wish I had shared it.
Is he some kind of computer fondler irl?
he's a mathematician, is or was a lecturer somewhere i think
If he's so smart, why did he put the car in the asteroid belt and not on a road?
That launch happened Feb 2018. By that time, I was already solidified as a Musk sceptic and didn't pay attention to the hubbub. Thinking back on it:
- Why was this a thing?
- per wikipedia:
Musk explained he wanted to inspire the public about the "possibility of something new happening in space" as part of his larger vision for spreading humanity to other planets.
What I like about the phrasing "possibility of something new" is that nothing new really happened with that launch. We've already sent all kinds of junk into space in configurations varying in impressiveness.
- Naming the mannequin Starman falls apart since the eponymous starman is an extraterrestrial. Just goes to show that Musk is not a Real Nerd™ and just makes surface-level references to look cool.
I was also an Elon skeptic back then, but I'll admit I did get a kick out of the "don't panic" dashboard.
But golly does he read H2G2 completely wrong (transcript):
I think and it highlighted an important point which is that a lot of times the question is harder than the answer. And if you can properly phrase the question, then the answer is the easy part. So, to the degree that we can better understand the universe, then we can better know what questions to ask. Then whatever the question is that most approximates: what's the meaning of life? That's the question we can ultimately get closer to understanding. And so I thought to the degree that we can expand the scope and scale of consciousness and knowledge, then that would be a good thing.
It's backwards! It misses the joke! It took thousands of years and they got a nonsensical answer before any question! It took a thousand more and they got a nonsensical, incompatible, question! It has been theorized that should someone understand the universe, it would be replaced by something more complicated! It has also been theorized this has already happened! Also, regarding scale of knowledge: Trin Tragula definitely showed that the One thing you can't afford to have in this universe is a sense of perspective!
Surely his reading comprehension isn't actually this bad, and he only got a bad meme-cliffnotes version of the radio series/books/movies!?!
Surely his reading comprehension isn't actually this bad, and he only got a bad meme-cliffnotes version of the radio series/books/movies!?!
Oh boy, do I have some stories for you about his takeaways from other media he consumed. For example, and a worse take.
(There was also a take which I remember where he mentioned that in the Culture humans are basically pets of the AIs (which is mentioned in the books, yes, but only by the people who are anti/sceptical of the Culture; the Culture itself makes it pretty clear they are not pets. (In the last book they even seem to fulfill some more vital role in keeping the AIs sane, and people have full autonomy in a way that pets never do.)) Couldn't find this take sadly, as everything is now about the Haitian pets thing (erugh, racist shits), and also this "humans are pets" thing is a not-uncommon misreading among people reading the Culture novels.)
fucking hell, how are Musk's Banks takes even worse than I remember? that was the point where I realized I hated the fucker, so you'd think there'd be nothing left to feel rage over, but no, Musk corpsefucking Iain's memory once he was too dead to tell Musk to go fuck himself is absolutely doing it
"utopian anarchist" I assume means "it would be nice in theory".
Nah, his ego is big enough that he's probably some version of the an-cap, with a personal politics based in "I could give everyone this utopian world if the government and the 'woke mob' and the labor organizers would just get out of my way."
a36A!!!
aaaaaaaaaaaaaaaaaaaaaaaaaaAAAAAAAAAAA!!!
Why was this a thing?
Publicity stunt for both SpaceX and Tesla, as well as Musk himself, and a successful one at that.
Hope Musk's existence and influence today was worth it, nerds from 2018!
Think this might be the first tweet of Roko's I somewhat agree with. At least Roko did something somewhat intellectual in creating Pascal's wager for nerds. Musk's intellectual accomplishments are worse. He thinks the derivative function is some sort of glorious masterpiece of math, and he doesn't seem to understand chess. I think the only things he really created were the handle that sinks into the car and the look of the cybertruck.
Funny to see the Rationalists start to turn on their glorious savior from AGI doom. (Which has been happening for a while now it seems; some even argue he never actually interacted with anybody from the Rationality community (btw, before he blocked people being able to see all the people you follow on twitter, he followed slatestarcodex).)
Okay, but "Elon is not smarter than me" is a universally true statement in the exact same way as "dumb as a rock" is a universally applicable idiom.
Indeed, it isn't that Roko is smart; it's that the bar is so low.
Quick sidenote: you cocked up the formatting on the hyperlink - you're supposed to put [text in square brackets and](the link in circle brackets) like this
Thanks!
I suspect it's the frontend on awful that fucked this up; viewing their post in plain/preview shows the correct formatting
placing my bet on the trailing
.
it may be helpful to know that, at least on the platforms I have tried, you can highlight text and paste a link, and the awful.systems will handle the bracketing for you.
lol fandom could have been even worse
data moat
iirc they had tools to import data from other wikis into theirs, but not tools to export.
they have the MediaWiki database dumps, which are XML so you can do anything with them!! *
* the actual page text is a single field
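For anyone curious what "doing anything with them" looks like in practice, here is a minimal, hypothetical sketch of reading pages out of MediaWiki's XML export format with Python's standard library. The two-page sample below is invented for illustration (real dumps also declare an XML namespace on the root element, which prefixes every tag when parsed), and it demonstrates the footnote above: the page body comes back as one opaque wikitext string.

```python
import xml.etree.ElementTree as ET

# Invented sample standing in for a real MediaWiki dump. Real exports
# declare xmlns="http://www.mediawiki.org/xml/export-0.10/" (version
# varies), which you would have to handle when looking up tags.
SAMPLE_DUMP = """\
<mediawiki>
  <page>
    <title>Lady Vegeta</title>
    <revision>
      <text>Rare card in set 27 of the [[gacha game]].</text>
    </revision>
  </page>
</mediawiki>
"""

def extract_pages(xml_text):
    """Return (title, wikitext) pairs from a dump.

    The wikitext comes back as a single unparsed string -- the
    "single field" the footnote above complains about.
    """
    root = ET.fromstring(xml_text)
    return [
        (page.findtext("title"), page.findtext("revision/text"))
        for page in root.iter("page")
    ]

print(extract_pages(SAMPLE_DUMP))
```

So the dump gets you titles and raw markup easily enough; actually making sense of the page content still means parsing wikitext yourself.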
That's just the kind of innovation we need to get over this primitive and outdated impulse to cooperate with one another.
imagine how they could have monetized it
Surely Wikia could have catapulted to the upper echelons of the Fortune 500 if they had just moved faster to gatekeep the facts about gender-swapped Lady Vegeta being a rare card in set 27 of the Dragonball gacha game
ok my first thought was to make a joke about castle warfare, despite my knowledge set being ephemera from a childhood appreciating tech trees in video games. So I did some research:
- The etymology of "moat" is that it comes from the word "motte". I will not elaborate.
- Moats were effective against early forms of siege warfare, like battering rams, siege towers, and mining out the foundations of a castleās defences, or anything that required approaching the castle directly
- Moats were made somewhat obsolete by siege artillery, which did not need to be in the direct vicinity of the castle
Err so yeah. Make your own jokes, ig.
Anyway, this has been MoatFacts™. Paging @skillissuer@discuss.tchncs.de for better commentary*
In this context, āmoatā is a cargo-cult invocation of Warren Buffett and Benjamin Graham. Just another square on the hackernews bingo
idk what exactly to put there; a moat is still an obstacle even in a modern context, but an assault on a castle with a moat using modern weaponry would be hilariously one-sided. you can suppress defenders with something, use a bridge layer to get inside the moat, then let combat engineers do their shenanigans to "open" the castle one way or another. or you can use helis to do the same, or you can just level it all with artillery or an airstrike, or maybe even loads of ATGMs
that said, it's not completely useless. moats, but dry, were used as part of fixed fortifications in ww1 quite successfully. the freshly invented electrified barbed wire fence and machine guns made them quite hard to pass, especially if you are, say, a peasant from tula oblast born in 1898 who has never seen a powerline before. i think the last proper moat use in large-scale warfare happened during the iran-iraq war, in the battle of the marshes, when iraqis flooded a previously dry area known as fish lake and put coils of barbed wire and high-voltage cables underwater. the defensive tactic used there was to shoot at assaulting iranians to make them abandon or fall out of their boats or amphibious vehicles; then, when they were in the water, the high-voltage lines were energized. iranians eventually crossed the marshes entirely using speedboats. maybe it's not that outdated, considering that the last recorded bayonet charge happened in 2004 (by brits in iraq). ymmv
I will internalise this for the next time data moats come up!
Me, a nuclear engineer, reading about "Google restarting six nuclear power plants"
lol, lmao even
Future headline: "Google quietly shuts down six nuclear power plants"
Zitron's given commentary on PC Gamer's publicly pilloried pro-autoplag piece:
He's also just dropped a thorough teardown of the tech press for their role in enabling Silicon Valley's worst excesses. I don't have a fitting Kendrick Lamar reference for this, but I do know a good companion piece: Devs and the Culture of Tech, which goes into the systemic flaws in tech culture which enable this shit.
As anyone who's been paying attention already knows, LLMs are merely mimics that provide the "illusion of understanding".