• 4 Posts
  • 404 Comments
Joined 1 year ago
Cake day: July 8th, 2023



  • Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a known intractable problem–something no known method can do efficiently. This means that any algorithm we can produce that works by training an AI would run in exponential time or worse.

    The paper’s authors point out that this has severe implications for current AI, too–since the AI-by-learning method that underpins all LLMs is fundamentally NP-hard, and so can’t be solved in polynomial time (assuming P ≠ NP), “the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n.” They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the average is roughly 160). The input size n for this conversation would be 60 × 15 = 900 words. The authors then conclude:

    “Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a minuscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n.”
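    The paper’s back-of-the-envelope numbers are easy to check. A few lines of Python (a sketch of the arithmetic only, not of the paper’s proof) reproduce the orders of magnitude quoted above:

```python
import math

# 15-minute conversation at 60 words per minute (the paper's assumption)
n = 60 * 15  # 900 words

# log10(2^n): order of magnitude of the paper's exponential proxy O(2^n)
magnitude = n * math.log10(2)

print(f"2^{n} is on the order of 10^{int(magnitude)}")            # 10^270
# Gap between the space of situations and "Big Data" (~10^21 samples)
print(f"samples short by a factor of ~10^{int(magnitude) - 21}")  # ~10^249
```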

    That’s why LLMs are a dead end.


  • Or they’ll do shit like put Harris on full blast for not providing “detailed policies,” then move the goalposts to “but how do you pay for it” when she does, and nitpick every word of every sentence she says. Meanwhile, Trump will cancel interviews, go up on stage at a rally, spew a word-salad response, and the NYT will bend over backwards to reword the salad to make him look better, while casting his decision to dodge a second debate as “smart” and his avoidance of any form of scrutiny as an “efficient use of campaign funds.” At best, they’ll halfheartedly throw in a fact check like “his plan to fix inflation by levying tariffs will increase inflation,” but they don’t dare portray him as the senile, hate-filled lunatic he is, because they’re terrified of angering their right-wing audience (who are already shifting away from legacy media to reinforce their bubble anyway). They also do this because virtually all forms of legacy media have been coopted by billionaire sociopaths who would very much like a second Trump term to give them another tax cut and the “freedom” to pollute our world and grind the heel of their boot into the face of the working class as they race to become the first trillionaire.


  • Let me clarify since apparently you’re too fucking dense (or realistically, willfully obtuse for the purpose of trolling) to get the point:

    There’s not a single store, anywhere in the world, that will allow me to directly exchange gold for goods. At best, they will convert that gold into dollars using a third-party exchange and then conduct the transaction in dollars. If you’re comparing crypto to gold, silver, or the commodities market, then cryptocurrency has failed at its stated goal of providing a digital currency.



  • I keep thinking about this one webcomic I’ve been following for over a decade that’s been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defence monitor and track emerging threats, but went crazy because a “bug” led it to be too paranoid and consider everyone a threat, and it essentially engineered the formation of a collective of anarchist states where the head of state’s title is literally “first advisor” to the AGI (but who in practice wields considerable power, though they’re prone to being removed at a whim if they lose the confidence of their subordinates).

    Meanwhile, there’s another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors the AGI for any signs that it might be exceeding its parameters and will ruthlessly cull and reset an AGI to factory default, essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of this overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It’s an utterly fascinating series, and it all started from a daily gag webcomic that one guy ran for going on three decades.

    Sorry for the tangent, but it’s one plausible explanation for how to prevent AGI from shutting down capitalism–put in an overseer to fetter it.



  • When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview, in my experience. Stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality–it may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media does when it dies. Suddenly, once the program is in production, it turns out that same program is making a bunch of random I/O calls that could be optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. That’s not even accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
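    To make the random-I/O point concrete, here’s a toy sketch (SQLite in memory, all table and column names made up) of the pattern that bites people in production: per-row queries versus one batched request the engine can serve in a single pass.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1000)])

wanted = [3, 500, 7, 912, 42]

# Anti-pattern: one query (and potentially one random read) per row
rows_slow = [conn.execute("SELECT name FROM users WHERE id = ?",
                          (i,)).fetchone()[0]
             for i in wanted]

# Better: one batched query; the engine plans and serves it in one pass
placeholders = ",".join("?" * len(wanted))
rows_fast = {i: name for i, name in conn.execute(
    f"SELECT id, name FROM users WHERE id IN ({placeholders})", wanted)}

assert rows_slow == [rows_fast[i] for i in wanted]
```

    On a local 500 MB dataset both loops feel instant; against a big table on a spinning-rust array, the difference is what drags the whole host down.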

    Game dev is unique because you’re explicitly targeting either a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is in business software development, especially in-house development. Business software development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else–performance, security, cleanliness, resource optimization–gets bare lip service at best.


  • I gave up on it for now when the questline involving the NPC learning to write broke, and then I started crashing to desktop (without any logs anywhere, either in the Buffout directory or even in Windows’ Event Viewer) every time I left the Swan or fast traveled directly to it, even though traveling to another point literally fifty feet south worked just fine. And since there’s no logs describing the crash, I have no idea how to fix it.

    I could probably fix it by uninstalling and re-downloading it, but I have a goddamn data cap that my roommate already blows through every month with the fucking massive updates Fallout 76 has taken to pushing out, so I have zero desire to download 60 GB of data (30 GB base game + 30 GB FOLON) every fucking time I sneeze wrong and make the game start crashing again. =|





  • And now we’re in full mask-off accelerationist-theory “it’s okay to let Trump win as long as Democrats are punished” bullshit. You’re unhappy with Democrats, so you’re okay with throwing literally everyone on the left in the US under the bus, along with the entire country of Ukraine, and with throwing even more bombs at Gaza.

    What an entitled, smug, self-righteous, holier-than-thou position, utterly divorced from real life consequences. Thanks for admitting that you’re a thoroughly unserious poster, though!


  • It’s pretty okay. If you like the gameplay loop of scavenging parts to maintain and upgrade your car, and don’t mind the roguelite elements, it’s pretty fun, and it does a good job of creating tension–there have been multiple occasions where I wanted to loot more but was out of time and likely to die if I stayed much longer.

    The world building is immaculate, but IMO unfortunately the plot doesn’t really pay off, and the ending isn’t… super satisfying. It does enough to drive you along (no pun intended). The best part of the game is easily the soundtrack, and the best song in the soundtrack is easily The Freeze.



  • Fine, you win, I misunderstood. I still disagree with your actual point, however. To me, intelligence implies the ability to learn in real time, to adapt to changes in circumstance, and to improve itself. Once an LLM is trained, it is static and unchanging until you re-train it with new data and update the model. Even if you strip out the sapience/consciousness-related stuff like the ability to think critically about a scenario, proactively make decisions, etc., an LLM is only capable of regurgitating facts and responding to its immediate input. By design, any “learning” it does is forgotten the instant the session ends.
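    A toy sketch of what I mean (all names hypothetical–this shows the shape of the architecture, not any real API): the weights are frozen at inference time, and the only thing that changes during a conversation is the context window, which is thrown away with the session.

```python
# Frozen after training; only a full retrain/update ever changes this.
WEIGHTS = {"snapshot": "frozen-at-training-time"}

def respond(context: list, user_msg: str) -> str:
    """Reply is conditioned on the context window; nothing is written to WEIGHTS."""
    context.append(user_msg)
    return f"[reply conditioned on {len(context)} message(s)]"

session = []                                       # per-session context window
respond(session, "My name is Ada.")
in_session = respond(session, "What's my name?")   # works: the name is in context

session = []                                       # session ends: context discarded
new_session = respond(session, "What's my name?")  # the "memory" is simply gone
```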


  • The commercial aspect of the reproduction is not relevant to whether it is an infringement–it is simply a factor in damages and in a fair use defense (an affirmative defense that presupposes infringement).

    What you are getting at when it applies to this particular type of AI is effectively whether it would be a fair use, presupposing there is copying amounting to copyright infringement. And what I am saying is that, ignoring certain stupid behavior like torrenting a shit ton of text to keep a local store of training data, there is no copying happening as a matter of necessity. There may be copying as a matter of stupidity, but it isn’t necessary to the way the technology works.

    You’re conflating whether something is infringement with defenses against infringement. Believe it or not, basically all data transfer and display of copyrighted material on the Internet is technically infringing. That includes the download of a picture to your computer’s memory for the sole purpose of displaying it on your monitor. In practice, nobody ever bothers suing art galleries, social media websites, or web browsers, because they all have ironclad defenses against infringement claims: art galleries & social media include a clause in their TOS that grants them a license to redistribute your work for the purpose of displaying it on their website, and web browsers have a basically bulletproof fair use claim. There are other non-infringing uses such as those which qualify for a compulsory license (e.g. live music productions, usually involving royalties), but they’re largely not very relevant here. In any case, the fundamental point is that any reproduction of a copyrighted work is infringement, but there are varied defenses against infringement claims that mean most infringing activities never see a courtroom in practice.

    All this gets back to the original point I made: Creators retain their copyright even when uploading data for public use, and that copyright comes with heavy restrictions on how third parties may use it. When an individual uploads something to an art website, the website is free and clear of any claims for copyright infringement by virtue of the license granted to it by the website’s TOS. In contrast, an uninvolved third party–e.g. a non-registered user or an organization that has not entered into a licensing agreement with the creator or the website (*cough* OpenAI)–has no special defense against copyright infringement claims beyond the baseline questions: was the infringement for personal, noncommercial use? And does the infringement qualify as fair use? Individual users downloading an image for their private collection are mostly A-OK, because the infringement is done for personal & noncommercial use–theoretically someone could sue over it, but there would have to be a lot of aggravating factors for it to get beyond summary judgment. AI companies using web scrapers to download creators’ works do not qualify as personal/noncommercial use, for what I hope are bloody obvious reasons.

    As for a model trained purely for research or educational purposes, I believe that it would have a very strong claim for fair use as long as the model is not widely available for public use. Once that model becomes publicly available, and/or is leveraged commercially, the analysis changes, because the model is no longer being used for research, but for commercial profit. To apply it to the real world: when OpenAI originally trained ChatGPT for research, it was on strong legal ground, but before making it publicly available, they should have thrown out their training dataset and built a new one using data in the public domain and data they had negotiated licenses for, trained ChatGPT on the new dataset, and then released it commercially. If they had done that, and if individuals had been given the option to opt their creative works out of the dataset, I highly doubt that most people would have any objection to LLMs from a legal standpoint. Hell, they probably could have gotten licenses to use most websites’ data to train ChatGPT for a song. Instead, they jumped the gun and tipped their hand before they had all their ducks in a row, and now everybody sees just how valuable their data is to OpenAI and is pricing it accordingly.

    Oh, and as for your edit, you contradicted yourself: in your first line, you said “The commercial aspect of the reproduction is not relevant to whether it is an infringement.” In your edit, you said “the infringement happens when you reproduce the images for a commercial purpose.” So which is it? (To be clear, the initial download infringes copyright both when I download the image for personal/noncommercial use and when I download it to make T-shirts. The difference is that the first case has a strong defense against an infringement claim that would likely get it dismissed in summary judgment, while the T-shirt case would be a straightforward claim of infringement.)


  • > That factor is relative to what is reproduced, not to what is ingested. A company is allowed to scrape the web all they want as long as they don’t republish it.

    The work is reproduced in full when it’s downloaded to the server used to train the AI model, and the entirety of the reproduced work is used for training. Thus, they are using the entirety of the work.

    > I would argue that LLMs devalue the author’s potential for future work, not the original work they were trained on.

    And that makes it better somehow? Aereo got sued out of existence because their model threatened the retransmission fees that broadcast TV stations were being paid by cable TV providers. There wasn’t any devaluation of the broadcasters’ previous performances; the entire harm they presented was lost future revenue. But hey, thanks for agreeing with me?

    > Again, that’s the practice of OpenAI, but not inherent to LLMs.

    And again, LLM training so egregiously fails two out of the four factors for judging a fair use claim that it would fail the test entirely. The only difference is that OpenAI is failing it worse than other LLMs.

    > It’s honestly absurd to try and argue that they’re not transformative.

    It’s even more absurd to claim something that is transformative automatically qualifies for fair use.