

The translations and distribution of .srt files seem like a clear derivative work. For hard subs, though, yes.
And since it seems like you’re not shying away from shows with some fucked up things kids probably shouldn’t have watched but are kinda formative anyways


This article is from two weeks ago; fix your bot.


From what I can see, this is something the Thunderbird team developed for their own internal tooling, and they’re now open sourcing it.


After reading through the GitHub docs, the most impressive thing is that they open sourced their Thunderbolt coding agent for Claude Code. There are quite a few skills available for implementation planning, dependency/build environment setup, coding, linting/cleanup, QA, and managing agent pull requests. Pretty good examples if you are looking to build Claude Code skills.


It sounds like a step beyond open-webui; it’s an enterprise-grade client-server model for access to agents, workflows, and centralized knowledge repositories for RAG.
In addition to a local chatbot for executive/admin use, I can see this being the backend for developers running Cursor or some other AI-enhanced IDE, with local knowledge stores holding proprietary documents and running against local large models.
I am also curious about time-sharing and prioritization of resources; I assume it would queue simultaneous requests. Presumably this would let you pool local compute more effectively, rather than giving each developer an A100 GPU that sits unused when they’re not working.
Edit: Somewhat surprisingly, this whole stack does not even include a local inference provider, so right now it does everything except local models, and requests are forwarded to cloud inference providers (Anthropic, OpenAI, etc.). But it does have the backend started for rate limiting and queuing, and true “fully offline/local” is on the roadmap, just not there yet.
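Just to make the queuing idea concrete (a minimal sketch, not how this project actually implements it; the worker count, function names, and timings are all invented), a semaphore in front of a shared local inference backend would make overflow requests wait instead of fail:

```python
import asyncio
import random

# Assumed setup: a couple of shared local GPU workers serving everyone,
# instead of one dedicated card per developer.
MAX_CONCURRENT_JOBS = 2

semaphore = asyncio.Semaphore(MAX_CONCURRENT_JOBS)

async def run_inference(user: str, prompt: str) -> str:
    """Stand-in for a call to a local inference server (llama.cpp, vLLM, etc.)."""
    await asyncio.sleep(random.uniform(0.5, 2.0))  # simulate generation time
    return f"[{user}] completion for: {prompt}"

async def handle_request(user: str, prompt: str) -> str:
    # Requests beyond MAX_CONCURRENT_JOBS simply wait here instead of failing,
    # which is the queuing behavior speculated about above.
    async with semaphore:
        return await run_inference(user, prompt)

async def main() -> None:
    requests = [
        ("alice", "summarize this design doc"),
        ("bob", "draft unit tests for the parser"),
        ("carol", "explain the RAG pipeline"),
    ]
    results = await asyncio.gather(*(handle_request(u, p) for u, p in requests))
    for line in results:
        print(line)

if __name__ == "__main__":
    asyncio.run(main())
```

A real deployment would obviously want priorities, per-user quotas, and token-level rate limiting on top of this, which sounds like what their rate limiting/queuing backend is aiming at.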
The more memory, the better. On a discrete GPU you want to focus on the VRAM, but the Mac platforms have the benefit of integrated memory, which is shared between the system and graphics processor, so they can hold much larger models than would fit in VRAM alone.
For comparison, an RTX 3080 with 12GB of VRAM gives a pretty decent token rate on models that fit in memory (typically ~110 tok/s on a 7-11B parameter model like Mistral 7B).
A 32GB Mac mini, on the other hand, could run a more advanced model like the 32B parameter DeepSeek R1 distill at about 11 tok/s, but when running the same 7B parameter model as the RTX 3080, it would still only generate 14-18 tok/s.
So you really need to balance capability/more advanced models with speed. If you’re okay submitting a prompt and walking away, the Mac mini is great value due to its integrated memory. But if you’re just getting started, you may find it frustratingly slow.
Finally, for comparison, most of the closed cloud models like Opus, ChatGPT, and MiniMax are closer to 700B parameters, an order of magnitude larger than what a layperson could run locally. These models are finally getting to the point where they’re ‘useful’ for complex tasks like unattended coding, but good luck getting 1TB of VRAM outside of a datacenter.
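For a rough feel of why memory is the limiting factor, the back-of-the-envelope math is just parameter count times bytes per weight, plus some overhead for the KV cache and runtime buffers (the 20% overhead below is an assumption for illustration, not a measured figure):

```python
# Rough memory estimate: parameters (in billions) x bytes per weight,
# plus an assumed ~20% overhead for KV cache, activations, and buffers.

def approx_memory_gb(params_billion: float, bytes_per_weight: float,
                     overhead: float = 0.20) -> float:
    return params_billion * bytes_per_weight * (1 + overhead)

models = [
    ("7B   @ 4-bit (Q4)",  7,   0.5),
    ("7B   @ FP16",        7,   2.0),
    ("32B  @ 4-bit (Q4)",  32,  0.5),
    ("700B @ 4-bit (Q4)",  700, 0.5),
]

for name, params, bpw in models:
    print(f"{name:>18}: ~{approx_memory_gb(params, bpw):6.1f} GB")
```

Which is roughly why a 4-bit 7B model fits comfortably on a 12GB card, a 4-bit 32B model needs something like the Mac mini’s 32GB of integrated memory, and the ~700B cloud-scale models stay in the datacenter.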


Did you have anyone in a hiring position review your resume? Resume writing is a skill in itself, and resumes often need to be tailored to the organization where you are applying.
There are a number of other factors, depending on who you talked to; do they have positions available? Is there a hiring freeze? Does the person you are talking with know the job requirements?
If you really know the office, there is almost certainly someone local with hiring authority whose job it is to interface with headquarters. You will still need to apply through the HQ Human Resources system; they may have some ability to pull your resume from the applicant pool, but generally these are competitive positions and they are not allowed to hire directly.
If they have contract opportunities, though, you should figure out who the vendor is and apply through the company’s website instead.


deleted by creator


A motorcycle. You can’t outrun the radio.


According to researcher justhaifei1, the vulnerability was responsibly disclosed to Adobe Security
No, this is not responsible disclosure; he notified Adobe at the same time he published. He tries to justify that by saying he is seeing this exploited in the wild, but “responsible” does not mean what he thinks it means.


So far, has a single legal challenge against scraping ever been successful?


Another way to look at it is that CachyOS gets its performance gains six to eighteen months ahead of “stable” Linux distros. But that head start does mean things are more likely to break with rolling updates.


Not curious; Canonical is widely seen as antithetical to the open source ethos. But Ubuntu is stable and has put in a lot of work on vendor support, which is why so many distros (including Mint) are downstream derivatives of Ubuntu.


If this is accurate, why does Fedora use zram by default?
https://fedoraproject.org/wiki/Changes/SwapOnZRAM
It seems like the author has some legitimate credentials, and I have definitely hit OOM crashes on Fedora Silverblue when processing shaders in Steam (possibly a memory leak in Baldur’s Gate 3, but still a hard crash when it runs out of memory).


Apple did release updates for end-of-life iOS versions going back to iOS 15 because of this, covering devices as old as the original iPhone SE and iPhone 6S, which are around ten years old.
https://support.apple.com/en-us/100100
https://9to5mac.com/2026/03/11/apple-rolls-out-ios-and-ipados-updates-for-older-devices/


Note the right-hand steering wheel.


I hope this is an 8-Bit Theater spinoff.
Yes