

What that really means is they’ll be vetted for any indications of disliking Trump.
The thing that really gets me is remembering all the Republicans who claimed Trump was the guy to vote for if you wanted peace and that Democrats were all warmongers.
Fucking mendacious dipshits, the lot of them.
Kind of a weird one, but the opening theme to the original Ghost in the Shell: Stand Alone Complex has stuck with me.
The song is called “Inner Universe,” but the lyrics are in Russian.
Those are the only interests he has.
That’s an odd thing to say. For one thing, there are plenty of physical activities that one could get a reasonable description of from ChatGPT, but if you can’t actually do them or understand the steps, you’re gonna have a bad time.
Example: I’ve never seen any evidence that ChatGPT can properly clean and sterilize beakers in an autoclave for a chemical engineering laboratory, even if it can describe the process. If you turned in homework cribbed from ChatGPT and don’t actually know how to do it, your future lab partners aren’t going to be happy that you passed your course by letting ChatGPT do all the work on paper.
There’s also the issue that ChatGPT is frequently wrong. The whole point here is that these cheaters are getting caught because their papers have all the hallmarks of having been written by a large language model, and don’t show any comprehension of the study material by the student.
And finally, if you’re cheating to get a degree in a field you don’t actually want to know anything about… Why?
Why fight against it? Because some of these students will be going into jobs that are life-or-death levels of importance and won’t know how to do what they’re hired to do.
There’s nothing wrong with using a large language model to check your essay for errors and clumsy phrasing. There’s a lot wrong with trying to make it do your homework for you. If you graduate with a degree indicating you know your field, and you don’t actually know your field, you and everyone you work with are going to have a bad time.
I’m waiting for the results of the lawsuits. I said “I’m not so sure,” but that doesn’t mean I think the election was stolen, either. It means I’m not certain either way.
But it doesn’t really matter either way. Say ironclad evidence comes out that he stole the election; we have no means to walk it back.
I’m not so sure, but sadly our constitution doesn’t have a mechanism to rewind anyway. All we could do is impeach and remove him, and then just keep impeaching and removing successors until we get someone sane.
Personally, I’m for a constitutional amendment to fix this, but it’ll never happen.
Seems like it’s cheaper and more efficient just to pay people to fuck on camera.
Yup. The linked article has been updated to that effect. Both victims are Democrats, the assassin was impersonating a police officer, and both are in grave condition.
Democrats need to start hiring personal security. We’re at that point.
Oh my God… The best/worst thing about the idea of AI porn is how AI tends to forget anything that isn’t still on the screen. So now I’m imagining the camera zooming in on someone’s jibblies, then zooming out and now it’s someone else’s jibblies, and the background is completely different.
I disagree. The richest are such greedy, vile scumbags that they’ll notice it long enough to chortle with glee.
But then yeah, they’ll promptly forget all about it.
The trick to using an AI agent effectively is already knowing exactly what you want, typing the request out in excruciating detail, and being a good developer who properly reviews code so you catch all the errors and repetition the AI agent will absolutely include.
So… Yeah. 100% agree. AI agents are useful, but impossible to use well if you aren’t already skilled with code.
I genuinely don’t know what to do with people like him. On the one hand… Yeah. He knowingly hired undocumented people, making him a hypocrite, and he just voted to have those people forcibly deported against his own interests, making him a fucking dumbass.
At the same time, he seems to be showing actual remorse, and that should definitely be encouraged. The only - only - way this country has even the slightest shot at recovery is by flipping large numbers of the orange shit-gibbon’s supporters, like this guy.
I really want to believe that’s possible. I don’t think it is, but I want to believe it.
Edit: Missed the part in the article where these guys had valid work visas.
Literally Hitler’s playbook.
Well, technically, yes. You’re right. But they’re a specific, narrow type of neural network, while I was thinking of the broader class and more traditional applications, like data analysis. I should have been more specific.
That’s only part of the problem. Yes, JavaScript is a fragmented clusterfuck. TypeScript is leagues better, but by no means perfect. Still, that doesn’t explain why the LLM can’t recall that I’m using Yarn while it’s processing the instruction that specifically told it to use Yarn. Or why it tries to start editing code when I tell it not to. Those issues aren’t specific to the language.
But it still manages to fuck it up.
I’ve been experimenting with using Claude’s Sonnet model in Copilot in agent mode for my job, and one of the things that’s become abundantly clear is that it has certain types of behavior that are heavily represented in the model, so it assumes you want that behavior even if you explicitly tell it you don’t.
Say you’re working in a yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You’ll need to include explicit and repeated reminders all throughout the file to use yarn, not NPM, because even though yarn is very popular today, there are so many older examples of using NPM in its model that it’s just going to assume that’s what you actually want - thereby fucking up your codebase.
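To give a concrete sense of it, here’s a sketch of the kind of instruction file I mean - the file name and contents are just examples from my setup, not a recipe, but the blunt repetition is the point:

```markdown
<!-- .github/copilot-instructions.md - sketch, not my actual file -->
# Project conventions

- This is a **yarn workspaces** monorepo. ALWAYS use `yarn`. NEVER use `npm` or `npx`.
- Add dependencies with `yarn workspace <name> add <pkg>`, not `npm install`.
- Run scripts with `yarn <script>`, not `npm run <script>`.
- Before suggesting any shell command: if it starts with `npm`, rewrite it to the `yarn` equivalent first.
```

And even with all of that, it’ll still sneak an `npm install` into a suggestion often enough that you have to review every command it proposes.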
I’ve also had lots of cases where I tell it I don’t want it to edit any code, just to analyze and explain something that’s there and how to update it… and then I have to stop it from editing code anyway, because halfway through it forgot that I didn’t want edits, just explanations.
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.
But I don’t think we’re anywhere near there yet.
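Hand-waving wildly, the shape I’m imagining is something like this - every interface here is made up for illustration, not a real library or the MCP SDK:

```typescript
// Hypothetical sketch only: an LLM as a fuzzy knowledge source that a separate
// reasoning layer queries, with MCP-style feeds keeping it (sort of) current.
interface FuzzyKnowledge {
  // The LLM side: candidate answers with confidence, never final decisions.
  query(question: string): Promise<{ answer: string; confidence: number }[]>;
}

interface FactFeed {
  // An MCP-style server pushing newly learned facts into the knowledge source.
  latestFacts(since: Date): Promise<string[]>;
}

class Reasoner {
  constructor(private knowledge: FuzzyKnowledge, private feeds: FactFeed[]) {}

  async decide(question: string): Promise<string> {
    const candidates = await this.knowledge.query(question);
    // The reasoning layer, not the LLM, applies hard constraints and decides.
    const viable = candidates.filter((c) => c.confidence > 0.8);
    return viable.length > 0 ? viable[0].answer : "not enough evidence";
  }
}
```

The LLM supplies the fuzzy recall; the thing wrapped around it supplies the actual judgment.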
I’m of two minds on this.
On the one hand, I find tools like Copilot integrated into VS Code to be useful for taking some of the drudgery out of coding. Case in point: If I need to create a new schema for an ORM, having Copilot generate it according to my specifications is speedy and helpful. It will be more complete and thorough than the first draft I’d come up with on my own.
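For a sense of what I mean, here’s roughly the sort of thing I’d have it generate - TypeORM syntax as one example, with placeholder fields standing in for whatever the real spec says:

```typescript
// Illustrative only: a typical entity you might ask Copilot to scaffold from
// "users table with a unique email, a display name, and a created-at timestamp."
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn, Index } from "typeorm";

@Entity("users")
export class User {
  @PrimaryGeneratedColumn("uuid")
  id: string;

  @Index({ unique: true })
  @Column({ type: "varchar", length: 255 })
  email: string;

  @Column({ type: "varchar", length: 100 })
  displayName: string;

  @CreateDateColumn()
  createdAt: Date;
}
```

Tedious to type by hand, trivial to review - exactly the kind of job it’s good at.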
On the other, the actual code produced by Copilot is always rife with errors and bloat; it’s never DRY, and if you’re not already a competent developer and try to “vibe” your way to usability, what you’ll end up with will frankly suck, even if you get it into a state where it technically “works.”
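A contrived example of the “never DRY” part - not actual Copilot output, just the pattern it constantly falls into:

```typescript
// The agent's version: near-identical helpers, copy-pasted per resource.
async function getUser(id: string) {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
async function getOrder(id: string) {
  const res = await fetch(`/api/orders/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// What a competent developer collapses it to on review:
async function getResource(kind: "users" | "orders", id: string) {
  const res = await fetch(`/api/${kind}/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```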
Leaning into the microwave analogy, it’s the difference between being a chef who happens to have a microwave as one of their kitchen tools, and being a “chef” who only knows how to follow microwave instructions on prepackaged meals. “Vibe coders” aren’t coders at all and have no real grasp of what they’re creating or why it’s not as good as what real coders build, even if both make use of the same tools.