

“South American shot after opening fire on police searching for illegal immigrant gang members”
FTFY
Well, it wasn’t a comment on the quality of the model, just that the context limitation has already been largely overcome by one company, and others will probably follow (and improve on it further) over time. Especially as “AI Coding” gets more marketable.
That said, was this the new gemini 2.5 pro you tried, or the old one? I haven’t tried the new model myself, but I’ve heard good things about it.
Yeah, I’ve been seeing the same. Purely economically, junior developers don’t make sense any more. AI is faster, cheaper, and usually writes better code too.
The problem is that you need junior developers working and gaining experience, otherwise you won’t get senior developers. I really wonder what development as a profession will look like in 10 years.
Working on a big codebase, it doesn’t even occur to me to ask an AI; you just can’t feed it enough context for it to generate meaningful code…
That’s not a hard limit; Google’s models, for example, can handle a 2-million-token context window.
AI isn’t ready to replace programmers, engineers or IT admins yet.
On the other hand… it’s been about 2.5 years since ChatGPT came out, and it’s gone from you being lucky if it could write a few lines of Python without errors to being able to one-shot a game of mobile-app complexity, even with self-hosted models.
Who knows where it’ll be in a few years
Well, anything else just wouldn’t be Christian, you know. I’d hate to have to report you…
In the wise words of Londo Mollari
“Only an idiot fights a war on two fronts. Only the heir to the throne of the kingdom of idiots would fight a war on twelve fronts.”
Since I already use ZFS for my data storage, I just created a private dataset for sensitive data. I also split my services based on whether they’re sensitive or not, so the non-sensitive stuff comes up automatically and the sensitive stuff waits for me to log in and unlock the dataset.
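A minimal sketch of that setup using ZFS native encryption; the pool and dataset names (`tank/private`) are hypothetical, and the service names are placeholders:

```shell
# One-time setup: create an encrypted dataset protected by a passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/private

# At boot, only the non-sensitive services start automatically.
# After logging in, unlock and mount the sensitive dataset:
zfs load-key tank/private   # prompts for the passphrase
zfs mount tank/private
# ...then start the services that depend on the sensitive data.
```

With this split, a stolen or rebooted machine exposes only the non-sensitive datasets until someone supplies the passphrase.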
Trigger-word analysis happens locally. It’s only when that’s triggered that audio gets sent to the cloud.
Don’t let the man get you down
I’m sorry, but what about it is ill-informed, or opinion? The fact is it can do things no other image generator can do, open source or not. It can also effortlessly do things that would require a lot of tinkering with ControlNet in ComfyUI, or even training custom LoRAs. It’s a multimodal model that handles both image and text as input and output, and does it well.

All the other useful image generators are diffusion-based, which doesn’t read a prompt the same way; it’s more about weighting patterns based on keywords than any real understanding of the prompt. That’s why they struggle with relatively simple things like “a full glass of wine” or “a horse riding an astronaut on the moon”.

If I’m wrong about this, please prove me wrong. Nothing would make me happier than finding an open source model that can do what OpenAI’s new image model can do, really. I already run llama.cpp servers and ComfyUI locally, and I have my own AI server in the basement with a P40 and a 3090. Please, please prove me wrong here.
I love open models, and I’ve been running them locally since the first LLaMA model, but that doesn’t mean I willfully ignore what Claude, OpenAI, and Google develop and pretend it doesn’t exist. Rather, I want awareness that it does exist, and I want an open source version of it.
Ah yes, I forgot we live in a post-truth society where reality doesn’t matter and only your feelings are important. And since your feelings say AI bad, proprietary bad, and Reddit bad, you don’t have to actually think or take reality into consideration.
I know them, and have used them a bit. I even mentioned them in an earlier comment. The capabilities of OpenAI’s new model are on a different level in my experience.
https://www.reddit.com/r/StableDiffusion/comments/1jlj8me/4o_vs_flux/ - read the comments there. That’s a community dedicated to running local diffusion models. They’re familiar with all the tricks. They’re pretty damn impressed too.
I can’t help but feel that people here either haven’t tried the new OpenAI image model, or have never actually used any of the existing AI image generators before.
No other model on the market can do anything like that. The closest is diffusion-based, where you could train a LoRA on a person’s look or a specific piece of clothing, then generate multiple times and/or use ControlNet to sort of control the output. That quickly turns into hours or days of work, and it’s quite technical to set up and use.
OpenAI’s new model is a paradigm shift in both what the model can do and how you use it, and it can easily and effortlessly produce things that were extremely difficult or impossible without complicated procedures and post-processing in Photoshop.
Edit: Some examples. Try to make any of these in any of the existing image generators.
It understands what you’re telling it and can generate images from vague descriptions, combine things from different images just by telling it to, modify them, and understand the context - like knowing that “me” is the person in the image, for example.
Edit: From OpenAI - “4o image generation is an autoregressive model natively embedded within ChatGPT”
OpenAI is so lagging behind in terms of image generation it is comical at this point.
You’re the one lagging behind. OpenAI’s new image model is on a different level, way ahead of the competition.
“You don’t have to be faster than the bear, you just have to be faster than the other guy”
The quote was originally about news and journalists.
unsandboxed software
I wonder how hard it would be to sandbox most games. We have things like https://en.m.wikipedia.org/wiki/Sandboxie and most games would have a fairly simple access list.
Edit: or sandbox steam itself
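On Linux, something like Firejail can play the role Sandboxie plays on Windows; a hedged sketch, where the game binary and directory names are purely illustrative:

```shell
# Run a game with its own private home directory and no network access,
# so it can't read the real $HOME or phone home:
firejail --private=~/sandbox/somegame --net=none ./somegame

# Or sandbox Steam itself; Firejail ships a profile for it:
firejail steam
```

Most single-player games really do need little more than their install directory, a save location, and the display/audio devices, so the access list stays short.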
Whoa, this isn’t wood shop class?