

It’s the same for me.
I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.
I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.
This is the problem I have with it too. Using something so vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.
He used OpenClaw to write and commit code. That shows a complete lack of care for even the most bottom of the barrel security standards.
If there’s any valid reason for someone to be ridiculed over using AI, it’s that.