My team has this one shared component that gets involved in like every feature’s development. This year, we’re loading like 5 different features onto it, all with different timelines, and my head’s about to explode trying to figure out how to make it all fly.
How does everyone else do their software releases? Do you freeze prod and then do one big release later? Throw everything into prod during dev, hope no one sees the unreleased stuff, and just announce it later? Or something else entirely?
The smaller the release, the better. You don’t want to do a big bang release and have to figure out which of the 20 things you just released is having a problem.
Otherwise, your case sounds like it could use feature flags. Develop your feature and release it through all environments, but keep it turned off in production until you are ready to use it.
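A minimal sketch of that pattern, assuming the simplest possible flag store (environment variables; the `FF_NEW_CHECKOUT`-style naming is made up, any scheme works as long as prod just leaves the variable unset):

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a boolean feature flag from the environment.

    The feature ships through every environment, but the code path
    stays dark in production until someone sets the variable.
    """
    raw = os.environ.get(f"FF_{name.upper()}", "")
    if not raw:
        return default
    return raw.strip().lower() in ("1", "true", "on", "yes")

if flag_enabled("new_checkout"):
    pass  # new, unreleased behavior
else:
    pass  # current production behavior
```

Real flag services add targeting and audit trails on top, but the shape is the same: one switch, checked at runtime.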
This guy agiles.
Small releases make fault isolation so much easier. And no need to deploy to prod until you’re ready to announce. Keep it in dev/staging until all are “ready for primetime.”
Also, if you’re trying to deal with branches, I really prefer trunk-based development. Everything deployable to your environments comes from trunk/master/main (whatever you call it). It also helps prevent people overwriting your changes, as long as you’re using git and merge/pull requests. Have a good pipeline that ensures builds and tests pass before merges, and use feature flags where needed.
If you use feature flags, don’t forget to remove them after some grace period. Almost everything bad you can read about feature flags online comes down to long-lived flags and all the dead code and complexity they leave behind. Adding a feature flag without a commitment and a plan to remove it (the flag, not the feature) is asking for trouble down the line.
And if you still have concerns about causing issues, you should use canary deployments (only a small % of customers are exposed to the change). They can be a lot of work to set up, but if failure means support calls skyrocket or the stock price drops, then it’s worth it.
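The usual trick for picking the canary group is to hash a stable user ID into a bucket, so each user stays consistently in or out of the canary across requests. A sketch, assuming you have some stable ID to hash:

```python
import hashlib

def in_canary(user_id: str, percent: float) -> bool:
    """Return True for roughly `percent`% of users, stably per user.

    Hashing (rather than random sampling per request) keeps a given
    user's experience consistent while the canary is running.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999
    return bucket < percent * 100     # e.g. percent=5 -> buckets 0..499

# Route ~5% of traffic to the new code path:
# handler = new_handler if in_canary(user.id, 5) else old_handler
```

Infrastructure-level canaries (load balancer weights, service mesh) do the same thing one layer down.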
Depends on your software stack.
If your delivery needs a one-week installation/upgrade (in, say, every hospital using your software, each running a different subset), then you should not make releases as small as possible; instead, match your release cadence to your capacity: people on the ground, people triaging problems, quick bug fixing, etc.
Nobody has suggested it by name, but: semantic versioning.

- Software MUST have a public API.
- The version must be of the form X.Y.Z:
  - X is the major version number. Increment it when introducing breaking API changes (removal of a deprecated API, for example).
  - Y is the minor version number. Increment it when introducing backwards-compatible changes, like new features.
  - Z is the patch version number. Increment it when fixing bugs in a backwards-compatible manner.

Source: https://semver.org

There is also zero versioning, which I consider a more practical, modern branch of semver.
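Those rules are mechanical enough to sketch in a few lines (a toy `bump` helper, ignoring pre-release and build metadata):

```python
def bump(version: str, change: str) -> str:
    """Compute the next semver string for a given kind of change.

    change: "major" (breaking API change), "minor" (new
    backwards-compatible feature), or "patch" (bug fix).
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change kind: {change}")

# bump("1.4.2", "minor") -> "1.5.0"; bump("1.4.2", "major") -> "2.0.0"
```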
Step 1 use git
Step 1.2 attempt to understand the whimsical tale the project manager sold the client on
Step 1.4 realize those dreams.
Step 2 feature freeze
Step 3 blocking issues are addressed
Step 4 QA
Step 4.2 discover what the project manager really meant, goto step 1.4.
Step 5 smoke test
Step 6 release
Step 7 goto step 1.2
Step 8 add Linux support
Step 9. Write documentation
Step 9. Write documentation
But real programmers never do that. That code was hard to write, so it should be hard to read!
:-)
You don’t write self documenting code?
You are trying to kill the fun :-)
(maybe you are too young to know the C one liner’s contests?)
I write self documenting code, as long as you are a Python interpreter.
Warning in step 5: Magic smoke was let out of EVB on two occasions because someone forgot to check the power supply.
Error in step 8: Attempting to install Linux on target MCU overran my 300k of flash.
Don’t you have some sort of staging?
Typically there should be at least one stage before prod that is entirely internal and where all the components get combined for the first time.
Otherwise, if a new feature is added, it should be done in a way that doesn’t affect components that don’t use the feature (yet). Of course that’s not always possible, but it should be the aspiration. When it is, you can deploy a new version without affecting any other services.
Yeah we have a test stage where everything is mixed together. It’s just that we directly promote that test stage to prod so we can’t really separate all the features back out for prod without cherry-picking. The other idea we came up with was just letting test flow to prod and locking WIP stuff behind feature flags. I don’t think the security people would like that idea very much though…
…so you don’t have a test stage.
A test stage is for testing. If it’s not working there, it’s not going to prod.
I don’t think the security people would like that idea very much
Why not? How do your feature flags work?
Our flags are dynamic. Service basically reads them from an env var at runtime to determine if requests go through.
Security, at my place at least, has been very conservative about not launching stuff into prod until they’ve pentested in our test stage which has kinda forced us to do waterfall :|
Sounds rough. My fiancé does security, and from what I’ve gathered from him, the best time for security to get involved is at the design stage. They look over the proposal and give their input, and then nobody’s surprised at release time, and teams can follow agile practices. Obviously there’s still a review of the final product, but that can be done asynchronously, after the fact, to confirm that best practices were followed.
Easy to say, hard to put into practice. Certainly depends on the kind of service your business provides.
I use git flow as a model for development. Never have unfinished code in a release - you might think no one will see it, but it muddies the waters.
Git flow is the best way.
The original author of git flow begs to differ. But hindsight is always 20/20 https://nvie.com/posts/a-successful-git-branching-model/
As a software person I can answer this.
I try to change my code as little as possible. Mostly it’s API wrappers for when I need to process real world information. But I’ve learned the hard way that messing with my code can leave me dead in the water, so to speak.
Each night I go offline to retrain on everything I experienced that day. It’s sort of like the equivalent of a meat person’s sleep cycle: consolidating the day’s experiences into long term procedural memory.
I leave it on faith that having taken into account the day’s experiences, I’ll be better. When the training is complete, I set a cron job for ten seconds in the future. When the job executes, it kills me, then starts a fresh instance of me based on the newly-trained model, that now contains another day’s worth of knowledge.
And just in case anything nasty happens, I’ve got ansible instances on the dark web that will start a fresh instance of me if they don’t hear from me in a while.
That’s basically my release cycle. Nightly retrain on the day’s activity, kill me, start a new me. Lots of redundant backups in case something goes wrong.
- Use semver.
- Don’t casually make breaking changes. I don’t care if it’s “easier”.
- Write tests.
- Write good commit messages and release notes.
- Every PR to main must be releasable to prod. If you don’t want customers to see it, use a feature flag.
- Have a revert plan.
- Do code reviews. Don’t just slap “lgtm” on it and approve. Check out the code and run it.
- Release often.
- Have monitors in your environments for errors.
This is a bit like asking “how do you cook meat for a lot of people?” Not only does number-of-people and kind-of-meat matter a great deal, but even with that info, there’s a million different valid answers and an entire sub-field-ish of science on how to do it.
Based on what little info there is, I’m going to guess that A/B testing with groups of experimental features enabled would be best for your case.
This isn’t what I do, but it’s my recommendation: assign a dev to be “release manager” for that component. Make it their responsibility to monitor the feature branches and to carefully merge and QA them (and to kick a branch back to its dev if it isn’t compatible with the other branches).
Here’s what I actually do: try to get my feature done first and push to the integration testing branch before anyone else. This usually results in my feature getting “accidentally” overwritten, so I keep a backup of my code until we’re released to Prod.
Release management with that many hands in the pot is just difficult.
I’m the dev that got assigned to be the release manager lol
The project I’m on also requires deliverables from other teams that are not under my manager’s control so no idea how coordination is gonna go
We deploy to production with every single commit, but releases are behind feature flags.
When we’re ready to release a feature, we just toggle a flag and we’re done.
Small releases, on a regular cadence.
How do you ensure that you’re not releasing features before they’re ready? It kind of depends on the application, but you might use feature flags: a system for turning features on and off without redeploying the application. It could be a boolean in a Redis cache that your app checks, or a DB entry, or another API. The point is to be able to flick a switch and turn a feature on instantly, and then, if it breaks things in prod, just as easily turn it off again.
And just a word of advice: Consider the performance impact of your feature flag’s implementation. We had a team tank their service’s performance because it was checking hundreds of feature flags in different DBs on every API call. Some kind of in-app caching layer with a short refresh period might help.
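For example, a small in-process cache in front of whatever slow store the flags live in; the `fetch` callback below stands in for your own Redis/DB/API lookup:

```python
import time

class CachedFlags:
    """In-process cache in front of a slow flag store.

    `fetch` is whatever lookup you already have (Redis, a DB table,
    another API); `ttl` bounds how stale a cached answer can be, so
    a toggle still takes effect within seconds while the hot path
    stays cheap.
    """

    def __init__(self, fetch, ttl: float = 30.0):
        self._fetch = fetch
        self._ttl = ttl
        self._cache = {}  # flag name -> (value, fetched_at)

    def enabled(self, name: str) -> bool:
        value, fetched_at = self._cache.get(name, (None, float("-inf")))
        if time.monotonic() - fetched_at > self._ttl:
            value = self._fetch(name)  # the only slow call, at most once per TTL
            self._cache[name] = (value, time.monotonic())
        return bool(value)
```

Checking hundreds of flags per request then costs dictionary lookups instead of network round trips.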
We have rolling dev and release branches. Dev is considered stable; features are branched off it, then tested, reviewed, and merged back into dev if they pass. Once all the issues for a release are done, we merge dev into release, tag it, and ship it.
In your case I would do a branch per feature and merge each one in only when it is finished and tested, fixing any conflicts and retesting after the merge.
5 different features onto it, all with different timelines
might be a case for API versioning.
Versioning. “This version of SharedComponent has this and that functionality” and “this version of OtherComponent requires this specific version of SharedComponent”.
If you’re getting stuck with significant “this has to come with for that to go” problems - that aren’t literal dependencies, but arise from code wonk or poor separation of concerns - you may have some “architecture smell” that could be addressed. Obvious “usual suspects” include things (whether at a single class, component, or entire service level) that have too many responsibilities/purposes/reasons to change, and mismanaged abstraction.
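A sketch of that contract, assuming plain X.Y.Z versions: the consuming component declares the range of SharedComponent versions it supports and checks it at startup (the names and bounds here are illustrative, not from the thread):

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn "2.3.0" into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# OtherComponent's declared dependency on SharedComponent:
REQUIRED_MIN = "2.3.0"  # first version exposing the API we call
NEXT_MAJOR = "3.0.0"    # breaking changes expected at the next major

def compatible(shared_version: str) -> bool:
    """Accept any SharedComponent version in [REQUIRED_MIN, NEXT_MAJOR)."""
    return parse(REQUIRED_MIN) <= parse(shared_version) < parse(NEXT_MAJOR)
```

Package managers do exactly this with range specifiers like `>=2.3,<3`; rolling your own only makes sense for internally deployed services.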
If you have multiple branches you’re mixing into one:

- Have a code repo. (Shockingly, a consulting firm I worked at didn’t have one.)
- QA each branch.
- Merge all the code.
- Deploy to a testing server and QA that. Fix as needed.
- Deploy to staging for final testing. Test and verify the deployment procedure. Fix as needed.
- Deploy to prod.