Ognjen Regoje

The Prune Date

Features come and go frequently during the lifetime of a product. Except that often they don't actually go; they just stop being used.

I still (mostly) prefer The Majestic Monolith, so I'm not a big fan of code that just exists but is never used.

To help keep the monolith humming along, I have an informal system that I refer to as The Prune Date.

The Prune Date is a periodic review of usage numbers for specific features. It happens approximately once per quarter and doesn't include recently built features.

The Review

For each feature, we look at:

  • The original reasoning behind the feature, and whether and how it has changed
  • Usage numbers (e.g. a messaging system would have the number and types of messages exchanged, in total and over time)
  • Stakeholder input (e.g. what customer service thinks of messaging)
  • Scale (e.g. is the feature still appropriate for the current scale?)
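Usage numbers for something like messaging can be pulled together with a simple aggregation. Here is a hypothetical sketch using made-up in-memory records (a real review would query the product's database), showing the two views the review cares about: totals by type and volume over time.

```python
from collections import Counter
from datetime import date

# Hypothetical message records as (sent_on, kind) tuples.
# In a real review these would come from the database.
messages = [
    (date(2021, 1, 5), "direct"),
    (date(2021, 1, 9), "broadcast"),
    (date(2021, 2, 2), "direct"),
]

# Totals by message type -- "numbers and types of messages exchanged".
by_type = Counter(kind for _, kind in messages)

# Volume per month -- usage "over time", to spot growth or drop-off.
by_month = Counter(d.strftime("%Y-%m") for d, _ in messages)

print(dict(by_type))
print(dict(by_month))
```

Two Counters are enough here; the point is that the review works off a handful of simple aggregates, not a full analytics pipeline.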

Decide

Based on the answers, the feature is classified as one of prune, maintain, or improve.

If usage is low, or the original reasoning is no longer valid, it's a prune: the feature will be removed.

Strong user opinion, however, might push a feature that was headed for prune into improve.

If the usage is there, but there isn't yet a need to scale and no improvements are needed, it's a maintain. That means checking for potential future issues (performance or usability) and noting where the feature might need to scale first.

If usage is growing, there is feedback that could significantly increase the feature's impact, or it needs to scale, it's an improve.
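The three outcomes can be sketched as a small decision function. This is only an illustration of the rules above; all field names are hypothetical, and in practice the inputs come from the review discussion rather than clean booleans.

```python
from dataclasses import dataclass


@dataclass
class FeatureReview:
    """Hypothetical summary of one feature's quarterly review."""
    name: str
    usage_is_low: bool           # from the usage numbers
    reasoning_still_valid: bool  # does the original reasoning still hold?
    users_want_it_better: bool   # stakeholder/user feedback
    usage_growing: bool
    needs_to_scale: bool


def decide(review: FeatureReview) -> str:
    """Classify a feature as 'prune', 'maintain', or 'improve'."""
    if review.usage_is_low or not review.reasoning_still_valid:
        # Strong user opinion can rescue a would-be prune.
        return "improve" if review.users_want_it_better else "prune"
    if review.usage_growing or review.needs_to_scale or review.users_want_it_better:
        return "improve"
    return "maintain"
```

The point of writing it out is that the default is prune: a feature has to earn maintain or improve with evidence.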

The outcomes are then incorporated into the following development cycle, either as debt (prune and maintain) or as new features (improve).

Why is it necessary?

Maintaining things that aren't used is a waste. Here's how unused features hurt:

  1. Tech: repos are bigger, tests take longer, there's more documentation, more dependencies, more complexity, more questions. Unused features might also prevent you from doing things because you need to keep supporting something that isn't used. Removing them, conversely, can improve speed and efficiency (memory usage in particular).
  2. Training, documentation and support: training might cover a feature that's never used, taking focus away from the important stuff. Documentation will have useless pages that might show up in search. Support might occasionally get questions about really obscure features that no one knows about.
  3. User experience: it might be as benign as a link that's never clicked, or as harmful as a page that is being used but isn't as good as it should be.

Besides the direct impact, there is also a lot to be learnt from doing this. By looking at why something was built in the first place, you can see what assumptions you made and how they've changed. For a feature that's failing, you can look at where usage started to drop off and why. You can look at how the feature was introduced, how users were on-boarded, and what mistakes might have been made in that process. Conversely, for features that are successful, you can look at what was done right.

You can also look at what issues this feature might have caused or fixed in the meantime (e.g. too many messages might indicate the process isn't clear enough). Similarly, you should look at what other features were introduced in the meantime that might have had an impact on it.

This simple informal process has helped me keep a couple of majestic monoliths running smoothly. Even though I’ve done this only with small teams (< 20) I do think that a similar approach would benefit larger teams as well.

Edit: About a year and a half after I wrote this, a study found that it's instinctive to fix things by adding new parts.

#architecture #development #product