Using Cyclomatic Complexity effectively

Method-level and class-level thresholds for Cyclomatic Complexity (CC) are sometimes used as a means of controlling code quality: for example, a method with CC greater than 10, or a class with CC greater than 30, is a candidate for refactoring. But such thresholds don't catch procedural partitioning of monolithic methods into smaller units, or similarly poor decomposition of classes. After such a split, no single method or class exceeds the threshold, yet the total conditional logic in the codebase hasn't changed. Average CC per method or class doesn't help either, because of the significant variance around the average (e.g. get/set methods on beans skew the average CC per method).
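To make that concrete, here is a small made-up Java sketch (the Order class and its rules are invented for illustration): the monolithic version keeps all the decision points in one method, while the mechanically partitioned version gets every method under a per-method threshold without removing a single branch from the class.

    // Hypothetical example: "Order" and its rules are invented for illustration.
    class Order {
        private final boolean rush;
        private final int total;
        Order(boolean rush, int total) { this.rush = rush; this.total = total; }
        boolean isRush() { return rush; }
        int total() { return total; }
    }

    class MonolithicPricing {
        // All the decision points live in this one method.
        String describe(Order o) {
            if (o == null) return "none";
            if (o.isRush() && o.total() > 1000) return "rush-large";
            if (o.isRush()) return "rush";
            if (o.total() > 1000) return "large";
            return "normal";
        }
    }

    class PartitionedPricing {
        // Same branches, pushed into helpers: every method now sits at CC 2 or 3,
        // but the class still contains the same conditional logic overall.
        String describe(Order o) {
            if (o == null) return "none";
            return o.isRush() ? describeRush(o) : describeRegular(o);
        }
        String describeRush(Order o)    { return o.total() > 1000 ? "rush-large" : "rush"; }
        String describeRegular(Order o) { return o.total() > 1000 ? "large" : "normal"; }
    }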

A better measurement would be 'cyclomatic complexity per 100 lines of source code' (how about calling this CC100?). Now one would need to make real improvements to the code (e.g. removing duplication, introducing polymorphism) to improve the metric. Tools like javaNCSS can help calculate this: they report the total size of the codebase (NCSS = non-comment source statements) and the CCN (cyclomatic complexity number) per method. Admittedly, CC100 can itself be skewed by duplicating low-CC code, so it should be used in conjunction with a duplication detector like PMD's copy-paste detector. A combination of near-zero duplication and low CC100 should give a good indication of this aspect of code quality.
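A rough sketch of the arithmetic (the class and method names are invented; in practice the totals would come out of a javaNCSS report):

    // Sketch only: CC100 = total cyclomatic complexity per 100 non-comment source statements.
    class Cc100 {
        static double cc100(int totalCyclomaticComplexity, int ncss) {
            return 100.0 * totalCyclomaticComplexity / ncss;
        }

        public static void main(String[] args) {
            // e.g. a codebase of 4,500 statements whose methods sum to a CC of 1,200
            System.out.println(cc100(1200, 4500)); // ~26.7
        }
    }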

All that now remains to be agreed upon is a reasonable threshold for CC100. Here is a heuristic: the typical CC threshold for a method is 10, and the typical NCSS threshold for a method is 30. So 100 lines of code is roughly 3 methods, and therefore a reasonable threshold for CC100 would be 3 methods * a CC of 10 per method = 30.

5 comments:

Chris Chedgey said...

This doesn't work well for me. It's as if I have a choice of writing code with lots of control statements (if, while, case, ..., which increase CC) or of writing it without so much control flow. I suggest this has much more to do with the nature of the problem I'm solving than with the way I'm solving it.

What is much more important is that I do not place too much complexity at any one level of design breakout. So let's say at the method level, I keep my CC below 10. The next thing I need to check is whether the complexity at the class level is "reasonable". Well, just as a method has a CC of 1 even if it has 1000 lines but no control flow, setting a limit on the number of methods in a class is a bit arbitrary. A class becomes more complex when the relationships between its methods become hard to understand. A good measure of this is the size of the dependency graph of the methods (and fields) within the class (actually the same principle as CC).
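Roughly, the idea could be sketched like this (the graph below is written out by hand purely for illustration; a real tool would extract it from the source):

    import java.util.*;

    // Toy sketch: the intra-class dependency graph as adjacency sets
    // (each method mapped to the methods/fields it uses), with nodes + edges
    // as a crude complexity score for the class.
    class IntraClassComplexity {
        static int graphSize(Map<String, Set<String>> dependsOn) {
            int edges = dependsOn.values().stream().mapToInt(Set::size).sum();
            return dependsOn.size() + edges;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> graph = Map.of(
                "describe",        Set.of("describeRush", "describeRegular"),
                "describeRush",    Set.of("total"),
                "describeRegular", Set.of("total"));
            System.out.println(graphSize(graph)); // 3 entries + 4 edges = 7
        }
    }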

The same principle can be used at any level in the design composition. E.g. use the class-level dependency graph to measure the complexity of a package; use the package-level dependency graph to measure the complexity of high-level packages.

The CC100 metric should be good for understanding how complex an application is and for estimating things like maintenance and upgrade effort. But ultimately there may be little you can do about the number. However, keeping the structural complexity within certain thresholds is always possible.

More here http://chris.headwaysoftware.com

Sriram Narayan said...

Oh, but we do have the choice of writing less conditional code. A lot of times, blocks of conditional code turn out to be code smells crying to be refactored into polymorphic code. They may represent abstractions/patterns waiting to be discovered and fleshed out.
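As an illustration (the shapes here are made up, not from any particular codebase), a type-check like the first version can be replaced by polymorphism, and the branches disappear along with the CC they contributed:

    // Before: conditional logic on a type code; every new shape adds another branch.
    class AreaCalculator {
        double area(String shape, double a, double b) {
            if ("circle".equals(shape)) return Math.PI * a * a;
            else if ("rectangle".equals(shape)) return a * b;
            else throw new IllegalArgumentException(shape);
        }
    }

    // After: the variation is expressed as polymorphism; the branches are gone.
    interface Shape { double area(); }
    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }
    class Rectangle implements Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        public double area() { return width * height; }
    }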

Chris Chedgey said...

Ok, I see your point, and it is an interesting one that hadn't occurred to me before. However, I presume you are not saying that it is right to keep adding polymorphism until CC100 falls under a threshold. Surely there is a point where you reach a "suitable" class design and would then lower the complexity (CC or CC100) by extracting methods where necessary? And I don't think CC100 can be used to detect one over the other.

Jit Roy Chowdhury said...

I too am a bit confused. By using code size in the denominator of your formula, (CC / lines of code) x 100, you seem to be encouraging people to increase program length as much as possible in order to get a lower CC100. This may lead to bloating of the code-base, or a proliferation of classes. Rather, IMHO, we should aim to keep both the average CC down and the code concise. Possibly a figure like (Avg CC + LOC/100) would be a good measure. We could also attach different weights, e.g. (Avg CC x 0.6 + LOC/100 x 0.4).
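Reading the weighted version as (Avg CC x 0.6) + (LOC/100 x 0.4), a sketch of the calculation (the weights and sample figures are only placeholders):

    // Sketch of the proposed combined measure; weights and numbers are placeholders.
    class CombinedMeasure {
        static double score(double avgCcPerMethod, int linesOfCode,
                            double ccWeight, double sizeWeight) {
            return avgCcPerMethod * ccWeight + (linesOfCode / 100.0) * sizeWeight;
        }

        public static void main(String[] args) {
            System.out.println(score(4.0, 2000, 0.6, 0.4)); // 4*0.6 + 20*0.4 = 10.4
        }
    }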

Sriram said...

Hi Jit
You have revived a 7-year-old post! Is your avg. CC per line/method/class/module? In general, normalizing across very different measures only makes sense if they are of the same order of magnitude.

Also, in my experience, developers are rarely crooked enough to purposely increase LOC redundantly (without duplication) just to dress up a metric. This isn't something that can happen by chance.
