
How to Measure Code Maintainability with Sonar

This article describes four quality parameters found on the Sonar dashboard that can help you measure code maintainability. The parameters are:

  1. Unit Test Coverage: Unit Test Coverage depicts code coverage in terms of unit tests written for the classes in the project.

    fig: unit tests coverage

    Greater test coverage indicates that developers are focusing on writing good unit tests for their code. It also indicates that the code is testable and hence easier to change, since a bad change will cause unit tests to fail and raise a red flag. And code that is easier to change is code that is easier to maintain. Another important point is that the focus should be on capturing the trend of test coverage over multiple releases, to see whether coverage is increasing or decreasing. For this purpose, one could use Sonar's "time machine". Following is a sample test coverage trendline, which depicts test coverage across several releases.

    fig: test coverage trendline (Sample)
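    To make the link between testability and coverage concrete, here is a minimal sketch (the class and method names are invented for illustration, not taken from the article): a small method with an explicit guard clause, where covering both the happy path and the error path is straightforward.

    ```java
    // Hypothetical example of easily testable code: one small method, explicit branches.
    public class PriceCalculator {

        // Applies a discount rate to a price; the guard clause makes the
        // invalid-input branch an explicit, separately coverable path.
        public static double discountedPrice(double price, double rate) {
            if (rate < 0 || rate > 1) {
                throw new IllegalArgumentException("rate must be between 0 and 1");
            }
            return price * (1 - rate);
        }

        public static void main(String[] args) {
            // A unit test for this method needs only two cases to reach full
            // branch coverage: the normal path and the guard path.
            System.out.println(discountedPrice(100.0, 0.25)); // prints 75.0
            try {
                discountedPrice(100.0, 1.5);
            } catch (IllegalArgumentException e) {
                System.out.println("guard branch covered");
            }
        }
    }
    ```

    Code in this shape tends to show up as high coverage on the dashboard precisely because there is so little effort required to test it.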

  2. Code Complexity: Code complexity is a measure of cyclomatic complexity and reflects the number of conditional expressions found in a particular class in the project.

    fig: code complexity

    Code complexity, in the diagram above, depicts the conditional expressions present in methods and classes. Higher code complexity means that a class contains many conditional expressions. This impacts the testability of the code (and hence code maintainability), as it becomes very difficult to write unit tests with good coverage for such methods or classes. It also impacts the readability and understandability of the code. Again, as mentioned in the previous point, it is worth capturing the trend of code complexity over multiple releases to check whether it is increasing or decreasing.
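    As a sketch of how conditional nesting drives this metric up (the shipping-cost example below is invented for illustration), here is the same logic written once with nested conditionals and once as a table lookup:

    ```java
    import java.util.Map;

    // Hypothetical illustration: the same logic at high and low cyclomatic
    // complexity. Each if/else adds a decision point the metric counts.
    public class ShippingCost {

        // Nested conditionals: every branch is a path a unit test must cover,
        // so achieving good coverage of this method takes more test cases.
        public static int costNested(String region, boolean express) {
            if (region.equals("EU")) {
                if (express) {
                    return 30;
                }
                return 10;
            } else if (region.equals("US")) {
                if (express) {
                    return 40;
                }
                return 15;
            }
            return 50; // default for unknown regions
        }

        // Table-driven version: the conditional tree collapses into a lookup,
        // lowering the method's cyclomatic complexity.
        private static final Map<String, int[]> COSTS =
                Map.of("EU", new int[] {10, 30}, "US", new int[] {15, 40});

        public static int costLookup(String region, boolean express) {
            int[] c = COSTS.getOrDefault(region, new int[] {50, 50});
            return express ? c[1] : c[0];
        }

        public static void main(String[] args) {
            System.out.println(costNested("EU", true));  // prints 30
            System.out.println(costLookup("EU", true));  // prints 30
        }
    }
    ```

    Both methods behave identically; the second is simply cheaper to test and to read, which is the point the complexity metric is trying to surface.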

  3. LCOM4: LCOM4 stands for Lack of Cohesion of Methods. It reflects the cohesiveness of a class. The ideal LCOM4 score for a project is 1, which means that every class in the project is cohesive; in other words, every class follows the single responsibility principle. This, however, is very difficult to achieve in practice.

    fig: LCOM4

    In the diagram above, you can see that 4% of files have an LCOM4 index greater than 1. This indicates that 4% of files contain code serving more than one responsibility and could therefore be difficult to change. In other words, 4% of files are less cohesive and hence less reusable. From an object-oriented design perspective, these files violate the single responsibility principle, and files that violate the SRP can be seen as files that are difficult to change and hence difficult to maintain.
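    A minimal sketch of what an LCOM4 score of 2 looks like (the class names are invented): the fields and methods fall into two disjoint groups that never touch each other, which is the signal to split the class.

    ```java
    // Hypothetical class with LCOM4 = 2: the title member and the retry member
    // form two disconnected groups inside one class.
    class ReportService {
        private final String title;   // used only by header()
        private final int maxRetries; // used only by shouldRetry()

        ReportService(String title, int maxRetries) {
            this.title = title;
            this.maxRetries = maxRetries;
        }

        String header() { return "== " + title + " =="; }

        boolean shouldRetry(int attempt) { return attempt < maxRetries; }
    }

    // Splitting along the two groups yields two cohesive classes, each with
    // LCOM4 = 1 and a single responsibility.
    class ReportFormatter {
        private final String title;
        ReportFormatter(String title) { this.title = title; }
        String header() { return "== " + title + " =="; }
    }

    class RetryPolicy {
        private final int maxRetries;
        RetryPolicy(int maxRetries) { this.maxRetries = maxRetries; }
        boolean shouldRetry(int attempt) { return attempt < maxRetries; }
    }

    public class Lcom4Demo {
        public static void main(String[] args) {
            System.out.println(new ReportFormatter("Q1 Sales").header());
            System.out.println(new RetryPolicy(3).shouldRetry(1)); // prints true
        }
    }
    ```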

  4. Duplications: As the name implies, this parameter depicts code duplication in the project.

    fig: duplications

    This is fairly simple to interpret. Greater code duplication makes the code harder to maintain, as one needs to make updates in multiple places whenever the functionality related to a duplicated block of code changes.
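    As a small sketch (the validation logic below is invented), the usual remedy is to extract the duplicated block into one shared method so that a change is made in exactly one place:

    ```java
    // Hypothetical before/after: validation that was copy-pasted into two
    // methods, now extracted into a single shared helper.
    public class OrderHandler {

        // The formerly duplicated block lives in one place, so a rule change
        // (e.g. a new length limit) is a single edit instead of two.
        private static String normalizeId(String rawId) {
            String id = rawId.trim().toUpperCase();
            if (id.isEmpty()) {
                throw new IllegalArgumentException("id must not be blank");
            }
            return id;
        }

        public static String createOrder(String rawId) {
            return "created:" + normalizeId(rawId);
        }

        public static String cancelOrder(String rawId) {
            return "cancelled:" + normalizeId(rawId);
        }

        public static void main(String[] args) {
            System.out.println(createOrder(" ab12 ")); // prints created:AB12
            System.out.println(cancelOrder("ab12"));   // prints cancelled:AB12
        }
    }
    ```

    Sonar's duplication detector flags exactly this kind of repeated block, and the percentage on the dashboard drops once the shared helper replaces the copies.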


Ajitesh Kumar


Comments

  • Hi,
    I like the way you summarized these metrics.
    I would also add cyclic dependencies as a metric worth checking.
    More than once I have found out that we grouped packages the wrong way.

    Regarding complexity, I get annoyed by the fact that it counts the 'equals' method, which, if you generate it using Eclipse, gives you high complexity.
    I haven't looked deeply into how to exclude only 'equals' (and 'hashCode', for that matter) in Sonar.
