CODE QUALITY
⇨Code quality is a group of different attributes and requirements, determined and prioritized by your business.
⇨Here are the main attributes that can be used to determine code quality:
➦Clarity: Easy to read and follow for anyone who isn't the creator of the code. If it's easy to understand, it's much easier to maintain and extend the code.
Not just computers, but also humans need to understand it.
➦Maintainable: High-quality code isn't overcomplicated.
Anyone working with the code should be able to understand its whole context before making changes.
➦Documented: The best case is when the code is self-explanatory, but it's always recommended to add comments to the code to explain its role and functions.
It makes it much easier for anyone who didn’t take part in writing the code to understand and maintain it.
➦Refactored: Code formatting needs to be consistent and follow the language's coding conventions.
➦Well-tested: The fewer bugs the code has, the higher its quality.
Thorough testing filters out critical bugs, ensuring that the software works the way it's intended.
➦Extendible: The code you receive has to be extendible, so that it doesn't have to be thrown away and rewritten after a few weeks.
➦Efficiency: High-quality code doesn’t use unnecessary resources to perform the desired action.
Quality code does not necessarily meet all of the above-mentioned attributes, but the more it meets, the higher its quality.
⧬These requirements are more like a priority list that depends on the characteristics of your project.
WHY YOU SHOULD CARE ABOUT CODE QUALITY
⧪Great authors write books with compelling stories that are easy to read and understand. In some respects, the job of an author is similar to that of a developer. The main difference is that developers use different jargon.
⧪Just as an author's writing has to be easy to read and comprehend, so should a software developer's code be.
⧪I know it’s pretty hard to pay attention to code quality when you’re under pressure to meet your next deadline, but if you’re thinking long term, you definitely need to produce code that’s readable and maintainable.
Here are three main reasons why code quality is important:
🔼Readability: Make the code more readable and easier to comprehend for everyone working on the project.
It's much harder to read and understand bad-quality code than to write it.
🔼Maintainability: It's easier, safer and less time-consuming to maintain and test quality code.
🔼Lower technical debt: Good quality code can speed up long-term software development since it can be reused and developers don’t have to spend that much time fixing old bugs and polishing code.
⧪ It also makes it easier for new project members to join the project.
BUILD A CODE QUALITY ASSURANCE SYSTEM FOR YOUR TEAM
➤In this part, I will show you how we use version control, style guides and automated testing to make sure our code meets our predefined quality standards.
➤By following these methods, you can easily replicate our system and radically improve the code quality your team produces.
➤You just need to go through the following steps:
⛅Set up version control
⛅Determine conventions
⛅Run functional quality tests
1. VERSION CONTROL TOOL TO ENSURE CODE QUALITY AND TRANSPARENCY
A version control tool is the foundation of our system.
The most popular tool for version control is Git.
It is complemented by a popular branching model called Gitflow, which enables seamless collaboration between team members and makes it easy to scale up the development team.
It provides an easy-to-track system that separates the live product from the less stable developer branch with unpublished features.
When a developer from our team finishes a feature, they open a pull request on GitHub that describes the content and details of the change.
This system makes sure that no unreviewed code will be merged with the master branch.
Here is how our process looks:
One team member sends a pull request to the development branch.
This will appear in a ready-to-review section waiting for a project member to review (peer review).
A team member reviews the request, and if it meets the requirements, it will be merged into the development branch.
This is a great system for controlling versions and making everyone's work fully transparent.
There are lots of GUI extensions for Git, such as GitKraken, which supports Gitflow.
Here you can see how easily you can enable it.
But how do you decide if the code is good enough?
In the next part, I'll show you tools to track code quality and metrics that can be used to measure it.
2. STYLE GUIDE FOR READABLE AND COMPREHENSIBLE CODE
A style guide is a collection of best practices and conventions.
Using a style guide ensures that every developer's code looks exactly the same, making the code easier to review and work with.
Fortunately, you don’t have to create your own style guide.
There are many style guides available for free, focusing on different programming languages and scopes:
Company: Cool companies like Airbnb and Google have already created and published their own style guides.
Here is Airbnb’s JavaScript style guide.
Project: There could be varying conventions across different projects or products within a company.
We don't really recommend project-based style guides, since they make it much harder for people switching between projects.
Use linters to automatically test code style
A linter is the natural companion of a style guide.
It’s a small piece of software that automatically checks if your code meets the predefined code convention rules.
You don’t have to manually go through the code base to check style.
There are linters available for almost every programming language, just to mention a few:
- JavaScript: ESLint
- TypeScript: TSLint
- Python: pylint / flake8
- Sass/SCSS: sass-lint
- Go: golint
- Bash: ShellCheck
You can also check out our own JavaScript style guide here (it needs to be updated though).
Many code editors have support for configurable linting, such as VSCode.
Here is a guide on how to set your own linter up.
EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs.
The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles.
3. IMPROVE CODE QUALITY WITH FUNCTIONAL TESTS
While style guides with linters test how your code looks, functional quality tests show if your code actually works or not.
The test pyramid shows how a test process should be structured and where your efforts should be directed.
Generally, we can say that you have to run lots of unit tests, fewer integration tests, and even fewer end-to-end tests.
With a unit test, you inspect one module of the software by mocking out dependencies.
An integration test shows how different components work together while an end-to-end test checks the full client-server round.
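To make the idea of mocking out dependencies concrete, here is a minimal unit-test sketch in Java (JUnit 4 and Mockito are just the assumed libraries here, and InvoiceService/InvoiceRepository are invented for the example); the same idea applies to JavaScript tests written with Mocha or Jasmine.
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

interface InvoiceRepository {
    double findAmount(String invoiceId);
}

class InvoiceService {
    private final InvoiceRepository repository;
    InvoiceService(InvoiceRepository repository) { this.repository = repository; }
    double totalFor(String invoiceId) { return repository.findAmount(invoiceId); }
}

public class InvoiceServiceTest {
    @Test
    public void totalIsReadFromTheMockedRepository() {
        // The repository dependency is mocked, so no database or network is involved.
        InvoiceRepository repository = mock(InvoiceRepository.class);
        when(repository.findAmount("A-1")).thenReturn(100.0);

        InvoiceService service = new InvoiceService(repository);

        assertEquals(100.0, service.totalFor("A-1"), 0.001);
        verify(repository).findAmount("A-1");
    }
}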
For running unit and integration tests, here are some tools you can use:
🌴Mocha
🌴Jasmine
For end-to-end tests, we recommend using:
🎄Jasmine
🎄Karma (for Angular)
🎄Protractor
🎄Cucumber
Additional reading: Mobile Labs Inc put together a cool checklist to use before deploying any applications.
HOW TO MEASURE TESTS
The best way to measure test effectiveness is to track test coverage.
It shows what portion (%) of the code is covered by the testing algorithm. To get a better understanding, it’s worth breaking down test coverage:
Statement coverage (%): number of statements executed during a test divided by all statements
Branch coverage (%): number of executed branches divided by all branches (illustrated in the sketch below)
Function coverage (%): number of executed functions divided by all functions
Line coverage (%): number of lines executed during a test divided by all lines
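To see why these figures can diverge, here is a small illustrative Java method (the Pricing class is made up for this example): a single test calling discount(200.0) executes every line and statement, but only the true branch of the if, so line and statement coverage are 100% while branch coverage is only 50%.
public class Pricing {
    public double discount(double amount) {
        double rate = 0.0;
        if (amount > 100.0) {   // only the true branch is exercised by discount(200.0)
            rate = 0.1;
        }
        return amount * (1.0 - rate);
    }
}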
Istanbul is a cool tool for measuring test coverage for a JavaScript codebase.
USER INTERFACE TEST
UI tests can also be automated, but they demand more resources, especially when components change frequently and the whole test environment has to be rewritten.
There are cool applications for automated user interface tests:
- Monkey test for Android UI stress tests
- Saucelabs for cross-browser testing
- Protractor for a more comprehensive end-to-end test (including the user interface)
USE CONTINUOUS INTEGRATION TOOLS
Our philosophy is to always have feedback on the condition of the software we’re building.
This is where continuous integration comes into the picture.
One of our favorite bloggers, Martin Fowler, nailed the definition of continuous integration.
"Continuous integration is a software development practice where members of a team integrate their work frequently ,usually each person integrates at least daily-leading to multiple integration per day.Each integration is verified by an automated build(including test) to detect integration error as quickly as possible."-Martin Fowler,
Here is our process:
1. The continuous integration platform runs the linters on the code. If they fail, the process stops here and the developer has to fix the style-related issues.
2. It runs the functional tests and moves to the next step if the code behaves as expected.
Then it starts calculating test coverage.
3. If coverage doesn't meet the predefined threshold, the build fails.
4. If the request is being merged into the master branch, it is deployed as well.
Recommended tools:
- Shippable
- TravisCI
Guide: Here are 10 things you should consider while choosing a CI platform.
Here is how to integrate Shippable with GitHub:
1. Go to Shippable.com
2. Log in with GitHub
3. Select the team you want to work with
4. Click enable project
5. Select your project from the list
POST-PRODUCTION CODE QUALITY
You shouldn't stop tracking quality after the product goes live.
Tools such as Sentry and New Relic monitor errors in real time, so you don't have to ask users to report crashes and bugs: you will be notified automatically.
All you need to do is add a small piece of code to your app.
Tools:
Sentry (can be integrated with Slack and GitHub)
New Relic
Measuring Code Quality with Test Coverage Metrics
Test coverage and code quality are two of a handful of fundamental metrics used to analyze, track and measure the effectiveness of an IT project or initiative.
Both test coverage and code quality are interlinked in a way few other metrics are. For instance, one of the ways we measure code quality is by looking at corresponding test coverage.
Yet questions lurk around how effective it is to use test coverage metrics to measure code quality.
So in this post, we’ll take a critical look at this practice.
We’ll do this by reviewing the generally accepted view about measuring code quality with test coverage metrics, and how you can apply a solution that works for your situation.
Elephant in the room – Test coverage
Let’s address the elephant in the room first – namely, test coverage.
“How Much Test Coverage Is Adequate?”
Granted – adequate test coverage helps improve your chances of catching all the critical/high priority bugs.
This isn’t just a grandmother’s tale but actually a pretty reliable yardstick to build confidence about your IT system’s code quality.
However, the major source of disagreement in the industry about test coverage seems to be centered on ‘how much is adequate?’
Theories vary – from 60% to 80% to even 100%.
Each proponent has a valid argument for why they think a certain amount of test coverage is enough or not enough.
The one thing nobody seems to be discussing: What do you measure test coverage against?
Don’t get me wrong – everyone does have a traditionally accepted basis for measuring test coverage – the number of lines of code in the software being tested.
Is that the right measure though? That is precisely what we’ll attempt to answer today.
“The One Thing Nobody Seems To Be Discussing: What Do You Measure Test Coverage Against?”
Test coverage measured against lines of code
How does this work? You simply take:
- (A) the total lines of code in the piece of software you are testing, and
- (B) the number of lines of code all test cases currently execute, and
- Find (B divided by A) multiplied by 100 – this will be your test coverage %.
For example,
If the total lines of code in a system component is 1000 and the number of lines being actually executed through all existing test cases is 650, then your test coverage is:
(650 / 1000) * 100 = 65%
What is the generally accepted ‘sufficient’ test coverage when measured by a number of lines of code executed? The consensus hovers around 80% – higher for critical systems (definition of critical may vary by industry, geography, user base etc.).
The important question is: Does this metric work?
Hmm – that’s a toughie. I scoured internet forums for an answer and found that generally, people think 80% is adequate.
Then again, there is the question of whether you’re using good tests rather than those that aren’t really useful for coverage.
What do good tests look like?
That’s not very difficult to answer. A good test looks to trace a requirement to fulfillment. Both happy and unhappy flows can be good tests.
So, then how do you ensure that you have mainly good tests to improve test coverage?
Before we answer that question, let’s look at the more important one.
How do you really measure code quality?
Test coverage – of course. But how do you really measure test coverage?
Popular answer: number of lines of code executed by test cases.
Correct answer: well, that’s where it gets interesting.
Measuring test coverage by a number of lines of code executed
While traditionally leaned upon by developers, testers and project managers alike, I’ve been questioning the efficacy of using this method to measure coverage.
Why?
As this forum thread will tell you, ensuring 100% coverage in terms of lines of code executed doesn’t really get you a quality product.
Then what does?
As I’ve said before,
- Good Test Cases and
- Adequate test coverage.
To achieve both, you need to look critically at your points of reference.
Improving the ratio of Good Test Cases in your Test repository
A good test case traces a requirement (happy and unhappy flows included) to fulfillment.
All you have to do to ensure you mainly have good test cases is to establish Requirements Traceability.
My team achieves traceability in my projects by clearly mapping out Test Scenarios to cover all requirements in scope for that release, project, iteration, sprint.
By establishing Requirements Traceability, I know – at any given point in time – test coverage by requirements.
In Agile projects, given you’re supposed to only focus on requirements intended for the next immediate release, achieving 100% test coverage by Requirements should be fairly straightforward if you use an elementary Traceability Matrix.
If you are only testing to requirements, you should only have good test cases in your repository.
Now to the next challenge – adequate test coverage.
Well, I started this section by NOT telling you that the correct answer is the number of lines of code executed by test cases.
“If You Ensure 100% Of Your Requirements Are Covered By Test Cases, Then You Have All The Test Cases You Need.”
Why did I do that?
Because I don't believe that is the correct measure of code quality. If you've still got ‘Why?’ on your lips, I'll try to answer that.
Measure code quality with test coverage
Requirements traceability gives you a reliable way to build good test cases. So, let’s extend that for a minute to think about why that is.
If you only write test cases that can trace back to a requirement, you’re effectively eliminating any redundant, unnecessary test cases. This improves the efficiency of your team’s testing efforts.
Now, if you turn that around, what you’d get is this: if you ensure 100% of your requirements are covered by test cases, then you have all the test cases you need.
Interesting, right?
Then again, how do you ensure you have 100% accurate requirements? What if some requirements are incorrect, or missed?
Well, as they say, “A high-quality product built on bad requirements, is a poor quality product.”
The focus then shifts to ensuring you have high-quality user stories that cover all functional and non-functional requirements. A cursory search on the internet will tell you what you need to ensure all your requirements are captured adequately and effectively.
Test coverage in the world of Test Driven Development
Let’s consider how I run my average Agile project (week-long sprints as an example).
Sprint – Day 1
My scrum team uses the planning session to shortlist the top x stories on the backlog for delivery.
Each of the stories has been elaborated and story pointed in readiness for the planning sessions (through ongoing backlog grooming sessions).
A Scrum Tester (or testers) picks up the freshly minted sprint backlog to map out all the test scenarios they expect to cover with their test cases.
These scenarios are then handed over to the Developers who are already plugging away at the code.
Sprint – Day 2
By this time, scrum developers have made some initial headway with dev planning and kick-off, and have had a chance to get an overview of the requirements and test scenarios for the sprint with the BAs and testers.
The testers have completed scripting new test cases or copying over existing test cases from a case repository as applicable, to cover off all the test scenarios.
They’ve also identified both Happy and Unhappy flows and prioritized these.
And finally, they’ve mapped the test cases back to the source requirement, establishing traceability. By this point, the testers know if all the requirements for the current sprint have been covered by test cases.
Sprint – Final Day
Scrum Developers have used the test scenarios and detailed test cases to guide their effort at coding, and have unit tested the code with these test cases as well.
By the time they deploy to an Integrated Test Environment for end-to-end functional testing, most of the easy to find, unseemly bugs have been found.
Scrum Testers have also begun to execute end-to-end tests using these test cases, and help identify the high/critical bugs that prevent a story from being delivered in that sprint.
When we get to the Demo and Retrospective, usually all high and critical bugs will have been fixed, with other unresolved bugs being prioritized for future resolution as necessary (We always end a release by mopping up most of these ‘other’ bugs with a spike).
As you can see, by the time we get to the end of a Sprint, or a Release, my team have a way of ensuring all requirements are covered adequately by tests, and that we meet Exit Criteria from Testing to Release (e.g.: No High/Critical bugs, 100% test case execution, 95% tests passed).
With Test-Driven Development, this is truly achievable.
And if you are able to demo a working, almost-bug-free product every Sprint, and follow this up with mop-up Spikes to fix the remaining unseemly bugs, my experience suggests you’re in a good place to go to Release.
So, What Should Code Quality be really measured against?
My response: Both requirements coverage and number of lines of code executed.
Why?
To be frank, I believe ensuring 100% requirements coverage is adequate.
After all, with Agile, you’re supposed to only work on the high priority requirements for a release all the time, thereby reducing waste – of effort, time, money.
Then again, humans aren’t perfect. Bad requirements could sneak into your scope.
Good requirements could be missed.
You may not discover this until late into your project. We can’t always guarantee 100% accuracy with requirements.
On the other hand, I’ve seen that reviewing test coverage by lines of code executed helps judge code quality at a superficial level.
For instance, if you have achieved 100% requirements coverage with your tests, and passed all exit criteria for release, and yet find only 65% of your code is covered in your tests, why is that?
Extreme example – I agree. But this is probable. And it could be a combination of poor requirements quality and wasted coding effort.
It's quite common to encounter projects where developers have written many more lines of code than is necessary. The reasons could be many, and some justifiable, but I constantly find ‘excess code’ as an issue in a lot of my clients' projects.
Coding effort veers away from the requirements, and developers spend precious resource writing unnecessary lines of code.
So, is it true by corollary that if a developer were to strictly follow agreed scope, 100% of the code will be covered by tests? Not necessarily.
This again is down to inadequate coaching for developers to write efficient code – i.e., only write the code necessary to deliver a requirement.
Even then, there may be scenarios where 100% coverage may not be achievable or necessary.
The goal, therefore, shouldn't ever be to achieve 100% code coverage – or any specific number like that.
That said, when combined with 100% requirements coverage, you should find that code coverage by default hovers in the high 80s or even the low to high 90s.
What We Learn
Code quality is high on everyone’s agenda.
In an era of rising resource costs and tightening delivery budgets, it's imperative to measure and track code quality to better understand your team's efficiency, and where it can improve.
You should use every tool at your disposal to achieve your target code quality metrics.
Tracking the number of executed lines of code against the total lines of code is one option.
But that alone will not help you ascertain either adequate test coverage or actual code quality.
You should rely on complementary, and at times more powerful ways to measure code quality and test coverage.
You can achieve this easily by considering requirements coverage in your tests, which can directly lead to better code quality – instantly!
Tools to Improve Java Code Quality
There is no developer who has never made a mistake.
Usually, the compiler catches the syntactic and arithmetic issues and lists out a stack trace.
But there still might be some issues that the compiler does not catch.
These could be inappropriately implemented requirements, an incorrect algorithm, bad code structure, or some sort of potential issue that the community knows about from experience.
The only way to catch such mistakes is to have a senior developer review your code.
Such an approach is not a panacea, though, and does not change much:
with each new developer on the team, you need an extra pair of eyes to look at their code.
But luckily there are many tools which can help you control code quality, including Checkstyle, PMD, FindBugs, SonarQube, etc.
All of them are usually used to analyze the quality and build some useful reports.
Very often those reports are published by continuous integration servers, like Jenkins.
Here is a checklist of Java static code analysis tools that we use at RomexSoft in most of our projects. Let's review each of them.
🍓Checkstyle
Code reviews are essential to code quality, but usually no one in the team wants to review tens of thousands of lines of code. Fortunately, the challenges associated with manual code reviews can be addressed by source code analyzer tools like Checkstyle.
Checkstyle is a free and open source static code analysis tool used in software development for checking whether Java code conforms to the coding conventions you have established. It automates the crucial but boring task of checking Java code. It is one of the most popular tools used to automate the code review process.
Checkstyle comes with predefined rules that help in maintaining the code standards. These rules are a good starting point, but they do not account for project-specific requirements. The trick to a successful automated code review is to combine the built-in rules with custom ones; there is a variety of tutorials with how-tos for this.
Checkstyle can be used as an Eclipse plugin or as part of a build system such as Ant, Maven or Gradle to validate code and create reports on coding-standard violations.
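As a rough illustration (the class below is invented, and the rule names in the comments are only the typical ones from common Checkstyle configurations), these are the kinds of convention issues such a check would flag:
// Hypothetical snippet; which rules fire depends entirely on your Checkstyle configuration.
public class reportGenerator {            // TypeName: class names should be UpperCamelCase
    static final int max_retries = 3;     // ConstantName: constants should be UPPER_SNAKE_CASE

    public int Compute(int x) {           // MethodName: method names should be lowerCamelCase
        if (x > 0) return x * 2;          // NeedBraces: 'if' statements should use braces
        return 0;
    }
}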
🍓PMD
PMD is a static code analysis tool that is capable of automatically detecting a wide range of potential bugs and unsafe or non-optimized code. It examines Java source code and looks for potential problems such as possible bugs, dead code, suboptimal code, overcomplicated expressions, and duplicated code.
Whereas other tools, such as Checkstyle, can verify whether coding conventions and standards are respected, PMD focuses more on preemptive defect detection. It comes with a rich and highly configurable set of rules, and you can easily choose which particular rules should be used for a given project.
Like Checkstyle, PMD can be used with Eclipse, IntelliJ IDEA, Maven, Gradle or Jenkins.
Here are a few cases of bad practices that PMD deals with (a small illustrative snippet follows the list):
- Empty try/catch/finally/switch blocks.
- Empty if/while statements.
- Dead code.
- Cases with direct implementation instead of an interface.
- Overly complicated methods.
- Classes with high Cyclomatic Complexity measurements.
- Unnecessary ‘if’ statements for loops that could be ‘while’ loops.
- Unused local variables, parameters, and private methods.
- Overriding the hashCode() method without the equals() method.
- Wasteful String/StringBuffer usage.
- Duplicated code – copy/paste code can mean copy/paste bugs and, thus, a decrease in maintainability.
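For instance, a fairly typical PMD ruleset would report the two issues marked in this invented snippet (rule names such as EmptyCatchBlock and UnusedLocalVariable are the usual suspects, though the exact set depends on your configuration):
public class OrderService {
    public void process(String id) {
        int retries = 3;                  // unused local variable
        try {
            submit(id);
        } catch (Exception e) {
            // empty catch block: the failure is silently swallowed
        }
    }

    private void submit(String id) { /* ... */ }
}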
🍓FindBugs
FindBugs is an open source Java code quality tool similar in some ways to Checkstyle and PMD, but with a quite different focus. FindBugs doesn't concern itself with formatting or coding standards and is only marginally interested in best practices.
In fact, it concentrates on detecting potential bugs and performance issues and does a very good job of detecting a variety of common, hard-to-find coding mistakes, including thread synchronization problems, null pointer dereferences, infinite recursive loops, misuse of API methods, etc. FindBugs operates on Java bytecode, rather than source code. Indeed, it is capable of detecting quite a different set of issues with a relatively high degree of precision in comparison to PMD or Checkstyle. As such, it can be a useful addition to your static analysis toolbox.
FindBugs is mainly used for identifying hundreds of serious defects in large applications, which are classified into four ranks:
- scariest
- scary
- troubling
- of concern
Let’s take a closer look at some cases of bugs.
Infinite recursive loop
public String resultValue() {
return this.resultValue();
}
Here, the resultValue() method calls itself recursively, so it will never terminate.
Null Pointer Exception
FindBugs examines the code for statements that will surely cause a NullPointerException.
Object obj = null;
obj.doSomeThing(); //code execution will cause the NullPointerException
The code below is a relatively simple bug: if the ‘str’ variable contains null and the ‘obj’ variable holds an instance, evaluation short-circuits to str.equals(obj), which will surely lead to a NullPointerException.
if((str == null && obj == null) || str.equals(obj)) {
//do something
}
A method whose return value should not be ignored
String is immutable, so ignoring the return value of a method like toUpperCase() will be reported as a bug.
String str = "Java";
str.toUpperCase(); // the returned value is discarded; str still holds "Java"
if (str.equals("JAVA"))
Suspicious equal() comparison
The method calls equals(Object) on references of different class types with no common subclasses.
Integer value = new Integer(10);
String str = new String("10");
if (str != null && !str.equals(value)) {
//do something;
}
Objects of different classes should always compare as unequal; therefore str.equals(value) will always return false, and the negated check !str.equals(value) is always true.
Hash equals mismatch
The class overrides equals(Object) but does not override hashCode(), and so uses the inherited implementation of hashCode() from java.lang.Object. Such a class will likely violate the invariant that equal objects must have equal hash codes.
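A minimal sketch of this bug (the Point class is invented for illustration): equals() is overridden but hashCode() is not, so two equal points can end up in different HashMap or HashSet buckets.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
    // hashCode() is inherited from java.lang.Object, breaking the equals/hashCode contract
}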
Class does not override equals in a superclass
Here's a case: a child class extends a parent class (which defines an equals() method) and adds new fields, but does not override equals() itself.
Thereby, equality checks on instances of the child class will use the inherited equals() method and, as a result, ignore the identity of the child class and the newly added fields.
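A small illustrative sketch (the Person and Employee classes are invented for this example): Employee adds a department field, but because it inherits Person's equals(), two employees with different departments still compare as equal.
class Person {
    protected final String name;

    Person(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return (o instanceof Person) && name.equals(((Person) o).name);
    }

    @Override
    public int hashCode() { return name.hashCode(); }
}

class Employee extends Person {
    private final String department;   // ignored by the inherited equals()

    Employee(String name, String department) {
        super(name);
        this.department = department;
    }
}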
To sum up, FindBugs is distributed as a stand-alone GUI application, but there are also plugins available for Eclipse, NetBeans, IntelliJ IDEA, Gradle, Maven, and Jenkins.
Additional rule sets can be plugged in FindBugs to increase the set of checks performed.
🍓SonarQube
SonarQube is an open source platform which was originally launched in 2007 and is used by developers to manage source code quality.
Sonar was designed to support global continuous improvement strategy on code quality within a company and therefore can be used as a shared central system for quality management.
It makes management of code quality possible for any developer in the team.
As a result, in recent years it has become a world leader in Continuous Inspection and code quality management systems.
Sonar currently supports a wide variety of languages including Java, C/C++, C#, PHP, Flex, Groovy, JavaScript, Python, and PL/SQL (some of them via additional plugins).
And Sonar is very useful as it offers fully automated analysis tools and integrates well with Maven, Ant, Gradle, and continuous integration tools.
Sonar uses FindBugs, Checkstyle and PMD to collect and analyze source code for bugs, bad code, and possible violation of code style policies.
It examines and evaluates different aspects of your source code from minor styling details, potential bugs, and code defects to the critical design errors, lack of test coverage, and excess complexity.
In the end, Sonar produces metric values and statistics, revealing problematic areas in the source that require inspection or improvement.
Here is a list of some of SonarQube‘s features:
- It doesn't only show what's wrong; it also offers quality management tools to help you put it right.
- SonarQube addresses not only bugs but also coding rules, test coverage, code duplications, complexity, and architecture, providing all the details in a dashboard.
- It gives you a snapshot of your code quality at a certain moment of time as well as trends of lagging and leading quality indicators.
- It provides you with code quality metrics to help you take the right decision.
- There are code quality metrics that show your progress and whether you’re getting better or worse.
- SonarQube is a web application that can be installed standalone or inside an existing Java web application. The code quality metrics can be captured by running mvn sonar:sonar on your project.
Your pom.xml file will need a reference to this plugin because it is not a default maven plugin.
<build>
…
<plugins>
<plugin>
<groupId>org.sonarsource.scanner.maven</groupId>
<artifactId>sonar-maven-plugin</artifactId>
<version>3.3.0.603</version>
</plugin>
</plugins>
…
</build>
Also, Sonar provides an enhanced reporting via multiple views that show certain metrics (you can configure which ones you want to see) for all projects.
And what’s most important, it does not only provide metrics and statistics about your code but translates these nondescript values to real business values such as risk and technical debt.
Dependency Managers
First, what is a dependency?
A dependency is an external standalone program module (library) that performs a specific task; it can be as small as a single file or as large as a collection of files and folders organized into packages.
For example, backup-MongoDB is a dependency for a blog application that uses it for remotely backing up its database and sending it to an email address.
In other words, the blog application is dependent on the package for doing backups of its database.
Dependency managers are software modules that coordinate the integration of external libraries or packages into the larger application stack.
Dependency managers use configuration files like composer.json, package.json, build.gradle or pom.xml to determine:
- What dependency to get
- What version of the dependency in particular and
- Which repository to get them from.
A repository is a source where the declared dependencies can be fetched from using the name and version of that dependency.
Most dependency managers have dedicated repositories where they fetch the declared dependencies from.
For example, Maven Central for Maven and Gradle, the npm registry for npm, and Packagist for Composer.
So when you declare a dependency in your config file — e.g. composer.json, the manager will go to the repository to fetch the dependency that matches the exact criteria you have set in the config file and make it available in your execution environment for use.
Example of Dependency Managers
- Composer (used with php projects)
- Gradle (used with Java projects, including Android apps; also a build tool)
- Node Package Manager (NPM: used with Node.js projects)
- Yarn
- Maven (used with Java projects, including Android apps; also a build tool)
- and so on.
Why do I need Dependency Managers?
Summing them up in two points:
🍎They make sure the same version of the dependencies you used in the dev environment is what is used in production, so there are no unexpected behaviors.
🍎They make keeping your dependencies updated with the latest patch, release or major version very easy.
Package Management Concepts
Contemporary distributions of Linux-based operating systems install software in pre-compiled packages, which are archives that contain binaries of software, configuration files, and information about dependencies.
Furthermore, package management tools keep track of updates and upgrades so that the user doesn’t have to hunt down information about bug and security fixes.
Without package management, users must ensure that all of the required dependencies for a piece of software are installed and up-to-date, compile the software from the source code (which takes time and introduces compiler-based variations from system to system), and manage configuration for each piece of software.
Without package management, application files are located in the standard locations for the system to which the developers are accustomed, regardless of which system they’re using.
Package management systems attempt to solve these problems and are the tools through which developers attempt to increase the overall quality and coherence of a Linux-based operating system.
The features that most package management applications provide are:
Package downloading: Operating-system projects provide package repositories which allow users to download their packages from a single, trusted provider.
When you download from a package manager, the software can be authenticated and will remain in the repository even if the original source becomes unreliable.
Dependency resolution: Packages contain metadata which provides information about what other files are required by each respective package.
This allows applications and their dependencies to be installed with one command, and for programs to rely on common, shared libraries, reducing bulk and allowing the operating system to manage updates to the packages.
A standard binary package format: Packages are uniformly prepared across the system to make installation easier.
While some distributions share formats, compatibility issues between similarly formatted packages for different operating systems can occur.
Common installation and configuration locations: Linux distribution developers often have conventions for how applications are configured and the layout of files in the /etc/ and /etc/init.d/ directories; by using packages, distributions are able to enforce a single standard.
Additional system-related configuration and functionality: Occasionally, operating system developers will develop patches and helper scripts for their software which get distributed within the packages.
These modifications can have a significant impact on user experience.
Quality control: Operating-system developers use the packaging process to test and ensure that the software is stable and free of bugs that might affect product quality and that the software doesn’t cause the system to become unstable.
The subjective judgments and community standards that guide packaging and package management also guide the “feel” and “stability” of a given system.
In general, we recommend that you install the versions of software available in your distribution’s repository and packaged for your operating system.
If packages for the application or software that you need to install aren’t available, we recommend that you find packages for your operating system, when available, before installing from source code.
Compare and contrast different dependency/package management tools
Package Manager
A package manager is used to configure the system, i.e., to set up your development environment; with these settings you can build many projects.
Dependency Manager
A dependency manager is specific to the project. You manage all dependencies for a single project, and those dependencies are saved with your project. When you start another project, you manage its dependencies again.
Example:
In the PHP world there is Composer as a dependency manager and PEAR as a package manager. When using Composer, all your settings and extensions apply to a single project, whereas PEAR sets up new extensions and libraries for the PHP core system-wide.
Build Tools
Build tools are programs that automate the creation of executable applications from source code.
The building incorporates compiling, linking and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process.
This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence and what dependencies there are in the building process.
Using an automation tool allows the build process to be more consistent.
The primary purpose of the first build tools, such as the GNU make and "make depend" utilities, commonly found in Unix and Linux-based operating systems, was to automate the calls to the compilers and linkers.
Today, as build processes become ever more complex, build automation tools usually support the management of the pre- and post-compile and link activities, as well as the compile and link activities.
The process of code compilation is essential to the creation of software when high-level programming languages are used.
Part of the function of the build tool is to cope with errors in the compilation process of complex software systems.
Modern build tools go further in enabling workflow processing by obtaining the source code, deploying executables to be tested and even optimizing complex build processes using distributed build technologies, which involves running the build process in a coherent, synchronized manner across several machines.
Very Large-Scale Development
Very large-scale development typically involves a mixture of traditional and agile methods.
Previous research on the use of development methods suggests that methods need to be adapted to the work context (Fitzgerald et al. 2006).
We first describe some of the main differences between traditional and agile development before
focusing on three aspects of adaptation that are critical in very large scale development: customer involvement, software architecture, and inter-team coordination.
Traditional and Agile Development
Nerur et al. (2005) described the fundamental assumption behind traditional methods to be that information systems are fully specifiable and built through meticulous and extensive planning.
Agile methods, on the other hand, assume that information systems can be built through
continuous design, improvement, and testing based on rapid feedback and change.
Learning and adaptation should be embraced (Conboy 2009). Adapting the method to the context will
involve balance in a number of areas (Vinekar et al. 2006), as illustrated in Table 1.
Very largescale projects involve great risk and management attention. Boehm and Turner (2003) argued that traditional and agile methods should be balanced when facing risk such as increases in
programme size.
The transition from traditional plan-driven development to a more agile method with iterations and development conducted in small teams was the focus of a study by Petersen and Wohlin (2010).
They examined the development of three large subsystem components in an Ericsson product involving 117 people and found that many of the issues raised in traditional development were not raised after the transition to agile development.
This suggests that agile methods can also work well in large-scale
product development.
Another study from Ericsson finds that agile principles in large scale contributed to knowledge sharing, and led to increased project visibility and effectiveness in coordination (Lagerberg et al. 2013).
One of the very few studies on large projects combining traditional and agile methods examined a project to develop a web-based customer booking engine for an American cruise company (Batra et al. 2010).
The study describes a $15 million project that lasted for 28 months.
The project was distributed and combined Scrum with the Project Management Body of Knowledge framework. Customers were available but did not work together with developers on a daily basis. The iteration length was two weeks.
Table 1: Traditional versus agile development (excerpt from Nerur et al. 2005)
Some of the challenges identified in the study were due to the size and involvement of a high number of internal business sponsors, users, project managers, analysts, and external developers from the UK and India; one of these challenges was customer involvement.
The communications were mainly formal, and formal documents were needed for changes. However, the project was considered a success, and the study describes the balance between traditional
and agile methods as essential in achieving both project control and agility.
A model organizing development as chains of Scrum teams (the output of one team
is the input of the next) is described in a study of three cases of organizations with 150,
34, and 5 Scrum teams (Vlietland and van Vliet 2015). The study investigated the strategy,
structure, collaboration, coordination, communication, mindset, and competence of
people.
The dependence of teams on the results from other teams introduces a number
of challenges.
The three cases illustrate challenges that include a lack of coordination in the chain, mismatch of backlog priority between teams, challenges in alignment between teams, and unpredictability in delivering on commitments; this suggests challenges with inter-team coordination and with main design decisions as expressed in the software architecture.
Customer Involvement
Agile methods are people-centric and recognize the value that competent people and their relationships bring to software development (Nerur and Balijepally 2007). A key pillar in an agile method is the close and continual collaboration between clients and
developers (Maiden and Jones 2010).
Therefore, agile methods are highly dependent on the on-site customer identifying and prioritizing features, providing feedback, and guiding change in the course of the development (Vinekar et al. 2006).
In their study on a large-scale agile project, Bjarnason et al. (2012) found that low customer involvement in near-development roles in combination with weak awareness of overall goals
may result in unrealistically large project scope.
Further, overscoping can lead to a number of negative effects, including quality issues, delays, and failure to meet customer expectations.
A study on large-scale and distributed projects found that understanding requirement dependencies is of paramount importance in such projects (Daneva et al. 2013).
Active participation and constant involvement of the customer in systems development yields greater benefits, but this reliance on the customer can fail if the on-site customer goals are misaligned with the goals of other stakeholders.
Having multiple on-site customers in a large-scale project increases the risk of failure because of the
challenge of establishing a common understanding among all customer representatives.
A fragmented view of the system that each customer may have is likely to have a negative impact on the project (Ramesh et al. 2010).
Further, when different stakeholder groups have different priorities, there is a need for open and transparent dialogue and cross-stakeholder group communication in large-scale agile projects
(Barney and Wohlin 2009).
Software Architecture
The software architecture is the fundamental technical organization of a system. How and when to make architectural decisions have been the subject of major debate in the software
engineering field (Abrahamsson et al. 2010).
In traditional development, the architecture is defined prior to implementation and testing, whereas the architectural design emerges as a result of on-going clarification in ‘purist’ agile development.
Traditionally, software architecture is associated with up-front designs and stable structures that accommodate pre-defined, non-functional requirements.
The assumption is that the value of good architectural decisions will surface at a later point in time in forms such as more easily maintainable code and scalability (Faber 2010).
Also, sound architecture is largely invisible and provides an effective structure for subsequent functionality. It is interesting to note that software architecture has meaning and significance for a wide variety of stakeholders.
Different stakeholders are also likely to associate different meanings to software architecture.
Smolander (2002) suggested that the four metaphors blueprints, literature, language, and
decisions capture the meaning of software architecture for different actors involved in software
development.
This underlines the pervasive role of software architecture.
Several approaches to architecture work have been taken in large agile projects.
Some start with the architecture (‘big up-front design‘) and then use agile methods.
Others spend the first iteration focusing on architecture.
Others again start directly on the development and let the architecture emerge.
To construct a large software system developed by a number of teams, it is vitally important for the architecture to be agreed upon and communicated without introducing the bureaucracy and overhead associated with traditional methods.
In a study on software product companies, Unphon and Dittrich (2010) found that architectural knowledge was transferred by face-to-face communication with chief architects taking the role of a
‘walking architecture’.
Awareness and social protocols are important perspectives on how architecture is communicated.
Unphon and Dittrich do not discuss the increased challenges of architectural work on a
large scale.
In their study of a large-scale agile approach at Ericsson, Petersen and Wohlin (2010) found a need for a high-level architectural design to facilitate planning.
Nord et al. (2014) argued that for large-scale agile projects, agility is enabled by architecture, and
architecture is enabled by agility.
They suggested several tactics for handling architecture in large-scale projects, including making use of a matrix structure and focusing on the production infrastructure.
Inter-Team Coordination
Coordination can be defined as ‘the managing of dependencies’ (Malone and Crowston 1994), where dependencies can be related to tasks, knowledge, resources, or technology.
The central challenge in coordination is identifying the right form or artifacts, arenas, and
the degree of formalization in large projects with high uncertainty.
In small agile projects, the development team coordinates work through frequent informal interaction among themselves and with customers, as in the customer-on-site practice in eXtreme Programming.
Scrum has dedicated meetings for planning, review, and retrospectives. Many teams use visual boards, like in Kanban, to show who is working on what and the status of work tasks.
Strode et al. (2012) explain coordination at the team level in agile teams and propose a model for
coordination strategy and coordination effectiveness.
For large-scale projects, there is less support. Scrum prescribes regular meetings between
Scrum teams (‘Scrum of Scrums’) in order to manage the interfaces between teams.
Eckstein shows techniques that are applicable to large projects in order to facilitate planning, status
information, integration, and retrospectives in her book with recommendations to practitioners
(Eckstein 2004).
Some large-scale agile frameworks have been suggested by practitioners (Larman and Vodde 2013; Larman and Vodde 2017; Leffingwell et al. 2017) that describe roles and arenas for inter-team coordination.
Visual boards were the primary form of inter-team coordination in a project in Sweden described in an experience report by Kniberg (2011).
There is a small body of studies on inter-team coordination.
Vlietland and van Vliet (2015) propose that embedded coordination practices within and between Scrum teams positively impact delivery predictability in large projects.
A study of ‘Scrum of Scrums’ (Paasivaara et al. 2012) suggests that this forum did not lead to satisfactory coordination: feature-specific or site-specific fora were better, but coordination at the project level was still a challenge.
Researchers working closely with SAP (Scheerer et al. 2014; Scheerer and Kude 2014) have
developed models of coordination called ‘coordination configurations’ and are exploring how
coordination configuration influences coordination effectiveness.
Paasivaara and Lassenius (2014) describe a very large-scale development initiative at Ericsson with 40 teams where four types of communities of practice (Wenger 1998) are used to coordinate teams.
A survey on coordination in large-scale software teams found that respondents wished for more effective and efficient communication, and emphasized the importance of good personal relationships for coordination (Begel et al. 2009).
A management science study (Ingvaldsen and Rolfsen 2012) suggests that inter-group
coordination is a major challenge when groups are self-managing.
Self-management involves giving teams the authority to decide how to 1) execute tasks, and 2) monitor and manage their work process (Hackman 1986).
Moe et al. (2009) describe challenges to self-management at the team level and highlight challenges at the organizational level, such as shared resources, organizational control, and specialist culture.
A Software Development Process for Small-Scale Embedded Systems
Developing software for small-scale embedded applications is different from developing large-scale software applications. Large-scale applications use commercially available ‘one fits all’ software development solutions that are difficult to scale downward and usually miss the desired process goals.
In many cases, developing a small-scale software application development process within an existing corporate environment is quicker, less expensive, and results in superior developer productivity and product quality.
Figure 1: A selected group of use-case instances describe an iteration cycle in the implementation and test and verification phases of the development process.
Software pioneer Grady Booch famously commented that “Building quality software in a repeatable and predictive fashion is hard.”
This statement not only describes the difficulty of the software development process, but also describes the primary goal of any software development process — software products should be defect-free, maintainable, and have veracity requirements to guarantee a successful operation.
Software development processes can be fully described by four orthogonal views: methodology, process artifacts, process procedures, and quality assurance.
This article will focus on the methodology view.
ANALYSIS PHASE
Figure 2: Iterative development is use-case-driven; they are the information baseline source for all other documents.
DESIGN PHASE
Figure 3: The development phases are shown as UML swim lanes. Note that the maintenance phase uses the same process phases as the initial software development.
IMPLEMENTATION PHASE
TEST AND VERIFICATION (T&V) PHASE
MAINTENANCE PHASE
Build automation
In professional embedded software projects - especially in the industrial environment - we assume product life cycles of several years. Device families also "live" for decades, so valid build and software management for board support packages over that time is essential to enable long-term, economical development.
At the same time, it has to preserve the desired transparency and composability of open source software, as well as enable participation in innovations and continuous security updates from the community.
Flexible configuration and variant management for embedded Linux board support packages
At the latest, when Embedded Linux operating system software is subject to certification, you need to be able to prove a valid and reproducible build process with reliable version management.
Build management, or build automation with appropriate software change management, consists on the one hand of tooling and infrastructure, and on the other hand of defined processes.
Building the software must also be independent of the configuration of any single machine and of the expertise of any single developer.
emlix developed e2factory to meet these requirements, and it has been used in various projects subject to certification.
The software management and build system has been continuously maintained and developed since 2003 and is now used in several hundred development projects as well as maintenance and platform strategies.
e2factory is subject to GPLv3 and is freely available as a development tool.
With less stringent requirements for process safety over the life cycle, and the simultaneous need to provide any application developer with a suitable development environment, a reduced Yocto approach with the BitBake build system and the minimal distribution Poky-Tiny is an alternative.
Build automation software
Not all Java development is done through Eclipse, and not all JARs can (or should) be built by hand from the command line.
You may additionally need to run test cases, unit tests, and many, many other processes.
What Ant does is provide a mechanism to automate all of this work (so you don't have to do it every time); you might, for example, invoke this Ant script each day at 6 p.m.
For instance, in some projects a daily build is needed; the following are tasks that may be automated with Ant so they can run without human intervention.
- Connect to the subversion server.
- Download/update with the latest version
- Compile the application
- Run the test cases
- Pack the application ( in a jar, war, ear, or whatever )
- Commit this build binaries to subversion.
- Install the application in a remote server
- Restart the server
- Send an email with the summary of the job.
Of course, for some projects this is overkill, but for others it is very helpful.
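A rough sketch of how such a chain can be wired together is shown below: one Ant target per step, chained with depends. The target names and paths are illustrative; the Subversion, test, commit/deploy, and mail steps are only indicated as comments because they rely on optional or third-party Ant tasks (e.g. SvnAnt, <junit>, <scp>, <mail>).
<project name="nightly" default="daily-build">
<target name="update">
<!-- check out / update from Subversion, e.g. with the third-party SvnAnt task -->
</target>
<target name="compile" depends="update">
<mkdir dir="build/classes"/>
<javac srcdir="src" destdir="build/classes"/>
</target>
<target name="test" depends="compile">
<!-- run the unit tests, e.g. with Ant's optional <junit> task -->
</target>
<target name="package" depends="test">
<jar destfile="build/app.jar" basedir="build/classes"/>
</target>
<target name="daily-build" depends="package">
<!-- commit the binaries, install on a remote server, restart it, send a summary mail -->
<echo message="Nightly build finished."/>
</target>
</project>
Running ant daily-build from a scheduler (cron, Jenkins, etc.) then executes the whole chain without human intervention.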
Java Build Tools: Ant vs Maven vs Gradle
In the beginning, there was Make as the only build tool available.
Later on, it was improved with GNU Make. However, since then our needs increased and, as a result, build tools evolved.
JVM ecosystem is dominated with three build tools:
🕶Apache Ant with Ivy
🕶Maven
🕶Gradle
🌞Ant with Ivy
Ant was the first among “modern” build tools. In many aspects, it is similar to Make.
It was released in 2000 and in a short period of time became the most popular build tool for Java projects.
It has a very low learning curve, thus allowing anyone to start using it without any special preparation. It is based on the idea of procedural programming.
After its initial release, it was improved with the ability to accept plug-ins.
The major drawback was XML as the format for writing build scripts. XML, being hierarchical in nature, is not a good fit for the procedural programming approach Ant uses.
Another problem with Ant is that its XML tends to become unmanageably big when used with all but very small projects.
Later on, as dependency management over the network became a must, Ant adopted Apache Ivy.
The main benefit of Ant is its control of the build process.
🌞Maven
Maven was released in 2004. Its goal was to improve upon some of the problems developers were facing when using Ant.
Maven continues using XML as the format to write build specification. However, the structure is diametrically different.
While Ant requires developers to write all the commands that lead to the successful execution of some task, Maven relies on conventions and provides the available targets (goals) that can be invoked.
As an additional, and probably most important, improvement, Maven introduced the ability to download dependencies over the network (later on adopted by Ant through Ivy).
That in itself revolutionized the way we deliver software.
However, Maven has its own problems.
Dependencies management does not handle conflicts well between different versions of the same library (something Ivy is much better at).
XML as the build configuration format is strictly structured and highly standardized.
Customization of targets (goals) is hard. Since Maven is focused mostly on dependency management, complex, customized build scripts are actually harder to write in Maven than in Ant.
Maven configuration, written in XML, continues to be big and cumbersome.
On bigger projects, it can have hundreds of lines of code without actually doing anything “extraordinary”.
The main benefit of Maven is its life-cycle. As long as the project is based on certain standards, with Maven one can pass through the whole life cycle with relative ease.
This comes at a cost of flexibility.
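For example, on a project that follows the standard conventions, a single command is usually enough to pass through the whole life cycle, from compilation and testing to installing the artifact in the local repository:
mvn clean install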
In the meantime, the interest for DSLs (Domain Specific Languages) continued increasing. The idea is to have languages designed to solve problems belonging to a specific domain. In the case of builds, one of the results of applying DSL is Gradle.
🌞Gradle
Gradle combines good parts of both tools and builds on top of them with a DSL and other improvements. It has Ant's power and flexibility with Maven's life-cycle and ease of use.
An end result is a tool that was released in 2012 and gained a lot of attention in a short period of time.
For example, Google adopted Gradle as the default build tool for the Android OS.
Gradle does not use XML.
Instead, it has its own DSL based on Groovy (one of the JVM languages).
As a result, Gradle build scripts tend to be much shorter and clearer than those written for Ant or Maven.
The amount of boilerplate code is much smaller with Gradle since its DSL is designed to solve a specific problem: move software through its life cycle, from compilation through static analysis and testing until packaging and deployment.
Initially, Gradle used Apache Ivy for its dependency management. Later on, it moved to its own native dependency resolution engine.
Gradle effort can be summed as “convention is good and so is flexibility”.
🌻Code examples
We’ll create build scripts that will compile, perform static analysis, run unit tests and, finally, create JAR files.
We’ll do those operations in all three frameworks (Ant, Maven, and Gradle) and compare the syntax.
By comparing the code for each task we’ll be able to get a better understanding of the differences and make an informed decision regarding the choice of the build tool.
First things first. If you want to follow the examples in this article yourself, you'll need Ant, Ivy, Maven, and Gradle installed. Please follow the installation instructions provided by the makers of those tools.
You can choose not to run examples by yourself and skip the installation altogether.
Code snippets should be enough to give you the basic idea of how each of the tools works.
Code repository https://github.com/vfarcic/JavaBuildTools contains the java code (two simple classes with corresponding tests), check style configuration and Ant, Ivy, Maven, and Gradle configuration files.
Let’s start with Ant and Ivy.
❤Ant with Ivy
Ivy dependencies need to be specified in the ivy.xml file.
Our example is fairly simple and requires only JUnit and Hamcrest dependencies.
[ivy.xml]
<ivy-module version="2.0">
<info organisation="org.apache" module="java-build-tools"/>
<dependencies>
<dependency org="junit" name="junit" rev="4.11"/>
<dependency org="org.hamcrest" name="hamcrest-all" rev="1.3"/>
</dependencies>
</ivy-module>
Now we’ll create our Ant build script. Its task will be only to compile a JAR file. The end result is the following build.xml.
<project xmlns:ivy="antlib:org.apache.ivy.ant" name="java-build-tools" default="jar">
<property name="src.dir" value="src"/>
<property name="build.dir" value="build"/>
<property name="classes.dir" value="${build.dir}/classes"/>
<property name="jar.dir" value="${build.dir}/jar"/>
<property name="lib.dir" value="lib" />
<path id="lib.path.id">
<fileset dir="${lib.dir}" />
</path>
<target name="resolve">
<ivy:retrieve />
</target>
<target name="clean">
<delete dir="${build.dir}"/>
</target>
<target name="compile" depends="resolve">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}" classpathref="lib.path.id"/>
</target>
<target name="jar" depends="compile">
<mkdir dir="${jar.dir}"/>
<jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}"/>
</target>
</project>
First, we specify several properties. From there on, it is one target after another. We use Ivy to resolve dependencies, clean, compile and, finally, create the JAR file. That is quite a lot of configuration for a task that almost every Java project needs to perform.
To run the Ant target that creates the JAR file, execute the following.
ant jar
Let’s see how would Maven does the same set of tasks.
❤Maven
[pom.xml]
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.technologyconversations</groupId>
<artifactId>java-build-tools</artifactId>
<packaging>jar</packaging>
<version>1.0</version>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-all</artifactId>
<version>1.3</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
</plugin>
<!--verify-->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<version>2.12.1</version>
<executions>
<execution>
<configuration>
<configLocation>config/checkstyle/checkstyle.xml</configLocation>
<consoleOutput>true</consoleOutput>
<failsOnError>true</failsOnError>
</configuration>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>2.5.4</version>
<executions>
<execution>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-pmd-plugin</artifactId>
<version>3.1</version>
<executions>
<execution>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
To run the Maven goal that runs both unit tests and static analysis with CheckStyle, FindBugs, and PMD, execute the following.
mvn verify
We had to write a lot of XML that does some very basic and commonly used set of tasks.
On real projects with a lot more dependencies and tasks, Maven pom.xml files can easily reach hundreds or even thousands of lines of XML.
Here’s how the same looks in Gradle.
❤Gradle
[build.gradle]
apply plugin: 'java'
apply plugin: 'checkstyle'
apply plugin: 'findbugs'
apply plugin: 'pmd'
version = '1.0'
repositories {
mavenCentral()
}
dependencies {
testCompile group: 'junit', name: 'junit', version: '4.11'
testCompile group: 'org.hamcrest', name: 'hamcrest-all', version: '1.3'
}
task wrapper(type: Wrapper) {
gradleVersion = '1.12'
}
Not only is the Gradle code much shorter and, to those familiar with Gradle, easier to understand than Maven's, but it actually introduces many useful tasks not covered by the Maven code we just wrote.
To get the list of all tasks that Gradle can run with the current configuration, please execute the following.
gradle tasks --all
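To actually run the whole chain defined above (compilation, the Checkstyle/FindBugs/PMD checks, the unit tests, and JAR packaging), a single command should suffice, since the java plugin's build task depends on check and assemble:
gradle clean build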
Clarity, complexity and the learning curve
For newcomers, Ant is the clearest tool of all. Just by reading the configuration XML one can understand what it does.
However, writing Ant tasks easily gets very complex. Maven and, especially, Gradle have a lot of tasks already available out of the box or through plugins. For example, by seeing the following line it is probably not clear to those not initiated into the mysteries of Gradle which tasks will be unlocked for us to use.
[build.gradle]
apply plugin: 'java'
This simple line of code adds 20+ tasks waiting for us to use.
Ant’s readability and Maven’s simplicity are, in my opinion, false arguments that apply only during the short initial Gradle learning curve.
Once one is used to the Gradle DSL, its syntax is shorter and easier to understand than those employed by Ant or Maven. Moreover, only Gradle offers both conventions and the freedom to create custom tasks.
While Maven can be extended with Ant tasks, it is tedious and not very productive.
Gradle with Groovy brings it to the next level.
Build Lifecycle
A Build Lifecycle is a well-defined sequence of phases, which define the order in which goals are to be executed. Here a phase represents a stage in the life cycle. As an example, a typical Maven build passes through phases such as validate, compile, test, package, verify, install, and deploy.
There are always pre and post phases to register goals, which must run prior to, or after a particular phase.
When Maven starts building a project, it steps through a defined sequence of phases and executes goals, which are registered with each phase.
Maven has the following three standard lifecycles −
➦clean
➦default(or build)
➦site
A goal represents a specific task which contributes to the building and managing of a project.
It may be bound to zero or more build phases.
A goal not bound to any build phase could be executed outside of the build lifecycle by direct invocation.
The order of execution depends on the order in which the goal(s) and the build phase(s) are invoked. For example, consider the command below.
The clean and package arguments are build phases while the dependency:copy-dependencies is a goal.
mvn clean dependency:copy-dependencies package
Here the clean phase will be executed first, followed by the dependency:copy-dependencies goal, and finally, the package phase will be executed.
Clean Lifecycle
When we execute the mvn post-clean command, Maven invokes the clean lifecycle consisting of the following phases.
🔶pre-clean
🔶clean
🔶post-clean
The Maven clean goal (clean:clean) is bound to the clean phase in the clean lifecycle.
The clean:clean goal deletes the output of a build by deleting the build directory.
Thus, when the mvn clean command executes, Maven deletes the build directory.
We can customize this behavior by mentioning goals in any of the above phases of clean life cycle.
In the following example, we'll attach the maven-antrun-plugin:run goal to the pre-clean, clean, and post-clean phases.
This will allow us to echo text messages displaying the phases of the clean lifecycle.
We've created a pom.xml in C:\MVN\project folder.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.projectgroup</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<id>id.pre-clean</id>
<phase>pre-clean</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>pre-clean phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.clean</id>
<phase>clean</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>clean phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.post-clean</id>
<phase>post-clean</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>post-clean phase</echo>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Now open command console, go to the folder containing pom.xml and execute the following mvn command.
C:\MVN\project>mvn post-clean
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [post-clean]
[INFO] ------------------------------------------------------------------
[INFO] [antrun:run {execution: id.pre-clean}]
[INFO] Executing tasks
[echo] pre-clean phase
[INFO] Executed tasks
[INFO] [clean:clean {execution: default-clean}]
[INFO] [antrun:run {execution: id.clean}]
[INFO] Executing tasks
[echo] clean phase
[INFO] Executed tasks
[INFO] [antrun:run {execution: id.post-clean}]
[INFO] Executing tasks
[echo] post-clean phase
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: < 1 second
[INFO] Finished at: Sat Jul 07 13:38:59 IST 2012
[INFO] Final Memory: 4M/44M
[INFO] ------------------------------------------------------------------
You can try running the mvn clean command, which will display pre-clean and clean.
Nothing will be executed for the post-clean phase.
Default (or Build) Lifecycle
This is the primary life cycle of Maven and is used to build the application. It consists of 23 phases, running from validate through compile, test, package, verify, and install, up to deploy. There are a few important concepts related to Maven lifecycles which are worth mentioning −
When a phase is called via Maven command, for example, mvn compile, only phases up to and including that phase will execute.
Different maven goals will be bound to different phases of Maven lifecycle depending upon the type of packaging (JAR / WAR / EAR).
In the following example, we will attach the maven-antrun-plugin:run goal to a few of the phases of the Build Lifecycle.
This will allow us to echo text messages displaying the phases of the lifecycle.
We've updated pom.xml in C:\MVN\project folder.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.projectgroup</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<id>id.validate</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>validate phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.compile</id>
<phase>compile</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>compile phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.test</id>
<phase>test</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>test phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.package</id>
<phase>package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>package phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.deploy</id>
<phase>deploy</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>deploy phase</echo>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command.
C:\MVN\project>mvn compile
Maven will start processing and display phases of build life cycle up to the compile phase.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [compile]
[INFO] ------------------------------------------------------------------
[INFO] [antrun:run {execution: id.validate}]
[INFO] Executing tasks
[echo] validate phase
[INFO] Executed tasks
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory C:\MVN\project\src\main\resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [antrun:run {execution: id.compile}]
[INFO] Executing tasks
[echo] compile phase
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: 2 seconds
[INFO] Finished at: Sat Jul 07 20:18:25 IST 2012
[INFO] Final Memory: 7M/64M
[INFO] ------------------------------------------------------------------
Site Lifecycle
The Maven Site Plugin is generally used to create fresh documentation, create reports, deploy a site, etc. It has the following phases −
➤pre-site
➤site
➤post-site
➤site-deploy
In the following example, we will attach the maven-antrun-plugin:run goal to all the phases of the Site lifecycle. This will allow us to echo text messages displaying the phases of the lifecycle.
We've updated pom.xml in C:\MVN\project folder.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.projectgroup</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<id>id.pre-site</id>
<phase>pre-site</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>pre-site phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.site</id>
<phase>site</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>site phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.post-site</id>
<phase>post-site</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>post-site phase</echo>
</tasks>
</configuration>
</execution>
<execution>
<id>id.site-deploy</id>
<phase>site-deploy</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>site-deploy phase</echo>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command.
C:\MVN\project>mvn site
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [site]
[INFO] ------------------------------------------------------------------
[INFO] [antrun:run {execution: id.pre-site}]
[INFO] Executing tasks
[echo] pre-site phase
[INFO] Executed tasks
[INFO] [site:site {execution: default-site}]
[INFO] Generating "About" report.
[INFO] Generating "Issue Tracking" report.
[INFO] Generating "Project Team" report.
[INFO] Generating "Dependencies" report.
[INFO] Generating "Project Plugins" report.
[INFO] Generating "Continuous Integration" report.
[INFO] Generating "Source Repository" report.
[INFO] Generating "Project License" report.
[INFO] Generating "Mailing Lists" report.
[INFO] Generating "Plugin Management" report.
[INFO] Generating "Project Summary" report.
[INFO] [antrun:run {execution: id.site}]
[INFO] Executing tasks
[echo] site phase
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: 3 seconds
[INFO] Finished at: Sat Jul 07 15:25:10 IST 2012
[INFO] Final Memory: 24M/149M
[INFO] ------------------------------------------------------------------
Maven
Maven is a project management and comprehension tool that provides developers a complete build lifecycle framework. A development team can automate the project's build infrastructure in almost no time as Maven uses a standard directory layout and a default build lifecycle.
In an environment with multiple development teams, Maven can set up a standard way of working in a very short time.
As most project setups are simple and reusable, Maven makes the developer's life easier when creating reports, checks, and build and test automation setups.
Maven provides developers ways to manage the following −
- Builds
- Documentation
- Reporting
- Dependencies
- SCMs
- Releases
- Distribution
- Mailing list
To summarize, Maven simplifies and standardizes the project build process.
It handles compilation, distribution, documentation, team collaboration and other tasks seamlessly.
Maven increases reusability and takes care of most of the build related tasks.
Maven Evolution
Maven was originally designed to simplify building processes in the Jakarta Turbine project.
There were several projects and each project contained slightly different ANT build files. JARs were checked into CVS.
The Apache group then developed Maven, which can build multiple projects together, publish project information, deploy projects, share JARs across several projects, and help teams collaborate.
Objective
The primary goal of Maven is to provide the developer with the following −
- A comprehensive model for projects, which is reusable, maintainable, and easier to comprehend.
- Plugins or tools that interact with this declarative model.
Maven project structure and contents are declared in an XML file, pom.xml, referred to as Project Object Model (POM), which is the fundamental unit of the entire Maven system. In later chapters, we will explain POM in detail.
Convention over Configuration
Maven uses Convention over Configuration, which means developers are not required to create the build process themselves.
Developers do not have to mention each and every configuration detail.
Maven provides sensible default behavior for projects.
When a Maven project is created, Maven creates a default project structure.
The developer is only required to place files accordingly and he/she need not define any configuration in pom.xml.
As an example, the following list shows the default locations for project source code files, resource files, and other artifacts.
Assuming ${basedir} denotes the project location −
- source code: ${basedir}/src/main/java
- resources: ${basedir}/src/main/resources
- tests: ${basedir}/src/test/java
- test resources: ${basedir}/src/test/resources
- compiled byte code: ${basedir}/target/classes
- distributable JAR: ${basedir}/target
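If a project has to deviate from these defaults, the locations can be overridden in the build section of pom.xml. A minimal sketch (the directory names are illustrative):
<build>
<sourceDirectory>src/java</sourceDirectory>
<testSourceDirectory>src/test-java</testSourceDirectory>
<resources>
<resource>
<directory>src/conf</directory>
</resource>
</resources>
</build>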
In order to build the project, Maven provides developers with options to mention life-cycle goals and project dependencies (that rely on Maven plugin capabilities and on its default conventions).
Much of the project management and build related tasks are maintained by Maven plugins.
Developers can build any given Maven project without the need to understand how the individual plugins work.
We will discuss Maven Plugins in detail in the later chapters.
Features of Maven
- Simple project setup that follows best practices.
- Consistent usage across all projects.
- Dependency management including automatic updating.
- A large and growing repository of libraries.
- Extensible, with the ability to easily write plugins in Java or scripting languages.
- Instant access to new features with little or no extra configuration.
- Model-based builds − Maven is able to build any number of projects into predefined output types such as jar, war, or metadata.
- A coherent site of project information − Using the same metadata as the build process, Maven is able to generate a website and a PDF including complete documentation.
- Release management and distribution publication − Without additional configuration, Maven will integrate with your source control system (such as CVS) and manage the release of a project.
- Backward compatibility − You can easily port the multiple modules of a project into Maven 3 from older versions of Maven. It can support the older versions as well.
- Automatic parent versioning − No need to specify the parent version in the sub-module for maintenance.
- Parallel builds − Maven analyzes the project dependency graph and enables you to build and schedule modules in parallel; this can yield performance improvements of 20-50%.
- Better error and integrity reporting − Maven has improved error reporting, and it provides you with a link to the Maven wiki page where you will get a full description of the error.
Maven Build Lifecycle
The Maven build follows a specific life cycle to deploy and distribute the target project. There are three built-in life cycles:
- default: the main life cycle as it’s responsible for project deployment
- clean: to clean the project and remove all files generated by the previous build
- site: to create the project’s site documentation
Each life cycle consists of a sequence of phases.
The default build life cycle consists of 23 phases as it’s the main build lifecycle.
On the other hand, the clean life cycle consists of 3 phases, while the site lifecycle is made up of 4 phases.
Maven Phase
A Maven phase represents a stage in the Maven build lifecycle. Each phase is responsible for a specific task.
Here are some of the most important phases in the default build lifecycle:
- validate: check if all information necessary for the build is available
- compile: compile the source code
- test-compile: compile the test source code
- test: run unit tests
- package: package compiled source code into the distributable format (jar, war, …)
- integration-test: process and deploy the package if needed to run integration tests
- install: install the package to a local repository
- deploy: copy the package to the remote repository
For the full list of each lifecycle’s phases, check out the Maven Reference.
Phases are executed in a specific order.
This means that if we run a specific phase using the command:
mvn <PHASE>
This won’t only execute the specified phase but all the preceding phases as well.
For example, if we run the deploy phase – which is the last phase in the default build lifecycle – that will execute all phases before the deploy phase as well, which is the entire default lifecycle:
mvn deploy
Maven Goal
Each phase is a sequence of goals, and each goal is responsible for a specific task. When we run a phase, all goals bound to this phase are executed in order.
Here are some of the phases and default goals bound to them:
- compiler:compile – the compile goal from the compiler plugin is bound to the compile phase
- compiler:testCompile is bound to the test-compile phase
- surefire:test is bound to the test phase
- install:install is bound to the install phase
- jar:jar and war:war are bound to the package phase
We can list all goals bound to a specific phase and their plugins using the command:
mvn help:describe -Dcmd=PHASENAME
For example, to list all goals bound to the compile phase, we can run:
mvn help:describe -Dcmd=compile
And get the sample output:
'compile' is a phase corresponding to this plugin:
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
Maven - Build Profiles
Build Profile
A build profile is a set of configuration values which can be used to set or override default values of the Maven build. Using a build profile, you can customize the build for different environments, such as production vs. development.
Profiles are specified in the pom.xml file using its profiles element and are triggered in a variety of ways (for example, via the activeProfiles element of settings.xml).
Profiles modify the POM at build time and are used to give parameters for different target environments (for example, the path of the database server in the development, testing, and production environments).
Types of Build Profile
Build profiles are of three major types: per-project (defined in the project's pom.xml), per-user (defined in the Maven settings.xml), and global (defined in the global settings.xml).
Profile Activation
A Maven build profile can be activated in various ways −
- Explicitly using command console input.
- Through maven settings.
- Based on environment variables (User/System variables).
- OS Settings (for example, Windows family).
- Present/missing files.
Profile Activation Examples
Let us assume the following directory structure for your project. Under src/main/resources, there are three environment-specific files: env.properties (default), env.test.properties (test), and env.prod.properties (production).
Explicit Profile Activation
This will allow us to echo text messages for different profiles.
We will be using pom.xml to define different profiles and will activate profile at command console using maven command.
Assume, we've created the following pom.xml in C:\MVN\project folder.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.projectgroup</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<profiles>
<profile>
<id>test</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<phase>test</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>Using env.test.properties</echo>
<copy file="src/main/resources/env.test.properties"
tofile="${project.build.outputDirectory}
/env.properties"/>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command. Pass the profile name as argument using -P option.
C:\MVN\project>mvn test -Ptest
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [test]
[INFO] ------------------------------------------------------------------
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources,
i.e. build is platform dependent!
[INFO] Copying 3 resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [resources:testResources {execution: default-testResources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources,
i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory C:\MVN\project\src\test\resources
[INFO] [compiler:testCompile {execution: default-testCompile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [surefire:test {execution: default-test}]
[INFO] Surefire report directory: C:\MVN\project\target\surefire-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
There are no tests to run.
Results :
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] [antrun:run {execution: default}]
[INFO] Executing tasks
[echo] Using env.test.properties
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: 1 second
[INFO] Finished at: Sun Jul 08 14:55:41 IST 2012
[INFO] Final Memory: 8M/64M
[INFO] ------------------------------------------------------------------
Now as an exercise, you can perform the following steps −
⭐Add another profile element to the profiles element of pom.xml (copy the existing profile element and paste it where the profile element ends).
⭐Update the id of this profile element from test to normal.
⭐Update the tasks section to echo env.properties and to copy env.properties to the target directory.
⭐Repeat the above three steps once more, updating the id to prod and the tasks section for env.prod.properties.
That's all.
Now you've three build profiles ready (normal/test/prod).
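For reference, the extra profile element for the normal environment might look like the sketch below; the prod profile is analogous, echoing and copying env.prod.properties instead.
<profile>
<id>normal</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<phase>test</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>Using env.properties</echo>
<copy file="src/main/resources/env.properties"
tofile="${project.build.outputDirectory}/env.properties"/>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>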
Now open the command console, go to the folder containing pom.xml and execute the following mvn commands. Pass the profile names as an argument using -P option.
C:\MVN\project>mvn test -Pnormal
C:\MVN\project>mvn test -Pprod
Profile Activation via Maven Settings
Open the Maven settings.xml file available in the %USER_HOME%/.m2 directory, where %USER_HOME% represents the user home directory. If the settings.xml file is not there, then create a new one. Add the test profile as an active profile using the activeProfiles node, as shown below in the example.
<settings xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>maven.dev.snaponglobal.com</id>
<name>Internal Artifactory Maven repository</name>
<url>http://repo1.maven.org/maven2/</url>
<mirrorOf>*</mirrorOf>
</mirror>
</mirrors>
<activeProfiles>
<activeProfile>test</activeProfile>
</activeProfiles>
</settings>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command. Do not pass the profile name using the -P option. Maven will show the result of the test profile being active.
C:\MVN\project>mvn test
Profile Activation via Environment Variables
Now remove the active profile from Maven's settings.xml and update the test profile in pom.xml. Add an activation element to the profile element, as shown below.
The test profile will be triggered when the system property "env" is specified with the value "test", for example by passing -Denv=test on the command line (an OS environment variable would instead be matched using the prefixed property name env.env).
<profile>
<id>test</id>
<activation>
<property>
<name>env</name>
<value>test</value>
</property>
</activation>
</profile>
Let's open the command console, go to the folder containing pom.xml and execute the following mvn command, passing the property on the command line.
C:\MVN\project>mvn test -Denv=test
Profile Activation via Operating System
Update the activation element to include OS details as shown below. This test profile will be triggered when the system is Windows XP.
<profile>
<id>test</id>
<activation>
<os>
<name>Windows XP</name>
<family>Windows</family>
<arch>x86</arch>
<version>5.1.2600</version>
</os>
</activation>
</profile>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command.
Do not pass the profile name using the -P option. Maven will show the result of the test profile being active.
C:\MVN\project>mvn test
Profile Activation via Present/Missing File
Now update the activation element to include file details as shown below. The test profile will be triggered when target/generated-sources/axistools/wsdl2java/com/companyname/group is missing.
<profile>
<id>test</id>
<activation>
<file>
<missing>target/generated-sources/axistools/wsdl2java/
com/companyname/group</missing>
</file>
</activation>
</profile>
Now open the command console, go to the folder containing pom.xml and execute the following mvn command.
Do not pass the profile name using the -P option. Maven will show the result of the test profile being active.
C:\MVN\project>mvn test
Maven - Manage Dependencies
One of the core features of Maven is dependency management. Managing dependencies is a difficult task once we have to deal with multi-module projects (consisting of hundreds of modules/sub-projects). Maven provides a high degree of control to manage such scenarios.
Transitive Dependencies Discovery
It is quite often the case that a library, say A, depends upon another library, say B. If another project C wants to use A, then that project needs library B too. Maven helps to avoid the need to discover all the required libraries manually. It does so by reading the project files (pom.xml) of the dependencies, figuring out their dependencies, and so on.
We only need to define direct dependency in each project pom. Maven handles the rest automatically.
With transitive dependencies, the graph of included libraries can quickly grow to a large extent. Cases can arise when there are duplicate libraries. Maven provides a few features to control the extent of transitive dependencies.
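One of these features is dependency exclusion: a project can explicitly cut an unwanted transitive dependency out of its graph. A minimal sketch (the artifact names are purely illustrative):
<dependency>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-Core-lib</artifactId>
<version>1.0</version>
<exclusions>
<exclusion>
<groupId>com.companyname.groupname2</groupId>
<artifactId>Lib2</artifactId>
</exclusion>
</exclusions>
</dependency>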
Dependency Scope
Transitive dependency discovery can also be restricted using the various dependency scopes: compile (the default, available on all classpaths), provided (expected to be supplied by the JDK or a container at runtime), runtime (required for execution but not for compilation), test (only for compiling and running tests), system (similar to provided, but the JAR has to be supplied explicitly), and import (only used on a dependency of type pom in the dependencyManagement section).
Dependency Management
Usually, we have a set of projects under a common parent project.
In such a case, we can create a common pom having all the common dependencies and then make this pom the parent of the sub-projects' poms.
The following example will help you understand this concept.
Following are the details of the above dependency graph −
- App-UI-WAR depends upon App-Core-lib and App-Data-lib.
- Root is a parent of App-Core-lib and App-Data-lib.
- Root defines Lib1, lib2, Lib3 as dependencies in its dependency section.
App-UI-WAR
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-UI-WAR</artifactId>
<version>1.0</version>
<packaging>war</packaging>
<dependencies>
<dependency>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-Core-lib</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-Data-lib</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
</project>
App-Core-lib
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>Root</artifactId>
<groupId>com.companyname.groupname</groupId>
<version>1.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-Core-lib</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
</project>
App-Data-lib
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>Root</artifactId>
<groupId>com.companyname.groupname</groupId>
<version>1.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.groupname</groupId>
<artifactId>App-Data-lib</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
</project>
Root
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.companyname.groupname</groupId>
<artifactId>Root</artifactId>
<version>1.0</version>
<packaging>pom</packaging>
<dependencies>
<dependency>
<groupId>com.companyname.groupname1</groupId>
<artifactId>Lib1</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>com.companyname.groupname2</groupId>
<artifactId>Lib2</artifactId>
<version>2.1</version>
</dependency>
<dependency>
<groupId>com.companyname.groupname3</groupId>
<artifactId>Lib3</artifactId>
<version>1.1</version>
</dependency>
</dependencies>
</project>
Now when we build App-UI-WAR project, Maven will discover all the dependencies by traversing the dependency graph and build the application.
From the above example, we can learn the following key concepts −
🥀Common dependencies can be placed at a single place using the concept of parent pom.
Dependencies of App-Data-lib and App-Core-lib project are listed in the Root project
(See the packaging type of Root. It is POM).
🥀There is no need to specify Lib1, lib2, Lib3 as a dependency in App-UI-WAR.
Maven uses the Transitive Dependency Mechanism to manage such detail.
Top 10 Testing Automation Tools for Software Testing
1. Selenium
Selenium is a testing framework to perform web application testing across various browsers and platforms like Windows, Mac, and Linux. Selenium helps the testers to write tests in various programming languages like Java, PHP, C#, Python, Groovy, Ruby, and Perl.
It offers record and playback features to write tests without learning Selenium IDE.
Selenium proudly supports some of the largest, yet well-known browser vendors who make sure they have Selenium as a native part of their browser.
Selenium is undoubtedly the base for most of the other software testing tools in general.
Learn more about Selenium.
2. TestingWhiz
TestingWhiz is a test automation tool with the code-less scripting by Cygnet Infotech, a CMMi Level 3 IT solutions provider.
TestingWhiz tool’s Enterprise edition offers a complete package of various automated testing solutions like web testing, software testing, database testing, API testing, mobile app testing, regression test suite maintenance, optimization, and automation, and cross-browser testing.
TestingWhiz offers various important features like:
⚽Keyword-driven, data-driven testing, and distributed testing
⚽Record and playback test automation framework
⚽Object Eye Internal Recorder
⚽290+ inbuilt testing commands in addition to in-built JavaScript
⚽Integration with bug tracking tools like Jira, Mantis, and FogBugz
⚽Integration with test management tools like HP Quality Center
⚽Risk-based testing
⚽Continuous Integration and Delivery in Agile cycles
Learn more about TestingWhiz.
3. HPE Unified Functional Testing (HP – UFT formerly QTP)
HP QuickTest Professional was renamed to HPE Unified Functional Testing. HPE UFT offers testing automation for functional and regression testing of software applications.
This tool uses the Visual Basic Scripting Edition (VBScript) language to record test processes and to operate the various objects and controls in the application under test.
QTP offers various features like:
🎈Integration with Mercury Business Process Testing and Mercury Quality Center
🎈Unique Smart Object Recognition
🎈Error handling mechanism
🎈Creation of parameters for objects, checkpoints, and data-driven tables
🎈Automated documentation
Learn more about HP – UFT.
4. TestComplete
TestComplete is a functional testing platform by SmartBear Software that offers various solutions to automate testing for desktop, web, and mobile applications. TestComplete offers the following features:
🦋GUI testing
🦋Scripting Language Support – JavaScript, Python, VBScript, JScript, DelphiScript, C++Script & C#Script
🦋Test visualizer
🦋Scripted testing
🦋Test recording and playback
Learn more about TestComplete.
5. Ranorex
Ranorex Studio offers various testing automation tools that cover testing all desktop, web, and mobile applications.
Ranorex offers the following features:
- GUI recognition
- Reusable test codes
- Bug detection
- Integration with various tools
- Record and playback
Learn more about Ranorex.
6. Sahi
Sahi is a testing automation tool to automate web application testing. The open source Sahi is written in the Java and JavaScript programming languages.
Sahi provides the following features:
✪Performs multi-browser testing
✪Supports ExtJS, ZK, Dojo, YUI, etc. frameworks
✪Record and playback on browser testing
Learn more about Sahi.
7. Watir
Watir is an open source testing tool made up of Ruby libraries to automate web application testing. It is pronounced as "water." Watir offers the following features:
- Tests any language-based web application
- Cross-browser testing
- Compatible with business-driven development tools like RSpec, Cucumber, and Test/Unit
- Tests web page’s buttons, forms, links, and their responses
Learn more about Watir.
8. Tosca Testsuite
Tosca Testsuite by Tricentis uses model-based test automation to automate software testing. Tosca Testsuite comes with the following capabilities:
🔼Plan and design test case
🔼Test data provisioning
🔼Service virtualization network
🔼Tests mobile apps
🔼Integration management
🔼Risk coverage
Learn more about Tosca Testsuite.
9. Telerik TestStudio
Telerik TestStudio offers one solution to automate desktop, web, and mobile application testing including UI, load, and performance testing. Telerik TestStudio offers various compatibilities like:
★Support of programming languages like HTML, AJAX, ASP.NET, JavaScript, Silverlight, WPF, and MVC
★Integration with Visual Basic Studio 2010 and 2012
★Record and playback
★Cross-browser testing
★Manual testing
★Integration with bug tracking tools
Learn more about Telerik TestStudio.
10. WatiN
WatiN is an open-source, C#-developed web application testing tool that was inspired by Watir. WatiN supports web application testing for .NET programming languages.

WatiN consists of the following features:
🟐Supports HTML and AJAX website testing
🟐Integration with unit testing tools
🟐Automate browser testing on IE and Firefox
🟐Generates web page screenshots
🟐Native support for Page and Control model
Learn more about WatiN.
Maven in Eclipse: step by step installation
Maven Eclipse plugin installation step by step:
- Open Eclipse IDE
- Click Help -> Install New Software...
- Click Add button at top right corner
- At pop up: fill up Name as "M2Eclipse" and Location as "http://download.eclipse.org/technology/m2e/releases" or http://download.eclipse.org/technology/m2e/milestones/1.0
- Now click OK
After that, the installation will start.
➽Another way to install Maven plug-in for Eclipse:
- Open Eclipse
- Go to Help -> Eclipse Marketplace
- Search by Maven
- Click the "Install" button at "Maven Integration for Eclipse" section
- Follow the instruction step by step
After successful installation, do the following in Eclipse:
- Go to Window --> Preferences
- Observe, Maven is enlisted at left panel
Finally,
- Click on an existing project
- Select Configure -> Convert to Maven Project
Guide to naming conventions on groupId, artifactId, and version
GroupId
GroupId uniquely identifies your project across all projects.
A group ID should follow Java's package name rules.
This means it starts with a reversed domain name you control.
For example,
org.apache.maven, org.apache.commons
Maven does not enforce this rule.
There are many legacy projects that do not follow this convention and instead use single word group IDs.
However, it will be difficult to get a new single word group ID approved for inclusion in the Maven Central repository.
You can create as many subgroups as you want.
A good way to determine the granularity of the groupId is to use the project structure.
That is, if the current project is a multiple module project, it should append a new identifier to the parent's groupId.
For example,
org.apache.maven, org.apache.maven.plugins, org.apache.maven.reporting
ArtifactId
ArtifactId is the name of the jar without version.
If you created it, then you can choose whatever name you want with lowercase letters and no strange symbols.
If it's a third party jar, you have to take the name of the jar as it's distributed.
eg. maven, commons-math
Version
Version: if you distribute the artifact, then you can choose any typical version with numbers and dots
(1.0, 1.1, 1.0.1, ...).
Don't use dates as they are usually associated with SNAPSHOT (nightly) builds. If it's a third party artifact, you have to use their version number whatever it is, and as strange as it can look.
For example,
2.0, 2.0.1, 1.3.1
Packaging
Defines the packaging method.
This could be e.g. a jar, war or ear file.
If the packaging type is a pom, Maven does not create anything for this project, it is just meta-data.
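Putting these conventions together, the coordinates block near the top of a pom.xml typically looks like the sketch below (the values are illustrative):
<groupId>com.companyname.projectgroup</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<packaging>jar</packaging>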
What happens when I add a Maven dependency?
In general, in every pom.xml we find dependencies declared like this −
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.4</version>
</dependency>
Here groupId, artifactId, and version are the 3 keys by which a JAR is uniquely identified.
This combination of 3 values works like a coordinate system for uniquely identifying a point in space using x, y, and z coordinates.
Whenever you issue an mvn package command, Maven tries to add the JAR file indicated by the dependency to your build path. To do this, Maven follows these steps −
- Maven searches your local repository (the default is ~/.m2/repository on Linux). If the dependency/JAR is found there, it adds the JAR file to your build path and then uses the required class files from that JAR for compilation.
- If the dependency is not found in ~/.m2, Maven looks in your private remote repository (if you have configured one using the settings.xml file) and then in the Maven Central remote repository, respectively.
- If you don't have any private repository, it goes directly to the Maven Central remote repository.
- Whenever the JAR is found in a remote repository, it is downloaded and saved in ~/.m2.
- Going forward, when you issue an mvn package command again, Maven no longer searches any repository for the dependency, since it is already in your ~/.m2.
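If you want that local cache somewhere other than the default ~/.m2/repository, you can point Maven to a different directory via the localRepository element of settings.xml; a minimal sketch (the path is illustrative):
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
<localRepository>/opt/maven-repository</localRepository>
</settings>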
Eclipse compilation error: The hierarchy of the type 'Class name' is inconsistent
- It means you are trying to implement a non-existing interface or you're extending a non-existing class.
- Try to refresh your Eclipse.
- If it doesn't work, it may mean that you have a reference to a JAR that is not in the build path.
- Check your project's classpath and verify that the jar containing the interface or the class is in it.
Written by Hansi