DoD is a collection of valuable deliverables required to produce software.
Deliverables that add verifiable/demonstrable value to the product are part of the definition of done: code, code comments, unit tests, integration tests, release notes, design documents, and so on. The definition of done helps frame our thinking to identify the deliverables a team has to complete in order to build software. Focusing on value-added steps allows the team to eliminate wasteful activities that complicate software development efforts. It is a simple list of valuable deliverables.
DoD is the primary reporting mechanism for team members.
My favorite Agile Manifesto value is “Individuals and interactions over processes and tools”. Would it not be effective reporting to simply say, “The feature’s done”? DoD is a simple artifact that adds clarity to that statement: a feature or Product Backlog Item is either done or not done. Using the DoD as the reference for this conversation, a team member can effectively update other team members and the product owner. Note that by “primary reporting mechanism” I do not mean that DoD is the only reporting mechanism used.
DoD is informed by reality.
The Scrum framework sets a very high bar: delivering “potentially shippable software” at the end of every sprint. To me, potentially shippable software is one or more features waiting only on the product owner’s discretion to be released to end users. Teams that can release to end users within at most two days can reasonably be said to have their product in a potentially shippable state. For such teams: Potentially Shippable = Definition of Done.
For other teams still working toward a potentially shippable state, the DoD contains only a subset of the deliverables necessary to release to end users. Such teams have a DoD at various levels:
§ Definition of Done for a Feature (Story or Product Backlog Item)
§ Definition of Done for a Sprint (Collection of features developed within a sprint)
§ Definition of Done for a Release (Potentially shippable state)
There are various factors that influence whether a given activity belongs in the DoD for a feature, a sprint, or a release.
The most important factor is for the team to realistically answer the following questions (a short sketch follows them):
Can we do this activity for each feature? If not, then
Can we do this activity for each sprint? If not, then
We have to do this activity for our release!
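As a minimal sketch of the escalation above (Python is used only to make the idea concrete; the function and flag names are my own, not part of any Scrum tooling), each deliverable lands at the lowest level the team can realistically commit to:

```python
def dod_level(per_feature: bool, per_sprint: bool) -> str:
    """Place a deliverable at the lowest DoD level the team can realistically commit to.

    per_feature -- can the team complete this deliverable for every feature?
    per_sprint  -- if not per feature, can the team complete it every sprint?
    """
    if per_feature:
        return "feature"
    if per_sprint:
        return "sprint"
    return "release"  # everything else has to wait for the release-level DoD


# Example: suppose automated UI tests only run nightly, so they cannot be
# completed per feature but can be completed within each sprint.
print(dod_level(per_feature=False, per_sprint=True))  # -> "sprint"
```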
For activities that cannot be included at the sprint or feature level: “Discuss all of the obstacles which stop them from delivering this each iteration/sprint” – (Building a Definition of Done)
Some common root causes of impediments that I have observed:
a. Team does not have the skill set to incorporate activities into the definition of done for a sprint or for a feature.
b. Team does not have the right set of tools. (Examples: a continuous integration environment, automated builds, servers, etc.)
c. Team members are executing their sprint in mini-waterfalls. Aha! An opportunity to become more cross-functional and share responsibilities across functional silos.
DoD is not static.
DoD changes over time. Organizational support and the team’s ability to remove impediments enable more activities to move into the DoD for a feature or a sprint.
DoD is an auditable checklist.
Task breakdown for a feature/story happens during sprint planning and also within the sprint. The DoD is used to validate whether all major tasks are accounted for (hours remaining). Also, after a feature or a sprint is done, the DoD is used as a checklist to verify that all necessary value-added activities were completed. It is important to note that the generic nature of the definition of done has limitations: because the DoD is intended to be a comprehensive checklist, not every value-added activity will apply to every feature. The team has to consciously decide whether each value-added activity applies to a given feature. For example, following user experience guidelines is not applicable to a feature that provides an integration point (e.g., a web service) to another system, whereas other features within the system that interface with a human being do require user experience guidelines to be followed.
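To make the audit concrete, here is a small sketch (again Python, purely illustrative; the item names and the feature are hypothetical, not taken from any particular team’s DoD) of a feature-level DoD where each item records whether it applies to the feature being audited and whether it is complete:

```python
# Hypothetical feature-level DoD for a web-service integration feature.
# Each item marks whether it applies to this feature and whether it is done.
feature_dod = {
    "code reviewed":          {"applies": True,  "done": True},
    "unit tests added":       {"applies": True,  "done": True},
    "UX guidelines followed": {"applies": False, "done": False},  # no UI in this feature
    "release notes drafted":  {"applies": True,  "done": False},
}

# Audit: the feature is done only when every applicable item is complete.
missing = [item for item, s in feature_dod.items() if s["applies"] and not s["done"]]
print("Done" if not missing else f"Not done, missing: {missing}")
```

The same idea extends to sprint- and release-level DoDs; the point is simply that “done” becomes verifiable by walking the collection item by item.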
Summary:
The definition of done is orthogonal to the user acceptance criteria (functional acceptance) for a feature. It is a comprehensive collection of the necessary value-added deliverables that assert the quality of a feature, not its functionality. The definition of done is informed by reality: it captures only those deliverables the team can realistically commit to completing at each level (feature, sprint, release).
After some reflection on the language in this post, I chose to change a couple of terms I had used.
1. Replaced the word “checklist” with “collection”.
Why? – In discussions with people, I have noticed that “checklist” carries implied meaning beyond what this post intends. For traditional managers, a checklist implies an order (sequence) and also a notion of “gotta do it”. I now feel that “collection” expresses my opinion better, as it stays away from both of these traditional notions.
2. Replaced the word “activities” with “deliverables”.
Why? – Initially the first point stated that ‘DoD is a checklist of activities…’. This is better stated as ‘DoD is a collection of deliverables…’, since by calling these activities I inadvertently strayed into implying the ‘how’ rather than the ‘what’. The team, through its inspect-and-adapt cycles, is ultimately responsible for figuring out the ‘how’.
Hi Dennis,
Yes, simplicity is a virtue. If you feel there is redundancy, talk to the team about it. See whether both items intend to assert the same aspect of software quality, and go with the team’s consensus. After all, DoD is a tool/artifact to help the team develop quality software at the end of each iteration/sprint.
~ Dhaval
Hello Dhaval,
Thank you for the explanation, it was very helpful. I would also like your opinion on a different matter. When defining DoDs at different levels, some redundancy can be created. For example, the DoD for a feature can contain the following criterion:
– Code has to be unit tested
And the DoD for a sprint can have the criterion:
– All stories should be unit tested.
But the first criterion implies the second. Should this redundancy be removed, or is it still useful in the sprint DoD?
Dennis,
Sure, there will be a different set of tasks for different features. Capturing all tasks for all possible features in the DoD would be overwhelming. I suggest elevating such tasks (e.g., “test UI for valid e-mail entry”) to something more generic (e.g., “acceptance test”).
The DoD helps guide our thinking so that we can generate appropriate tasks for a feature. Acceptance testing for a feature that has UI elements will need different tasks than acceptance testing for a feature that updates a database. The commonality lies in validating: does this feature do what it is supposed to?
Thank you for your comment, Dennis. I hope this was helpful.
cheers!
Dhaval
What about tasks that apply to some features but not all of them? For example, features that need a GUI or a database require some additional testing tasks. Are these tasks also put on the DoD?