03 Dec 2014
Governments across the globe are keen to get citizens to use transactional digital services. Making Digital Government Work for Everyone is a clear and simple strategy that can deliver huge benefits to citizens and governments. It’s not fun when you get it wrong though!
The GDS GOV.UK Verify service has been getting some heat recently as they announced the public beta. Some farmers reacted angrily at being shut out of the new online service for Common Agricultural Policy (CAP) applications and payments because they were unable to verify their identity using the government identity assurance scheme. The opposition Labour Party weighed in, calling on the government to urgently address the issues with “Verify”.
Identity Assurance is a core digital building block that enables public digital services. However, it is a complex and emotive minefield that must be managed carefully to address public concerns about privacy. I was fortunate enough to have implemented the myaccount service in Scotland, which was adopted by the City of Edinburgh Council. What is more, the service transformation was implemented in six months once the OJEU procurement process was completed.
The Scottish Government myaccount service transformed and modernised an outdated system whose golden data sources are owned and managed by the 32 Scottish local authorities. All councils know their citizen details because they must provide services to each and every household and individual throughout their lives whilst they remain within their jurisdiction.
The GDS strategy for identity assurance adopted a different approach where commercial identity providers were contracted to verify and assign trust to the identity of an individual. The accuracy of any identity match will depend on the quality of data and the information that a citizen enters during registration. The easier we make it to narrow down the name and address of a citizen, the simpler the process. Establishing trust is another matter!
The myaccount service establishes trust by linking a citizen's name (as held in health records) and address (as held in the OSG), and then establishing a known fact or secret that only an authentic user can provide, which must match a trusted golden data source of transaction-related data held by a local authority. For example, a citizen may need to provide the myaccount service with a council tax reference number that can be linked to their name and registered address to assign an online and real-time level of assurance. The service has an impressive accuracy for getting a match at first attempt. Where exceptions occur, a manual overnight batch process kicks in to investigate multiple matches and create a myaccount.
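The matching-plus-known-fact flow can be sketched in a few lines. This is illustrative only: the real myaccount matching rules are not public, and every record structure and field name below (including `council_tax_ref` and `person_id`) is a hypothetical stand-in.

```python
# Illustrative sketch only: all data structures and field names here are
# assumptions, not the real myaccount implementation.

def assign_assurance(registration, health_records, council_records):
    """Match a registration against golden data sources and, if a known
    fact (e.g. a council tax reference) also matches, assign assurance."""
    # Step 1: link the claimed name and address to the golden source.
    candidates = [
        r for r in health_records
        if r["name"] == registration["name"]
        and r["address"] == registration["address"]
    ]
    if len(candidates) != 1:
        # Zero or multiple matches fall through to the overnight batch process.
        return "manual-review"

    # Step 2: check the known fact against transactional council data.
    person = candidates[0]
    for rec in council_records:
        if (rec["council_tax_ref"] == registration["council_tax_ref"]
                and rec["person_id"] == person["person_id"]):
            return "verified"
    return "unverified"
```

The key design point survives the simplification: the known fact is only accepted when it links to the *same* person record already matched by name and address, so a leaked council tax number alone is not enough.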
Technical details aside, the most critical measure by which any public service will be judged, especially an identity assurance service like Verify or myaccount, is the spectrum of services that a citizen can access once they have registered for an online identity and provided the necessary proofs. The more services on offer, the less focus there will be on the few users who could not register or make use of the service.
Digital services are driven by the dream of delivering faster, better and smarter. Unfortunately, most public sector organisations and government departments that run such projects work at a very slow pace. Every live digital project within government adopts agile, scrum and plentiful whiteboards. However, the timescales for delivery have time horizons of five years. Agile!!?
My own experience is that the public sector is managed and driven by people with good intent. Sadly, the public sector and government in general have a very poor track record of managing innovative IT initiatives. Keeping it simple and following a truly agile approach to delivery can produce results faster, but the strategy has to be driven by content first: health, education, council services, and so on. How can the citizen access new services easily, efficiently and with reduced delays?
The GDS team are making great inroads and will succeed as service uptake gathers pace. Innovation and change take time to bed down. Good things will eventually follow. I’m an optimist!
01 Mar 2014
Ever wondered why so many projects overrun the schedule, under-deliver on the expectations of the business and cost much more than was originally budgeted? The list of explanations for such disasters is a subject for another post.
The complexity of a project and its technical challenges should not be underestimated. However, a large proportion of contributory factors is very tightly linked to a lack of discipline and rigour in the management and traceability of requirements. There is an important discipline, followed by many quality delivery organisations, that we can call the Application Delivery Methodology (ADM).
Every major corporation has a project management methodology and lifecycle. Unfortunately, very few people understand it and more importantly very rarely does the method assist in simplifying or improving the quality of project deliverables.
The purpose of our ADM is to standardise on a common set of artefacts for all business critical projects across the enterprise. In this article I investigate the importance of the tools required to support and manage the analysis and design phase of projects by managing the creation, maintenance and traceability of the ADM project artefacts and their inter-dependencies.
The ADM has been created to primarily follow the waterfall design approach and more importantly the logical process model for the analysis and design phase of a project as depicted in the diagram below. It is important to note that the definition and alignment of test related artefacts is considered to be critical and within the scope of analysis and design phase of the project lifecycle.
The diagram above identifies the core project artefacts that will be produced to drive the enterprise towards the creation and delivery of a high quality software solution. In general, the IT department will adopt the role of overall design authority for the technical solution and will work with in-house resources, external suppliers and third parties to deliver a robust, scalable, extensible and integrated quality solution on a fixed price basis. The approach is equally applicable to bespoke build, COTS integration or change enhancement to an existing system. Furthermore, the rigour of the proposed approach is equally applicable to application development and infrastructure engagements.
The diagram above identifies the standard set of artefacts (see boxes) and the linkage between these products (see lines). The left hand column depicts the requirements specification artefacts (e.g. BRS, FRS and DDS) that will be produced by (i) the business analyst team, and (ii) the design team. The right hand column identifies the associated validation criteria that are aligned with the requirements specification artefacts by the test team.
At a logical level, the diagram above depicts the inter-dependencies between artefacts. It should be immediately clear from this diagram why IT projects are so inherently complex to manage as the inter-dependencies and hence impacts of decisions within any given artefact can have a profound effect vertically and horizontally. Given that each artefact is targeted to capture the requirements and acceptance criteria of a unique set of users, it is clear that any mis-interpretation or gaps in the artefact will lead to a mismatch in expectations. Experience suggests that where this mismatch is significant, the project will identify defects and incur costs to realign the solution to the original business requirements.
ADM and Traceability
The ADM defines the standard templates for artefacts and more importantly the rules for capturing requirements and specifications in a pragmatic, unambiguous and rigorous manner which is readily understandable by the target audience and easy to interpret for test and verification purposes. However, it is absolutely key that the selected tool provides a simple way for managing traceability and linkage across artefacts and even within artefacts themselves.
Why is this important? The purpose of any IT engagement is to deliver a costed solution to the business requirements and achieve a defect-free sign off against the User Acceptance Criteria. However, in order to achieve this, the analysts, designers and testers must work collaboratively at increasingly finer levels of detail to decompose the problem statement into a functional/logical definition of the solution and ultimately into a detailed/physical representation that can be implemented.
The traceability and linkage across (i) the conceptual business view, (ii) the functional logical view and (iii) the detailed physical view is very complex to manage and hence appropriate tooling and governance must be put in place to make this a practical proposition.
For example, at a high level we require (i) vertical traceability from the BRS to/from the FRS, and similarly (ii) horizontal traceability from the BRS to/from the User Acceptance Testing (UAT).
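These bidirectional links can be held in a very small index structure. The sketch below is an assumption about how such links might be stored; the identifiers such as "BRS-001" are invented for illustration.

```python
# A minimal sketch of bidirectional trace links between artefact items.
# Artefact identifiers like "BRS-001" are illustrative conventions only.

from collections import defaultdict

class TraceIndex:
    def __init__(self):
        self.down = defaultdict(set)   # e.g. BRS -> FRS (vertical), BRS -> UAT (horizontal)
        self.up = defaultdict(set)     # the reverse direction

    def link(self, source, target):
        self.down[source].add(target)
        self.up[target].add(source)

    def trace_from(self, item):
        """Everything derived from this item (downstream)."""
        return sorted(self.down[item])

    def trace_to(self, item):
        """Everything this item was derived from (upstream)."""
        return sorted(self.up[item])

idx = TraceIndex()
idx.link("BRS-001", "FRS-001")   # vertical: business -> functional
idx.link("BRS-001", "UAT-001")   # horizontal: requirement -> acceptance test
```

Because every `link` call records both directions, navigating from a BRS item to its UAT tests and from a UAT test back to its BRS item are equally cheap.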
The complexity of the analysis and design phase and the approach to decomposition is depicted in the diagram below which highlights the extent of documentation and linkage that will occur within even the simplest of projects.
This diagram simply shows another view of the relationships and dependencies described earlier. For example, the horizontal relationship between the BRS and UAT (BRS - UAT) is depicted at the top of the document: from the BRS, one is able to derive a number of User Acceptance Tests, and each user acceptance test can be linked back to a requirement in the BRS. The diagram also shows the vertical linkage between types of testing: UAT - System Test - Integration Test - Unit Test. The linkages highlighted in red show intra-domain linkages, where horizontal inter-dependencies exist across FRS n and FRS n+2. Similar relationships and hierarchies will exist in the testing domain.
Many organisations, when faced with the prospect of managing this number of artefacts, will raise the white flag of surrender and decide to pursue a RAD-based iterative approach. This approach is commonly unacceptable within most corporations, and experience has taught us that it is a false economy: it abdicates responsibility for the analysis to the developers, whose decisions and interpretations are made in isolation during code construction and under extreme time pressure to deliver software to a tight schedule. This will almost always result in increased cost and delays to the final solution delivery. As the complex network of dependencies between analysis, design requirements and test is non-trivial, any attempt to perform this task manually becomes unmanageable without appropriate tooling. The end result of such an approach is inevitably increased risk and exposure under stringent rules and regulations.
Whilst the ADM may appear to force a waterfall approach to application delivery, in actual fact it is highly adaptable to an iterative and agile delivery model. As the methodology is rolled out and experience is gained in scoping and decomposing business requirements into the identified project artefacts, projects can be delivered iteratively and in a more agile manner as a number of thin, inter-related tuples (BRS, FRS, DDS) that each form the basis of an independent work stream.
The selected tool must be able to easily generate a matrix view of requirements such that a detailed conceptual, logical and physical view of requirements can be traced horizontally and vertically, allowing a collaborative team comprising business users, business analysts, project managers, designers, developers and testers to get a consistent view of the inter-dependencies of requirements across the project. This is a very powerful feature that provides a framework for decomposing multiple views that span organisational boundaries.
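A traceability matrix of this kind is straightforward to derive from a flat list of links. The sketch below is an assumed representation, not a description of any particular tool; the artefact identifiers are invented.

```python
# Sketch of deriving a requirements traceability matrix from a flat list
# of (source, target) links. All identifiers are illustrative.

def traceability_matrix(links, rows, cols):
    """Return a dict-of-dicts matrix marking which (row, col) pairs are linked."""
    linked = set(links)
    return {r: {c: (r, c) in linked for c in cols} for r in rows}

links = [("BRS-001", "UAT-001"), ("BRS-002", "UAT-003")]
matrix = traceability_matrix(
    links,
    rows=["BRS-001", "BRS-002"],
    cols=["UAT-001", "UAT-002", "UAT-003"],
)
```

The value of the matrix view is that gaps are visible at a glance: a row with no marks is an untested requirement, and a column with no marks is a test that validates nothing.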
The generation of summary views must be derived and maintained from a consistent master repository which is professionally managed for configuration and change control as a group wide asset in much the same way as any productionised business application.
Business Requirements Traceability
Any tool must be easily configurable to apply appropriate business rules and a hierarchical labelling structure to all information contained within the repository. Specifically, the generation of unique identifiers to Business Requirements is absolutely key as this will be the primary key upon which all downstream requirements will be cross referenced. From the table above, we can see how a physical representation of the desired traceability can actually be displayed. This model allows for linkage of the BRS to inter-related BRS Identifiers and derived FRS and DDS. Furthermore, the linkage to testing is captured at the UAT.
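The unique-identifier scheme described above can be sketched as a small generator. The "BRS-001" label format is an assumed convention chosen for illustration, not a mandated standard.

```python
# A sketch of auto-generating unique, hierarchically labelled identifiers
# per artefact type. The "TYPE-NNN" format is an assumed convention.

class IdGenerator:
    def __init__(self):
        self.counters = {}   # one running counter per artefact type

    def next_id(self, artefact_type):
        n = self.counters.get(artefact_type, 0) + 1
        self.counters[artefact_type] = n
        return f"{artefact_type}-{n:03d}"
```

Once every BRS item carries an identifier like this, it can serve as the primary key that all downstream FRS, DDS and UAT items cross-reference.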
It is highly likely that our standard tool for capturing and managing test requirements will be based on a product such as the Mercury TestDirector suite. Hence, any tool that we identify must be able to support bi-directional integration.
We would expect the tool to provide this view as an active navigational aid to browse to the desired level of detail and with appropriate drill down as required.
Functional Requirements Traceability
The FRS view is identical to the BRS view; however, the context is the perspective of the Designer, so the FRS is the starting point, together with its associated System Tests.
Detailed Design Traceability
The DDS view is identical to the BRS and FRS views; however, the context is the perspective of the Designer/Developer, so the DDS is the starting point, together with its associated Integration Tests and Unit Tests.
The following UML diagram is a meta model of the information and concepts described above. As the meta model shows, there are many 1-to-many relationships between the various artefacts and we will be looking for tools that seamlessly manage these complex associations between the many project artefacts.
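The 1-to-many relationships in the meta model can be expressed directly as nested data classes. The class and field names below are assumptions based on the artefacts named in this article, not a transcription of the UML diagram itself.

```python
# A sketch of the 1-to-many meta model: one BRS item derives many FRS
# items, each FRS item derives many DDS items, and so on. Names assumed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UnitTest:
    ident: str

@dataclass
class DetailedDesign:          # one DDS item
    ident: str
    unit_tests: List[UnitTest] = field(default_factory=list)

@dataclass
class FunctionalRequirement:   # one FRS item
    ident: str
    designs: List[DetailedDesign] = field(default_factory=list)

@dataclass
class BusinessRequirement:     # one BRS item
    ident: str
    functional_reqs: List[FunctionalRequirement] = field(default_factory=list)
    acceptance_tests: List[str] = field(default_factory=list)  # UAT references
```

A tool that "seamlessly manages these associations" is, in effect, maintaining exactly this object graph plus the cross-links between the requirements and testing hierarchies.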
Until we have selected a tool, we will assume that:
All requirements will be captured in Word and imported into the tool
All requirements will be uniquely identified and referenceable through an auto-generated tag or a “link”
All requirements will be linked to an associated requirement and the linkage can be easily navigated up or down
Using the method, the following documents will be generated and be fully traceable:
Business Requirements Specifications
Functional Requirements Specifications
Detailed Design Specifications
User Acceptance Tests
Traceability (vertical and horizontal) amongst these deliverables will be mandatory. The tool must provide an easy and robust mechanism for managing the associations and the tool must:
Provide impact analysis reports, e.g. to be able to quickly determine the impact of a requirement change
Warn about and manage the deletion of artefacts. Obviously, removing an artefact with dependent artefacts needs to be managed and handled correctly
Provide different views on the data, e.g. a tree view where, starting from a requirement say, one can “drill” down into the associated artefacts such as analysis and tests (UAT, system test, unit tests), and
Provide coverage reports, i.e. show whether all expected artefacts for a requirement have been produced and their status (e.g. complete, signed off, work in progress, …). The tool must highlight outstanding artefacts.
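Two of the behaviours in the list above, impact analysis and guarded deletion, can be sketched over the same link data. This is a minimal sketch assuming trace links are stored as (parent, child) pairs; the repository class and its identifiers are hypothetical.

```python
# Sketch of impact analysis and guarded deletion over an assumed flat
# list of (parent, child) trace links. Identifiers are illustrative.

from collections import defaultdict

class Repository:
    def __init__(self, links):
        self.children = defaultdict(set)
        for parent, child in links:
            self.children[parent].add(child)

    def impact_of(self, item):
        """Transitively collect every artefact affected if `item` changes."""
        impacted, stack = set(), [item]
        while stack:
            for child in self.children[stack.pop()]:
                if child not in impacted:
                    impacted.add(child)
                    stack.append(child)
        return impacted

    def delete(self, item):
        """Refuse to delete an artefact that still has dependent artefacts."""
        if self.children[item]:
            raise ValueError(
                f"{item} has dependent artefacts: {sorted(self.children[item])}")
        for deps in self.children.values():
            deps.discard(item)
```

The impact report is just a transitive closure over the downstream links, which is why the change impact of a single BRS edit can fan out across many FRS, DDS and test artefacts.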
The successful rollout of the ADM requires appropriate tooling to support the easy and intuitive approach that we have defined. Specifically, the use of standard templates and the management, navigation and traceability of artefacts across collaborating teams and organisations is considered very important. Whilst the identified tool must have a rich set of features, it is critical that it easily and flexibly handles traceability as described in this document and provides natural integration with the Mercury Test suite of applications.
Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage change and provides the basis for test planning.
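Both directions of that completeness check reduce to simple set queries over the trace links. As before, the (parent, child) link representation and the identifiers are assumed for illustration.

```python
# Sketch of the two completeness checks over assumed (parent, child) links:
# which higher-level requirements have no downstream allocation, and which
# lower-level items trace back to nothing.

def unallocated(higher, links):
    """Higher-level requirements that no lower-level artefact derives from."""
    parents = {p for p, _ in links}
    return sorted(r for r in higher if r not in parents)

def untraced(lower, links):
    """Lower-level artefacts that do not trace back to any requirement."""
    children = {c for _, c in links}
    return sorted(i for i in lower if i not in children)
```

Anything reported by `unallocated` is a requirement that will never be built or tested; anything reported by `untraced` is work being done that no business requirement asked for. Both lists should be empty before sign off.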