Coding

One trick, if you have a strained quality assurance testing staff, is to hold a construction review. This is not a code review; it is a review of a programmer's finished work by another programmer. When programmers complete the code and are ready to send it to testing, they first ask another programmer to try to break it. They do this right at a workstation, where it is very easy for the programmers to immediately fix any problems that are found. Once the built code is stable, they can then send it to the testing group.

In some cases, coding standards are important. Instead of doing a code review at the end of a developer's efforts, a good idea is to perform a code review about 20% into a developer's efforts, and then have a follow-up review after the work is about 80% complete. This practice finds bad habits early, providing enough time for a coder to refactor any needed code while still meeting a deadline.

An extreme but effective practice, found in agile development communities, is Test-Driven Development (TDD). TDD is the practice of creating test cases at the code level for every subroutine in the code. The TDD pattern is called "Red, Green, Clean," and represents the following practice:

1. Creating a subroutine shell,
2. Creating a test case that calls the subroutine,
3. Running the test case, which returns a failure ("Red") because there is no code inside the subroutine,
4. Completing the subroutine's code and calling the test case again, hopefully returning a success ("Green"),
5. Refactoring the code to optimize it for speed and readability.

All of these test cases are connected by one backbone test harness (a minimal sketch of the cycle and harness appears at the end of this section). Although this practice requires a little more overhead for the developers, it ensures 100% code test coverage and makes maintaining an existing product extremely low-risk. As changes are made later to a product, all of the test cases can be run again. If any changes have broken other parts of the software, the test harness will find the error before the customers do.

Testing

Testing should be examined from different perspectives. Initially, a smoke test should be performed, in which the tester verifies that all of the components are in place and have access to whatever data store supplies their data. A simple exploration of every screen, triggering one function per screen to see whether data access is present, is a good idea. Without first performing a smoke test, testers may spend hours testing working components, only to find that additional, needed components were not included in their version. This forces them to start their testing process all over again.

Next should be unit testing, where each screen is tested thoroughly for the desired functionality. Integration testing, or scenario testing, ensures the software is adequate for real-world situations. Good integration tests track the consistency of data entered on one screen with the same data output on other screens or reports (a sketch of such a consistency check also appears at the end of this section). UI tests are always good for ensuring the screens paint and react properly to user input. Make a list of common UI problems, and have your testers check each screen for each problem. A traceability matrix will help with this.

User Acceptance

Once your software has progressed this far, have an end user confirm that the working screens and reports are ready and useful. Be sure to confirm which users performed each test and what their role was. Be sure to include every major role in the user acceptance testing process.
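To make the "Red, Green, Clean" cycle concrete, here is a minimal sketch in Python, using the standard unittest module as the backbone test harness. The subroutine name and its behavior are invented for illustration; the same pattern applies to any language or test framework your team already uses.

```python
# A minimal sketch of the "Red, Green, Clean" cycle, with Python's built-in
# unittest module standing in for the backbone test harness. The subroutine
# parse_invoice_total() and its behavior are hypothetical examples.
import unittest


def parse_invoice_total(line: str) -> float:
    # Step 1: this began as an empty shell (e.g. `raise NotImplementedError`),
    # so the tests below failed first ("Red").
    # Step 4: the body was then filled in until the same tests passed ("Green").
    label, value = line.split(":", 1)
    return float(value.strip())


class InvoiceTests(unittest.TestCase):
    # Step 2: test cases written against the subroutine before its body existed.
    def test_parses_a_well_formed_line(self):
        self.assertEqual(parse_invoice_total("total: 19.99"), 19.99)

    def test_rejects_a_line_with_no_value(self):
        with self.assertRaises(ValueError):
            parse_invoice_total("total:")


if __name__ == "__main__":
    # Steps 3 and 5: one harness runs every test case. Re-running it after each
    # refactoring, and after later changes to the product, catches regressions
    # before customers do.
    unittest.main()
```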
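The data-consistency idea behind integration (scenario) testing can be sketched the same way. The screen and report functions below are hypothetical stand-ins, backed by an in-memory dictionary so the example runs on its own; in a real test they would drive the actual application.

```python
# A hedged sketch of an integration test: data entered on one screen should
# come back unchanged on other screens and reports. save_customer(),
# customer_screen(), and customer_report() are invented for illustration.
import unittest

_DB = {}  # stand-in for the application's data store


def save_customer(record):      # the data-entry screen
    _DB[record["name"]] = dict(record)


def customer_screen(name):      # a different screen showing the same record
    return dict(_DB[name])


def customer_report(name):      # a downstream report built from the same data
    rec = _DB[name]
    return {"name": rec["name"], "credit_limit": rec["credit_limit"]}


class CustomerDataConsistencyTest(unittest.TestCase):
    def test_value_entered_on_one_screen_matches_every_other_view(self):
        save_customer({"name": "Acme Ltd", "credit_limit": 5000})

        # The same value must survive the round trip into every view.
        self.assertEqual(customer_screen("Acme Ltd")["credit_limit"], 5000)
        self.assertEqual(customer_report("Acme Ltd")["credit_limit"], 5000)


if __name__ == "__main__":
    unittest.main()
```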
Sponsor Acceptance

As a final step, confirm with your project sponsor that all of the deliverables are accounted for and have met all of the required quality checks along the way, from concept to delivery. Be sure your sponsor has established success criteria and objectives for the project up front. You may then review these objectives, deliverables, and success criteria. A traceability matrix is useful for this final sponsor-acceptance process. The traceability matrix should show project deliverables or requirements cross-referenced with upstream project objectives, downstream quality reviews, and successful testing results (a small sketch of such a matrix appears at the end of this article).

Release

Create a list of lessons learned during the development process. Hold a project retrospective after roll-out and record any additional lessons learned. Be sure to save all project quality documentation and lessons learned in a release repository, where they can be accessed easily by future software teams.

In conclusion, putting a quality management system in place for your software environment requires a mild investment in quality practices. Don't be surprised to find some pushback from some team members. The key is to compare the cost of quality measures to the cost of penalties, firefighting, rework, and lost time due to quality-related failures. You can almost always make a convincing case for adopting the advanced quality practices described here.
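For readers who want a starting point, here is a small sketch of a traceability matrix as a simple data structure. The requirements, objectives, and statuses are invented for illustration; real projects usually keep this in a spreadsheet or an ALM tool rather than in code.

```python
# A hedged sketch of a traceability matrix: each requirement is cross-referenced
# to an upstream objective, its quality review, and its test result. All entries
# below are hypothetical.
matrix = [
    # requirement,           upstream objective,       quality review, test result
    ("Invoice entry screen", "Reduce billing errors",  "passed",       "passed"),
    ("Monthly sales report", "Faster month-end close", "passed",       "failed"),
    ("Audit log export",     "Regulatory compliance",  "pending",      "not run"),
]

# Sponsor acceptance is then easy to check: every row must trace to an
# objective and show a passed quality review and a passed test.
for requirement, objective, review, test in matrix:
    accepted = bool(objective) and review == "passed" and test == "passed"
    print(f"{requirement:22} -> {objective:24} accepted: {accepted}")
```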