Details

Advanced Testing of Systems-of-Systems, Volume 2
Practical Aspects
1st edition

by: Bernard Homès

126,99 €

Publisher: Wiley
Format: PDF
Published: 29.11.2022
ISBN/EAN: 9781394188468
Language: English
Number of pages: 304

DRM-protected eBook: you will need e.g. Adobe Digital Editions and an Adobe ID to read it.

Description

As a society today, we are so dependent on systems-of-systems that any malfunction has devastating consequences, both human and financial. Their technical design, functional complexity and numerous interfaces justify a significant investment in testing in order to limit anomalies and malfunctions.

Based on more than 40 years of practice, this book goes beyond the simple testing of an application – already extensively covered by other authors – to focus on methodologies, techniques, continuous improvement processes, load estimates, metrics and reporting, all illustrated by a case study. It also discusses several challenges for the near future.

Pragmatic and clear, this book offers many examples and references that will help you improve the quality of your systems-of-systems efficiently and effectively, and lead you to identify the impact of upstream decisions and their consequences.

Advanced Testing of Systems-of-Systems 2 deals with the practical implementation and use of the techniques and methodologies proposed in the first volume.
Dedication and Acknowledgments xiii
Preface xv

Chapter 1 Test Project Management 1
1.1 General principles 1
1.1.1 Quality of requirements 2
1.1.2 Completeness of deliveries 3
1.1.3 Availability of test environments 3
1.1.4 Availability of test data 4
1.1.5 Compliance of deliveries and schedules 5
1.1.6 Coordinating and setting up environments 6
1.1.7 Validation of prerequisites – Test Readiness Review (TRR) 6
1.1.8 Delivery of datasets (TDS) 7
1.1.9 Go-NoGo decision – Test Review Board (TRB) 7
1.1.10 Continuous delivery and deployment 8
1.2 Tracking test projects 9
1.3 Risks and systems-of-systems 10
1.4 Particularities related to SoS 11
1.5 Particularities related to SoS methodologies 11
1.5.1 Components definition 12
1.5.2 Testing and quality assurance activities 12
1.6 Particularities related to teams 12

Chapter 2 Testing Process 15
2.1 Organization 17
2.2 Planning 18
2.2.1 Project WBS and planning 19
2.3 Control of test activities 21
2.4 Analyze 22
2.5 Design 23
2.6 Implementation 24
2.7 Test execution 25
2.8 Evaluation 26
2.9 Reporting 28
2.10 Closure 29
2.11 Infrastructure management 29
2.12 Reviews 30
2.13 Adapting processes 31
2.14 RACI matrix 32
2.15 Automation of processes or tests 33
2.15.1 Automate or industrialize? 33
2.15.2 What to automate? 33
2.15.3 Selecting what to automate 34

Chapter 3 Continuous Process Improvement 37
3.1 Modeling improvements 37
3.1.1 PDCA and IDEAL 38
3.1.2 CTP 39
3.1.3 SMART 41
3.2 Why and how to improve? 41
3.3 Improvement methods 42
3.3.1 External/internal referential 42
3.4 Process quality 46
3.4.1 Fault seeding 46
3.4.2 Statistics 46
3.4.3 A posteriori 47
3.4.4 Avoiding introduction of defects 47
3.5 Effectiveness of improvement activities 48
3.6 Recommendations 50

Chapter 4 Test, QA or IV&V Teams 51
4.1 Need for a test team 52
4.2 Characteristics of a good test team 53
4.3 Ideal test team profile 54
4.4 Team evaluation 55
4.4.1 Skills assessment table 56
4.4.2 Composition 58
4.4.3 Select, hire and retain 59
4.5 Test manager 59
4.5.1 Lead or direct? 60
4.5.2 Evaluate and measure 61
4.5.3 Recurring questions for test managers 62
4.6 Test analyst 63
4.7 Technical test analyst 64
4.8 Test automator 65
4.9 Test technician 66
4.10 Choose our testers 66
4.11 Training, certification or experience? 67
4.12 Hire or subcontract? 67
4.12.1 Effective subcontracting 68
4.13 Organization of multi-level test teams 68
4.13.1 Compliance, strategy and organization 69
4.13.2 Unit test teams (UT/CT) 70
4.13.3 Integration testing team (IT) 70
4.13.4 System test team (SYST) 70
4.13.5 Acceptance testing team (UAT) 71
4.13.6 Technical test teams (TT) 71
4.14 Insourcing and outsourcing challenges 72
4.14.1 Internalization and collocation 72
4.14.2 Near outsourcing 73
4.14.3 Geographically distant outsourcing 74

Chapter 5 Test Workload Estimation 75
5.1 Difficulty to estimate workload 75
5.2 Evaluation techniques 76
5.2.1 Experience-based estimation 76
5.2.2 Based on function points or TPA 77
5.2.3 Requirements scope creep 79
5.2.4 Estimations based on historical data 80
5.2.5 WBS or TBS 80
5.2.6 Agility, estimation and velocity 81
5.2.7 Retroplanning 82
5.2.8 Ratio between developers – testers 82
5.2.9 Elements influencing the estimate 83
5.3 Test workload overview 85
5.3.1 Workload assessment verification and validation 86
5.3.2 Some values 86
5.4 Understanding the test workload 87
5.4.1 Component coverage 87
5.4.2 Feature coverage 88
5.4.3 Technical coverage 88
5.4.4 Test campaign preparation 89
5.4.5 Running test campaigns 89
5.4.6 Defects management 90
5.5 Defending our test workload estimate 91
5.6 Multi-tasking and crunch 92
5.7 Adapting and tracking the test workload 92

Chapter 6 Metrics, KPI and Measurements 95
6.1 Selecting metrics 96
6.2 Metrics precision 97
6.2.1 Special case of the cost of defaults 97
6.2.2 Special case of defects 98
6.2.3 Accuracy or order of magnitude? 98
6.2.4 Measurement frequency 99
6.2.5 Using metrics 99
6.2.6 Continuous improvement of metrics 100
6.3 Product metrics 101
6.3.1 FTR: first time right 101
6.3.2 Coverage rate 102
6.3.3 Code churn 103
6.4 Process metrics 104
6.4.1 Effectiveness metrics 104
6.4.2 Efficiency metrics 107
6.5 Definition of metrics 108
6.5.1 Quality model metrics 109
6.6 Validation of metrics and measures 110
6.6.1 Baseline 110
6.6.2 Historical data 111
6.6.3 Periodic improvements 112
6.7 Measurement reporting 112
6.7.1 Internal test reporting 113
6.7.2 Reporting to the development team 114
6.7.3 Reporting to the management 114
6.7.4 Reporting to the clients or product owners 115
6.7.5 Reporting to the direction and upper management 116

Chapter 7 Requirements Management 119
7.1 Requirements documents 119
7.2 Qualities of requirements 120
7.3 Good practices in requirements management 122
7.3.1 Elicitation 122
7.3.2 Analysis 123
7.3.3 Specifications 123
7.3.4 Approval and validation 124
7.3.5 Requirements management 124
7.3.6 Requirements and business knowledge management 125
7.3.7 Requirements and project management 125
7.4 Levels of requirements 126
7.5 Completeness of requirements 126
7.5.1 Management of TBDs and TBCs 126
7.5.2 Avoiding incompleteness 127
7.6 Requirements and agility 127
7.7 Requirements issues 128

Chapter 8 Defects Management 129
8.1 Defect management, MOA and MOE 129
8.1.1 What is a defect? 129
8.1.2 Defects and MOA 130
8.1.3 Defects and MOE 130
8.2 Defect management workflow 131
8.2.1 Example 131
8.2.2 Simplify 132
8.3 Triage meetings 133
8.3.1 Priority and severity of defects 133
8.3.2 Defect detection 134
8.3.3 Correction and urgency 135
8.3.4 Compliance with processes 136
8.4 Specificities of TDDs, ATDDs and BDDs 136
8.4.1 TDD: test-driven development 136
8.4.2 ATDD and BDD 137
8.5 Defects reporting 138
8.5.1 Defects backlog management 139
8.6 Other useful reporting 141
8.7 Don’t forget minor defects 141

Chapter 9 Configuration Management 143
9.1 Why manage configuration? 143
9.2 Impact of configuration management 144
9.3 Components 145
9.4 Processes 145
9.5 Organization and standards 146
9.6 Baseline or stages, branches and merges 147
9.6.1 Stages 148
9.6.2 Branches 148
9.6.3 Merge 148
9.7 Change control board (CCB) 149
9.8 Delivery frequencies 149
9.9 Modularity 150
9.10 Version management 150
9.11 Delivery management 151
9.11.1 Preparing for delivery 153
9.11.2 Delivery validation 154
9.12 Configuration management and deployments 155

Chapter 10 Test Tools and Test Automation 157
10.1 Objectives of test automation 157
10.1.1 Find more defects 158
10.1.2 Automating dynamic tests 159
10.1.3 Find all regressions 160
10.1.4 Run test campaigns faster 161
10.2 Test tool challenges 161
10.2.1 Positioning test automation 162
10.2.2 Test process analysis 162
10.2.3 Test tool integration 162
10.2.4 Qualification of tools 163
10.2.5 Synchronizing test cases 164
10.2.6 Managing test data 164
10.2.7 Managing reporting (level of trust in test tools) 165
10.3 What to automate? 165
10.4 Test tooling 166
10.4.1 Selecting tools 167
10.4.2 Computing the return on investment (ROI) 169
10.4.3 Avoiding abandonment of tools and automation 169
10.5 Automated testing strategies 170
10.6 Test automation challenge for SoS 171
10.6.1 Mastering test automation 171
10.6.2 Preparing test automation 173
10.6.3 Defect injection/fault seeding 173
10.7 Typology of test tools and their specific challenges 174
10.7.1 Static test tools versus dynamic test tools 175
10.7.2 Data-driven testing (DDT) 176
10.7.3 Keyword-driven testing (KDT) 176
10.7.4 Model-based testing (MBT) 177
10.8 Automated regression testing 178
10.8.1 Regression tests in builds 178
10.8.2 Regression tests when environments change 179
10.8.3 Prevalidation regression tests, sanity checks and smoke tests 179
10.8.4 What to automate? 180
10.8.5 Test frameworks 182
10.8.6 E2E test cases 183
10.8.7 Automated test case maintenance or not? 184
10.9 Reporting 185
10.9.1 Automated reporting for the test manager 186

Chapter 11 Standards and Regulations 187
11.1 Definition of standards 189
11.2 Usefulness and interest 189
11.3 Implementation 190
11.4 Demonstration of compliance – IADT 190
11.5 Pseudo-standards and good practices 191
11.6 Adapting standards to needs 191
11.7 Standards and procedures 192
11.8 Internal and external coherence of standards 192

Chapter 12 Case Study 195
12.1 Case study: improvement of an existing complex system 195
12.1.1 Context and organization 196
12.1.2 Risks, characteristics and business domains 198
12.1.3 Approach and environment 200
12.1.4 Resources, tools and personnel 210
12.1.5 Deliverables, reporting and documentation 212
12.1.6 Planning and progress 213
12.1.7 Logistics and campaigns 216
12.1.8 Test techniques 217
12.1.9 Conclusions and return on experience 218

Chapter 13 Future Testing Challenges 223
13.1 Technical debt 223
13.1.1 Origin of the technical debt 224
13.1.2 Technical debt elements 225
13.1.3 Measuring technical debt 226
13.1.4 Reducing technical debt 227
13.2 Systems-of-systems specific challenges 228
13.3 Correct project management 229
13.4 DevOps 230
13.4.1 DevOps ideals 231
13.4.2 DevOps-specific challenges 231
13.5 IoT (Internet of Things) 232
13.6 Big Data 233
13.7 Services and microservices 234
13.8 Containers, Docker, Kubernetes, etc. 235
13.9 Artificial intelligence and machine learning (AI/ML) 235
13.10 Multi-platforms, mobility and availability 237
13.11 Complexity 238
13.12 Unknown dependencies 238
13.13 Automation of tests 239
13.13.1 Unrealistic expectations 240
13.13.2 Difficult to reach ROI 241
13.13.3 Implementation difficulties 242
13.13.4 Think about maintenance 243
13.13.5 Can you trust your tools and your results? 244
13.14 Security 245
13.15 Blindness or cognitive dissonance 245
13.16 Four truths 246
13.16.1 Importance of individuals 247
13.16.2 Quality versus quantity 247
13.16.3 Training, experience and expertise 248
13.16.4 Usefulness of certifications 248
13.17 Need to anticipate 249
13.18 Always reinvent yourself 250
13.19 Last but not least 250

Terminology 253
References 261
Index 267
Summary of Volume 1 269
Bernard Homès is a founding member of the ISTQB, the AST and the CFTL, a senior member of the IEEE Standards Association and president of TESSCO sas. He has published several recognized books on software testing and standards, including books based on the ISTQB Advanced v2007 and v2012 syllabi.

You might also be interested in these products:

Yuzhnoye Launchers and Satellites
by: Christian Lardier
PDF ebook
129,99 €
Critical Systems Thinking
by: Michael C. Jackson
EPUB ebook
34,99 €