Industrial experience with Agile in high-integrity software development
Working software. Customer collaboration. Responding to change.

Author: Austin Wilkerson

1 Industrial experience with Agile in high-integrity software development
Working software. Customer collaboration. Responding to change.

2 Can we do “High Integrity Agile”?
Short answer: YES!

3 Can we do “High Integrity Agile”?
Long answer: Yes … but …

4 CONTENT
Background and sources
High-Integrity Agile – Assumptions and Issues
Agile Blind Spots – Turning the Dials Up
The $64M Question…
Next Steps…

6 Some light reading…

7 …and a couple of projects…

8 …and a couple of projects…

9 …and reports from industry…
Many reports of Agile being used in medical devices, under the FDA regulatory regime.
Thales Avionics (Valence, France) report the use of Agile in the development of avionic systems.
And many more…

10 CONTENT
Background and sources
High-Integrity Agile – Assumptions and Issues
Agile Blind Spots – Turning the Dials Up
The $64M Question…
Next Steps…

11 Single customer?
Agile view: a single “customer”, represented by the “Product Owner” role… Really?
What about multiple classes of “user”:
Procurer
Regulator (and standard-setting body)
Project ISA (Independent Safety Assessor)

12 Regression Test and Verification
Agile view: “Regression Test” is the principal (only?) verification activity, and is fast and amenable to automation.
“All tests pass” defines:
When a refactoring is done.
When a product is “good enough” to close a sprint and ship to the customer.

13 Regression Test and Verification
High-Integrity view: no chance!
We know “test” is utterly insufficient to claim ultra-reliability, safety or security properties – see the Butler/Finelli and Littlewood papers from 20 years ago…
Security will always defy test anyway… see “Programming Satan’s Computer”…

14 Regression Test and Verification
Many more forms of verification are required by standards, for example:
Personal and peer review
Automated static analysis
Structural coverage (on target?)
Traceability analysis
Performance test
Penetration test
etc. etc…
We know we can do much better anyway – for example, aggressive use of sound static analysis.

15 Upfront and Architecture
Observation 1: high-integrity systems have demanding non-functional requirements for safety, security, performance, reliability, etc.
Observation 2: our main weapon to achieve these goals is architecture.
Observation 3: you can’t afford to “refactor” these properties into a system late in the day!

16 Upfront and Architecture
Conclusion: we need just enough upfront architecture and design to be certain that:
Non-functional requirements will be met.
Change can be accommodated later without horrendous pain and expense.

17 Upfront and Architecture
But how do we know what non-functional properties are required of the architecture?
Errm… by doing proper (upfront) requirements engineering for safety and security properties…

18 User Stories and Non-Functional Requirements
Agile-style “User Stories” provide a sampling of the “D, S, R space”.
There will be “gaps” between the stories…
Guess where the safety and security problems will lie…
Aside: how much of the MULTOS CA formal specification is devoted to error handling?
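The “gaps between stories” point can be made concrete with a small sketch. This is purely illustrative – the story names and input ranges are invented – but it shows how stories that each cover a slice of an input domain leave un-storied regions, which is exactly where unhandled (error) cases hide:

```python
# Hypothetical sketch: user stories as sampled regions of a 1-D input
# domain. All names and ranges below are invented for illustration.

def uncovered(domain, stories):
    """Return the sub-ranges of `domain` not covered by any story."""
    lo, hi = domain
    gaps, cursor = [], lo
    for start, end in sorted(stories.values()):
        if start > cursor:
            gaps.append((cursor, start))   # an un-storied region
        cursor = max(cursor, end)
    if cursor < hi:
        gaps.append((cursor, hi))
    return gaps

# Stories sample the "happy path"; the gaps are where the safety and
# security problems (and the bulk of the error handling) tend to live.
stories = {
    "normal climb": (0, 40),
    "cruise":       (45, 80),
}
print(uncovered((0, 100), stories))
```

Real requirement spaces are of course multi-dimensional, but the same reasoning applies: a finite set of stories is a sample, not a specification.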

19 Agile “simple sprint pipeline”
Agile presumes a two-stage pipeline: one system being used by the customer and one system being developed in the current sprint.
Delivery and deployment are assumed to be “instant”…
Real world: no chance! Example: the iFACTS 4-stage pipeline:
Build N: in live operation
Build N+1: in NATS’ test lab
Build N+2: in development/test at Altran
Build N+3: requirements and formal specification
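The iFACTS pipeline above can be sketched as a simple rotation: each delivery cycle, every build advances one stage. The stage names follow the slide; the advance-one-stage-per-cycle rule is an assumption for illustration:

```python
# Illustrative model of a 4-stage delivery pipeline (iFACTS-style).
# Assumption: each delivery cycle, every build moves one stage onward.
STAGES = [
    "requirements/formal spec",   # Build N+3
    "development/test",           # Build N+2
    "customer test lab",          # Build N+1
    "live operation",             # Build N
]

def stage_of(build, newest):
    """Stage of `build` when `newest` is the build now entering spec."""
    age = newest - build          # cycles since this build was specified
    if age < 0:
        return "not started"
    if age < len(STAGES):
        return STAGES[age]
    return "retired"
```

So with four stages, the work specified today reaches live operation only three full cycles later – a far cry from the “instant” delivery the simple Agile model presumes.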

20 Iteration rate…
How fast can we iterate? Only as fast as the slowest pipeline stage…
Full-blown evidence (e.g. safety-case production) and customer acceptance testing might be way too slow for a standard “Agile” model…
Idea: multiple iteration rates and deliveries:
Fast “minor” iterations with a reduced evidence package and limited deployment.
Slower “major” iterations with full evidence, suitable for operational deployment.
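A back-of-envelope sketch of the minor/major split. The stage durations below are invented numbers, not project data; the point is only that the iteration rate is the maximum over the stages you actually run:

```python
# Assumed stage durations in weeks (invented for illustration only).
stage_weeks = {
    "build/integrate":     1,
    "regression verify":   1,
    "customer acceptance": 8,    # slow: full acceptance test
    "safety case":        12,    # slow: full evidence package
}

slow_stages = {"customer acceptance", "safety case"}

# A "major" iteration runs every stage; a "minor" iteration skips the
# slow, full-evidence stages (reduced evidence, limited deployment).
major_weeks = max(stage_weeks.values())
minor_weeks = max(w for s, w in stage_weeks.items() if s not in slow_stages)
```

With these made-up figures a major release can happen at best every 12 weeks, while minor iterations could run weekly – which is the whole argument for running the two rates in parallel.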

21 Embedded Systems Issues
Agile depends on plentiful availability of the “target environment” to drive a fast build/integration/test process.
Not true for embedded systems: many projects have no target hardware for the majority of the time…
Some verification activities (e.g. on-target structural coverage) are painful and slow.

22 Embedded Systems Issues
Availability of target hardware for “test” can be a massive bottleneck.
Idea: don’t depend on “target hardware” and “test” so much…

23 CONTENT
Background and sources
High-Integrity Agile – Assumptions and Issues
Agile Blind Spots – Turning the Dials Up
The $64M Question…
Next Steps…

24 Turning the dials up…
We’ve been building high-integrity software for more than 20 years… What have we learned that could improve an Agile approach?
What about:
Team and Personal Software Process (TSP/PSP)?
Formal Methods?
The Correctness-by-Construction approach?
Lean Engineering?
Programming language design and static verification, like SPARK?

25 Static Verification
Strong static verification can complement “test”:
Faster.
“Sounder” – potentially covers all input data and system states.
Deeper – prevents and finds bugs that “test” simply cannot reach.
So… precede “Regression Test” with “Regression Proof”.
All developers run SV tools all the time; this is not dependent on availability of target hardware, so it scales well.
Performance? The iFACTS regression proof now takes 15 minutes.
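The “proof before test” ordering can be sketched as a simple gate. The two callables below are hypothetical stand-ins for real tool invocations (e.g. a SPARK proof run and a test suite); no actual prover commands are shown:

```python
# Hedged sketch: run "Regression Proof" (sound static verification)
# before "Regression Test". Proof is fast and needs no target hardware,
# so it can fail the build before any slow, hardware-bound testing starts.

def verification_gate(run_proof, run_tests):
    """Proof first; only run the slower, target-dependent tests if it passes."""
    if not run_proof():
        return "failed: regression proof"
    if not run_tests():
        return "failed: regression test"
    return "passed"
```

The design point is ordering, not the trivial code: putting the sound, scalable check first means the scarce resource (target hardware, lab time) is only spent on builds that have already survived proof.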

26 Reviewing vs pair programming
The jury is still out on whether Agile “pair programming” is really better…
Conjecture: developer + strong static verification + PSP personal review + TSP peer review is much better.
No controlled experiment to confirm this… sorry!

27 Automation, automation, automation…
Can we automate production of other verification evidence?
Structural coverage
Traceability analysis
Other artefacts required by your standard or regulator?
Yes… of course…
So… plan it right-to-left: work out which artefacts can be auto-generated, and plan the approach, disciplines and languages to do this in your minor or major iterations.
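As one example of auto-generated evidence, here is a minimal sketch of traceability-matrix generation. The “Trace: REQ-nnn” tag format is an invented convention for illustration, not any particular standard’s notation:

```python
# Illustrative sketch: build a traceability matrix by scanning source
# text for requirement tags, and report requirements nothing traces to.
# The "Trace: REQ-nnn" tag convention is an assumption, not a standard.
import re

TAG = re.compile(r"Trace:\s*(REQ-\d+)")

def traceability(requirements, sources):
    """Map each requirement ID to the source units that reference it."""
    matrix = {r: [] for r in requirements}
    for unit, text in sources.items():
        for req in TAG.findall(text):
            if req in matrix:
                matrix[req].append(unit)
    untraced = [r for r in requirements if not matrix[r]]
    return matrix, untraced
```

Run on every build, this turns a tedious manual audit into a regression check: an untraced requirement fails the build just like a failing test.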

28 A naïve Agile “build/integration” system

29 An Agile “Evidence Engine”...

30 CONTENT
Background and sources
High-Integrity Agile – Assumptions and Issues
Agile Blind Spots – Turning the Dials Up
The $64M Question…
Next Steps…

31 The $64M question…
So… how much “upfront” is “just right”? It depends…
…but inform this decision with solid requirements engineering, especially for non-functional properties.
Aside: see the NASA COCOMO-II model for in-flight software.

32 High-Integrity Agile Process

33 The $64M question…
Proposal: a two-stage project.
Stage 1: upfront work, resulting in requirements, a specification (complete enough to estimate from), and enough architecture to verify NFRs and accommodate foreseeable change.
Stage 2: incremental/Agile build with multiple iteration rates.
Critical: completely different contractual and financial terms for Stages 1 and 2. (Discuss with your procurer…)

34 CONTENT
Background and sources
High-Integrity Agile – Assumptions and Issues
Agile Blind Spots – Turning the Dials Up
The $64M Question…
Next Steps…

35 Next Steps…
For us: report on the next project – Scrum with SPARK!
For you: please publish your experiences.