Some ideas are even borrowed from Prussian military tactics from the nineteenth century and their current implementation in the US Army. (Location 228)
There is a huge difference between this and missing a still target and then going back to have another go at hitting it. (Location 264)
Gerald Weinberg and Donald Gause suggest that the difference between disappointment and delight is not a matter of delivering software, but how well the delivery matches what clients expected. (Location 267)
Note: be careful about expectations!
Changes get implemented directly without updating specifications, leading to differences between the specification and the system (Location 310)
Unfortunately, this comforting feeling quickly goes away if you think about the fact that these guys have the biggest guns in the world and that they misunderstand orders two times out of three. (Location 349)
People in homogeneous groups often tend to make decisions to minimise conflicts and reach consensus without really challenging or analysing any of the ideas put up for discussion. (Location 501)
Unless everyone involved understands the business goals, there is a high risk of missing the target. (Location 545)
We should not dive straight away into how to implement something, but rather think about how the finished system will be used (Location 568)
Relationship between examples, tests and requirements (Location 589)
The resulting military doctrine was called Auftragstaktik, or mission-type tactics, (Location 619)
Today it survives as Mission Command in the US Army. (Location 620)
We can bridge the communication gap by communicating intent, getting different roles involved in nailing down the requirements and exploiting the relationships between tests, examples and requirements. We can use realistic examples consistently throughout the project to avoid the translation and minimise the effect of the telephone game, keeping information from falling through the communication gaps. (Location 646)
Automate verification of the acceptance tests. (Location 655)
Focus the software development effort on the acceptance tests. (Location 656)
There are no abstract requirements, and we have examples to describe all the edge cases. (Location 684)
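A minimal sketch of what that can look like in practice, assuming pytest and a hypothetical free-delivery rule (the free_delivery function, its name and the 100.00 threshold are illustrative, not from the book): each realistic example, including the edge cases around the boundary, becomes one automated, executable check.

```python
import pytest

# Hypothetical business rule, used only for illustration:
# orders of 100.00 or more qualify for free delivery.
def free_delivery(order_total):
    return order_total >= 100.00

# Each row is one concrete example; the boundary cases replace
# the abstract requirement "orders over a certain value ship free".
@pytest.mark.parametrize("order_total, expected", [
    (99.99, False),   # just below the threshold
    (100.00, True),   # exactly on the threshold
    (100.01, True),   # just above the threshold
    (0.00, False),    # empty order
])
def test_free_delivery_examples(order_total, expected):
    assert free_delivery(order_total) is expected
```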
In test-driven development, developers write tests to specify what a unit of code should do, then implement the code unit to satisfy the requirement. These tests are called unit tests, as they focus on small code units. (Location 700)
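A small sketch of that cycle, assuming pytest-style tests and a hypothetical word_count unit (neither is from the book): the tests are written first to state what the unit should do, and the implementation is only what is needed to make them pass.

```python
# Hypothetical unit developed test-first; word_count is an
# illustrative name, not an example from the book.

def word_count(text):
    # Minimal implementation that makes the tests below pass.
    return len(text.split())

# These tests would be written before the implementation and
# act as an executable statement of what the unit should do.
def test_counts_words_separated_by_whitespace():
    assert word_count("to be or not to be") == 6

def test_empty_string_has_no_words():
    assert word_count("") == 0
```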
In Sources of Power[4], Klein cites a paper by Karl Weick entitled Managerial thought in the context of action[14], which suggests a modified template for an effective commander's intent document:

- Here's what I think we face
- Here's what I think we should do
- Here's why
- Here's what we should keep our eye on
- Now, talk to me

(Location 1167)
be used as a basis for later discussion, especially if we need to get them signed off by a senior stakeholder. (Location 1401)
Once the acceptance criteria for the next phase of the project are captured in acceptance tests, (Location 1612)
Mike Scott wrote[27] that in his organisation acceptance tests are used to measure the progress of development, in a metric called running tested features. (Location 1628)
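A rough sketch of how such a metric could be computed, under assumed data (the dictionary below is invented for the example and says nothing about Mike Scott's actual tooling): a feature counts only if it has acceptance tests and all of them pass.

```python
# Invented sample data: feature name -> results of its acceptance tests.
acceptance_results = {
    "free delivery":   [True, True, True],
    "order discounts": [True, False],   # one failing test: not counted
    "gift wrapping":   [],              # no tests yet: not counted
}

def running_tested_features(results):
    # A feature is "running and tested" only if it has at least one
    # acceptance test and every one of them passes.
    return sum(1 for tests in results.values() if tests and all(tests))

print(running_tested_features(acceptance_results))  # -> 1
```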
Focusing the development just on the things expected by acceptance tests helps a great deal to prevent just-in-case code from leaking into the system. (Location 1632)
The representative set of examples, formalised into acceptance tests and connected to the code by test automation, plays the role of a live specification of the system – the authoritative source of what the system does and how it behaves. (Location 1849)
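One way to picture the "connected to the code by test automation" part, under assumed names (OnlineShop, place_order and delivery_cost are invented for the sketch): a thin automation layer lets the test read almost like the business example while driving the real code underneath, so the passing test remains an authoritative description of behaviour.

```python
# Invented system under test, standing in for the real application code.
class OnlineShop:
    def place_order(self, total):
        self._total = total

    def delivery_cost(self):
        return 0.00 if self._total >= 100.00 else 5.00

# The acceptance test mirrors the business example; the automation layer
# (here just direct method calls) binds it to the code.
def test_orders_of_100_or_more_are_delivered_free():
    shop = OnlineShop()
    shop.place_order(total=100.00)
    assert shop.delivery_cost() == 0.00
```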
noticing inconsistencies and unclear definitions when (Location 1855)
With domain knowledge and understanding shared among team members and a comprehensive acceptance test suite based on realistic examples, (Location 1915)
At the beginning of development, changes to software are quick and simple. As the code base grows, it becomes harder to modify. A simple change in one area often causes problems in a seemingly unrelated part of the code. (Location 1932)