Channel: James Grenning's Blog » Embedded Agile

Agile Design and Embedded


One important realization on the journey from a BDUF approach to an iterative and agile approach is that design is never done. Designs evolve. The waterfall emphasis has been to unnaturally try to control software physics by imposing requirements freezes and burdensome change control. The process of developing software is part science and part creative. You are applying science toward the invention of something. Design is capturing knowledge, both about what the end user needs and about one solution to that need.

To paraphrase Craig Larman:
“Software is too complex for us to know all the requirements in detail up front… humans can only handle so much complexity at once”

His conclusion is that we need to iteratively develop software. This is just as true for non-embedded software as for embedded.

If you accept (or entertain) that design is never done, and that designs evolve, we shift from fighting the software physics to developing sound engineering practice to deal with the ever changing software. This is where the XP technical practices fit in.

  • Test Driven Development
  • Refactoring
  • Metaphor (or Architectural vision)
  • Coding standard

At the beginning of an XP project there is an activity called exploration, where all known stories are written down, an initial architecture is envisioned, estimates are made, and an initial plan is composed. For most single-team projects this exploration takes a week or two, maybe a little longer when the team is coming from a BDUF world, so they can remain somewhat comfortable (or at least not extremely uncomfortable).

The architectural vision is a higher level of abstraction view of the system. In the embedded world it would include some vision of the hardware/software boundary. It might include key technologies like OS, protocol stacks, programming languages, subsystems, key interfaces, etc.

The thing is, it is a higher-level view, a model. It is not the thing itself, and it does not have all the detail. The detail is worked out over time, iteratively.

Another realization in agile is that code gets written one line at a time; mistakes can happen at any minute; so we need to discover and remove mistakes as soon after introduction as possible. We do it to avoid massive, and unpredictable, test and fix cycles. TDD helps us to drive defects out of our code almost before we put them in.

With the focus on test, TDD makes us create testable code. Adding tests after coding does not necessarily lead to the same result. To be testable, code must be modular and loosely coupled. In debug-later programming, as code grows incrementally, engineers are afraid to change the design. It's too risky. It implies too much re-test effort. Over time the well-thought-out up-front design turns into a mess, one line at a time. It happens so slowly you don't notice, just like waking up one morning and wondering how your son or daughter became old enough to go off to college.

The TDD practitioner does not have that same fear. They have a safety net that allows code to be safely refactored. You have an architectural vision to serve as a goal, you have tests that support refactoring, and there is one other subtle point. As your design grows, you will see that a certain module needs a change, but you cannot see a good way to test the code. Maybe you need to add a new conditional to an already nested conditional, or a new case to a switch statement, or a conditional within a switch. You know it's wrong. But in a non-TDD world you put the extra conditional in. It is not safe to do otherwise (but your design is slowly rotting).

In a TDD world you have an early warning of design decay. You have a safety net. You see a better way. You refactor to a testable design, a better design.

Each iteration some demonstration can be made to the customer showing the progress of their work. Focus on value delivered is tough in embedded because so much has to come together to be a product. Each iteration should show progress towards value delivery or demonstration of some risk reduction.

What about those decisions that you code up and later discover are wrong? That will happen. A modular design with automated tests will lessen the problem. Your designs will be more flexible. I suspect some of these problems come from committing to a specific design course too early.

Take the choice of OS, for example. If you know that the decision on OS might change, or there is risk in your choice, you can protect your design from OS changes by having a portable API for the OS. Two ideas from lean development apply here: decide at the last responsible moment, and make decisions reversible. Design decisions embedded in spaghetti code without tests are rarely easily reversible, whereas decisions in a modular design with automated tests are much easier to reverse.

Architectural vision + TDD + refactoring + teamwork can lead to these more flexible designs that do not have to be decided all up front. Teams need to develop these skills, and they can, especially when there is more collaboration. If you work in cubes on only your own code, you will not learn every day. If you collaborate and value learning, the team's skills will grow every day.

Co-located teams can have fairly informal ways of communicating designs, such as whiteboards and hand-sketched UML. Bigger teams might need more documentation to communicate across time zones. Documentation is not prohibited. We just don't want to waste effort on it unless it serves a demonstrable benefit.

All designs evolve. Find ways to accept this reality and thrive in it.

©2013 James Grenning's Blog. All Rights Reserved.
