There are many different software development methodologies that could be adopted for developing new CCS components or refining existing ones. Development in a collaborative, distributed, multi-language, research environment poses its own unique challenges. We want to keep the process as lightweight as possible, but at the same time we need to make sure that the requirements of the observatory are fully taken into account, and that the resulting system is as simple and easy to use as possible: future subsystem developers will have limited time and expertise, and the system needs to be maintained and operated over the long term by non-experts.
We have tried to use the development of the configuration system to define a model for how this type of collaborative development should proceed, but to date this has not been as successful as we would like. In particular, we do not seem to have developed a productive and efficient exchange of ideas between the different groups. It may be time to step back, evaluate our approach, and see whether there is a better way to proceed.
At SLAC, based mainly on our experience of developing similar systems in the past, we have assumed that the procedure we should follow would be something like:
a) Choose a particular component of the overall CCS system (e.g. the configuration system)
b) Analyse use cases
c) Define requirements
d) Develop the APIs used by the rest of the system to interact with this component
e) Implement the APIs and develop test cases (a sketch follows this list)
f) Iterate as necessary
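To make steps d and e concrete, here is a minimal sketch, in Java, of what API-first development of the configuration system might look like. Every name in it (ConfigurationService, InMemoryConfigurationService, the test class) is hypothetical and invented purely for illustration; the real interfaces would fall out of the use-case and requirements analysis.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Step d: the API the rest of the system codes against. All names here are
// hypothetical; the real CCS configuration API would come out of the
// use-case and requirements analysis.
interface ConfigurationService {
    /** Return the named subsystem's configuration, if one has been registered. */
    Optional<Map<String, String>> getConfiguration(String subsystem);

    /** Register (or replace) the configuration for a subsystem. */
    void putConfiguration(String subsystem, Map<String, String> config);
}

// Step e: a deliberately trivial in-memory implementation, just enough to
// let the test below run. A real implementation (persistent store,
// versioning, etc.) is left as an implementation detail for the developer(s).
class InMemoryConfigurationService implements ConfigurationService {
    private final Map<String, Map<String, String>> store = new HashMap<>();

    @Override
    public Optional<Map<String, String>> getConfiguration(String subsystem) {
        return Optional.ofNullable(store.get(subsystem));
    }

    @Override
    public void putConfiguration(String subsystem, Map<String, String> config) {
        store.put(subsystem, new HashMap<>(config));
    }
}

// Test cases written against the interface, not the implementation, so they
// can be reused to validate any future implementation.
public class ConfigurationServiceTest {
    public static void main(String[] args) {
        ConfigurationService service = new InMemoryConfigurationService();

        assert service.getConfiguration("camera").isEmpty()
                : "unknown subsystem should have no configuration";

        service.putConfiguration("camera", Map.of("exposureTime", "15.0"));
        assert service.getConfiguration("camera").orElseThrow()
                .get("exposureTime").equals("15.0")
                : "a stored configuration should be retrievable";

        System.out.println("All checks passed.");
    }
}
```

Run with `java -ea ConfigurationServiceTest.java` (Java 11+; `-ea` enables the assertions). The point of the structure is that the test exercises only the interface, so it can be reused unchanged against any future implementation.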
Tony's personal observations:
- The process of analyzing use cases and developing requirements is essential if we are to make sure the software we develop meets the overall goals outlined above. In a distributed (and multi-lingual) environment, getting as many people as possible involved in this initial phase is important, because it is where a common vocabulary is developed (otherwise it is easy for implementers to later misinterpret the requirements).
- During this initial phase, especially when the team has not worked together before, frequent face-to-face meetings are essential. These must include sufficient unstructured time for developers to bounce ideas off each other and familiarize themselves with each other's working styles.
- While we are all code developers, and would probably prefer to be writing code rather than analyzing requirements, my experience is that it is best to postpone code development until at least the first iteration of step d, because otherwise developers' natural aversion to tearing up their finely crafted code can be an impediment to fully analyzing the best implementation approach. There are cases where code needs to be developed to test the technological feasibility of a proposed requirement, but such cases are rare, and where they do occur the code should be viewed as a throwaway proof-of-principle implementation.
- Once the requirements and APIs are defined, the job of implementing the APIs can be left to an individual or a small group. Ideally the requirements and APIs will leave considerable flexibility for the implementation details to be defined by the developer(s); the sketch after these observations illustrates this.
- Iteration is almost always necessary, because the implementation often throws up inadequacies in the APIs, or exposes cases where requirements are incomplete, contradictory, or simply too hard to implement at reasonable cost. In a research environment the requirements also have a habit of changing out from under the developers.
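As an illustration of that flexibility, here is a second hypothetical implementation of the ConfigurationService interface from the earlier sketch, this time storing each subsystem's configuration as a Java properties file on disk. The earlier test case should pass unchanged against it; which backend to use remains an implementation detail invisible to the rest of the system.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

// A second hypothetical implementation of the (hypothetical)
// ConfigurationService interface sketched above, backed by one Java
// properties file per subsystem in a given directory.
class FileBackedConfigurationService implements ConfigurationService {
    private final Path directory;

    FileBackedConfigurationService(Path directory) throws IOException {
        this.directory = Files.createDirectories(directory);
    }

    @Override
    public Optional<Map<String, String>> getConfiguration(String subsystem) {
        Path file = directory.resolve(subsystem + ".properties");
        if (!Files.exists(file)) {
            return Optional.empty();
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        Map<String, String> config = new HashMap<>();
        props.forEach((key, value) -> config.put((String) key, (String) value));
        return Optional.of(config);
    }

    @Override
    public void putConfiguration(String subsystem, Map<String, String> config) {
        Properties props = new Properties();
        props.putAll(config);
        Path file = directory.resolve(subsystem + ".properties");
        try (OutputStream out = Files.newOutputStream(file)) {
            props.store(out, "configuration for subsystem " + subsystem);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```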
Open questions:
- Are there different development models we should consider?
- Or ways to make our existing model more productive?