Wiki: a powerful tool for scoping aspect (posted by Fabien Villard)
See notes at the end of the post
Scoping
The scoping aspect (also known as the political aspect) is the
pre-modeling concern that comes before the real modeling efforts of the upstream aspects (semantic, pragmatic and geographic). This is where we gather all the raw material from very diverse sources, including (but not limited to) enterprise and business strategies, business context, know-how and rules, vision plans, external control principles and laws, enterprise- and IS-level intentions, enterprise transformation plans, and so on. All this material is, by nature, not formalized enough to be documented in formal representations (this is not inevitable, but we must expect a long period of maturation before things can be formalized at this early starting point too). This has a critical consequence: this information feeds formalizations and models in all aspects throughout the EST (Enterprise System Topology), and every element, even the smallest, may be critical in each of them. So two of the decisive questions are:
- how can we manage efforts and work to guarantee (or at least increase the likelihood) that all the information is usable at any time, in any task that may need it?
- how can we guarantee traceability between the scoping information and what is built at the various levels?
Although the lack of formalization clearly makes these questions hard to answer, the situation is not hopeless: we have some intellectual tools that can, at least, organize the information in a way that prevents part of it from being lost and ensures good traceability.
Scoping information
The key characteristics of information we consider are:
- Variety of representation formats: digital files in different formats (PDF, word-processor closed or open formats, page and book scans, slide presentations…) and non-digital material (books, papers, interview transcriptions, raw workgroup output…).
- Interrelated concerns in all documents: functional concerns scattered through strategy descriptions, technical concerns confused with functional requirements, business requirements mixed with technological constraints, organizational inputs completed with functional requirements…
- Mixed scopes in documents: because we do not have clear definitions of concepts such as Architecture, Goal, Objective, Wish, Constraint, Requirement and so on, we never really use them to specify the scope of the information we put in documents.
- Variety of jargons and specialized vocabularies, with multiple senses for the same word and multiple words for the same concept.
All four characteristics contribute, to some degree, to the difficulty of exploiting scoping information correctly and deeply:
- Habits and usage, combined with software inadequacies and dysfunctions, lead us to classify documents by representation type, avoiding in-depth analysis and even treating this shortcut as a desirable saving. In extreme cases, only titles and document types are used for categorization, and the details are lost. Another consequence is that we tend to treat documents as unsplittable chunks of information, which is especially damaging in combination with characteristic 2.
- Because documents are considered as a whole and not split, despite the fact that concerns are mixed within them, the links between elements of information are either lost or never even seen. This introduces inconsistencies into the studies that follow and conflicting constructs at the next levels of the system being built.
- Misunderstanding the scope of information leads to a complete misunderstanding of the priority, validity and applicability of each element of information, which in turn leads to mistakes that are very difficult to diagnose and correct afterwards, and costly in all cases, whether we correct them or merely manage their consequences.
- If jargons are not understood and linked together, by terms and concepts, they clearly lead to misunderstandings between people working in teams, but also when people read results and deliverables. By not knowing precisely which jargon the target audience knows best, communications and publications bring confusion instead of clarification. This is especially important in big companies with silos and internal guilds or segmented competencies.
How-to
The most powerful tool for dealing with the difficulties described above has been known for a long time: glossaries. A glossary is a list of related terms and definitions for a specialized domain. Usually the domain is a specific text, a discipline, a field of knowledge, a particular business… We may extend the concept to other splits of knowledge: abstraction levels, jargons, concerns, Praxeme aspects. All these splitting axes are potentially relevant to avoiding the difficulties and errors described above, so we could end up with a collection of different glossaries with no links between them, no path through the collection to span, for example, all the concerns in which a particular concept makes sense, and huge redundancy of terms, ideas and definitions, with no easy way to detect inconsistencies and conflicts.
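To make the redundancy problem concrete, here is a minimal sketch (all terms and definitions are hypothetical) of a collection of per-domain glossaries, with a naive check that surfaces terms defined differently in more than one glossary:

```python
# A collection of separate glossaries, one per domain. The same term
# can appear in several of them with diverging definitions.
glossaries = {
    "business": {"account": "A customer relationship with billing terms."},
    "technical": {"account": "A login identity on the platform."},
    "legal": {"contract": "A binding agreement between two parties."},
}

def conflicting_terms(glossaries):
    """Return the terms defined differently in more than one glossary."""
    seen = {}
    conflicts = set()
    for domain, terms in glossaries.items():
        for term, definition in terms.items():
            if term in seen and seen[term] != definition:
                conflicts.add(term)
            seen[term] = definition
    return conflicts

print(conflicting_terms(glossaries))  # {'account'}
```

Such a check only detects literal clashes on the same term; it says nothing about two different words naming the same concept, which is exactly the kind of link an unconnected collection of glossaries cannot express.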
Wiki: the graph of ideas
This is where a wiki may be the differentiating tool.
A wiki is a (potentially huge) collection of information nodes (pages, text blocks, elementary documents or memes). Each node has a name which may be seen both as the meme's identifier and as a summary of the node's idea. A node's text may include other memes' identifiers, which the system treats as references to those nodes. These references are automated: each time the underlying system finds a meme name, it creates an explicit link to that meme's node, so the information is stored in a graph instead of a tree. Information elements that are connected through node texts are also connected by navigation links automatically available to the wiki reader.
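The automatic-linking mechanism can be sketched in a few lines (node names and texts are hypothetical): each node is named after the meme it carries, and any occurrence of another node's name in a node's text becomes an edge of the graph.

```python
# Minimal sketch of wiki auto-linking: node name -> node text.
nodes = {
    "Customer": "A Customer places Orders and owns a Contract.",
    "Order": "An Order references a Customer and a Contract.",
    "Contract": "A Contract binds a Customer.",
}

def links_from(text, names):
    """Return the node names referenced in a piece of text."""
    return {name for name in names if name in text}

# Build the graph: node name -> set of referenced node names.
graph = {name: links_from(text, nodes.keys() - {name})
         for name, text in nodes.items()}

print(graph["Order"])  # {'Customer', 'Contract'}
```

A real wiki engine resolves links at render time rather than by substring search, but the result is the same: the references written in the text become the edges of a navigable graph.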
Wiki: the tag cloud
If nothing more is done, the multiplicity of paths through the information makes the wiki difficult to read when we need to filter on one concern or one category of concerns. By carefully tagging nodes we can rebuild the axes and classifications we would have had with multiple glossaries. But the information is now stored only once, and tags are only indexes that categorize, or even (if the tags are thought out deeply enough) describe, the semantic content of each node.
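Tagging can be sketched as a second, independent layer over the node graph (the tag names below are hypothetical): content is stored once, and a tag query selects the nodes relevant to one concern, giving one filtered "view" of the wiki per tag.

```python
# Tags as an index layer: node name -> set of tags.
# The node content itself lives elsewhere and is not duplicated here.
node_tags = {
    "Customer": {"semantic", "business"},
    "Order": {"semantic", "pragmatic"},
    "Invoice": {"pragmatic", "regulatory"},
}

def nodes_tagged(tag):
    """Return all nodes carrying a given tag: one 'view' of the wiki."""
    return {name for name, tags in node_tags.items() if tag in tags}

print(nodes_tagged("pragmatic"))  # {'Order', 'Invoice'}
```

Because the tag index and the node content are separate structures, either can be reworked without touching the other, which is the decoupling the next section builds on.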
Benefits
Together, the graph storage paradigm and tags-as-indexes can enable the emergence of multiple ontologies mapped on top of each other, like transparent, unsorted layers of meaning over the same set of information.
Because we have decoupled the storage paradigm from the classification/description mechanism, we can change and improve the content of nodes and the links between them without impacting the classification system, and change and improve the classification without impacting node content: each can evolve in its own life cycle.
Changing the stored content and the classification system independently makes it possible to increase the structuring level of the information through many small evolutions, and this opens a new set of possibilities:
- Smoothly manage methodological changes in the way we handle scoping information.
- Bring people in progressively instead of resorting to big-bang approaches.
- Increase control over the information used at the different stages of project building.
- Precisely track the resources and costs spent on each effort to improve the system.
- Capitalize on each effort spent on the knowledge-management system.
Notes
This paper is written with Praxeme ideas and principles in mind, but it applies equally to all methodological approaches to project realization. Information gathering in RUP to build use cases, backlog constitution in RAD approaches, functional specifications, and even global technical specifications, strategy building, Enterprise Architecture efforts and business strategy determination may all benefit from the more precise structuring of inputs discussed here.
This paper does not discuss the methods used to gather information (conducting interviews, analyzing information, testing concepts, validating the applicability of rules and laws, and so on). It focuses on the structured representation of the results of this work.