Locked History Attachments


Computer-Human Interaction

Diogo Simões <diogo DOT fonseca DOT simoes AT ist DOT utl DOT pt>
Filipe Varela <filipevarela AT ist DOT utl DOT pt>

Recent concerns regarding the lack of usability of the Organizational Model module (maintained under Bennu but still available in the context of Fénix) triggered this initiative, though usability has truly been a permanent concern within the project. While recognizing (and also cheering) the multidisciplinary nature of the CHI field, I personally lean towards an engineering approach, which provides more rigorous and tangible results.
This document is divided into two parts. The first introduces a few core concepts and refers to some methods and techniques. The second revolves around solving our case, providing a fully documented example of driving a CHI project, reflecting the approach chosen and discussing the whys and why-nots.

For a better understanding of the Usability and CHI perspective we are trying to promote, the following reading is highly recommended:
Cooke, N. J., Durso, F. (2008). Stories of Modern Technology Failures and Cognitive Engineering Successes. Boca Raton, FL: CRC Press.

Background and Fundamental Concepts

If you are familiar with CHI design projects you will just exhale a little sigh of disappointment rather than jump out of your chair when I tell you that we will be employing an iterative design process to guide us in our mission for better usability. For everyone else, to whom this does not ring a bell, worry not. As you will see, the techniques explored will be fairly comprehensive and all concepts will sound naturally reasonable.

Iterative Design

Our starting point, as in many other similar cases, is to define an iterative process fitting our needs, hopefully capable of helping us achieve a successful solution. Traditionally, when applied to CHI problems, the iterative design process 1 comprises four stages:

  • Task Analysis;
  • Solution Conception;
  • Prototyping;
  • Evaluation.

  • IterativeDesign.png

    Fig. 1 - Iterative Design Diagram

This cycle should be iterated as many times as necessary until a satisfactory level is reached and the solution is accepted. Albeit simple in design, this process allows for a myriad of development strategies, because each stage can be delivered through a palette of multiple techniques. It all depends on our goal and project constraints. One thing is certain, though: more rigorous and accurate methods will require more resources to be put into practice.

Task Analysis

There is a vast variety of techniques that might help you with the Task Analysis (TA). Its purpose is to help designers sort out what interaction context they are meddling in, what problems they will most likely have to overcome, and what features seem desirable to include in the upcoming solution.
Common examples of techniques 2 used in TA are:

  • Hierarchical Task Analysis (HTA) and Hierarchical Decomposition;
  • Cognitive Task Analysis (CTA) - Cognitive Complexity Theory (CCT) related methods;
  • Use Case Diagrams and Process Modeling;
  • Structured Interviews and Inquiries;
  • User Stories;
  • Conceptual Models, Use Scenarios;
  • Simulation Interviews;

These techniques can be used alone or in combination, and it is up to the designer to decide what will work best in each project.

Solution Conception

Once our Task Analysis is complete, identifying our interaction problems and nonconformities should be trivial. It’s time to be creative and start coming up with neat solutions. Although we will naturally elect a favorite solution (usually the first we came up with), we should never neglect the importance of producing more than one solution for each identified problem. Coming up with elegant and optimal solutions is a very hard process. It requires knowing some theory (design and ergonomic principles, usability heuristics), it requires knowing the trends (both the virtuous and the vicious: learn from fellow practitioners’ mistakes), and above everything else it requires the rare gift of sagacity:

  1. accurate abstract thinking;
  2. a big dose of common sense;
  3. ability to always discern the big picture;
  4. ability to assess each situation;
  5. ability to foresee the impact of your interventions.

Be fearless in trying new approaches. Get out of the box. Get out of your comfort zone. The day you try solving everything with the same old hammer, you are out of the game.

Prototyping

Prototyping is a crucial stage of this process, where we bring our solutions to life in order to put them to the test. Prototypes can be built along two axes: scope and detail. This means it is possible to build full-featured prototypes in a horizontal perspective or to focus on one single aspect for analytical, vertical consideration. Prototypes can also be categorized as low fidelity, loosely mimicking the intended behavior, or high fidelity, portraying in great detail what the final functional solution will look like. Keep in mind that the medium used is not important, especially if we are building a low fidelity prototype. Paper, cardboard and foam board usually constitute very easy, cheap and fast media to deploy your prototypes.

  • PrototypingChart.png

    Fig. 2 - Prototyping variables

Evaluation

The last stage of the iterative process is the most critical of the four and, curiously, usually the most neglected. The evaluation, apart from validating the solutions conceived, quantifies the amount of improvement and progression, helps identify unreported problems, and allows us to determine whether the desired quality level has already been attained. Doing so with rigor and precision is not always easy, and sometimes might not even be entirely possible. In truth, the fact that some practitioners tend to neglect or minimize this part of the process ends up not being so surprising after all, since it is often very hard to find or employ a balanced qualitative and quantitative method.
One of the most appreciated techniques is testing with users 1. If well conducted, it produces reliable results. Results are tangible and can be treated statistically. The downside is that it is complex: it usually requires involving many stakeholders, the tests need to be carefully designed, and the results need to be carefully processed. The bottom line is: it takes a lot of time and effort, making it quite expensive. Another (not so) popular technique consists of using predictive analysis 3. When employing user tests looks too herculean, and sometimes it does, predictive analysis is something that might be considered. Although in such cases predictive analysis might provide a faster way of performing evaluation than user tests, it requires that the solutions fit the underlying modeling scheme of the predictive method. Also, these methods have a very narrow spectrum of problems where they might be applied, because they have been proven valid only in a very small number of contexts. Thus results are only valid when the problem context falls within the model’s extent of applicability.
These two techniques, although powerful when correctly used, bring along a heavy burden of formality. So if we are in need of some informal and agile method of evaluation we can consider heuristic evaluation. This consists of a team of experts thoroughly analyzing our solution designs looking for heuristic violations. These experts’ reports can then be used to guide our decisions. Even though the accuracy and precision of the results attained are admittedly far lower than those we get by using formal methods, the resources needed to perform this kind of evaluation are significantly lower as well, which makes this approach a valid option, particularly if our goal is only a light validation.

Now, after these light introductory notes, which I hope helped place in context those less familiar with CHI practices, we advance to our endeavor log, where I will share with you the problems, strategies and decisions made along the way. So open your minds and roll your sleeves up; we are going to get our hands in the dough.

The Organizational Model Problem

The case we will be solving includes both browsing and editing organizational models. An organizational model describes an organization, or part of it, by representing its constituents: units and persons. These models typically take the form of tree structures where units compose branches and persons are incorporated as leaves.
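To make the tree metaphor concrete, here is a minimal sketch of such a structure in Python. All class and variable names are illustrative (the real domain model, shown next, is richer):

```python
# Minimal sketch of an organizational model as a tree:
# units form the branches, persons are the leaves.
class Person:
    def __init__(self, name):
        self.name = name

class Unit:
    def __init__(self, name):
        self.name = name
        self.sub_units = []   # branches: nested units
        self.members = []     # leaves: persons in this unit

    def add_unit(self, unit):
        self.sub_units.append(unit)
        return unit

    def add_person(self, person):
        self.members.append(person)
        return person

# Example: a tiny slice of a sales department
sales = Unit("Sales Department")
emea = sales.add_unit(Unit("EMEA"))
emea.add_person(Person("Jane Doe"))
```

Walking the `sub_units` lists top-down reproduces the branch structure of Fig. 3, with `members` hanging off each unit as leaves.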

  • OrganizationalModel.png

    Fig. 3 - Sales Department Organizational Model

Our domain representation of an organizational model follows what is suggested in Martin Fowler's Accountability Modeling 4.

  • OrgModelDomain.png

    Fig. 4 - Organization Structure Domain Model
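In rough terms, the Accountability pattern reifies each relation between two parties as its own object, qualified by a type. The sketch below is a much-simplified Python rendition: Party, Accountability and AccountabilityType (with commissioner and responsible roles) come from Fowler's pattern itself, while the example data is illustrative:

```python
# Simplified sketch of Martin Fowler's Accountability pattern:
# relations between parties are first-class Accountability objects,
# each qualified by an AccountabilityType.
class Party:
    def __init__(self, name):
        self.name = name

class Person(Party): pass
class Unit(Party): pass

class AccountabilityType:
    def __init__(self, name):
        self.name = name

class Accountability:
    def __init__(self, type_, commissioner, responsible):
        self.type = type_                 # e.g. "Research Officer"
        self.commissioner = commissioner  # typically the parent unit
        self.responsible = responsible    # the child unit or person

# Illustrative example: Dr. Zoidberg as Research Officer of the LEU lab
leu = Unit("Usability Engineering Laboratory")
zoidberg = Person("John A. Zoidberg")
officer = AccountabilityType("Research Officer")
rel = Accountability(officer, leu, zoidberg)
```

The payoff of this reification is that renaming, retyping or moving a relation (tasks we will meet repeatedly in the scenarios) touches a single Accountability object instead of restructuring the tree.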

Phase 1 - Task Analysis I

Before starting our task analysis there is an important aspect that needs to be assessed. Cognitive Systems Engineering problems can take many forms. In CHI domains the work is usually centered around interface design; in other cases it might be related to elaborating training programmes. Sometimes we need to create from scratch; other times we need to audit already existing solutions and start from there. In the end the fundamental tools and knowledge we use will be the same in both cases, but we need to adapt to the underlying context. Also, never forget that our "jurisdiction" covers both the user and the machine and everything in between. Practitioners sometimes tend to erroneously overlook the user side of the problem and focus only on the interface and functional details. In this case a previously built solution already existed, and part of the work developed during this task analysis phase also included an audit of that existing solution.
One of the challenges of starting our task analysis with an audit is making sure the audit does not bias the future work. It helps when we can understand what is fundamentally wrong with the interface we are auditing. The solution already implemented had been developed strictly following the development of the functional requirements, which is a common mistake when interfaces are developed ad hoc. The curious thing about this fundamental weakness, however, is that it cloaks a potential strength for our task analysis: it maps the functional requirements, to which our interaction requirements have a logical correlation.
That will be a key aspect of our task analysis in this case. A task analysis is simply a process to illustrate how users perform a task and what is required for them to finish it successfully. This solution, although implemented, has not been widely disseminated, so there is not a big user pool from which to draw our task analysis (by studying what users do nowadays and how). Therefore an approach following the functional requirements presents itself as a viable kick-off for our task analysis. Apart from this, we want to favor agility and flexibility over quality. This does not mean quality will be neglected; it just means it isn't our number one priority, because this is not a usability-critical project (like a nuclear plant control panel or an Air Traffic Control workstation). Instead, reducing the overhead imposed by a structured interface design, as opposed to the traditional ad-hoc one, is very important in order to demonstrate that this effort pays off with better user acceptance and less maintenance.

After reviewing the functional requirements we can start drawing some ideas on what is relevant to retain for our task analysis. Tasks will mostly involve creating new organizational models, editing their structure (by including/removing types of relations), editing units and persons, composing units and persons, and performing search operations across the model.


Fig. 5 - The existing OM interface's main features

At this point we need to start formalizing the data we have gathered so far. A good way to place all those requirements in a task-oriented frame is by defining Use Scenarios. A Use Scenario is built around a simulated setting where some of the most relevant tasks are performed in a realistic context. Besides providing some structure to our task analysis, since we need to think of what our users will need to accomplish (eventually breaking it down into unitary tasks), it is also a good idea to have scenarios defined this early because they might be of some help later when we need to define test cases for our evaluation (these scenarios often prove to be good simulation exercises to use during the evaluation phase).

Use Scenario #1



By order of the Administration Board, some IST organic structures will be subjected to reforms. Among these, one that will be significantly affected is the Informatics Services Department (DSI), which will be split according to its present competences. A new cost centre, named Information Systems Department (DSI), must be created. This new centre will be placed hierarchically on the same level as the previous DSI. The first DSI will be renamed the Data Systems Department (DSD). The Information Systems and Applications Group (AASI) must be migrated from the DSD to the newly created DSI. The User Support Group must be migrated to the Branding and Communication Department. The AASI will have two new centres: the Usability Engineering Laboratory (LEU) and the Software Engineering Laboratory (LES). Editing and creation of new persons will be necessary.


Tab. 1 - Use Scenario #1: Editing an Organizational Model

Use Scenario #2



Following the changes introduced with the creation of IST-ID, it will be necessary to change IST's current organizational model. A new Organizational Model must be created and named IST-ID. The "Instituto Superior Técnico" unit will be one of the base entities of this new model. This model will have another base entity, a new unit named "IST-ID" of type "Research Institution". The model must include the same types of relations present in the Instituto Superior Técnico model, in addition to two new relations: "Research Chief" and "Budget Officer". Under this unit (IST-ID), 3 new cost centres must be created: the Project Management Office, the Budget Management Office and the Scientific Supervising Office. IST's Administration Board will play a supervising role in this unit, and its president will be Prof. Matos Ferreira. The DSI laboratories (LES and LEU), created in the previous scenario, must be attached to the IST-ID unit as "Research Units". Dr. Zoidberg and Pedro Santos will be the research officers of their respective labs, while Diogo Simões and Artur Ventura will be researchers in those labs. Luis Cruz will assume the role of budget officer of both centres. After finishing all these updates, the IST unit must be removed as a base entity.


Tab. 2 - Use Scenario #2: Creating a new Organizational Model

These two scenarios illustrate most of the tasks users will face when working with organizational models. Now that we have them displayed in a loose story-telling format, we may try to break them down into action-level descriptions of the tasks. This is what is called a Hierarchical Decomposition. For instance, when using Hierarchical Task Analysis (HTA), you are advised to define at most 3 levels of decomposition: tasks, sub-tasks and actions. Let's look at an example:

Guitar practice

1. Tune guitar
1.1. Plug the tuner
1.2. Tune each string
1.2.1. Pluck the string
1.2.2. Notice the key displacement
1.2.3. Adjust the respective peg accordingly

2. Practice piece
2.1. Read score
2.2. Play chords

Tab. 3 - The guitar practice HTA

As you can see in the example above, we start by defining the top-level tasks needed to perform our guitar practice. Then, as these tasks are defined by a series of steps, we break them down into several sub-tasks. In the case of string tuning, we go even further and define that sub-task in terms of the unitary actions needed to complete it.
Finding the proper granularity for this decomposition is not always a straightforward decision. That is why, for now, we will define our decomposition with only one level of detail, mostly because our tasks, if you look carefully into the scenarios we presented, are very simple and thus need no further decomposition. If we happen to realize some of these tasks are intricate in nature, we can always refine our analysis later.
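The guitar practice decomposition maps naturally onto a nested data structure. A quick, purely illustrative Python sketch:

```python
# The guitar practice HTA from Tab. 3 as nested (label, children) pairs;
# leaves (empty child lists) are the unitary actions.
hta = ("Guitar practice", [
    ("Tune guitar", [
        ("Plug the tuner", []),
        ("Tune each string", [
            ("Pluck the string", []),
            ("Notice the key displacement", []),
            ("Adjust the respective peg accordingly", []),
        ]),
    ]),
    ("Practice piece", [
        ("Read score", []),
        ("Play chords", []),
    ]),
])

def depth(node):
    """Depth of the decomposition below a node.

    With the root goal counted, the 3 advised HTA levels
    (tasks, sub-tasks, actions) give a maximum depth of 4.
    """
    label, children = node
    return 1 + max((depth(c) for c in children), default=0)
```

For this tree, `depth(hta)` evaluates to 4: the root goal plus the three HTA levels.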

Scenario #1 - Editing IST's organizational model (changes affecting DSI only)


1. Rename the "Communications and Information Technologies" cost centre (IST > CG) to "Systems & Information" (I&S).

2. Rename the "Informatics Services Departments" to "Data Systems Department" (DSD).

3. Create a new cost centre named "Information Systems Department" (DSI) as a DSD sibling.

4. Move the "Information Systems and Applications Group" (AASI) from the DSD to the DSI.

5. Move the "User Support Group" to the "Branding and Communication Department".

6. Create a new cost centre under the AASI named "Usability Engineering Laboratory" (LEU).

7. Create a new cost centre under the AASI named "Software Engineering Laboratory" (LES).

8. Create a new person called: John A. Zoidberg, PhD.

9. Add Dr. Zoidberg to the LEU laboratory.

10. Add Diogo Simões to the LEU laboratory.

11. Add Pedro Santos and Artur Ventura to the LES laboratory.


Tab. 4 - HTA scheme for Scenario #1

Scenario #2 - Creating a new Organizational Model (IST-ID)


1. Create a new organizational model named "IST-ID".

2. Add "Instituto Superior Técnico" as a base entity in this model.

3. Set the model's relation types: the same as presented in "Instituto Superior Técnico" model plus three new types, "Research Officer", "Budget Officer" and "Research Unit".

4. Create new base entity: "IST-ID". This unit type is "Research Institution".

5. IST's administration board (IST) must play a supervising role in this unit.

6. Add Prof. Matos Ferreira to this unit as its president.

7. Create a new cost centre under "IST-ID" called "Project Management Office".

8. Create a new cost centre under "IST-ID" called "Budget Management Office".

9. Create a new cost centre under "IST-ID" called "Scientific Supervising Office".

10. Create a "Research Unit" relation between the "Usability Engineering Laboratory" (DSI > LEU) and "IST-ID".

11. Change Dr. Zoidberg's relation with LEU to "Research Officer".

12. Change Diogo Simões relation with LEU to "Researcher".

13. Create a "Research Unit" relation between the "Software Engineering Laboratory" (DSI > LES) and "IST-ID".

14. Change Pedro Santos relation with LES to "Research Officer".

15. Change Artur Ventura relation with LES to "Researcher".

16. Add Luis Cruz to LES and LEU as "Budget Officer".

17. Remove IST as a base entity.


Tab. 5 - HTA scheme for Scenario #2

So far we have come from a set of requirements, organized them around plausible use scenarios, and then broken those scripts down into a formal description of interaction steps. We can now go even further in highlighting which details are really important to keep in mind when designing the solution. If you look closely into our HTA tables, you can easily spot that some tasks appear to be equivalent in nature. Take for instance steps 7, 8 and 9 from Scenario #2: all of these tasks imply creating a new unit. In order to have a summarized representation of our Task Analysis, we can now pick up our HTA results and extract several Use Cases showing the different types of tasks involved in these processes.
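This grouping of equivalent HTA steps into use cases can be pictured as a simple classification. The sketch below is illustrative only: the use-case names are hypothetical (not the ones in the figures), while the step numbers come from Tab. 5:

```python
# Illustrative grouping of HTA steps from Scenario #2 (Tab. 5) into
# use cases. Use-case names are hypothetical; the step numbers are
# the ones from the table that instantiate each case.
use_cases = {
    "Create a cost centre under a unit": [7, 8, 9],
    "Create a relation between units":   [10, 13],
    "Change a person-unit relation":     [11, 12, 14, 15],
}

def use_case_of(step):
    """Return which use case a given HTA step instantiates, if any."""
    for name, steps in use_cases.items():
        if step in steps:
            return name
    return None
```

Each distinct use case then becomes one interaction to design for, instead of seventeen superficially different steps.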

  • UseCase01.png

    Fig. 6 - Use Case #1

    Tasks reflecting this case:


  • UseCase02.png

    Fig. 7 - Use Case #2

    Tasks reflecting this case:


  • UseCase03.png

    Fig. 8 - Use Case #3

    Tasks reflecting this case:


  • UseCase04.png

    Fig. 9 - Use Case #4

    Tasks reflecting this case:


  • UseCase05.png

    Fig. 10 - Use Case #5

    Tasks reflecting this case:


  • UseCase06.png

    Fig. 11 - Use Case #6

    Tasks reflecting this case:


  • UseCase07.png

    Fig. 12 - Use Case #7

    Tasks reflecting this case:


  • UseCase08.png

    Fig. 13 - Use Case #8

    Tasks reflecting this case:


  • UseCase09.png

    Fig. 14 - Use Case #9

    Tasks reflecting this case:


  • UseCase10.png

    Fig. 15 - Use Case #10

    Tasks reflecting this case:


  • UseCase11.png

    Fig. 16 - Use Case #11

    Tasks reflecting this case:


  • UseCase12.png

    Fig. 17 - Use Case #12

    Tasks reflecting this case:


  • UseCase13.png

Fig. 18 - Use Case #13

    Tasks reflecting this case:


What have we achieved so far?
We now know what we really want to do and we have some hints on how to get it. We can advance into devising our solutions, always keeping in mind that the needs we must address are represented in these Use Cases. Designing solutions is usually the right time to take risks and be creative. Being the creative process it is, the sky should be the limit, although we should not lose focus on the issues we have just determined must be faced. We can put it this way: sure, wish upon the vast blue sky, but let your task analysis cloud your judgment.

  • CloudsOfInteraction.png

    Fig. 17 - Clouds of interaction

Phase 2 - Solution Conception I

Manipulating organizational structures is a complex activity, and getting to the best way of providing such a mechanism is certainly difficult. So first and foremost, before defining concrete designs, we should sketch concepts for organizational structure representation. We should start by searching for problems similar to ours and investigating how those problems were solved.

Of course, one can find immense resources referring to organizational structures, though examples of tools conceived to manipulate such structures are not as easy to find. Still, one common trait we can capture from these examples is that the organizational model is always depicted in the shape of a tree diagram. This might not help endow us with creative options, but it surely provides the assurance of a surefooted design that has been broadly and compulsively used to answer the problem we are committed to solve. Either that, or everyone has been overlooking this issue for this long and using a tool that clearly lacks some fitness. Since our job requires a generous amount of modesty, let us keep the tree paradigm close to our hearts with high esteem, but still try to think about different ways of representing an organizational structure.

Let's analyze some ideas that came up during this stage, starting with the tree paradigm already referred to.

  • OMVisConcepts_Tree.png

    Fig. 18 - Organizational Model Visualization Concepts: TREE

This concept shows a bit of redundancy, since both shapes and colors are used to differentiate units from persons: squares represent units and blue lines lead to them, while circles represent persons and orange lines lead to them. The structure is expanded hierarchically from top to bottom, like a bunch of grapes. It gives an exact idea of the relation between two nodes, but for massive models it might fail to provide a macroscopic overview, as the model tends to look like a noodle soup.

  • OMVisConcepts_Pyramid.png

    Fig. 19 - Organizational Model Visualization Concepts: PYRAMID

This concept is similar to the tree. It clearly makes the relation between any two given nodes evident, because once again it is organized from top to bottom, each layer in the pyramid corresponding to a different hierarchical level. This model is more compact, though: each node is a block placed together with the others to form the structure. Since there aren't as many visual cues as in the tree (lines and shapes), it becomes easier to have a global reading of the model.

  • OMVisConcepts_Fragmentation.png

    Fig. 20 - Organizational Model Visualization Concepts: FRAGMENTATION

Again, this is yet another way of representing a tree expansion. The conceptual model used is the explosion of a frag grenade: each phase of the shockwave is a hierarchical level and, as before, units are differentiated from persons using a color code.

  • OMVisConcepts_Industry.png

    Fig. 21 - Organizational Model Visualization Concepts: INDUSTRY

This concept also follows the tree model, in a way familiar to us from the folder explorer applications common in so many OSs. It presents a fundamental change, though: this model expands a hierarchy of units only. Persons appear aggregated to each unit as information about that node.

  • OMVisConcepts_Mitosis.png

    Fig. 22 - Organizational Model Visualization Concepts: MITOSIS

Here I present something completely different. Each node is a cell and, surrounding the core, beneath the cell wall, the children nodes (both units and persons) are represented as mitochondrial entities. Whenever one of these little entities is expanded, a new cell is spawned representing that entity. Old cells are kept attached, giving a trail of the path taken while exploring the structure. You can see a similar cell concept in use in the friends management tool within Google+.

  • OMVisConcepts_Snowflake.png

    Fig. 23 - Organizational Model Visualization Concepts: SNOWFLAKE

The snowflake concept is a fractal expansion model. Each branch sprouts two petals, and those petals subsequently behave as branches themselves. I am very fond of fractals myself, and thought of this concept as a way of addressing two issues: providing completeness and providing reading scalability. Completeness is achieved because the whole model is represented and each hierarchy level is related to the depth level of the fractal. The snowflake keeps its branches and petals very tidy, equally sized and aloof. Besides, zooming in on the structure comes naturally with the fractal. Nevertheless, because our models are often irregular, breaking the symmetry, we might be forced to turn the snowflake down as a viable concept to apply here.

  • OMVisConcepts_Geo.png

    Fig. 24 - Organizational Model Visualization Concepts: GEO

This is a very interesting concept. Each layer shows a hierarchic level, but only the children of the previously selected parent. By doing this it loses completeness, as it trims the children of unselected sibling units, but it still provides a general idea of the model's shape, a fallback capability to any point in the model, and a very tidy look. Again, persons are shown separately as part of the selected unit's info.

  • OMVisConcepts_Topography.png

    Fig. 25 - Organizational Model Visualization Concepts: TOPOGRAPHY

In this concept the model is expanded around the root unit, with each hierarchic level being connected to the same contour line. Both the selected unit and its children are drawn with special focus (a larger area for that contour and name tags drawn over the units). In this example we only represent units, though the concept can also be used to represent units and persons alike.

As you can see, we already have a generous set of alternatives for our visualization strategy. However, conceiving a solution is not just coming up with a visualization paradigm. As with most creative processes, it starts with ideas, paper and pencil. In the following you will witness the evolutionary steps of our solutions, from embryo to prototypes.

Evolving Solutions


Fig. 26 - An early sketch using a Tree to depict the structure. The persons have been separated from the units.


Fig. 27 - Different visualizations using Topography and Geo concepts


Fig. 28 - Clipping the working area into several task zones.


Fig. 29 - Interaction details for specific task zones 1/2


Fig. 30 - Interaction details for specific task zones 2/2


Fig. 31 - Initial efforts. Working around the Person Browser: Stage 1


Fig. 32 - Initial efforts. Working around the Person Browser: Stage 2


Fig. 33 - Initial efforts. Working around the Person Browser: Stage 3


Fig. 34 - A new working area arrangement. The visualization area got wider.


Fig. 35 - Viewing person details.


Fig. 36 - Viewing unit contact details.


Fig. 37 - Adding a unit.


Fig. 38 - Managing the Organizational Model.


Fig. 39 - Deleting a model: confirmation screen.


Fig. 40 - Using a Topography visualization concept.

Phase 3 - Prototyping I

After brainstorming and struggling for solutions we came up with two models. They differ almost exclusively in the visualization paradigm used, sharing the same area arrangement and interaction concepts. To give them a try we decided to prototype them extensively, following a low-fi strategy. In order to attain plausible results against the existing implementation, we decided to model that existing solution as well, using the same tools. This way we guarantee that all three models compete on an equal footing.
Since we divided our tasks into two scenarios, this means we get six prototypes (three models, two scenarios each).
The prototypes we make available in this wiki are linked PDF pages, each page representing a screen.

We used Balsamiq Mockups to build our low-fidelity prototypes.
Adobe Illustrator was also used extensively to create some of the screens.

Model 1

Model 2

Model 3

Scenario 1




Scenario 2




Phase 4 - Evaluation I

At this stage we have prototypes mimicking the present-day solution, plus our two proposals for a new one. You can guess how essential it is now to put all these prototypes to the test, competing against each other. We are not just looking for acceptance; we want comparison results. When we first started working on these alternative models we were obviously looking for ways to overcome the fragilities we had identified in the present-day model. However, no matter how good our intentions were, or how skilled we are, we can never be sure that all the improvements and changes we make will translate into a better user experience. That makes evaluation crucial for validation.

Ideally we would perform a significant amount of user tests, sampling our user community as reliably as possible and accounting for all variables, so that in the end our results would be totally undisputed. We would want to harvest all kinds of measurements (execution times, number of clicks, doubts, mistakes) and all sorts of feedback (suggestions, comments, behavior, psychological response). We would then start working on the gathered data, typically doing an analysis by task to understand the impact of our changes: how big an improvement in performance and user experience did we get; where are our solutions doing better than the existing one; where are they not; why are they not; and even where they are doing better, are they doing well enough?

I will say it again: harnessing this knowledge is simply essential.

Now, there is yet another important word to mention: compromise. Testing each scenario is time-consuming. Doing it for a significant number of users is, well, even more time-consuming, plus the effort of analyzing all the data. And our resources were obviously very limited. We knew from the start that we would have around a dozen hour blocks for our tests, so we had to shape our evaluation to our constraints while keeping the good properties already mentioned.

Based on the time we had to perform our tests, we concluded from the beginning that we would have the chance to gather data from around 10 subjects. That is far less data than I would like to gather at this point (around half of what I would want), but that was what we had to work with. And since we wanted to present a results model based on old vs. new, each subject would have to work with more than one solution. Besides, the scenarios have a logical flow, meaning each task should follow the previous one in order to be understandable. At this point it started to look like we would have to overload our few subjects with an immense amount of tasks to attain the results we aimed for.
So we set about reshaping the tests so they would remain valid while becoming feasible. We reasoned that M2 and M3 were quite similar, with only the visualization changing while the experience stayed practically the same, so we could present our comparison as M1 vs. M2&M3. On that basis we established that each test would comprise 2 models. Out of the 10 tests, 5 would be M1 vs. M2 and the other 5 would be M1 vs. M3. Since both sets of tasks for each model were identical, and to avoid ordering bias, we also determined that 5 tests would start with M1 and the other 5 would start with the corresponding M2 or M3. We further determined that our subjects should be picked heterogeneously from our user base: faculty (5) and administrative staff (5).
As for measurements, we focused primarily on performance probes, such as number of clicks, execution times and number of help requests, while still gathering some qualitative data. We saved all information conveyed by the subjects (comments and suggestions), and prepared a simple 2-question inquiry that subjects answered at the end, probing for the usefulness and comfort of the model just tested. The prototypes tested are the same ones posted in the previous chapter. Still on the environment configuration, it is worth mentioning that we ran the prototypes inside Balsamiq and recorded all data using ScreenFlow.

With the help of Joana, our user-tests expert, we started stripping down our scenarios a bit and adapting them to a simpler test protocol. At this point we also had Pedro and Marcelo, the interns responsible for running the user tests, joining in for extra insight and consideration. If you have done anything similar you know how critical it is to have your test protocols as neat and clear as possible.

Credits also go to: Joana Viana <joana DOT viana AT ist DOT utl DOT pt>, Marcelo Vieira <marcelo DOT vieira AT ist DOT utl DOT pt>, Pedro Durão <pedro DOT durao AT ist DOT utl DOT pt>

As we said, all gathered data was recorded and assembled. During the tests Pedro and Marcelo were just aiding and monitoring the process. Afterwards, Hugo did the post-analysis of all the recorded videos and populated our tables with all the aforementioned metrics. Hugo Tavares <hugo DOT tavares AT ist DOT utl DOT pt> was also an intern, working with Pedro and Marcelo. Thanks for your patient and precious work, Hugo.

In the aftermath we condensed all inputs into a comparison table specifically aimed at showing each model's performance on each tested task.

Model Analysis per task

  • M2 and M3 got hurt a bit in execution times because they introduce a new paradigm (users were already familiar with M1's look-and-feel);

  • M2 hampered by some noise;
  • M3 showing really good numbers (as users get more familiar with the new paradigms);

1.04 - 1.05

  • M1 was badly harmed in these tasks because it offers no easy way of moving units;

1.06 - 1.08

  • M1 is a bit confusing when it comes to creating units/persons;

  • M2 hampered by noise;

2.01 - 2.04

  • M2 and M3 showing some design weaknesses (model edit screen);

  • M2 again suffering from noise;
  • M3 giving some trouble with the root unit swap functionality;

2.07 - 2.08

  • M2 and M3 causing some confusion with structure navigation;
  • Some confusion related to the add/create semantics (an error in the protocols);

  • M1 is really confusing because you need to delete the person and then add them again;

Tab. 6 - A per-task model analysis based on all evaluation inputs.

As we expected, M2 and M3 overall scored better than M1, with average task execution being both faster and more precise. Familiarity with M1 was a bias: people often said they felt more confident with M1 and then achieved better results with either of the other two prototypes. One of our subjects performed so badly, in clear contrast with all the remaining tests, that it ended up harming M2 severely. After a careful examination of that test we decided it was so inconsonant with the remaining data (and possibly reflected some abnormal procedures) that we would discard it. After that consideration it became clear to us that both M2 and M3 would be good options to follow from here (without forgetting that they still need some fixing and refining). In terms of visualization we determined that, for now, implementing a tree model would be more feasible, and since M3 did not really show a bigger improvement over M2, we made up our minds.
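We did not apply a formal statistical criterion when discarding that subject; the decision was made by inspection. For illustration only, a simple screen of the kind one might use would flag subjects whose mean task time lies far from the group mean. All names and data below are hypothetical.

```python
import statistics

# Illustrative outlier screen (not the procedure actually used):
# flag subjects whose mean task time deviates from the group mean
# by more than z_cut standard deviations.

def flag_outliers(times_per_subject, z_cut=2.0):
    means = {s: statistics.mean(t) for s, t in times_per_subject.items()}
    mu = statistics.mean(means.values())
    sigma = statistics.stdev(means.values())
    return [s for s, m in means.items() if abs(m - mu) > z_cut * sigma]

# Hypothetical data: subject "S7" is far slower than everyone else.
times = {"S1": [12, 14], "S2": [11, 13], "S3": [13, 15],
         "S4": [12, 12], "S5": [14, 13], "S6": [11, 14],
         "S7": [55, 60]}
print(flag_outliers(times))   # ['S7']
```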

We picked M2.

Phase 5 - Task Analysis II

Every time we finish evaluating, a cycle ends, and it is time to determine whether the achieved results are satisfying. If they are, we accept our design and start implementation. If not, a new iteration starts. The results from the previous cycle were extremely rich and very important. They do not amount to a thorough coverage of the solution, though, as we deliberately left a critical part of the system out of consideration during the first studies. That critical feature was the management of relations between party entities (units and persons indiscriminately). We also felt our solution still lacked some refinement, so we naturally decided to keep iterating by proceeding to a new stage of task analysis. All the data gathered and results achieved through previous iterations become important inputs for the analysis now starting.

  • SecondIteration.png

    Fig. 41 - Input for second iteration's Task Analysis

  1. Figure (2 inputs)
  2. List of carried-over requirements
  3. List of new requirements

Phase 6 - Solution Conception II

Lorem ipsum

  1. List of proposed solutions
  2. ?Studies of some refined details?

Phase 7 - Prototyping II

Lorem ipsum

High-Fidelity Prototype

Phase 8 - Evaluation II

Lorem ipsum

  1. Explain the SUS method
  2. Scripts/Protocols
  3. Test execution (video)
  4. Extracted results
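While the explanation of the SUS method is still to be written, its scoring rule is standard (Brooke's 10-item questionnaire): each item is answered on a 1-5 scale, odd items contribute (answer - 1), even items contribute (5 - answer), and the total is scaled to 0-100. A minimal sketch:

```python
# Standard SUS (System Usability Scale) scoring: 10 items on a 1-5
# scale; odd items contribute (answer - 1), even items (5 - answer);
# the sum is multiplied by 2.5 to yield a 0-100 score.

def sus_score(answers):
    assert len(answers) == 10, "SUS has exactly 10 items"
    total = sum(a - 1 if i % 2 == 0 else 5 - a
                for i, a in enumerate(answers))
    return total * 2.5

# Best possible responses (agree with positive items, disagree with
# negative ones) yield the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))   # 100.0
```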