In most final selection processes we have:

- four/five options from which to choose a winner,

- around ten attributes to rate and

- around ten stakeholders willing to mark the bids.

At the long list stage, the number of options could be much higher and the selection process often uses a set of go/no go criteria. In truth, it is often a cut process rather than selection of the best.

When the final proposals have been marked, we have a ‘slab’ of data:

X_{k} (one matrix of marks per stakeholder, k = 1, …, K), together a slab of dimensions I*J_{k}*K where:

I is the number of options (say five)

K is the number of stakeholders (say ten) and

J_{K} is the number of attributes marked.

Therefore, there are K sets of marks for each option, one from each stakeholder.
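A minimal sketch of that slab in numpy, assuming the figures quoted above (five options, ten stakeholders, around ten attributes each) and random marks purely as a stand-in for real scoring:

```python
import numpy as np

rng = np.random.default_rng(0)

I = 5    # options (bids) to choose between
K = 10   # stakeholders marking the bids
# each stakeholder k marks J_k attributes; the exact spread here is illustrative
J = [10, 9, 11, 10, 8, 10, 12, 10, 9, 10]

# the 'slab': one I x J_k matrix of marks per stakeholder (marks out of 10)
X = [rng.integers(1, 11, size=(I, J[k])).astype(float) for k in range(K)]
```

Because each stakeholder may mark a different number of attributes, the slab is stored as a list of K matrices rather than a single rectangular array.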

PCA is a technique that is usable on two-dimensional data; it cannot be sensibly used on three-dimensional data.

MFA (Multiple Factor Analysis) and Tucker3 were developed to handle ‘slab’ data and produce PCA-style results.
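The core MFA step can be sketched briefly: centre each stakeholder's block, weight it by the reciprocal of its first singular value so that no one stakeholder dominates, then run an ordinary PCA (via SVD) on the concatenated global table. The function name `mfa_scores` and the random demonstration data are my own; this is an illustration of the idea, not a production implementation:

```python
import numpy as np

def mfa_scores(blocks, n_components=2):
    """Minimal MFA sketch: weight each stakeholder's block by
    1/(first singular value), then PCA the concatenation."""
    weighted = []
    for Xk in blocks:
        Xc = Xk - Xk.mean(axis=0)            # centre each attribute
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]
        weighted.append(Xc / s1)             # MFA block weighting
    Z = np.hstack(weighted)                  # I x sum(J_k) global table
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # one score row per option

# illustrative slab: 5 options, 10 stakeholders, 10 attributes each
rng = np.random.default_rng(1)
blocks = [rng.normal(size=(5, 10)) for _ in range(10)]
scores = mfa_scores(blocks)   # 5 x 2 array of option coordinates
```

The resulting scores place the five options in a common low-dimensional space, which is what lets you compare how the stakeholders' marks separate the bids.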

The technique uses covariance calculations, so the dimensions of the slab of data are important. In a major bid process you are only likely to have around five proposals to mark, and the results will lack statistical significance. The central limit theorem comes to our rescue at around ten and above, when things start looking Gaussian, but it is not practical to try and get ten or more bids. MFA therefore suffers the same issues as PCA when one dimension of the data is small.

MFA will give a view on the data, but beware of the poor statistical significance of the result.

Of course, the ‘significance’ of the stakeholders holding a meeting and agreeing weights and scores to select a winner could itself be challenged…

Ian Richmond
