I am still unsure where the harmony parametrization best belongs: inside a particular workflow, or in a system of musical creation generally.
In the simplest case, I see five dials. And a 3D visualization.
It all looks rather like a machine learning problem, since much of it concerns how sets of notes should be classified as single entities. But that’s internal.
Apart from the specifically musical issues, classification seems to be a way of taking a large number of (somehow) identifiable objects and reducing them to a smaller number: finding a way to treat groups of objects as single things, and simplifying the groupings.
Finding the key of a chord is similar to assigning it a classification. The difference from most classification problems is that the label is intrinsically (numerically) unstable: near the boundary between two keys, adding or removing a single note can change the answer.
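The instability can be made concrete with a toy key classifier in the spirit of the Krumhansl–Schmuckler algorithm: correlate a chord's pitch-class vector against a profile for each of the twelve major keys and take the best match. This is only a sketch under my own simplifying assumptions (major keys only, the standard Krumhansl–Kessler profile, plain Pearson correlation); the point is just that adding one note can nearly erase the margin between the top two keys.

```python
# A toy key classifier: correlate a chord's pitch-class vector with
# each major-key profile. Major keys only, for illustration.

# Krumhansl-Kessler major-key profile, indexed by semitones above
# the tonic (0 = tonic .. 11 = leading tone).
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def classify(pitch_classes):
    """Return (best key name, correlation margin over the runner-up)."""
    v = [0.0] * 12
    for pc in pitch_classes:
        v[pc % 12] += 1.0
    # Rotate the profile so its tonic sits at pitch class k.
    scores = [pearson(v, MAJOR_PROFILE[-k:] + MAJOR_PROFILE[:-k])
              for k in range(12)]
    ranked = sorted(range(12), key=lambda k: scores[k], reverse=True)
    return NOTE_NAMES[ranked[0]], scores[ranked[0]] - scores[ranked[1]]

print(classify([0, 4, 7]))     # C-E-G: C major, with a comfortable margin
print(classify([0, 4, 7, 2]))  # add a D: still C, but barely ahead of G
```

The single added D almost collapses the margin between C and G major: the classification is still made, but it has become fragile, which is exactly the numerical instability above.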
So the quickest route would be to use the JUCE model simply to parse incoming MIDI through five dials, with a single output.
In the end, I suppose that the point of the system is to treat harmony as an aural measurement of bit-entropy.
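One literal reading of that idea: measure the Shannon entropy, in bits, of a passage's pitch-class histogram. A passage confined to a key spends fewer bits than a fully chromatic one. The two example passages below are my own illustrative assumptions, not anything from a real corpus.

```python
# A sketch of "harmony as bit-entropy": Shannon entropy (in bits) of
# the pitch-class distribution of a sequence of MIDI note numbers.
from collections import Counter
from math import log2

def pitch_class_entropy(notes):
    """Entropy in bits of the pitch-class distribution of `notes`."""
    counts = Counter(n % 12 for n in notes)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

diatonic  = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale
chromatic = list(range(60, 72))                 # all 12 pitch classes

print(pitch_class_entropy(diatonic))   # 2.75 bits: the key constrains the signal
print(pitch_class_entropy(chromatic))  # log2(12) ≈ 3.585 bits: no constraint
```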
Also, being ‘in a key’ is a form of cross-validation, addressing the question of what’s most likely to happen in the future. There is only ever a percentage…
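One way to cash out "there is only ever a percentage": treat a key profile as predictive weights over the next pitch class and normalize it into a probability distribution. Using the Krumhansl–Kessler major profile this way is my own assumption, not an established model; the sketch only shows that even firmly "in C major", the next note is diatonic with some probability short of certainty.

```python
# A sketch of "in a key" as prediction: normalize a key profile into
# a distribution over the next pitch class. Treating the profile as
# predictive weights is an assumption made for illustration.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

total = sum(MAJOR_PROFILE)
p_next = [w / total for w in MAJOR_PROFILE]   # P(next pitch class | key)

DIATONIC = {0, 2, 4, 5, 7, 9, 11}             # the seven scale degrees
p_in_key = sum(p_next[i] for i in DIATONIC)

# Even "in C major", the next note is only probably diatonic.
print(f"P(next note is diatonic) = {p_in_key:.3f}")
```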
I am haunted by the primitive nature of classifying by keywords and hashtags.
