The role of parametrized harmony

I have some confusion about where the harmony parametrization should best live — in a workflow, or in a system of musical creation generally.

In the simplest case, I see five dials. And a 3D visualization.

It all looks sort of like a machine learning problem, addressing a lot of issues about how sets of notes should be classified as single entities. But that’s internal.

Apart from specific musical issues, it seems as though classification is a way of taking large numbers of (somehow) identifiable objects, and making them into a smaller number of objects — that is, finding a way to treat groups of objects as single things, and simplifying the groupings.

Finding the key of a chord is similar to assigning it a classification. However, the classification of a chord is intrinsically (numerically) unstable, which is a fundamental difference from most classification problems.

So the quickest way would be to use the JUCE model simply to parse incoming MIDI over five dials — with a single output.

In the end, I suppose that the point of the system is to treat harmony as an aural measurement of bit-entropy.

Also, being ‘in a key’ is a form of cross-validation, addressing the question of what’s most likely to happen in the future. There is only ever a percentage…

I am haunted by the primitive nature of classifying by keywords and hashtags.


Simple

How simple can computation be?

I think harmony can be a single equation:

    // get the pattern byte
    unsigned short patternByte = qdGetModePatternForPval(aKPDVE[1], partition);
    unsigned short answerBitByte = 0x8000;

    // SOLUTION FOR *ANY* DVE
    answerBitByte >>= (aKPDVE[2] + aKPDVE[3] * aKPDVE[4]) % partition;

    // if the answer bit misses the pattern, shift sharp-side or flat-side
    if ((answerBitByte & patternByte) == 0) {
        answerBitByte >>= (aKPDVE[1] < 4) ? partition : 12 - partition;
    }

    // bit shift for K
    return circleShiftLowestOctaveRight(answerBitByte, aKPDVE[0]);
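Both helper functions live elsewhere in the codebase. As a guess at what the last step does, here is a hypothetical Python sketch of a circular right shift confined to the top 12 bits of a 16-bit word; the real `circleShiftLowestOctaveRight` may work differently.

```python
def circle_shift_lowest_octave_right(bits, k):
    """Hypothetical sketch: rotate the top 12 bits of a 16-bit word
    right by k steps, leaving the low 4 bits untouched."""
    octave = (bits >> 4) & 0xFFF        # isolate the 12 pitch-class bits
    k %= 12
    rotated = ((octave >> k) | (octave << (12 - k))) & 0xFFF
    return (rotated << 4) | (bits & 0xF)
```

With this reading, shifting `0x8000` right by one step gives `0x4000`, and a bit falling off the bottom of the octave wraps back to the top.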

points in space

Speaking informally, quickly, before my son has to go to piano:

The fact of the dot product, and its relationship to projection into spaces (and to distance), is crucial. Taking the equation:

y = Wx + b

and figuring out how everything can be a variable of a sort (weights and data)… there is a reversal in moving from the (known) data points into the (unknown) space defined by the weights. Once the space is known, predictions can be made…
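That reversal can be sketched minimally in NumPy (the data and names here are invented for illustration): recover W and b from known points by least squares, then use them to predict.

```python
import numpy as np

# known data points (x) and known outputs (y), generated by a hidden W=2, b=1
x = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 * x + 1.0

# append a column of ones so b rides along with W in a single solve
X = np.hstack([x, np.ones_like(x)])
theta = np.linalg.lstsq(X, y, rcond=None)[0]
W, b = theta.ravel()

# once the space (W, b) is known, predictions can be made
y_new = W * 4.0 + b
```

The same dot product that measured the data now projects any new point into the learned space.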

without

So much of the question of computers is how to live without them. A solution always seems around the corner, until there’s another corner. At which point, one should perhaps play the violin or something.

The feeling of programming and the (empty) feeling of social media are not so different.

The usefulness of

tf.name_scope("a_name_scope")

is not yet clear. But I guess it will become useful in TensorBoard.

World of pain

Well, it turns out that the world of pain with matplotlib was caused by… uninstalled LaTeX. Geez. It’s a 3.42 GB download. Or I could just delete this line:

rc('text', usetex=True)

Much faster and lighter. But maybe not so pretty.


The stepwise nature of the run() command in TensorFlow is a new thing for me, though I think I can see why it might be so.
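The build-first, run-later idea can be sketched in plain Python. This is only the shape of deferred execution, not TensorFlow’s actual API; the class and names are invented for illustration.

```python
class Node:
    """Toy deferred-execution node: constructing it does no work."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # evaluation happens only when run() is called, pulling inputs first
        return self.fn(*(node.run() for node in self.inputs))

def constant(v):
    return Node(lambda: v)

# build the graph first (nothing is computed yet)…
total = Node(lambda a, b: a + b, constant(2), constant(3))
# …then execute it with run()
result = total.run()
```

Separating the description of the computation from its execution is what makes the stepwise `run()` feel strange at first.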


The unanswerable question today is this:

What makes a complete collection, or complete enough collection? 

Words seem to work this way — a sticky set of letters. Sentences… sort of seem to work this way. Though often the end of a sentence seems far from the beginning. Particularly in German, which may explain why the verbs sit there at the end — to make you think backwards.

‘What makes things seem to gather into singleness?’ might be another way to put it.


How often it seems as though musical motion depends on the reduction of a group of events to its simplest assortment.  A gathering of bits; a comparison of similarities; a measure of 3 against 2. But all flowing through time, giving momentary assurance.

Monkey bars.

Still pickin’

…at TensorFlow.

Always the small things. Just in getting started. Even a matter such as

# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9],
                     [9, 9, 9]]

showing up in the (linter and) documentation when it should be

fill((2, 3), 9) 

that is, parentheses instead of brackets — that can cost twenty minutes… of your life.
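For a quick sanity check of the shape-first signature, NumPy’s `np.full` is an easy stand-in (using `np.full` here is my substitution, not the TensorFlow call above); it happens to accept either a tuple or a list for the shape.

```python
import numpy as np

# shape first, fill value second; NumPy takes a tuple or a list here
a = np.full((2, 3), 9)
b = np.full([2, 3], 9)
```

Both produce the same 2-by-3 array of nines.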

I have squares for a most curious generative music notation. But what a mess, what a fluid, this notation idea contains. And how uncomfortably it can spill.

Solving such a matter can be the single step of a short day.