Sunday, January 19, 2020

Notating Discontinuities in Long Time Series

A Brian Romanchuk post uses math that requires a starting point in a time series. Starting something creates a discontinuity in any temporal display, which would show up as a jump on a chart. I would like to suggest a better way to notate the time series that Brian uses.

We have a continuous time series -N, ..., -3, -2, -1, 0, 1, 2, 3, ..., N which contains one or more discontinuities. The first discontinuity is introduced at time-point 0. On a chart, it would appear as a jump in the continuous line linking all time values.

Having organized our time series around time-point 0, we find the need to enter further discontinuities into the chart. It is very unlikely that any further discontinuities will occur exactly at labeled time-points; instead, they will occur during the time-spans between time-points. Hence we need to label point events more precisely.

The usual way to notate events that occur in a time series is to label the event x and the time-point (t); thus we write x(t) and assign it a value as needed.

Notice that this method allows only one value for each time-point. There is no way to indicate events that occur between time-points except to force them in at the next time-point. This becomes a problem when we try to build precise theory (as Brian was trying to do) from averaged data locations.

Improved notation is easily achieved by subdividing the time-points, writing x(t.t). The first 't' is the location in the basic time series; the second 't' is the ordinal count of the discontinuity event within the following time-span. For example, the fifth event occurring in the time-span following the zero point would be labeled x(0.5).
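This two-level labeling is easy to mirror in code. The sketch below (my own illustration, not from the post) represents each label x(t.k) as a (t, k) pair, where t is the whole time-point and k counts events within the span that follows it; the variable names and sample values are assumptions.

```python
# Each event label x(t.k) becomes a (t, k) tuple plus a value.
# Sample events are illustrative only.
events = [
    (0, 1, 100.0),   # x(0.1): first event after time-point 0
    (0, 2, 300.0),   # x(0.2): second event in the same span
    (1, 1, 300.0),   # x(1.1): first event after time-point 1
]

# Tuples sort lexicographically, so (t, k) labels preserve event
# order even when many events share one time-point.
labels = sorted((t, k) for (t, k, _) in events)
print(labels)  # [(0, 1), (0, 2), (1, 1)]
```

Storing the label as a pair rather than as a literal decimal also sidesteps an ambiguity in the dotted notation: as a number, x(0.10) would collide with x(0.1), but the pairs (0, 10) and (0, 1) remain distinct.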

Notice that this system introduces a third dimension into our notation system. Term 'x' is the first dimension, the first 't' is the second dimension and the second 't' is the third dimension (still 'time' but on a different scale). Each can be assigned a single value and located on a chart. [This precision can be very useful when we discuss the introduction of fiat money into an economy.]

In reality, only two dimensions can be displayed on a chart unless we break a rule of some kind. The rule we break here is the rule of scale. For the first 't', the conventional time-series scale applies. For the second 't', the time-span between time-points is re-scaled to accommodate the number of events occurring during each span--it may be one event; it may be a million. We restore accounting accuracy for the second 't' by labeling the last eligible event x(t.N) and assigning the same value to the next position label, x({t+1}.0). Thus, x(t.N) and x({t+1}.0) have identical values and occupy the same position on the time-series scale.
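The boundary convention above can be made concrete with a small sketch (my own illustration; the dictionary and function names are assumptions): when span t closes after its N-th event, the value of x(t.N) is duplicated under the label x({t+1}.0).

```python
# A series keyed by (t, k) label pairs, standing in for x(t.k).
series = {}

def close_span(series, t, n, value):
    """Record x(t.N) and duplicate its value as x({t+1}.0)."""
    series[(t, n)] = value      # x(t.N): last event of span t
    series[(t + 1, 0)] = value  # x({t+1}.0): same value, same position

# Suppose span 0 closes after its third event with value 300.0.
close_span(series, 0, 3, 300.0)
print(series[(0, 3)] == series[(1, 0)])  # True
```

The duplication is what the post calls restoring accounting accuracy: the chart's line is continuous across the span boundary because both labels carry the identical value.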

A little practice in using the system would help here. Let's assume that purely fiat money is introduced into an economy for the first time at t=0. This t is placed on the time series scale.

The first fiat transaction is for $100. (Brian uses M(0) = 0.) We could label this in four different ways using the two notating systems:

x(0) = $100
x(-1.N) = x(0.0) = $100
x(0.1) = $100

A second fiat transaction, for $200, occurs during the first time-span. The amount of fiat in circulation would increase to $300, but how would we label it using each of the two notating systems?

x(1) = $300
x(0.1) = $300 (If the author chose x(0.0) as the starting point)
x(0.2) = $300 (If the author chose x(0.1) as the starting point)

If there were no further increase in fiat, we could correctly write

x(1) = $300
x(0.N) = x(1.0) = $300
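The fiat-money walkthrough can be replayed as a running total keyed by the (t, k) labels. The dollar amounts come from the post; the variable names are my own illustrative assumptions.

```python
# Two fiat transactions in span 0, labeled x(0.1) and x(0.2).
transactions = [((0, 1), 100.0),   # first transaction: +$100
                ((0, 2), 200.0)]   # second transaction: +$200

stock = {}
total = 0.0
for label, amount in transactions:
    total += amount          # fiat in circulation accumulates
    stock[label] = total

print(stock)  # {(0, 1): 100.0, (0, 2): 300.0}

# With no further issuance, the span closes after its second event,
# so N = 2 here and the value carries over: x(1.0) = x(0.N) = $300.
N = 2
stock[(1, 0)] = stock[(0, N)]
```

Note that the stock recorded at each label is the cumulative amount in circulation, matching x(0.2) = $300 in the text, not the size of the individual transaction.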

Readers can easily see the precision gained by using the second notation system.

Conclusion and Comment

Economics is a rather strange study. Individual exchanges are always discrete, unique events that are, in effect, discontinuities in the economic life of each individual. On the other hand, the discipline depends upon data collected over time, which can only result in imprecise time localization. We are reduced to working with averages.

Precise theory, such as Brian seeks, needs precision about where data points are expected to fall on the time axis. Without it, we risk taking shortcuts to skewed conclusions without even knowing that we have a problem.

[Disclaimer: I am not a mathematician. What seems to me as a simple and obvious method of time differentiation may have been used by others many times previously.]

(c) Roger Sparks 2020
