Friday, November 17, 2017

The "bottom up" inflation fallacy

Tony Yates has a nice succinct post from a couple of years ago about the "bottom up inflation fallacy" (brought up in my Twitter feed by Nick Rowe):
This "inflation is caused by the sum of its parts" problem rears its head every time new inflation data gets released. Where we can read that inflation was 'caused' by the prices that went up, and inhibited by the prices that went down.
I wouldn't necessarily attribute the forces that make this fallacy a fallacy to the central bank as Tony does — at the very least, if central banks can control inflation, why are many countries (US, Japan, Canada) persistently undershooting their stated or implicit targets? But you don't really need a mechanism to understand this fallacy, because it's actually a fallacy of general reasoning. If we look at the components of inflation for the US (data from here), we can see various components rising and falling:


While the individual components move around a lot, the distribution remains roughly stable — except for the case of the 2008-9 recession (see more here). It's a bit easier to see the stability using some data from MIT's billion price project. We can think of the "stable" distribution as representing a macroeconomic equilibrium (and the recession being a non-equilibrium process). But even without that interpretation, the fact that an individual price moves still tells us almost nothing about the other prices in the distribution if that distribution is constant. And it's definitely not a causal explanation.

It does seem to us as humans that if there is something maintaining that distribution (central banks, per Tony), then an excursion by one price (oil) must be offset by another (clothing) in order to maintain it. However, there does not have to be any force acting to do so.

For example, if the distribution is a maximum entropy distribution, then it is maintained simply by the fact that it is the most likely distribution (consistent with constraints). In the same way that it is unlikely all the air molecules in your room will move to one side of it, it is unlikely that all the prices will move in one direction — though they could. For molecules, that probability is tiny because there are huge numbers of them. For prices, it is not as negligible. In physics, the pseudo-force "causing" the molecules to maintain their distribution is called an entropic force. Molecules that make up the smell of cooking bacon will spread around a room in a way that looks like they're being pushed away from their source, but there is no force on the individual molecules making that happen. There is a macro pseudo-force (diffusion), but there is no micro force corresponding to it.
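To make that concrete, here's a minimal simulation (a sketch; the distribution and all the parameters are invented for illustration): individual "prices" wander all over the place while the cross-sectional distribution of price changes stays put, so measured "inflation" is steady with no force matching up offsetting prices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_goods, n_months = 1000, 120

# Each month every price gets an independent log change drawn from the
# same normal distribution (the maximum entropy distribution for a
# given mean and variance).
log_changes = rng.normal(loc=0.002, scale=0.01, size=(n_months, n_goods))
log_prices = np.cumsum(log_changes, axis=0)

# Individual prices end up all over the place after 10 years...
print("spread of final log prices:",
      log_prices[-1].min(), log_prices[-1].max())

# ...but the cross-sectional distribution of monthly changes is stable,
# so its mean ("inflation") is steady without any micro-level offsetting.
for month in (0, 59, 119):
    mu, sd = log_changes[month].mean(), log_changes[month].std()
    print(f"month {month:3d}: mean change = {mu:+.4f}, std = {sd:.4f}")
```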

I've speculated that this general idea is involved in so-called sticky prices in macroeconomics. Macro mechanisms like Calvo pricing are in fact just effective descriptions at the macro scale, and therefore studies that look at individual prices (e.g. Eichenbaum et al 2008) will not see sticky prices.

In a sense, yes, macro inflation is due to the price movements of thousands of individual prices. And it is entirely possible that you could build a model where specific prices offset each other via causal forces. But you don't have to: there are ways of constructing a model in which there isn't necessarily any way to match up macro inflation with specific individual changes, because macro inflation is about the distribution of all price changes. That's why I say the "bottom up" fallacy is a fallacy of general reasoning, not just a fallacy according to the way economists understand inflation today: it assumes a peculiar model. And as Tony tells us, that's not a standard macroeconomic model (which is based on central banks setting e.g. inflation targets).

You can even take this a bit further and argue against the position that microfoundations are necessary for a macroeconomic model. It is entirely possible for macroeconomic forces to exist for which there are no microeconomic analogs. Sticky prices are one possibility; Phillips curves are another. In fact, rational representative agents might not exist at the scale of individual human beings, but could be perfectly plausible effective degrees of freedom at the macro scale (per Becker 1962 "Irrational Behavior and Economic Theory", which I use as the central theme in my book).

Thursday, November 16, 2017

Unemployment rate step response over time

One of the interesting effects I noticed when looking at the unemployment rate in early recessions with the dynamic equilibrium model was what looked like "overshooting" (step response "ringing" transients). For fun, I thought I'd try to model the recession responses using a simple "two pole" model (a second-order low-pass system).
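For concreteness, here's a sketch of the fitting form (the standard underdamped second-order step response; the function and parameter names are my own, and the data below is synthetic rather than the actual unemployment series):

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, A, omega, zeta, t0, y0):
    """Underdamped second-order (two-pole) step response starting at t0."""
    tau = np.clip(t - t0, 0.0, None)          # zero before the shock hits
    omega_d = omega * np.sqrt(1.0 - zeta**2)  # damped frequency
    y = 1.0 - np.exp(-zeta * omega * tau) * (
        np.cos(omega_d * tau)
        + zeta / np.sqrt(1.0 - zeta**2) * np.sin(omega_d * tau)
    )
    return y0 + A * y

# Synthetic "recession" data just to show the fit recovers the parameters:
t = np.linspace(0, 8, 200)
true = step_response(t, A=3.0, omega=2.0, zeta=0.3, t0=1.0, y0=4.0)
data = true + 0.05 * np.random.default_rng(0).normal(size=t.size)

popt, _ = curve_fit(step_response, t, data,
                    p0=[2.0, 1.5, 0.5, 0.8, 4.0],
                    bounds=([0, 0.1, 0.01, 0, 0], [10, 10, 0.99, 5, 10]))
print("fitted (A, omega, zeta, t0, y0):", np.round(popt, 3))
```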

For example, here is the log-linear transformation of the unemployment rate that minimizes entropy:


If we zoom in on one of the recessions in the 1950s, we can fit it to the step response:


I then fit several more recessions, transformed back to the original data representation (unemployment rate in percent), and compiled the results:


Overall, this was just a curve fitting exercise. However, what was interesting was how the parameters changed over time. These graphs show the frequency parameter ω and the damping parameter ζ:


Over time, the frequency falls and the damping increases. We can also show the damped frequency ω_d = ω √(1 − ζ²), a particular combination of the two (this is the frequency that we'd actually estimate from looking directly at the oscillations in the plot):


With the exception of the 1970 recession, this shows a roughly constant, fairly high frequency that falls after the 1980s to a lower, roughly constant value.

At this point, this is just a series of observations. This model adds far too many parameters to really be informative (e.g. for forecasting). What is interesting is that the step response in physics results from a sharp shock hitting a system with a band-limited response (i.e. the system cannot support all the high frequencies present in the sharp shock). This would make sense — in order to support higher frequencies, you'd probably have to have people entering and leaving jobs at rates close to monthly or even weekly. While some people might take a job for a month and quit, they likely don't make up the bulk of the labor force. This doesn't really reveal any deep properties of the system, but it does show how unemployment might well behave like a natural process (contra many suggestions, e.g. that it is definitively a social process that cannot be understood in terms of mindless atoms or mathematics).
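As a toy illustration of that band-limited intuition (my own aside, using a generic second-order Butterworth filter rather than anything estimated from the data): feed a sharp step into a band-limited system and the output overshoots before settling.

```python
import numpy as np
from scipy.signal import butter, lfilter

# A sharp shock: a unit step.
t = np.linspace(0, 10, 1000)
step = (t >= 1.0).astype(float)

# A two-pole low-pass system: it cannot support the step's high
# frequencies, so the output overshoots before settling.
b, a = butter(2, 0.02)        # 2 poles, low cutoff (normalized frequency)
response = lfilter(b, a, step)

print("peak output:", response.max())   # > 1 means overshoot
```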

Wednesday, November 15, 2017

New CPI data and forecast horizons

New CPI data is out, and here is the "headline" CPI model last updated a couple months ago:


Since the last update, I changed the error band on the derivative data to show the 1-sigma errors instead of the median error. The level forecast still shows the 90% confidence interval for the parameter estimates.

Now why wasn't I invited to this? One of the talks was on forecasting horizons:
How far can we forecast? Statistical tests of the predictive content
Presenter: Malte Knueppel (Bundesbank)
Coauthor: Jörg Breitung
A version of the talk appears here [pdf]. One of the measures they look at is year-over-year CPI, which according to their research seems to have a forecast horizon of 3 quarters — relative to a stationary ergodic process. The dynamic equilibrium model is approaching 4 quarters:


The thing is, however, that the way the authors define whether the data is uninformative is relative to a "naïve forecast" that's constant. The dynamic equilibrium forecast does have a few shocks — one centered at 1977.7 associated with the demographic transition of women entering the workforce, and one centered at 2015.1 that I've tentatively associated with baby boomers leaving the workforce [0] after the Great Recession (the one visible above) [1]. But the forecast for the period from the mid-90s (after the 70s shock ends) until the start of the Great Recession would in fact be this "naïve forecast":


The post-recession period does involve a non-trivial (i.e. not constant) forecast, so it could be "informative" in the sense of the authors above. We will see if it continues to be accurate beyond their forecast horizon. 

...

Footnotes

[0] Part of the reason for positing this shock is its existence in other time series.

[1] In the model, there is a third significant negative shock centered at 1960.8 associated with a general slowdown in the prime age civilian labor force participation rate. I have no firm evidence of what caused this, but I'd speculate it could be about women leaving the workforce in the immediate post-war period (the 1950s-60s "nuclear family" presented in propaganda advertising) and/or the big increase in graduate school attendance.

Friday, November 10, 2017

Why k = 2?

I put up my macro and ensembles slides as a "Twitter talk" (Twalk™?) yesterday and it reminded me of something that has always bothered me since the early days of this blog: Why does the "quantity theory of money" follow from the information equilibrium relationship N ⇄ M for information transfer index k = 2?

From the information equilibrium relationship, we can show log N ~ k log M; since the abstract price is P ≡ dN/dM ~ M^(k − 1), it follows that log P ~ (k − 1) log M. This means that for k = 2

log P ~ log M

That is to say the rate of inflation is equal to the rate of money growth for k = 2. Of course, this is only empirically true for high rates of inflation:


But why k = 2? It seems completely arbitrary. In fact, it is so arbitrary that we shouldn't really expect the high inflation limit to obey it. The information equilibrium model allows all positive values of k. Why does it choose k = 2? What is making it happen?

I do not have a really good reason. However, I do have some intuition.

One of the concepts in physics that the information equilibrium approach is related to is diffusion. In that case, most values of k represent "anomalous diffusion". But ordinary diffusion with a Wiener process (a random walk based on a normal distribution) results in a distance traveled that goes as the square root of time: σ ~ √t. That square root arises from the normal distribution, which is a universal distribution (the central limit theorem describes how other distributions converge to it). Put another way:

2 log σ ~ log t

is an information equilibrium relationship t ⇄ σ with k = 2.

If we think of output as a diffusion process (distance is money, time is output), we can say that in the limit of a large number of steps, we obtain

2 log M ~ log N

as a diffusion process, which implies log P ~ log M.
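A quick numerical check of that σ ~ √t claim (a sketch; the ensemble size and step count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

n_walks, n_steps = 10000, 1000
steps = rng.normal(size=(n_walks, n_steps))
paths = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1)
sigma = paths.std(axis=0)        # spread of the ensemble at each time

# Fit log sigma ~ m log t; a Wiener process gives m = 1/2,
# i.e. 2 log sigma ~ log t, the k = 2 relationship above.
m, _ = np.polyfit(np.log(t), np.log(sigma), 1)
print("fitted exponent:", round(m, 3))   # ~ 0.5
```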

Of course, there are some issues with this besides it being hand-waving. For one, output is the independent variable corresponding to time. This does not reproduce the usual intuition that money should be causing the inflation, but rather the reverse (the spread of molecules in diffusion is not causing time to go forward [1]). But then applying the intuition from a physical process to an economic one via an analogy is not always useful.

I tried to see if it came out of some assumptions about money M mediating between nominal output N and aggregate supply S, i.e. the relationship

N ⇄ M ⇄ S

But this didn't get much further than figuring out that if the IT index k in the first half is k = 2 (per above), then the IT index k' for M ⇄ S would have to be 1 + φ or 2 − φ, where φ is the golden ratio, in order for the equations to be consistent. The latter value k' = 2 − φ ≈ 0.38 implies that the IT index for N ⇄ S is k k' ≈ 0.76, while the former implies k k' ≈ 5.24. But that's not important right now. It doesn't tell us why k = 2.
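For what it's worth, the arithmetic checks out; as an aside (my observation about the numbers, not a derivation of the consistency condition), the two values sum to 3 and multiply to 1, so they are the roots of x² − 3x + 1 = 0:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2            # golden ratio

for kp in (1 + phi, 2 - phi):         # candidate IT indices for M -> S
    print(f"k' = {kp:.3f}  =>  k k' = {2 * kp:.2f} for N -> S")

# Check the quadratic: both values satisfy x**2 - 3*x + 1 = 0.
for kp in (1 + phi, 2 - phi):
    print("residual:", round(kp**2 - 3 * kp + 1, 12))
```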

Another place to look would be the symmetry properties of the information equilibrium relationship, but k = 2 doesn't seem to be anything special there.

I thought I'd blog about this because it gives you a bit of insight as to how physicists (or at least this particular physicist) tend to think about problems — as well as point out flaws (i.e. ad hoc nature) in the information equilibrium approach to the quantity theory of money/AD-AS model in the aforementioned slides. I'd also welcome any ideas in comments.

...

Footnotes:

[1] Added in update. You could make a case for the "thermodynamic arrow of time", in which case the increase in entropy is actually equivalent to "time going forward".

Interest rates and dynamic equilibrium

What if we combine an information equilibrium relationship A ⇄ B with a dynamic information equilibrium description of the inputs A and B? Say, the interest rate model (described here) with dynamic equilibrium for investment and the monetary base? Turns out that it's interesting:



The first graph is the long term (10-year) rate and the second is the short term (3 month secondary market) rate. Green is the information equilibrium model alone (i.e. the data as input), while the gray curves show the result if we use the dynamic equilibria for GPDI and AMBSL (or CURRSL) as input.

Here is the GPDI dynamic equilibrium description for completeness (the link above uses fixed private investment instead of gross private domestic investment which made for a better interest rate model):


Wednesday, November 8, 2017

A new Beveridge curve or, Science is Awesome

What follows is speculative, but it is also really cool. I was intrigued by a tweet about how the unemployment rate would be higher if labor force participation were at its previous, higher level. Both the unemployment rate and labor force participation are pretty well described by the dynamic information equilibrium model. Additionally, if you have two variables obeying dynamic equilibrium models, you end up with a Beveridge curve as the long run behavior if you plot them parametrically (sketched below).
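Here's a minimal sketch of that last point (all parameters are invented for illustration): two series, each with a constant log drift plus a logistic shock, trace out a Beveridge-curve-like relationship when plotted against each other.

```python
import numpy as np

def dyn_eq(t, slope, shocks):
    """Dynamic equilibrium: log-linear drift plus logistic shocks."""
    log_x = slope * t
    for amplitude, center, width in shocks:
        log_x += amplitude / (1 + np.exp(-(t - center) / width))
    return np.exp(log_x)

t = np.linspace(0, 20, 500)
u = dyn_eq(t, -0.05, [(0.6, 8.0, 0.5)])   # "unemployment": falls, shock up
v = dyn_eq(t, 0.02, [(-0.3, 8.5, 1.0)])   # second series: later, wider shock

# Away from the shocks, log v is linear in log u -- the "long run"
# Beveridge-curve relationship; during shocks the curve shifts.
m, b = np.polyfit(np.log(u[:100]), np.log(v[:100]), 1)
print("long-run slope in log-log space:", round(m, 2))  # 0.02 / (-0.05) = -0.4
```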

The first interesting discovery happened when I plotted out the two dynamic equilibrium models side by side:


The first thing to note is that the shocks to the civilian labor force (CLF) participation data [marked with red arrows, down for downward shocks, up for upward] are centered later, but are wider than the unemployment rate shocks [marked with green arrows]. This means that both shocks end up beginning at roughly the same time, but the CLF shock doesn't finish until later. In fact, this particular piece of information led me to notice that there was a small discrepancy in the data from 2015-2016 in the CLF model — there appears to be a small positive shock. A positive shock would be predicted by the positive shock to the unemployment rate in 2014! Sure enough, it turns out that adding a shock improves the agreement with the CLF data. Since the shock roughly coincides with the end of the Great Recession shock, it would otherwise have been practically invisible.

Second, because the centers don't match up and the CLF shocks are wider, you need a really long period without a shock to observe a Beveridge curve. The shocks to vacancies and the unemployment rate are of comparable size and duration, so the Beveridge curve jumps right out. However, the CLF/U Beveridge curve is practically invisible just looking at the data:


And without the dynamic equilibrium model, it would never be noticed because of a) the short periods between recessions, and b) the fact that most of the data before the 1990s contains a large demographic shock of women entering the workforce. This means that assuming there isn't another major demographic shock, a Beveridge curve-like relationship will appear in future data. You could count this as a prediction of the dynamic equilibrium model. As you can see, the curve is not terribly apparent in the post-1990s data (the dots represent the arrows in the earlier graph above):


[The gray lines indicate the "long run" relationship between the dynamic equilibria. The dotted lines indicate the behavior of data in the absence of shocks. As you can see, only small segments are unaffected by shocks (the 90s data at the beginning, and the 2017 data at the end).]

I thought the illumination of the small positive shock to CLF in 2015-2016, as well as the prediction of a future Beveridge-curve-like relationship between CLF and U, was fascinating. Of course, they're both speculative conclusions. But if this is correct, then the tweet that set this all off is talking about a counterfactual world that couldn't exist: if CLF were higher, then either we would have had a different series of recessions or the unemployment rate would be lower. That is to say, we can't move straight up and down (choosing a CLF) in the graph above without moving side to side (changing U).

[Added some description of the graphs in edit 9 Nov 2017.]

...

Update 9 November 2017

Here are the differences between the original prime age CLF participation forecast and the new "2016-shock" version:



Tuesday, November 7, 2017

Presentation: forecasting with information equilibrium

I've put together a draft presentation on information equilibrium and forecasting after presenting it earlier today as a "twitter talk". A pdf is available for download from my Google Drive as well. Below the fold are the slide images.



JOLTS data out today

Nothing definitive with the latest data — just a continuation of a correlated negative deviation from the model trend. The last update was here.


I also tried a "leading edge" counterfactual (replacing the logistic function with an exponential approximation for t << y₀, where y₀ is the transition year — an approximation that is somewhat agnostic about the amplitude of the shock) and made an animation adding the post-forecast data one point at a time:


Essentially we're in the same place we were with the last update. I also updated the Beveridge curve with the latest data points:


Friday, November 3, 2017

Checking my forecast performance: unemployment rate

Because more young adults are becoming unemployed on account of they can't find work. Basically, the problem is this: if you haven't got a job, then you’re outta work! And that means only one thing — unemployment!
The Young Ones (1982) “Demolition”
Actually, the latest unemployment rate data tells us it continues to fall as predicted by the dynamic information equilibrium model (conditional on the absence of shocks/recessions):




The first is the original prediction, the second is a comparison with various forecasts of the FRBSF, and the third is a comparison with two different Fed forecasts.

In trying to be fair to the FRBSF model, I didn't show the data from before I made the graph as new post-forecast data (in black). However, in these versions of the graph I take all of the data after the original forecast (in January) as new:



There also don't appear to be any signs of an oncoming shock yet; however, the JOLTS data (in particular, hires) appears to be an earlier indicator than the unemployment rate — by about 7 months. That is to say, we should see little in the unemployment rate until the recession is practically upon us (although the algorithm can still see it before it is declared or even widely believed to be happening).

Update + 2.5 hours

Also, here is the prime age civilian labor force participation rate:


Thursday, November 2, 2017

Chaos!

Like the weather, the economy is complicated.

Like the weather, the economy obeys the laws of physics.

Like the weather, the economy is aggregated from the motion of atoms.

Doyne Farmer only said the first one, but inasmuch as the first is some kind of argument in favor of a particular model of the economy, so are the other two. Sure, it's complicated. But that doesn't mean we can assume it is a complex system like the weather without some sort of evidence. Farmer's post is mostly just a hand-waving argument that the economy might be a chaotic system. It's the kind of thing you write before starting down a particular research program path — the kind of thing you write for the suits when asking for funding.

But it doesn't really constitute evidence that the economy is a chaotic system. So when Farmer says:
So it is not surprising that simple chaos was not found in the data.  That does not mean that the economy is not chaotic.  It is very likely that it is and that chaos can explain the patterns we see.
The phrase "very likely" just represents a matter of opinion here. I say it's "very likely" that chaos is not going to be a useful way to understand macroeconomics. I have a PhD in physics and have studied economics for some time now, with several empirically successful models. So there.

To his credit, Farmer does note that the initial attempts to bring chaos to economics didn't pan out:
But economists looked for chaos in the data, didn’t find it, and the subject was dropped.  For a good review of what happened see Roger Farmer’s blog.
I went over Roger Farmer's excellent blog post, and added to the argument in my post here.

Anyway, I have several issues with Doyne Farmer's blog post besides the usual "don't tell us chaos is important, show us" via some empirical results. In the following, I'll excerpt a few quotes and discuss them. First, Farmer takes on a classic econ critic target — the four-letter word DSGE:
Most of the  Dynamic Stochastic General Equilibrium (DSGE) models that are standard in macroeconomics rule out chaos from the outset.
To be fair, it is the log-linearization of DSGE models that "rules out" chaos, but then it only "rules out" chaos in regions of state space that are out of scope for the log-linearized versions of DSGE models. So when Farmer says:
Linear models cannot display chaos – their only possible attractor is a fixed point.
it comes really close to a bait and switch. An attractor is a property of the entire state space (phase space) of the model; the log-linearization of DSGE models is a description valid (in scope) for a small region of phase space. In a sense, Farmer is extending the log-linearization of a DSGE model to the entire state space. 

However, Eggertsson and Singh show that the log-linearization doesn't actually change the results very much — even up to extreme events like the Great Depression. This is because in general most of the relevant economic phenomena we observe appear to be perturbations: recessions impact GDP by ~ 10%, high unemployment is ~ 10%. In a sense, observed economic reality tells us that we don't really stray far enough away from a local log-linearization to tell the difference between a linear model and a non-linear one capable of exhibiting chaos. This is basically the phase space version of the argument Roger Farmer makes in his blog post that we just don't have enough data (i.e. we haven't explored enough of the phase space).

The thing is that a typical nonlinear model that can exhibit chaos (say, the predator-prey model defined by the Lotka–Volterra equations) has massive fluctuations. The chaos is not a perturbation to some underlying bulk, but rather visits the entire phase space. You could almost take that as a definition of chaos: a system that visits a large fraction of the potential phase space. This can be seen as a consequence of the "butterfly effect": two initial conditions in phase space become separated by exponentially larger distances over time. Two copies of the US economy that were "pretty close" to start would evolve to be wildly different from each other — e.g. their GDPs would become exponentially different. Now this is entirely possible, but the difference in GDP growth rates would probably be only a percentage point or two at most, which would take a generation to become exponentially separated. Again, this is just another version of Roger Farmer's argument that we don't have long enough data series.

Another way to think of this is that the non-trivial attractors of a chaotic system visit some extended region of state space, so you'd imagine that a general chaotic model would produce large fluctuations in its outputs representative of the attractor's extent in phase space. For example, Steve Keen's dynamical systems exhibit massive fluctuations compared to those observed.
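To get a feel for the size of those swings, here's the classic two-species Lotka–Volterra system (a sketch with arbitrary parameters; the two-species version is actually periodic rather than chaotic, since chaos requires more dimensions, but it shows the order-one excursions these nonlinear models typically produce):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.4, c=0.4, d=0.1):
    """Classic predator-prey equations (parameters are arbitrary)."""
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

# Start well away from the fixed point (x, y) = (c/d, a/b) = (4, 2.5).
sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], max_step=0.01)

prey = sol.y[0]
swing = (prey.max() - prey.min()) / prey.mean()
print(f"prey population swings by {swing:.0%} of its mean")
# Order-100% fluctuations: nothing like the ~10% swings of recessions.
```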

Now this in no way rules out the possibility that macroeconomic observables can be described by a chaotic model. It is just an argument that a chaotic model that produces the ~ 10% fluctuations actually observed would have to result from either some fine tuning or a bulk underlying equilibrium [1].

In a sense, Farmer seems to cede all of these points at the end of his blog post:
In a future blog post I will argue that an important part of the problem is the assumption of equilibrium itself.  While it is possible for an economic equilibrium to be chaotic, I conjecture that the conditions that define economic equilibrium – that outcomes match expectations – tend to suppress chaos.
It is a bit funny to begin a post talking up chaos only to downplay it at the end.  I will await this future blog post, but this seems to be saying that we don't see obvious chaos (with its typical large fluctuations) because chaos is suppressed via some bulk underlying equilibrium (outcomes match expectations) — so that we essentially need longer data series to extract the chaotic signal.

But then, after building us up with a metaphor using weather, which is notoriously unpredictable, Farmer says:
Ironically, if business cycles are chaotic, we have a chance to predict them.
Like the weather, the economy is predictable.

??!

Now don't take this all as a reason not to study chaotic dynamical systems as possible models of the economy. At best, it represents a reason I chose not to study chaotic dynamical systems as possible models of the economy. I think it's going to be a fruitless research program. But then again, I originally wanted to work in fusion and plasma physics research.

Which is to say: arguing in favor of one research program or another based on theoretical considerations tends to be more philosophy than science. Farmer can argue in favor of studying chaotic dynamics as a model of the economy. David Sloan Wilson can argue in favor of biological evolution. It's a remarkable coincidence that both of these scientists see the macroeconomy not as economics, but rather as a system best described by the field of study they've each worked in for years [2].

What would be useful is if Farmer or Wilson just showed how their approaches lead to models that better describe the empirical data. That's the approach I take on this blog. One plot or table describing empirical data is worth a thousand posts about how one intellectual thinks the economy should be described. In fact, how this scientist or that economist thinks the economy should be properly modeled is no better than how a random person on the internet thinks the economy should be properly modeled without some sort of empirical evidence backing it up. Without empirical evidence, science is just philosophy.

...

PS

I found this line out of place:
Remarkably a standard family of models is called “Real business cycle models”, a clear example of Orwellian newspeak.
Does Farmer not know that "real" here means "not nominal"? I imagine this is just a political jab, as a chaotic model could easily be locally approximated by an RBC model.

...

Footnotes

[1] For example, NGDP ~ exp(n t) (1 + d(t)), where the leading order growth "equilibrium" is given by exp(n t) while the chaotic component is some kind of business cycle function with |d(t)| << 1.

[2] Isn't that what I'm doing? Not really. My thesis was about quarks.