Monetary Policy and Financial Exuberance

This note came in from Seeking Alpha’s Wall Street Breakfast:

The Chicago Fed’s Charles Evans doesn’t expect the first rate hike until the 2nd half of 2015 – somewhat later than Yellen’s “six-month” (from QE’s end) remark which suggested a boost as soon as April 2015. Speaking to reporters after his speech, Evans suggested holding off on hikes until 2016 could be appropriate given the state of the economy. Nevertheless, Evans sees a Fed Funds rate of 1.25% by the end of 2016 – the low end of FOMC guesses, but 25 basis points higher than his forecast three months ago. As for hiking rates to cool “financial exuberance” – an idea seeming to gain a little traction with some Fed members in recent days – Evans says “monetary policy is not the best tool to mitigate this risk.”

From my perspective, monetary policy was the cause of the financial exuberance leading up to 2007. It seems to me it should also be part of the solution. Is it the best tool to mitigate the risk? Perhaps not, but of the remaining tools (fiscal policy, taxes, and regulation), it is the only one available right now. There doesn’t seem to be any resolve in the chambers of Congress to even acknowledge this risk exists.

Sometimes waiting for the best tool to become available starts to look like indifference. This could lead us to the same outcome as we saw in 2008. I do not condone this level of abdication of responsibility. I especially dislike it when those who perpetrate the situation tell us the problem now is ours to resolve.

Federal Budget in April

This news item came to my Inbox from the newspaper, The Hill.

News from The Hill
House to consider Ryan budget in April
By Russell Berman
House Republicans in April will consider a budget authored by Rep. Paul Ryan (R-Wis.) that sticks to a bipartisan spending level for 2015 but balances within a decade, Majority Leader Eric Cantor (R-Va.) told lawmakers on Friday.
Cantor’s announcement sets up what could be the most difficult budget vote since 2011 for the Republican majority, since it will require dozens of conservatives to endorse a $1.014 trillion spending level that they opposed in December. Ryan, the Budget Committee chairman, will propose deeper cuts in future years to keep the party’s commitment to erasing the federal deficit within 10 years.

I’m uncertain what Rep. Ryan hopes to accomplish, but I will be glad to hear some talk about the budget. There needs to be a candid conversation. My own Senator, Patty Murray, has said she will not bring forward talks regarding a budget for 2015. That seems irresponsible considering what needs to change. At this stage, Rep. Ryan is showing leadership. We will need to see how the conversation goes.

For now, the stage is set, since the phrase “balances within a decade” seems quite impossible to me without significant changes to Medicare and Social Security. That truth needs to be told by a prominent member of Congress. If Rep. Ryan does not say it, the disingenuousness of the congressional budget committees will continue.

Credit Growth and GDP Visualization

I thought about writing a post describing this new series I will follow. Instead, I decided to use it as curriculum material for data visualization. Let’s start with the graph, which will open in a new window or tab. Your eyes should go immediately to the red and green lines. Those are the important elements of the graph, and they should be what you notice first.

Let’s move beyond that and talk about why they are the most noticeable. They are a brighter color compared to the rest of the items on the graph. The right-hand side is clear of other items and that space gives us a clear view of the most recent data observations of the two series.

After that, we may look to the horizontal or vertical axis — the horizontal axis because we are interested in the time of the observation, or the vertical because we are interested in the scale of the data. In both cases, there are numbers without text. I had been a strong advocate for labeling each axis until recently, when I noticed people using text within the graph to describe the axes. There are two advantages to this method: text is not rotated 90 degrees for the vertical axis, and the text box doubles as the legend. In effect, we have replaced three text boxes (the horizontal axis label, the vertical axis label, and the legend) with a single text box that may contain no more words than the three combined.

In this particular case, we see what the horizontal and vertical axes are. We can also see a legend which shows the two data sets using the color of the line as part of the legend. This removes the graph type icon from the legend of most software and doubles the density within the new legend — a single line describes the series and defines which series it is.

Perhaps next our vision moves to the center of the graph, which shows another text box with the most recent observation. I generally approve of this on all graphs since it shows how current the data is as well as the value of the most recent observation. It is not always easy to tell the value of each observation, especially as the line moves farther from the scale, which is generally on the left-hand side. This text box leaves no doubt about the value at the end point of the line. Notice that I also included the color of the line for each data observation. While this is not necessary for this graph, it reinforces which last observation belongs to which line.

By now, we have noticed the large paragraph of text near the left side of the graph. I generally do not like detailed text on graphs. I especially do not like it when I am giving a lecture — if I am truly speaking to the graph, there is no reason for additional text to distract from my comments. The counter to the lecture is the printed graph, which is what we are seeing now. For this type of presentation, it can be useful to include an explanatory paragraph describing how to read the graph and why it is meaningful. Since this graph will be updated at best quarterly, including background and detail will be helpful for any audience.

Finally, there is a text box in the lower right which shows the source of the data. I am usually quite bad at remembering to include the data source. It is best to include it, because you will have skeptics who will not believe your conclusions and observations until they look at the data series themselves.

Okay, that is it for the text boxes. They add a lot to the visual content of the graph, but they each have a purpose and they do not detract from the data. There is one item to discuss that is not seen — the gridlines. I did not speak of them because they do not stand out. Tableau defaults to a light gray color, and I highly approve. Microsoft Excel defaults to black, which can make a graph unnecessarily difficult to read. In this case, the gridlines are subtle and come into focus only when needed.
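To make these principles concrete, here is a minimal matplotlib sketch. The data is hypothetical, a stand-in for the credit and GDP series, but it applies the same ideas: direct line labels instead of a separate legend box, an annotation at the last observation, a source note in the lower right, and light gray gridlines.

    import matplotlib.pyplot as plt

    # Hypothetical data standing in for the credit-growth and GDP series.
    years = list(range(2005, 2015))
    credit = [5.1, 6.0, 7.2, 2.1, -1.3, 0.8, 2.0, 3.1, 3.8, 4.2]
    gdp = [3.3, 2.7, 1.8, -0.3, -2.8, 2.5, 1.6, 2.2, 1.7, 2.4]

    fig, ax = plt.subplots()
    ax.plot(years, credit, color="red")
    ax.plot(years, gdp, color="green")

    # Direct labels at the right edge replace a separate legend box.
    ax.text(years[-1] + 0.2, credit[-1], "Credit growth", color="red", va="center")
    ax.text(years[-1] + 0.2, gdp[-1], "GDP growth", color="green", va="center")

    # Annotate the most recent observations so their values are unambiguous.
    ax.annotate(f"{credit[-1]:.1f}%", (years[-1], credit[-1]), color="red",
                xytext=(0, 8), textcoords="offset points", ha="center")
    ax.annotate(f"{gdp[-1]:.1f}%", (years[-1], gdp[-1]), color="green",
                xytext=(0, -14), textcoords="offset points", ha="center")

    # One title does the work of both axis labels: units and series names.
    ax.set_title("Annual growth, percent: credit (red) vs. GDP (green)")

    # Source note in the lower right; subtle gridlines that stay out of the way.
    ax.text(0.99, 0.01, "Source: hypothetical data", transform=ax.transAxes,
            ha="right", va="bottom", fontsize=8, color="gray")
    ax.grid(color="lightgray", linewidth=0.5)

    plt.show()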

Mastering the Federal Budget

Did you know you could master the federal budget in under ten minutes? One function of a data analyst is to take a large amount of data and distill it to a few important points so decisions can be made quickly and accurately. All we have to do to get you ready for your ten minutes is make one assumption — we will have a balanced budget. Later we can relax that assumption, but for now, let’s assume spending must equal revenue. All figures that follow are from the 2014 budget and are shown in billions of dollars.

Table 1. Income Sources
Source         Dollars
Individuals     $1,380
Payroll          1,020
Corporate          330
Miscellaneous      150
Excise              90
Customs             30
All Sources     $3,000

Let’s start with tax revenues. There are only six sources of revenue, and there are a couple of terms you should learn. First, the combination of individual taxes and corporate taxes is called federal funds. These are designed specifically for discretionary spending. Just keep that in mind for now; we’ll define discretionary spending soon.

Take a look at payroll taxes. These are also called trust funds and have been designed specifically to fund the trust accounts like Social Security and Medicare. The rest of the income sources are small and would complicate the analysis. For now, you should keep track of two items — federal funds and trust funds.

Table 2. Expenditures
Type           Dollars
Mandatory       $2,432
Interest           228
Subtotal        $2,660
Discretionary    1,140
All Spending    $3,800

When we move into the spending categories, things get simpler, and the time saved by this analysis really comes into play. There are only three basic types of spending. Mandatory spending is by far the largest. All of the programs funded within the mandatory category are earned-benefit programs; that is, recipients have earned a benefit through participation in the program or through eligibility requirements set by Congress. These programs include Social Security, Medicare, and SNAP. Social Security and Medicare together account for 87% of mandatory spending; the remaining 13% goes to the other eligibility-based programs.

Notice that mandatory spending should be funded from trust funds. Yet trust-fund revenue ($1,020 billion) doesn’t come close to covering mandatory spending ($2,432 billion), so $1,412 billion of federal funds must be allocated to cover the remainder.

The second category is simply labeled Interest in Table 2, but the full title would be Interest on the Federal Debt. I would maintain this should also be considered Mandatory spending since the continued payment of interest fulfills the “full faith and credit” obligation of the government when it issues debt.

With that in mind, I included a Subtotal line in Table 2 so we can compare the required expenses to the income table. You can see that 89% of planned income is already spent (2,660 / 3,000). The remaining 11% gets allocated to discretionary programs. When you hear talk in the press about the appropriations process, this is the part being discussed — how to allocate money among the programs in the discretionary category. Unfortunately, that 11% is only 30% of the funds requested by those discretionary programs. While there are currently 21 of these programs, let me list the top five in order of dollars requested:

  1. Department of Defense
  2. Department of Education
  3. Department of Veterans Affairs
  4. Department of Housing and Urban Development
  5. Department of State

Lower down on this list are programs like the Department of the Treasury and the Department of Homeland Security (which includes the Secret Service). If we want a balanced budget and we want to make it fair, then every one of these programs gets 30% of its requested funds. Done! All income is allocated and we have met our assumption.
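For anyone who wants to check the arithmetic, here is a minimal Python sketch of the balanced-budget scenario. All figures come from the tables above, in billions; treating Table 2’s discretionary line as the total requested funds is my assumption.

    # Balanced-budget arithmetic from Tables 1 and 2 ($ billions, 2014 budget).
    revenue = {"Individuals": 1380, "Payroll": 1020, "Corporate": 330,
               "Miscellaneous": 150, "Excise": 90, "Customs": 30}
    total_revenue = sum(revenue.values())        # $3,000

    mandatory, interest = 2432, 228
    required = mandatory + interest              # $2,660 already committed
    remaining = total_revenue - required         # $340 left for discretionary

    requested = 1140                             # assumed: total requested by the 21 programs
    print(f"Required share of income: {required / total_revenue:.0%}")   # 89%
    print(f"Left for discretionary:   {remaining / total_revenue:.0%}")  # 11%
    print(f"Fair funding per program: {remaining / requested:.0%}")      # 30%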

Alternatively, let’s say we cannot compromise our military, so the Department of Defense gets all of its request and the remaining programs get zero. The DoD’s request of $673 billion would leave a shortfall of $333 billion, a deficit of roughly 10% of spending. The consequence of fully funding just this one program forces us to cancel our assumption of a balanced budget. Withdrawing the assumption then opens up all kinds of discussion about which programs get fully funded and which ones don’t. Our point here is to understand the federal budget, and at under 700 words, we are done.

The relaxing of our assumption is why the budget is so contentious. It is also why I think we are hearing less and less about requiring a balanced budget. Projections indicate that by 2025, the mandatory spending category will consume more than 100% of tax revenues. Today we cannot afford to fully fund the programs of the federal government; in about a decade, we won’t even be able to fully fund the earned benefits promised to the people earning them today. Our opportunity to solve this problem without significant pain ended twenty years ago. Now we are left without good choices, only bad choices and disastrous ones. Unfortunately, I’m not sensing any meaningful leadership from the Office of the President, the Senate Budget Committee, or the House Budget Committee. What is left is for me to do my little part here: simplify the budget enough that we can tell our leaders they have run out of time to talk about the situation, and that it is time to make changes that affect our future.

There are a lot of resources available on the internet to learn about the budget and follow developments. I follow Stan Collender, a blogger at Forbes. This analysis was made possible by many pages at nationalpriorities.org.

Decreasing Participation Rate Consequences

One of the measures I have been following is the participation rate in the United States. As a reminder, the participation rate is the number of people who are employed or actively looking for work as a percentage of the civilian non-institutional population. I like following this measure because it can tell us more about the economy than the unemployment rate, as I will discuss.

The participation rate has been decreasing since 2000. The economy was flying high back then, and the incentive to work, through high wages and the possibility of great wealth, was very strong. Two recessions since that year have caused unemployment to rise significantly twice. The participation rate should remain roughly stable even as employment drops, because the number actively looking for work rises and the two offset each other. Instead, we have seen the participation rate drop from 67.0% to 62.7%. That drop means people have not only lost employment but also stopped actively looking for work. If you think a drop of about five percentage points is not much, the rest of this analysis might change your mind.

Table 1. Participation Decomposition
Measure                     Year       Value
Population                  Feb-2000   211,576,000
                            Feb-2014   247,085,000
Participation Rate          Feb-2000   67.0%
                            Feb-2014   62.7%
Static Level Participants   Feb-2000   141,775,000
                            Feb-2014   165,547,000
Actual Participants         Feb-2014   155,027,000
Missing Participants        Feb-2014   10,520,000

Since the February data was recently released, let’s use that for this analysis. Back in February 2000, the participation rate reached a peak of 67.0%. If we take the civilian population of February 2014 (247,085,000) and multiply it by 67%, we get 165,547,000. This number represents the potential civilian labor force given the observed peak in participation rate, the Static Level Participants in Table 1. The actual labor force as of February 2014 was 155,027,000. The difference is 10,520,000, the Missing Participants in Table 1.

The population over this period has not remained constant. From February 2000 to February 2014, it grew from 211,576,000 to 247,085,000, and the labor force in February 2000 was 141,775,000. Breaking the change into factors: population growth alone, at the peak 67% rate, says 23,772,000 additional people should have joined the labor force. Only 13,252,000 actually did, which leaves the 10,520,000 missing participants attributable to the decline in the participation rate.
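Here is a minimal sketch of that decomposition, using only the Table 1 figures:

    # Decomposition of the labor-force change (Table 1 figures).
    pop_2014 = 247_085_000
    rate_2000 = 0.670                             # observed peak participation rate
    labor_2000, labor_2014 = 141_775_000, 155_027_000

    static_level = pop_2014 * rate_2000           # ~165,547,000 at the peak rate
    missing = static_level - labor_2014           # ~10,520,000 missing participants
    pop_effect = static_level - labor_2000        # ~23,772,000 expected from growth alone
    actual_gain = labor_2014 - labor_2000         # 13,252,000 who actually joined

    print(f"Missing participants: {missing:,.0f}")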

Table 2. Recalculating Unemployment
Measure               Value
Unemployed            10,893,000
Employed              144,134,000
Rate                  7.0%
Missing Participants  10,520,000
Static Level Rate     12.9%

Let me take a side step and mention unemployment. The unemployment rate is calculated by taking the number of unemployed and dividing it by the sum of the employed and the unemployed. The current rate is 7.0%, which you can follow in Table 2. Notice the sum of the unemployed and the employed equals the Actual Participants in Table 1.

With those numbers, we can reach our first conclusion. If we assume the participation rate were currently at its observed peak, we should add the missing participants to the current unemployed (this assumes the missing participants are discouraged enough that they have stopped actively looking for work, rather than being better off not working). The new calculation is the unemployed plus the missing participants, divided by the unemployed plus the missing participants plus the employed, which gives 12.9%. Since February 2000 is our benchmark, contrast that number with the unemployment rate back then of 4.4%. In addition, the non-participating population has grown 32% while the total population has grown 17%.
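A quick sketch of both rates, with the figures from Table 2:

    # Recalculating unemployment with the missing participants added back (Table 2).
    unemployed, employed = 10_893_000, 144_134_000
    missing = 10_520_000

    official_rate = unemployed / (unemployed + employed)
    static_rate = (unemployed + missing) / (unemployed + missing + employed)

    print(f"Official rate:     {official_rate:.1%}")   # 7.0%
    print(f"Static-level rate: {static_rate:.1%}")     # 12.9%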

The increase in non-participation has consequences of its own. In particular, I want to look at federal tax revenues. Individual income taxes plus corporate income taxes are together called federal funds, or general revenues. When you hear talk about Congress working on appropriations, they are working on spending federal funds. For this analysis, I will look at individual income taxes only. Between 2000 and 2013, individual income taxes rose from $1,004,462 million to $1,316,405 million; in other words, the 2013 figure is $1.316 trillion.

What would it be, though, if those 10,520,000 missing participants were working? Let’s step through this. The median individual income in 2010 was $26,197 (the most recent figure from the Census Bureau). Multiplying those two numbers yields $275.6 billion of income. At a tax rate of 27%, the additional individual income taxes collected would be $74.4 billion (I recognize 27% might be high for an individual at the median income, but some of these would be second household incomes, which could face a rate that high).

Comparing 2013 tax revenues to 2000 shows a 31% increase; adding in the lost potential revenue would make it a 38% increase. There are twenty-one agencies that could be funded by this additional revenue. The dollar figure is large enough to fully fund any one of sixteen of them (some agencies are small enough that the lost revenue could support more than one at 100%). The second conclusion: the lost revenue might seem small next to total revenue, but it would have a big impact on the federal budget and the appropriations process.
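The lost-revenue arithmetic, as a sketch (the 27% rate is the assumption from the paragraph above):

    # Lost individual income-tax revenue from the missing participants.
    missing = 10_520_000
    median_income = 26_197            # 2010 median individual income (Census Bureau)
    assumed_rate = 0.27               # assumption from the text; likely high for a median earner

    lost_income = missing * median_income        # ~$275.6 billion of income
    lost_revenue = lost_income * assumed_rate    # ~$74.4 billion of taxes

    taxes_2000, taxes_2013 = 1_004_462, 1_316_405                          # $ millions
    actual_growth = taxes_2013 / taxes_2000 - 1                            # ~31%
    adjusted_growth = (taxes_2013 + lost_revenue / 1e6) / taxes_2000 - 1   # ~38%

    print(f"Lost revenue: ${lost_revenue / 1e9:.1f} billion")
    print(f"Growth 2000 to 2013: {actual_growth:.0%}; with lost revenue: {adjusted_growth:.0%}")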

In conclusion, a decrease in the participation rate from 67.0% to 62.7% translates into roughly $74.4 billion of lost revenue in 2013. The next consequence to examine would be the impact on social service programs from the increase in the non-participating population. Maybe I’ll get to that someday.

Graph Axis

One of the inspirations I have for wanting to conduct a class on data visualization is watching students attempt to show data. It is unfortunate they are not being guided to what makes an effective graph. I thought about this today when I saw a graph of the capacity utilization of a machine and the y-axis was scaled from 0 to 1.2.

There are two things wrong with that y-axis. First, capacity utilization is measured in percent, so the y-axis should also show percent. Second, capacity utilization can never go above 100%, so the y-axis should not extend above 100% either. I know Excel 2007 and prior versions defaulted to a black box encasing the graph, which made a full bar difficult to read, but it is not hard to remove that black box.

I was thinking about a similar graph that I would update and show to students to track their progress through the homework. I made an example on the right. I’m not terribly thrilled with the data labels, but I can modify those later. The graph shows every student has completed homework 1 and 80% have completed homework 2. From there, a reasonable conclusion is that we haven’t yet reached the point where the topics in homework 3 have been covered in class. The structure of the class allows students to complete the homework at their own pace, and we can see some are working ahead. That could be an issue if they become bored in class or lose track of what they should be learning on their own.

The point of displaying this graph is to show a proper way to scale the y-axis when displaying percentages. The data label helps confirm the first bar does not extend above the top of the graph (if an additional clue were needed).
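Here is a minimal matplotlib sketch of that kind of chart, with hypothetical completion rates: a percent-formatted y-axis capped at 100%, plus data labels.

    import matplotlib.pyplot as plt
    from matplotlib.ticker import PercentFormatter

    # Hypothetical completion rates like the homework chart described above.
    homeworks = ["HW 1", "HW 2", "HW 3", "HW 4", "HW 5"]
    completion = [1.00, 0.80, 0.25, 0.10, 0.05]

    fig, ax = plt.subplots()
    bars = ax.bar(homeworks, completion)

    # A percentage measure gets a percent scale, and it never exceeds 100%.
    ax.set_ylim(0, 1.0)
    ax.yaxis.set_major_formatter(PercentFormatter(xmax=1.0))

    # Data labels confirm the first bar really does reach 100%.
    ax.bar_label(bars, labels=[f"{c:.0%}" for c in completion])

    ax.set_title("Homework completion rate")
    plt.show()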

Since this data will be shown every week, the next step might be to devise a 3-D bar graph. I am generally opposed to any 3-D graph on 2-D media, but one could be constructed properly without being impossible to read. Showing it in combination with the current 2-D version would make for a good story.

Personal Observations on Housing

One aspect of analytics that is getting pushed more is prescriptive analytics. This is where an analyst goes beyond analyzing the past and predicting the future to the place where alternative futures are laid out and the best alternative is prescribed. I’m not there yet when it comes to items I am following, but I think I can make some predictions.

Let’s start with my very local housing market. I met with a builder on Saturday who mentioned homes in our area are selling at about 29% above their most recent tax assessments. That sounds awfully high to me and adds to the evidence that another bubble in housing prices is underway. There is a judgment there that I want to examine:

  • when my wife and I bought in 2010, I expected prices to fall a bit but the availability of this particular house was a driver in the timing of our purchase decision
  • the tax assessment of the house fell in 2011 and 2012
  • the tax assessment spiked upward in 2013 back to the level at which we purchased in 2010

That decrease in assessment in 2011 and 2012 wiped out the 20% equity position we had when we purchased. Then in 2013, the equity returned with the increase in assessment. In the interim, the tax rate didn’t stay flat. In an effort to maintain their collections, our state and local governments increased the tax rate, and we paid more in property taxes in 2013 than we did in 2010. In fact, we paid 12% more.

Now along comes a report that homes are selling at 29% above tax assessment. It is logical to expect the 2014 assessed value to come in above the 2013 value, perhaps by more than 10%. The conclusion has to be that property taxes for 2015 will rise by the increase in assessed value less depreciation, and given the area we live in, the depreciation we have experienced is very small.

My conclusion is to be quite concerned about the rise in property taxes for this year and next. My income is not keeping up with the increased tax burden and the consequence will be reduced savings and reduced spending.

Data Analytics and Data Visualization

Some may know that I recently took a second job as an adjunct professor in the University of Washington system. While the first course I will be teaching is on Operations and Project Management, part of the supply chain discipline in the School of Business, I have been encouraged to develop an emphasis in data analytics and visualization. Last month, I was invited to give a guest lecture to a class on visualization. Developing those slides inspired me to consider building a curriculum around analytics and visualization. To organize my thoughts, I decided to begin here on WordPress.

It seems to me a class should be organized around the steps of developing data to answer questions. The following list could be used for academic or industrial purposes. I don’t think this list is original, but it seems logical and fits with my own approach at work.

  1. Have a question without an answer
  2. Find a data source that may lead to an answer
  3. Look for incompleteness, inconsistencies, and possible errors
  4. Perform initial analysis and construct initial visuals
  5. Write a story describing the analysis and visuals and include a conclusion
  6. Has the initial question been answered? Are there new questions?

I have encountered instances where only steps 2, 4, and 5 are performed. That has resulted in dissatisfaction for both the requestor and the analyst. Almost all of the dissatisfaction arises from skipping step 1: if you don’t know what you want to solve, you don’t know when you have finished, and it is the finish that is determined in step 6. Sometimes an answer brings up new questions, since the conclusion may not be what was expected or may not perfectly answer the initial question.

I have also seen instances where step 3 is skipped. This can be fatal to an analysis, since it can lead to incorrect conclusions or, worse, no conclusion when there should have been one. There generally isn’t an easy way to verify the accuracy of a data set, but incompleteness can be easy to check. My background is in time series analysis, which may be the easiest type of data to verify for completeness. My best advice is to spend time with the data and use statistics along with graphs to see whether it looks reasonable. A basic approach can surface odd situations that lead to questions about the events that influenced the data.
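As an illustration of step 3 for a time series, here is a minimal pandas sketch. The file name and column names are hypothetical:

    import pandas as pd

    # Hypothetical monthly series with 'date' and 'value' columns.
    df = pd.read_csv("series.csv", parse_dates=["date"]).set_index("date")

    # Completeness: compare the index against a full monthly calendar.
    full_index = pd.date_range(df.index.min(), df.index.max(), freq="MS")
    missing = full_index.difference(df.index)
    print(f"Missing periods: {len(missing)}")

    # Basic statistics often reveal errors faster than eyeballing rows.
    print(df["value"].describe())

    # Flag observations more than three standard deviations from the mean.
    z = (df["value"] - df["value"].mean()) / df["value"].std()
    print(df[z.abs() > 3])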

One of my goals will be to also examine the graphs I have constructed and maintained on the data site. I have some opinions on the construction of charts and how to do it well. With regard to software, I have been using Microsoft Excel for twenty years and still consider it to be the best general purpose software for data analysis. For this blog though, I have turned towards Tableau Public for visualization. I used a recent entry to discuss how that software has made it simpler for me to keep the visuals of this blog updated.

Okay, that is a simple introduction. I have no timetable for what I will put in this category, but it will keep my imagination active for a while.

10-Year Yield Update

I have updated the graph of the 10-year yield to include all of February’s data. The average yield during February was 15 basis points below January’s average. I was surprised by that result since I really haven’t heard much commentary regarding the yield.

I have a table on the graph which shows the average over the past six months. February had the second-lowest average of the six, with only October 2013 being lower. The yield has stayed within a fairly narrow range of 2.55 to 3.03 since June 24, 2013.

I added a new table today that shows the average over the past six years. The trend from 2009 to 2012 was straight down, but the trend from 2012 to 2014 has been straight up.

I have thought about constructing another view of this graph which shows wave analysis. It would be very complicated, but it is an interesting way to look at data. Here’s how to think of it:

  • the current period began June 24, 2013 and has lasted 173 days with a range of 2.55 and 3.03
  • a transition period occurred between May 2, 2013 and June 23, 2013 that lasted 36 days
  • the previous period began June 1, 2012 and lasted 229 days with a range of 1.47 and 2.06

You get the idea. Defined-range periods have an indeterminate length. Transition periods are sharp movements between defined-range periods and are typically short in duration. The current wave pattern seems to have begun during the fourth quarter of 2008. I will continue to look over this graph to see if there are more interesting patterns.

Note: After thinking about it some, I decided to give it a go. There haven’t been that many changes in the data since 2008. I updated the graph to include some color depicting which period the data is in. Blue data indicate periods of stability. Red data indicate transition periods. Green data indicate wave-based transition periods. Wave-based transitions are typically slower and involve opposite-direction movements of minor-period length. There are currently three green waves in the graph, and you can see the saw-tooth pattern of the line.
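One way to reproduce that coloring is to keep the hand-labeled periods in a small table and give each observation the color of the period containing its date. A minimal sketch, using the dates from the bullet list above and an assumed end date for the still-open current period:

    import pandas as pd

    # Hand-labeled periods for the 10-year yield; the 2014-03-31 end date of
    # the current period is an assumption since that period is still open.
    periods = [
        ("2012-06-01", "2013-05-01", "stable"),      # range 1.47 to 2.06
        ("2013-05-02", "2013-06-23", "transition"),
        ("2013-06-24", "2014-03-31", "stable"),      # range 2.55 to 3.03
    ]
    colors = {"stable": "blue", "transition": "red", "wave": "green"}

    def color_for(date: pd.Timestamp) -> str:
        """Return the plotting color for the period containing this date."""
        for start, end, label in periods:
            if pd.Timestamp(start) <= date <= pd.Timestamp(end):
                return colors[label]
        return "gray"  # unlabeled dates

    print(color_for(pd.Timestamp("2013-06-01")))     # red: inside the transition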

Indicators

There is an art to watching economic indicators. They don’t move in the same direction at the same time. Take the following consecutive stories from this morning’s Wall Street Breakfast on Seeking Alpha.

Eurozone consumer prices tumble. Eurozone CPI dropped a record 1.1% on month in January after rising 0.3% in December, with the fall much sharper than the 0.4% decline that was expected. The index was dragged down by a tumble in the cost of non-energy industrial goods. On year, inflation was +0.8%, as in December. The sharp monthly fall in CPI comes amid concerns about deflation in the eurozone, although the ECB has so far been sanguine.

German corporate optimism increases again. The German Ifo institute’s business climate index has increased to its highest level in 2 1/2 years, rising to 111.3 in February from 110.6 in January and topping consensus that was also 110.6. The current-situation reading rose and exceeded forecasts, although the expectations print slipped. “The German economy is holding its own in a changeable global climate,” says Ifo.

Reconciling these two items isn’t easy. Prices should be dropping because demand is bad, but if demand is bad, corporate optimism shouldn’t be increasing. The key is reading the second item carefully: the business climate and current-situation readings are better than expected, while the expectations reading slipped. The two items match up if prices continue to fall, which would justify lower expectations for future business.
