Wednesday, November 10, 2010

A Strategy Guide for my Economics-Based Video Game

Recently I have gotten really interested in computer programming. Right now I am learning the Java programming language, and have used it to create a simple text-based game called Cake Wiz. You may ask, "What does this have to do with economics?" Quite a lot, actually. The game I created is a business simulation game in which the player runs a bakery. I've written this article to explain the economic principles behind the game. Before I give too much away, you might want to play the game first. You can access it here:

(And here's a screenshot. On the page, press the "click here" button to start your game. I wasn't clear about that.)

(Special thanks to good friends Steve and Bennique Blasini for letting me use a page of their business's website for my silly game. They are brilliant special effects artists, and if there are any Hollywood producers out there reading this: hire BFX Image Works if you ever need some computer graphics done for a film.)

In my game, the player makes four decisions every business day at the bakery:
1. How much cake mix to buy, based on a price that fluctuates.
2. How many cakes to bake from the mix, keeping in mind that cakes expire after two days but cake mix does not.
3. How much to spend on advertising.
4. What price to sell the cakes for.
Without giving away all the secrets of how the game works, I will say that the number of cakes you can sell every day is based on four very important concepts in business and economics:

1. The price elasticity of demand. This is a measure of the change in the quantity demanded in response to a change in price. Except in rare cases of "snob appeal", people will demand less of a good if its price is higher. So in the game, if you raise the price of your cakes, the quantity sold can fall, and if the quantity drop is large enough to outweigh the price increase (in econ-speak, if demand is elastic), your total revenue will fall too. You also might lose unsold cakes to spoilage.
2. Diminishing marginal returns. In this case, I'm referring to diminishing marginal returns to dollars spent on advertising. This basically means that you can spend too much on advertising. Beyond a certain level of spending, every extra dollar spent on advertising has a weaker effect than the dollars spent before it. To explain, imagine that one out of every 20 commercials on TV was for the "Shake Weight." This would probably greatly increase sales of the Shake Weight over not advertising at all. But if 20 out of 20 commercials on TV were for the Shake Weight, what effect would this have? TV viewers would probably say "I get it already!!! Enough with the Shake Weight!!", and the effect of the extra 19 commercials would undoubtedly not be worth the extra cost. So keep this in mind when choosing your level of advertising spending in the game.
3. Inventory Management. In my game, and in business in general, you don't want to accumulate a lot of inventory that will go to waste, especially since goods like cakes expire with time. But on the other hand, you don't want to run out of inventory, which will cause you to miss out on extra sales, and possibly lose disappointed customers for good.
4. Randomness. In my game, as in business, you may find there's an unexplained, random element to your circumstances. You should adjust your behavior to hedge against the effects of random fluctuations in demand or in the cost of productive inputs.
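To make the first two concepts concrete, here's a toy Java sketch of how a demand model with price elasticity and diminishing returns to advertising might look. To be clear, these functional forms and constants are my illustration for this article, not the actual formula Cake Wiz uses:

```java
// A toy daily-demand model illustrating the concepts above.
// The functional forms and constants are my own illustration,
// NOT the actual formula used in Cake Wiz.
public class CakeDemandSketch {

    // Constant-elasticity demand: quantity falls as price rises.
    // Elasticity of -1.5 means a 1% price increase cuts quantity ~1.5%.
    static double priceEffect(double price) {
        double basePrice = 10.0, baseQuantity = 50.0, elasticity = -1.5;
        return baseQuantity * Math.pow(price / basePrice, elasticity);
    }

    // Diminishing returns to advertising: a square root keeps growing,
    // but each extra dollar adds less than the dollar before it.
    static double adMultiplier(double adSpend) {
        return 1.0 + 0.1 * Math.sqrt(adSpend);
    }

    static double expectedSales(double price, double adSpend) {
        return priceEffect(price) * adMultiplier(adSpend);
    }

    public static void main(String[] args) {
        // Raising the price from $10 to $12 cuts quantity demanded...
        System.out.printf("q at $10: %.1f cakes%n", expectedSales(10, 0));
        System.out.printf("q at $12: %.1f cakes%n", expectedSales(12, 0));
        // ...and the first $100 of ads helps more than the next $100.
        double gain1 = expectedSales(10, 100) - expectedSales(10, 0);
        double gain2 = expectedSales(10, 200) - expectedSales(10, 100);
        System.out.printf("first $100 of ads adds %.1f cakes, next $100 adds %.1f%n",
                gain1, gain2);
    }
}
```

Playing with the exponent and the square root is a nice way to feel how elasticity and diminishing returns interact before trying the game itself.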

So basically, the key to doing well at my game is using trial and error to find the right pricing, advertising level and inventory level to make a good profit in the face of randomness.
Hope I didn't give too much away and that you enjoy my game as much as I enjoyed programming it.

Wednesday, November 3, 2010

The Economics of Frontal-Bus-Squishage

Have you ever gotten onto a city bus and immediately been stuck in a huge cluster of people at the front of the bus, when there is plenty of space, and maybe even seats, in the back? I have often struggled with this curious phenomenon, and my inner economist seeks a reason for, and maybe a solution to, this problem. I have come to the conclusion that the problem of front-of-bus squishage can be easily understood through an examination of the unique costs and benefits that riders face in different parts of the bus.

When people think of prices, they often think only of money given in exchange for something else of value. But everything of value has a price, often not paid in money, but with other assets, such as personal space, comfort or dignity. Personal space is the asset that bus riders often give up when they are crammed together in the front of the bus, even when there are unused assets towards the back that could make everyone’s ride more pleasant.
Why do these assets go unused? After careful consideration, I've realized that bus blockage situations happen because of two things:
1. The bus riders who would most benefit from the extra space on the bus are those who would face the most costs in acquiring it. And,
2. Bus riders who would face the least costs in acquiring more space on the bus are those who would least benefit from more space.
To put this more simply, those who can more easily access the extra space have less need for it. Thus the extra space in the back goes unused.
Allow me to explain.
Envision an empty bus. As riders get on the bus, they immediately take the seats. Once all of the seats are taken, riders have no choice but to stand. In the absence of external forces, people will tend to want to stay put. In more common usage this can be categorized as "laziness". Because of this law of behavioral inertia, riders who come onto a bus with no available seats, rather than moving immediately to the back to clear space for new riders, will tend to stand in the front, relatively close to where they got on. There is, when entering a non-crowded bus with no available seats, no direct and universal incentive for bus riders to move further back, and because of this, as more riders get on the bus, clusters of frontal-bus-squishage form. And because of the different costs and benefits facing bus riders at different parts of the bus, once they form, these clusters are hard to break up. To help explain, take a look at this diagram I have artfully put together:

The diagram singles out two bus riders, person A and person B. Person A is stuck in the middle of a cluster of people (zone A), while person B is at the edge of the cluster, (in the spacious zone B). In this formation, Person A would greatly benefit from the extra space at the back of the bus, but would face the costs of squishing past three people in order to get there.
If you think the word "cost" is inappropriate to describe what person A faces here, ask yourself: do you enjoy squishing past people in thick crowds? Probably not, both for your sake, and out of a polite desire not to squish others. Thus you would face a cost in getting from zone A to zone B. The unpleasant squishing would be the price you’d pay for more space.

So person A, in order to benefit himself, and indirectly the people around him (in econ-speak, a positive externality), would pay a relatively high price for moving to zone B. Person A would need to press up against other riders, and awkwardly slide and slither through. This is a cost, as real to person A as the cost of a loaf of bread. Person B however, who is already at the edge of zone B, has plenty of space in front of him. And for the relatively low cost of simply moving his feet for a few steps, Person B could move further into zone B, and thus help lighten the blockage for everyone in zone A. But because person B is not being squished from both sides like everyone in zone A, he won't personally benefit much from moving towards the back. Person B's needs for space have largely been satisfied.
In the act of moving towards the back, Person A and other riders like him, face high costs and high benefits, while Person B and other riders like him face low costs and low benefits.
And there we have it. A cluster of front-of-bus-squishage. The cluster will break only if:
1. person B and/or others like him realize that they need to move back, or
2. a few brave (or rude?) souls in Zone A decide it is better to squish past everyone to get to zone B than to remain in the cluster. This is an altogether less pleasant solution than option 1, which involves no extra squishage.

So far there is only one tool in use that I know of for preventing or resolving the frontal bus squishage problem: Shame. The bus driver needs to shame the "person B"s of the world into moving back, usually by yelling "move back, everybody". (Even more effective is "I'm not moving this bus until you move to the back!") But maybe an automated system would be more effective. If money were no object, engineers could design a bus that uses sensors to detect clusters of frontal bus squishage. Upon detection, a polite yet insistent robot voice could ask the "person B"s to "please move back" over and over until the situation was resolved. If need be, it could also give them mild electric shocks until they comply. Here is a diagram of how that would work:

(I sure hope anyone reading this has a sense of irony. For the record, I am not a psycho.)

But what about carrots rather than sticks? Adding extra benefits in exchange for moving to the back could work just as well, or even better than shame or electrocution. How about a free snack dispenser, or a gentle foot-massaging floor that activates in the back when the sensors detect a squishage?
All kidding aside, to solve this problem, the costs and benefits must be realigned, either by making it more costly to create a blockage, or more beneficial to prevent one. In econ-speak, this would "internalize the externality." Snacks, electric shocks, or massages could all theoretically be used. But in reality, it seems we can only rely upon the shame that a good and forceful bus driver can inflict on Person B. That, and hopefully a little courtesy from bus riders toward their fellow passengers, will help too.

Saturday, August 14, 2010

The Econ Geek's Guide to Deal or No Deal: an Empirical Study

The game show "Deal or No Deal" is an econ geek's dream. Not only is it a thrilling spectacle for game show lovers, it is also a laboratory for studying human risk-taking behavior. For those who don't know the rules: on the show there are 26 numbered briefcases, each with a tag inside showing an amount of money. The amounts range from 1 penny to 1 million dollars. The contestant first chooses one of the cases to keep, and then, through the rest of the game, eliminates cases from the remaining 25: first 6 at once, then 5, then 4, then 3, then 2, then 1 at a time, until all but the contestant's case is gone. As each case is eliminated, the amount it contains is revealed, thus letting the contestant know what amount is not in her own case. If the contestant eliminates all 25 cases, she walks away with the amount of money in the initially chosen case. The twist is that there is a "banker" on the show who, after each round of eliminations, offers the contestant an amount of money to stop playing. Because, superstitions aside, the choice of eliminating one numbered case over another does not matter, the only pertinent decision in the game is whether to take the deal or keep playing (which makes the title of the show particularly fitting).
As the contestant continues to eliminate from the 25 cases, by inference, she gets a better idea of what is in her own case, and so does the banker. So if the contestant eliminates the case that has the penny, that's a good thing, because it means that the personal case doesn't contain the penny. If the contestant eliminates the million dollars, she knows that her personal case doesn't contain the million, and this is a bad thing.
For years I have watched this show, and wondered "how does the banker choose the amounts of each offer?" After quantitatively studying this (albeit with a limited sample of 64 offers from 9 complete games) I think I have come close to answering this question.
To understand how the banker makes his offers, there's one key mathematical concept to keep in mind: expected value. Expected value is equal to the sum of the values of all possible outcomes multiplied by their respective probabilities. It is the average amount of money per person that a large group of people would win on this game if they never took deals.
When one starts the game, the dollar values of the cases are as follows:
$0.01, $1, $5, $10, $25, $50, $75, $100, $200, $300, $400, $500, $750, $1000, $5000, $10000, $25000, $50000, $75000, $100000, $200000, $300000, $400000, $500000, $750000, $1000000
And each outcome is equally likely, with probability 1/26 ≈ 0.03846.
So, the expected value, (in this case also just the average of all values because the probabilities are the same), is equal to:

$0.01/26 + $1/26 + $5/26 + $10/26 + $25/26 + $50/26 + $75/26 + $100/26 + $200/26 + $300/26 + $400/26 + $500/26 + $750/26 + $1000/26 + $5000/26 + $10000/26 + $25000/26 + $50000/26 + $75000/26 + $100000/26 + $200000/26 + $300000/26 + $400000/26 + $500000/26 + $750000/26 + $1000000/26 = $131,477
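For the programming-inclined, here's the same calculation in a few lines of Java (the class and method names are just mine, for illustration):

```java
// Expected value of the Deal or No Deal board: each of the 26 amounts
// is equally likely, so the EV is just their average.
public class DealExpectedValue {

    static final double[] BOARD = {
        0.01, 1, 5, 10, 25, 50, 75, 100, 200, 300, 400, 500, 750,
        1000, 5000, 10000, 25000, 50000, 75000, 100000, 200000,
        300000, 400000, 500000, 750000, 1000000
    };

    // EV = sum of (value * probability); with equal probabilities this
    // reduces to the plain average of the remaining amounts.
    static double expectedValue(double[] remaining) {
        double sum = 0;
        for (double v : remaining) sum += v;
        return sum / remaining.length;
    }

    public static void main(String[] args) {
        System.out.printf("Opening EV: $%,.2f%n", expectedValue(BOARD));
        // End-game example discussed below: only $0.01 and $1,000,000 left.
        System.out.printf("Two-case EV: $%,.2f%n",
                expectedValue(new double[]{0.01, 1000000}));
    }
}
```

The same `expectedValue` method works at any point in the game: just pass in whatever amounts are still on the board.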
So, when you start the game, the expected value of your personal case, before any of the remaining cases are eliminated, is $131,477. What if, before you even started playing, the banker offered you $80,000 to not play the game at all? Would you take the deal? On average, taking this deal would get contestants much less than playing through all the way. But for reasons I shall explain later in this article, the banker usually makes offers that, just like this one, are significantly lower than the expected value of the case, and despite the low offers, very few contestants actually play through all the cases.
As one plays the game and eliminates cases, the expected value of what's in one's personal case changes. Let's say that the contestant has eliminated 24 of the cases, leaving just the personal case and one other case in play, and that the only two possible values remaining are $0.01 and $1,000,000. Because the probability of either outcome is 1/2, the expected value of the contestant's personal case would be equal to:
$0.01/2 + $1,000,000/2 = $500,000
What if at this point in the game, the banker offers a deal for $250,000? Ask yourself: if you had a choice here between choosing your case, which might have a million dollars in it or might have a penny, or taking a deal for $250,000, what would you do?
Personally I would take the $250,000. This illustrates an important concept in economics: risk aversion. I am risk averse in this situation because I would choose a certain reward over an uncertain one, even if the expected value of the reward in the uncertain event is greater than the certain reward.
So in Deal or No Deal, the banker always wants the contestant to take the deal, right? Wrong. If contestants took the first or second deals there wouldn't be much of a show, and the network would need to bring on more contestants, and thus give away more prizes, to fill airtime. On top of this, the show tends to get more interesting as it goes along and people make the riskier decisions. For these two reasons, one related to the costs of broadcasting the show, and the other related to the benefits of having a more interesting show, there is an incentive to make lower offers to the contestants in early rounds to get them to play for longer.
The quality of an offer can actually be quantified. There's a measurement I use to find the quality of a deal relative to what cases are still in play. I call it the "Offer Quality Ratio", and here it is:
offer quality ratio = (offer amount)/(expected value of remaining cases at time of offer)

So let's say the following cases are left on the board (by the way, this is from an actual episode): $1, $100, $50,000, and $100,000,
and the contestant gets an offer of $27,000.
The expected value is $1/4 + $100/4 + $50,000/4 + $100,000/4 = $37,525.25
So the Offer Quality Ratio = $27,000/$37,525.25 = 0.7195
In other words, the offer is around 72% of the expected value of playing to the end.
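Here's that ratio as a small Java sketch (again, my own illustrative code, not anything from the show):

```java
// Offer Quality Ratio: how good the banker's offer is relative to the
// expected value of the cases still in play.
public class OfferQuality {

    // Average of the remaining amounts (equal probabilities).
    static double expectedValue(double[] remaining) {
        double sum = 0;
        for (double v : remaining) sum += v;
        return sum / remaining.length;
    }

    static double offerQualityRatio(double offer, double[] remaining) {
        return offer / expectedValue(remaining);
    }

    public static void main(String[] args) {
        // The real-episode example from the article.
        double[] remaining = {1, 100, 50000, 100000};
        System.out.printf("EV: $%,.2f  OQR: %.4f%n",
                expectedValue(remaining),
                offerQualityRatio(27000, remaining));
    }
}
```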

After the challenging task of watching TV for hours, I have collected data on 64 offers from 9 complete Deal or No Deal games. My results show that as contestants play the game, they tend to get rewarded with higher quality deals relative to what cases are still in play. From these 9 games, here are the average Offer Quality Ratios for offers in each round:

First offer: 28.9% of expected value, (Standard Deviation 15.8%)
Second offer: 42.8% of expected value, (Standard Deviation 15.0%)
Third offer: 47.7% of expected value, (Standard Deviation 13.6%)
Fourth offer: 54.8% of expected value, (Standard Deviation 12.2%)
Fifth offer: 65.1% of expected value, (Standard Deviation 17.8%)
Sixth offer: 65.0% of expected value, (Standard Deviation 16.1%)
Seventh offer: 84.0% of expected value, (Standard Deviation 17.3%)
Eighth offer: 90.6% of expected value, (Standard Deviation 18.3%)
Ninth offer: 97.6% of expected value (Standard Deviation 3%, from a limited sample of just two offers)

Though I'm sure this study would benefit from a larger sample size, there are two conclusions I have drawn from it.
First, the quality of deals relative to what is on the board tends to rise as games progress. As you can see, the first offer tends to be incredibly low, and is usually not even worth considering.
Only the most risk-averse contestants would take a first offer that's under 30% of expected value. As the game progresses, the managers of the show, weighing the costs of giving a bigger payout against the benefits of a more interesting show and more airtime per contestant, increase the quality of the offers.
Secondly, the standard deviation (a measure of how widely the offers in the sample vary around their average) is significant, at around 15 to 20 Offer Quality percentage points, and remains rather constant throughout the game. This is either because there is an element of randomness built into the offer-determining formula used on the show, or because hidden variables determine part of each offer. Maybe contestants are given a psych evaluation before the show that gives insight into their risk profiles? This would help the banker minimize payouts while maximizing the thrill of the show.

So, the moral of the story is: if you are ever on Deal or No Deal, fortune might favor you because of your boldness. Rather, I should say, the "banker" (and by that I mean the managers and producers of the show) might favor you with a good offer because you've made the show more interesting, and thus more profitable.

Sunday, August 8, 2010

Why Soccer is Less Popular in the U.S.

What is it about soccer that has stopped it from really taking off as a spectator sport in the United States? Could it be the low goal scoring? The constant change of possession? In my opinion, the answer has less to do with the aesthetics of the game than it does with economics (big surprise, right?). Allow me to explain.
To understand the market for televised soccer, one must first understand the economic quirks of the television market in general. Specifically, antenna television has the problem of being what economists call a "public good." A public good has two characteristics:
1) It is non-rivalrous in consumption, meaning one person's consumption of the good does not prevent others from using it. While goods like apples are rivalrous in consumption, meaning if you eat an apple, someone else cannot also eat that apple, one person's watching a television program does not stop anyone else from watching it on a separate TV.
2) It is non-exclusive in consumption, meaning nobody can be stopped from consuming the good if they want to consume it, making it impossible to collect money in exchange for consumption. While a grocery store can prevent the theft of its apples, a TV network broadcasting over airwaves cannot prevent anyone with a television from harnessing those airwaves to watch TV programs.

Cable and satellite broadcasts, however, are excludable. These advancements have bypassed the problem of collecting money for TV services. But things were different before the age of cable TV. In those early decades, there were two choices for financing television: funding it publicly, or getting revenues from advertisers. The United States, unlike many other countries in the world, has relied primarily upon a private system of television funding, based on advertising. In the US, the advertiser became the real customer, paying for airtime, and the TV viewer was a bystander to the process. The commercial break became a necessity. In other countries, however, governments stepped in to create networks like the BBC in Britain that were funded by taxation, thus eliminating the need for commercial breaks.

But what does this have to do with soccer? Quite a lot, really. The game of soccer is divided into two continuous 45-minute halves. Other than half-time there are no natural breaks in the game, like the time-outs in American football or basketball, or the breaks between innings in baseball. This is a problem for broadcasters who depend on commercial breaks for their only source of revenue. For this reason, in an advertising-based financing system, soccer games will tend to be chosen less often than other programs. Why show a soccer game with only a few commercial breaks during half-time, when you can show a basketball game with numerous time-outs, breaks between quarters, and a half-time break? So antenna-televised soccer is not just a public good, but a public good that is resistant to advertising.
My theory, then, is that during the recent decades in which soccer became the world's most popular sport, its lack of exposure on US television contributed to its relative lack of popularity here. Soccer haters may disagree, but the economic logic is sound. The pay cable and satellite sports networks that sprang up in the age of cable, and newer forms of web-based broadcasting, may eventually give soccer the exposure it needs to be on par with football, baseball, and basketball. Just don't expect any of the traditional networks to broadcast a soccer game when there's a perfectly good basketball game to show.

Saturday, July 24, 2010

More Econometric Fun With The Saw Movies (This Blog Article is in 3D)

Last year, I attempted to predict the box office success of Saw 6 using regression analysis aided by statistical software. Since the gruesome Halloween tradition of the Saw franchise will continue this year with "Saw 7: 3D" I'm going to try it again and see if I can improve on my powers of prognostication.
To help the non-econometrically inclined understand what's going on, here's the same basic explanation of regression analysis I included on my James Bond article:
For those unfamiliar with regression analysis, it's a statistical method that searches for correlations among phenomena. It uses calculus to find the mathematical equation that best fits a group of numerical data. This type of math is really labor-intensive, and for large data sets it was near impossible before the computer age. I don't know how the calculus works, but that's OK for my purposes. I just think of statistical software as a "magic box" that spits out predictive functions when I put numbers into it.
This is the method I shall use to try to predict how much money Saw 7 will make at the domestic box office.
So let's get started. Here is the data of how much money each Saw film has made:

Saw 1 - $55,185,045 (2004)
Saw 2 - $87,039,965 (2005)
Saw 3 - $80,238,724 (2006)
Saw 4 - $63,300,095 (2007)
Saw 5 - $56,746,769 (2008)
Saw 6 - $27,693,292 (2009)

To improve the analysis, as I did with the James Bond article, I am going to adjust for inflation by putting everything into 2004 dollars. This will automatically remove the inflation that distorts the comparability of year to year data.
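For the curious, the conversion is just a ratio of price levels. Here's a sketch in Java using approximate annual averages of the US CPI-U; the exact index used for the figures below may differ slightly, so expect small discrepancies:

```java
// Converting nominal box office grosses into 2004 dollars.
// The CPI figures below are APPROXIMATE annual averages of the
// U.S. CPI-U; the exact deflator used for the article's figures
// may differ slightly.
public class InflationAdjust {

    static final double CPI_2004 = 188.9; // approximate 2004 CPI-U average

    // real (2004 $) = nominal * (price level in 2004 / price level that year)
    static double to2004Dollars(double nominal, double cpiThatYear) {
        return nominal * CPI_2004 / cpiThatYear;
    }

    public static void main(String[] args) {
        // Saw 2's 2005 gross, deflated to 2004 dollars
        // (195.3 is an approximate 2005 CPI-U average).
        double real = to2004Dollars(87039965, 195.3);
        System.out.printf("Saw 2 in 2004 dollars: $%,.0f%n", real);
    }
}
```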
Here's the same data converted into 2004 dollars:

Saw 1 - $55,185,045
Saw 2 - $84,177,916
Saw 3 - $75,707,623
Saw 4 - $58,098,757
Saw 5 - $50,177,182
Saw 6 - $24,585,575

Here it is on a chart:

As you can see, after a sizable jump from the first movie to the second (due, I think, to the built-in publicity generated by the first film), the box office has declined with each new release. I think this represents the economic principle of diminishing marginal returns, as film viewers get tired of seeing the same thing year after year (in this case, of seeing people get tortured by sadistic Rube Goldberg contraptions).
So what does regression analysis predict for Saw 7? Let's plug in the data and find out.
I will use the same econometric model I used in my earlier Saw article. This model will use only two variables to mathematically predict how Saw 7 will do. The model will be "explaining y in terms of x and z." These explanatory variables are:
1. The numerical order of the release of the films (1,2,3,4,5, and 6)
2. A "sequel dummy" variable: a value of 0 or 1 depending on whether the film is the first in the series. So when I plug in the data, Saw 1 will get a 0, and Saw 2 through 6 will each get a 1. This "sequel dummy" isolates the positive effect on the box office of the built-in publicity created by the first film.
And here it goes. Plugging in the data, the statistical software gives me the following function:
Boxoffice[t] = -14471512.3 order[t] +46778902.5 sequeldummy[t] +69656557.3 + e[t]
To translate this statement into English, it says:
The box office of a Saw movie decreases by an average of around $14 million with each new Saw film that is released:
(-14471512.3 order[t])
The box office also increases by around $46 million just from the built-in publicity of being part of a franchise, as is the case with Saw 2-6: (46778902.5 sequeldummy[t]). And at the end of the function there's: (+69656557.3).
This $69 million figure is the y-intercept, so if you could break the laws of both reality and filmmaking, and release "Saw Zero" this is how much money it would make at the box office (order and sequeldummy would both be 0 in this case leaving just the intercept.)
So what does this mean for Saw 7? Let's plug it in. For Saw 7:
Order = 7
Sequeldummy = 1
So our model will be:
BoxOffice(saw 7) = -$14,471,512*(7) + $46,778,902*(1) + $69,656,557
= -$101,300,584 + $46,778,902 + $69,656,557
= $15,134,875 (in 2004 dollars)
Adjusting to current dollars (2009 is the most recent conversion I can get):
BoxOffice(saw 7) = $17,047,984

So, this model predicts Saw 7 to make around $17,047,984 at the U.S. box office.
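To double-check the arithmetic, here's the model as a tiny Java program. The coefficients are copied from the regression output above; tiny rounding differences from the figures in the text come from the truncated coefficients shown there:

```java
// Plugging the fitted coefficients into the regression equation:
// Boxoffice = ORDER_COEF*order + SEQUEL_COEF*sequeldummy + INTERCEPT
public class SawPrediction {

    static final double ORDER_COEF  = -14471512.3;
    static final double SEQUEL_COEF =  46778902.5;
    static final double INTERCEPT   =  69656557.3;

    static double predict2004Dollars(int order, int sequelDummy) {
        return ORDER_COEF * order + SEQUEL_COEF * sequelDummy + INTERCEPT;
    }

    public static void main(String[] args) {
        // Saw 7: order = 7, sequeldummy = 1.
        System.out.printf("Saw 7 prediction (2004 $): $%,.0f%n",
                predict2004Dollars(7, 1));
        // Sanity check: the hypothetical "Saw Zero" is just the intercept.
        System.out.printf("Saw Zero (2004 $): $%,.0f%n",
                predict2004Dollars(0, 0));
    }
}
```

A nice sanity check: plugging in order = 1, sequeldummy = 0 reproduces Saw 1's actual $55,185,045 gross, which the fitted line passes through exactly because Saw 1 is the only film with a dummy of 0.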
To me, this seems like a very low number, if only for one reason:
Saw 7 will be in 3D!
So there are going to be body parts and blood flying at the audience, which will certainly add to its appeal. (Though I'm not much of a fan of the series, I might even see it because it's in 3D.) Adding the 3D-ness of the film into the model would have been tricky, and I prefer to keep things as simple as possible. However, I just found a statistic online saying that 3D movies gross on average 4 times as much as 2D movies. If this is true, does it mean that Saw 7 will gross around $70 million? No. Most 3D movies are big-budget, family-friendly spectacles like Avatar or the Pixar films, which tend to get higher grosses in the first place. Nonetheless, I would expect Saw 7, just from the fact that it will be in 3D, to earn more, maybe significantly more, than $17 million. Especially if there is extra blood and guts flying in the audience's face.

Wessa, P. (2010), Free Statistics Software, Office for Research Development and Education, version 1.1.23-r6, URL

Sunday, July 18, 2010

Free Samples and Diminishing Marginal Utility

Have you ever been at a grocery store and tried a free sample of some food, maybe some potato chips, and thought, "Wow! I could eat a million of these"? You then bought the product, took it home and realized after the second handful of chips that you really didn't want to eat a million of them anymore? If this has happened to you, you've helped to illustrate a very important concept in economics: diminishing marginal utility.
Diminishing marginal utility is the economic and psychological fact that in general, when people consume more of any item (not just food, but other things such as movies as I've explored in earlier articles), their desire to consume more of that item decreases. So one might really enjoy that first potato chip, but after eating a certain number of them, not want to eat any more, even to the point that one might get disgusted by the thought of eating more chips.
There are important biological reasons for human psychology to be this way. If people never got tired of eating potato chips no matter how many they consumed at one time, they would make themselves very sick. The same goes for non-food items, though perhaps to a lesser extent. I'm sure some people out there would be happy with an infinite number of shoes (Imelda Marcos or the Sex and the City girls perhaps?) Nonetheless a certain level of moderation exists in our psychology, and for some very good reasons.
Anyway, the point of my writing this article is to provide a word of advice to consumers: Know your own utility function.
For those who don't know what a utility function is, it's a mathematical or graphical representation of how much satisfaction one gets from consuming more of an item. Though I won't get into the tricky situation of trying to quantify utility, which is an abstract, personal and subjective thing, it is clear that utility diminishes with more units consumed. To make the right purchases for themselves, consumers should realize that the free sample they taste is unique. The next unit of the product will not taste the same, because utility is diminishing with every unit.
But the grocery store doesn't want you to be aware of this. The grocery store wants you to think that every potato chip will be as good as the first one, and that when making your purchase, your marginal utility will never diminish, as in this utility function graph:

If one's utility function were like this, every potato chip would be just as good as the first one. Grocery stores would thrive for a while, but humanity would eat itself into extinction. Thankfully this is not the case. In reality, the marginal utility of each extra chip decreases, so the utility curve flattens out, like this one:

(Notice that just before 50 chips, marginal utility is about to go negative: total utility is about to start falling. This means the person would get negative utility from eating more chips, probably due to physical discomfort. Not good for your stomach.)
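For the curious, here's a toy Java utility function with this shape. The quadratic form and the constants are mine, chosen only so that total utility peaks at 50 chips as in the graph:

```java
// A toy quadratic utility function for potato chips, peaking at 50
// chips. The functional form and constants are my own illustration.
public class ChipUtility {

    // Total utility from eating n chips: rises, flattens, then falls.
    // u(n) = 10n - 0.1n^2 has its maximum at n = 50.
    static double utility(double n) {
        return 10 * n - 0.1 * n * n;
    }

    // Marginal utility: the extra satisfaction from the next chip.
    static double marginalUtility(double n) {
        return utility(n + 1) - utility(n);
    }

    public static void main(String[] args) {
        System.out.printf("MU of chip 1:  %.1f%n", marginalUtility(0));
        System.out.printf("MU of chip 30: %.1f%n", marginalUtility(29));
        System.out.printf("MU of chip 51: %.1f%n", marginalUtility(50));
    }
}
```

Each successive chip delivers less utility than the one before it, and past the peak the next chip actually makes you worse off.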
So when you're at the grocery store and you try a free sample, remember that those tasty potato chips are hitting your taste buds where marginal utility is at its very tippy top. It's going to be downhill from there. And though at that moment you might feel like you could eat a million chips, if you bought a million chips, you might end up wasting 999,950 of them.

Monday, July 5, 2010

The Social Premium on Alcoholic Beverages

Ever sit and have a drink at a fancy bar and wonder "why am I paying $8 for a glass of wine?" The answer to this question, of why a glass of wine at a bar might sell for more than an entire bottle of wine at the grocery store, can be uncovered by economic principles.

There are three forces at work here pushing the price up.
1. The extra costs that must be incurred by the bar in order to serve you that drink. Unlike a grocery store purchase, where consumption is a self-serve process, a bar must price its drinks to cover all of the costs of serving them, from the bartender's wages to dishwashing detergent.

2. Individual drinks are smaller in quantity than what is usually sold at the grocery store, which eliminates the possibility of a quantity discount for a larger purchase.

3. What I call "the Social Premium" on these drinks. This premium arises from the social benefits of drinking at a bar as opposed to drinking elsewhere. These extra benefits to drinkers at bars make it rational for them to pay more per drink. Some of the social benefits may include: Interaction with the opposite sex, a hopping dance-floor, an epic game of pool with a complete stranger, and countless other things that are easier (in economics-speak "less costly") to find at a bar than other places. It's true that one can plan a party to gain these same social benefits with cheaper drinks, but that entails its own costs.

Many of these extra social benefits are not guaranteed to happen. But the mind of the consumer constructs an expected value of all things that might happen, and factors this into his/her decision of whether or not to purchase a drink at a certain price. The greater the customer's expected benefits of purchasing that drink at the bar, the higher the price can go. This logic holds for all products, not just alcohol. Alcohol just provides a particularly useful example of a social premium, because it is greatly associated with social interaction.
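To make the expected-value idea concrete, here is a toy sketch in Python. Every event, probability, and dollar figure below is invented for illustration; the point is just how possible social benefits add up into a premium on top of the drink itself.

```python
# A sketch of the social premium as an expected value. Every event,
# probability, and dollar figure here is invented for illustration.

base_value = 2.50  # value of the drink itself (roughly the at-home price)

# (probability it happens tonight, dollar value the customer places on it)
social_benefits = [
    (0.30, 10.0),  # a great conversation with a stranger
    (0.20, 15.0),  # a hopping dance floor
    (0.10, 20.0),  # an epic game of pool
]

social_premium = sum(p * value for p, value in social_benefits)
willingness_to_pay = base_value + social_premium

print(f"Expected social premium: ${social_premium:.2f}")
print(f"Willing to pay up to:    ${willingness_to_pay:.2f}")
```

Under these made-up numbers the customer would rationally pay up to $10.50 for a $2.50 drink, which makes that $8 glass of wine look a lot less crazy.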

A group of economists should get together for a giant pub-crawl and econometrically study the ratio of the price of drinks sold at bars to the price of equivalent drinks bought from local grocery stores. My guess is that the results would show the perceived possibility of sexual relations to be an important driver of the price. This could be quantitatively studied through such metrics as male/female ratios at different bars. If this does drive the price of drinks, in a sense (and a very cynical sense), many bar patrons are "paying for sex", they just don't realize it.

Sunday, June 20, 2010

Predicting the Success of the Next James Bond Movie

(Disclaimer: Take everything I write in this article with not just a grain of salt, but a giant salt shaker's worth. I cannot predict the future. Nor do I claim to be an experienced statistician. Also, the film industry is one of the most unpredictable of all industries. Nonetheless, it's fun to try to predict the future, and it sure would be cool if I ended up being right.)
In an earlier article, I ran a regression analysis on the "Saw" movies in an effort to predict the box office results of Saw 6. (For those unfamiliar with regression analysis, it's a statistical method that searches for correlations among phenomena. It uses calculus to find the mathematical equation that best fits a group of numerical data. This type of math is really labor intensive, and for large data sets was nearly impossible before the computer age. I don't know how the calculus works, but that's OK for my purposes. I just think of statistical software as a "magic box" that spits out predictive functions when I put numbers into it. In this case, the software will be spitting out a function that tries to explain the box office results of James Bond movies in terms of various other bits of data that seem to increase or decrease a film's gross.) In my analysis of the Saw movies, the results showed that after an initial "sequel bump" (which I attribute to built-in publicity boosting the second movie's box office), the series faced diminishing marginal returns with each film.
To analyze the James Bond franchise with the same methodology has its own challenges.
First of all, to compare Bond films, which have been released over many decades, I needed to adjust for inflation. So, using a nifty online Consumer Price Index calculator, I put everything into 1963 dollars (the year of the release of the first Bond film, "Dr. No").
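The inflation adjustment is simple enough to sketch in a few lines of Python. The CPI numbers below are placeholders, not the actual series from the calculator I used.

```python
# Deflating a nominal gross into 1963 dollars with CPI levels. The CPI
# numbers below are placeholders, not the actual series I used.

def to_1963_dollars(nominal_gross, cpi_release_year, cpi_1963):
    """Convert a nominal box office gross into 1963 dollars."""
    return nominal_gross * cpi_1963 / cpi_release_year

# a hypothetical $100M gross in a year whose CPI is seven times 1963's
print(to_1963_dollars(100_000_000, cpi_release_year=210.0, cpi_1963=30.0))
```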
Secondly, unlike the Saw films which have been released every year at Halloween since 2004, the Bond movies have been inconsistently released since 1963. Because of this the principle of diminishing marginal returns may not be so apparent, as over time there have been new generations of Bond fans, replenishing the returns of the franchise. A twelve year old kid watching "Dr. No" in 1963 might years later get sick of watching Bond films, but will one day be replaced by another twelve year old kid watching "Goldeneye" and so on.
I have altered the regression analysis in ways that I think might address these differences, specifically by including several more explanatory variables. Mathematically, I am treating the films within a Bond actor's run as a franchise in itself, with the same "sequel bump" from the publicity surrounding an actor's first film. As I write this, I expect to see diminishing returns as each actor makes more films. I also expect to see diminishing returns for the Bond series as a whole. Here are all the variables I am plugging into a piece of online statistical software that spits out predictive functions:
1. The US box office gross in 1963 dollars. (Dependent Variable)
2. The order of release of each film (1-23).
3. A sequel dummy variable (value 1 or 0, depending on if the film is a sequel)
4. The order of release of each film within a Bond actor's series of films.
5. A sequel dummy variable within each Bond actor's series of films (value 1 or 0, depending on whether the film is the first or a later entry in that actor's run; e.g. "Casino Royale" gets a 0 for being Daniel Craig's first Bond movie, but "Quantum of Solace" gets a 1 for being his second).
6. The US population for the year of release. This should help adjust for the population growth that has happened over the decades. While the Saw movies were released in a more or less population-stable environment, this is not true for Bond.
Here is my raw data (column 3 in 1963 dollars):

As you can see from the data, the more recently released Bond films, when adjusted for inflation, aren't as spectacularly successful as they might seem. It looks like Thunderball did the best; in 2009 dollars, that film's gross would have been $427,953,216! What a success.
Now let's see what the magical software gives us when we plug all of this in. Here is the resulting predictive function:
gross[t] = -141158653.367232 - 5176769.11092571*rank[t] + 1426600.81371098*bondnum[t] + 25241734.2461257*seqdummy[t] + 1661342.67067288*bondseqdummy[t] + 850.639431542035*USpopthou[t] + e[t]
These results seem to reveal several things:
1. Population plays a role in determining box office gross ($850 added to the gross for every thousand US residents).
2. While there appear to be diminishing marginal returns for the Bond series as a whole (the function loses around $5 million per film), there appear to be increasing marginal returns for each Bond actor as he continues to make films. This is shown by the positive "bond sequel dummy" and "bond number" coefficients. I found this surprising. Maybe, as more films are released, the audience and the actor form a bond (I know, you're groaning right now).
So now let's try to predict the US gross of the next Bond film, if it ever gets released.
Assuming Daniel Craig is in it, and with the simplifying assumption that the population is the same as it is now, we can plug in for all the variables:
rank = 24 (24th Bond film)
seqdummy = 1 (this is not the first Bond film)
bondnum = 3 (this is Daniel Craig's 3rd Bond film)
bondseqdummy = 1 (this is not Daniel Craig's first Bond film)
USpopthou = 307,006 (most recent US population estimate, in thousands)

So calculating this, we get:
gross = -141158653 -5176769*24 + 1426601*3 + 25241734*1+ 1661342*1 + 851*307006 + e[t]
gross = $27,043,876 ± error (in 1963 dollars)
Converting to current dollars: $187,472,910 ± error
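For those who want to check the arithmetic, here is the same back-of-the-envelope calculation in Python, using the rounded coefficients from the hand calculation above (the error term is ignored).

```python
# Reproducing the back-of-the-envelope prediction above from the fitted
# coefficients, rounded as in the hand calculation (error term ignored).

coeffs = {
    "rank":         -5176769,   # order of release in the whole series
    "bondnum":      1426601,    # order within the actor's run
    "seqdummy":     25241734,   # 1 if not the first Bond film
    "bondseqdummy": 1661342,    # 1 if not the actor's first Bond film
    "USpopthou":    851,        # US population, in thousands
}
intercept = -141158653

next_film = {"rank": 24, "bondnum": 3, "seqdummy": 1,
             "bondseqdummy": 1, "USpopthou": 307_006}

gross_1963 = intercept + sum(coeffs[k] * v for k, v in next_film.items())
print(f"Predicted gross in 1963 dollars: ${gross_1963:,}")  # $27,043,876

# converting with the same implied 1963 -> current dollar factor used above
cpi_factor = 187_472_910 / 27_043_876
print(f"In current dollars: ${gross_1963 * cpi_factor:,.0f}")
```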
So expect the next Bond movie to make around $187 mil. Not really though. Countless other variables are at work, and the most important variable is impossible to determine until the movie is finished. This is the "how much will people like it" variable. There really is no accounting for taste.

Wessa, P. (2010), Free Statistics Software, Office for Research Development and Education, version 1.1.23-r6, URL

Friday, May 21, 2010

The Efficient Allocation of the Right of Way

This may not be representative of society in general, but lately when passing by crosswalks, I have noticed two things:
1. Cars being less willing to stop for waiting pedestrians.
2. Pedestrians being more willing to wait for a break in the traffic before starting to cross, instead of demanding that cars stop for them.
This got me thinking about the traffic rules for crosswalks (giving pedestrians the right to cross whenever they want to), and has led me to some interesting insights into why the rules are the way they are. I do not know, however, why I see so many drivers and pedestrians acting contrarily to the rules. What follows is a meditation on proper crosswalk behavior, and how economic efficiency dictates what customs we follow (or rather should follow).
To begin with, let's make some assumptions that will allow this thought experiment to take place.
Let's assume that drivers and pedestrians can all be categorized into four groups, and that people in these groups behave the same each time they approach a crosswalk. The four groups are as follows:
1. Pedestrians who wait for the cars to pass before they walk, denoted by Pw (pedestrians who wait)
2. Pedestrians who do not wait for the cars to pass before they start walking, denoted by Pnw (pedestrians not waiting)
3. Drivers who wait for pedestrians to cross, following the law as it currently stands, denoted by Dw (drivers who wait)
4. Drivers who do not wait for pedestrians to cross, just driving through and forcing the pedestrian to wait or get run-over, denoted by Dnw (Drivers not waiting)

Thus there are the following four scenarios that could happen at the crosswalk:
Pnw meets Dw (pedestrian crosses easily)
Pnw meets Dnw (pedestrian and driver both try to go, resulting in a dangerous face-off)
Pw meets Dw (pedestrian and driver both wait around like idiots, resulting in a delay until they sort out who should go)
Pw meets Dnw (Driver goes and pedestrian waits for break in traffic)

Now let's consider the costs involved in each scenario.
(denoting "meets at the crosswalk" with a /)
Pnw/Dw: the cost of the driver having to waste time and gas to stop.
Pnw/Dnw: the cost to both pedestrian and driver of a possible harmful or fatal accident.
Pw/Dw: the cost of the driver's time and gas, as well as the pedestrian's time.
Pw/Dnw: the cost of the pedestrian's time.

Clearly Pnw/Dnw entails the highest cost.
We'll return to this discussion of cost shortly, for now we have more assumptions to consider.
Let's assume that 80% of pedestrians don't wait for a break in traffic, and the remaining 20% wait for the cars to pass. This gives us the probability that a random pedestrian will belong to each group ("probability of an event" is here denoted by P(event)):
P(Pnw) = 0.8, P(Pw) = 0.2
Secondly, let's assume that 80% of drivers stop for people at crosswalks, and the remaining 20% are the jerks who just plow through. This gives us the probability that a random driver will belong to each group:
P(Dw) = 0.8, P(Dnw) = 0.2
From these probabilities we can create a probability distribution for each of the possible scenarios at a crosswalk.
Because a certain type of pedestrian coming to a crosswalk, and a certain type of driver coming to a crosswalk are totally unrelated, independent events, we can find the probabilities for each situation by multiplying the driver and pedestrians' probabilities together.
So this gives us the following probability distribution:
P(Pnw/Dw) = 0.8*0.8 = 0.64
P(Pnw/Dnw) = 0.8*0.2 = 0.16
P(Pw/Dw) = 0.2*0.8 = 0.16
P(Pw/Dnw) = 0.2*0.2 = 0.04
So in this imaginary world I have created, 16% of all crosswalk encounters create a possibly dangerous showdown of pedestrian versus car, the most costly of the scenarios. Another 16% of the time there will be the boring situation of both driver and pedestrian wasting their time and/or gas. And the remaining 68% are efficient situations where either the driver (64%) or the pedestrian (4%) wastes time or gas, but not both.
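Here is the same probability distribution computed in Python, multiplying the assumed 80/20 marginals together under the independence assumption.

```python
# Computing the crosswalk probability distribution from the assumed 80/20
# splits, under the independence assumption described above.

from itertools import product

p_ped = {"Pnw": 0.8, "Pw": 0.2}   # pedestrian types: don't wait / wait
p_drv = {"Dw": 0.8, "Dnw": 0.2}   # driver types: wait / don't wait

# independent events, so multiply the marginal probabilities
scenarios = {f"{ped}/{drv}": p_ped[ped] * p_drv[drv]
             for ped, drv in product(p_ped, p_drv)}

for scenario, prob in scenarios.items():
    print(f"P({scenario}) = {prob:.2f}")

dangerous = scenarios["Pnw/Dnw"]                       # the face-off
both_wait = scenarios["Pw/Dw"]                         # both waste time/gas
efficient = scenarios["Pnw/Dw"] + scenarios["Pw/Dnw"]  # only one side waits
```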
The first conclusion to be drawn from this hypothetical situation is that fewer costs would be incurred if all drivers and pedestrians knew what the rule was and followed it consistently. Let's imagine another world with different laws, where 100% of pedestrians were Pws and 100% of drivers were Dnws. Pedestrians would spend more time waiting than they do under the current rules, but the dangerous Pnw/Dnw scenario and the extra time-and-gas-wasting Pw/Dw scenario would both be eliminated. This shows that there are clear efficiencies to be gained from people behaving consistently as the result of clear property rights, regardless of who is given the rights in the first place. I believe this is the essence of the famous "Coase Theorem" in economics. In this case it would be cars that "own" the right of way. If pedestrians uniformly respected this property right, the result would be better than if pedestrians and cars didn't know or care about who has the right of way, resulting in accidents and delays.
With that being said, cars clearly should not have the right of way. The "transaction costs" (also a key element of the Coase Theorem) pedestrians face in crossing the street (e.g. the chance of getting run-over) are obviously much higher than those that cars face (e.g. wasting some gas). Thus to create a more efficient society, the law allocates the property right in a way that minimizes costs.
If drivers and pedestrians would just act like they understand who owns the crosswalk, it might be safer out there.

Saturday, May 15, 2010

Tobacco: Optimal Fines for Enabling Adults

The other day I was walking past a liquor store and saw a sign in the window explaining that adults who buy cigarettes for minors will face a $200 fine. This made me think of the concept of the optimal punishment for a crime, which I was first introduced to by Donald Wittman's Economic Foundations of Law and Organization (a great book I highly recommend).
To determine the optimal punishment for a crime, societies take into account both the harm caused by the crime and the probability of catching the criminal, which tends to conform to the following model:
F = H / P
with P being the probability of getting caught if you commit a crime, and H being the harm the crime causes to society. This leaves F, the appropriate fine to be levied so that, in the aggregate, criminals pay for their behavior, thus efficiently deterring crime.
So if smashing someone's window costs the victim $100 dollars in damage, and there is a one in ten chance of being caught for the crime (P=0.1), the appropriate fine for smashing a window would be $1000. If suddenly, (maybe due to new window smashing technology), it became twice as hard to catch a window smasher (P= 0.05), the punishment should double to $2000. The punishment should also double if the cost of window repair were to double, (H=$200). So with optimal punishments, crimes that cause more damage, as well as those that are harder to catch, are met with proportionally harsher fines.
So is the $200 fine for buying tobacco for minors an optimal punishment? My initial reaction is "heck no." But sound policy is not based on gut reactions, so here I shall try to provide quantitative analysis to help answer this question. Though I do not have all the data needed to answer it, I will set up a framework into which data could be plugged, to lead us closer to the truth.
To begin with, let's try to get an idea of H (the cost to society of buying a pack of cigarettes for a kid).
To find H, we must isolate the smoking (both present and future) that would happen directly as a result of an adult buying a pack of cigarettes for a minor. It must be differentiated from smoking that would happen otherwise. Obviously when an adult buys a pack of cigarettes for a kid, this increases smoking by at least one pack, and possibly more than that, because that one pack could lead to more smoking in the future. While some kids will get hooked for life because of that one pack, and possibly die of lung disease, others may give it up after a single puff. The probabilities involved in this game of slow-motion Russian roulette are very hard to quantify. But to arrive at something close to H, one could start by taking a large sample of people who had been given cigarettes by adults when they were minors. The next (very challenging) step would be to use regression analysis to try to isolate the effect of each illicit tobacco purchase on the minor's cigarette consumption over a lifetime. Let's assume that an amazing statistical study determines that each purchase of a pack of cigarettes for a minor leads to 1.1 more packs being smoked in total than would be smoked in the absence of the crime (the extra 0.1 coming from kids led to further smoking as a result of the one pack that was bought for them). The next step would be to find the cost to society (to the smoker and everyone else) incurred because of that one pack. Searching around the internet, I've found a group of scholars who say the total cost to society from one pack of cigarettes is $40. For our purposes, let's assume this is the cost. Under these assumptions, we have found H.
H = $40*1.1= $44
So the total cost to society from the crime is $44, the societal cost times the expected additional quantity of tobacco consumed.
Now let's try to think of a way to find P, the probability of getting caught. (As an editorial note, this seems to me to be a pretty easy crime to get away with. All an adult has to do is find a discreet way of passing the cigarettes to the youth, end of story.) It would be difficult, but one could find the probability of getting caught by taking the total number of convictions for the crime and dividing it by the total number of times cigarettes were purchased for youths, which could be estimated through surveys of young smokers.
That is beyond my means, so let's just assume that one out of every hundred of these crimes is discovered and prosecuted (which I would guess is a very generous assumption). Now we have P.
P = 0.01
And with P and H, we can find the optimal punishment, F.
Plugging P and H into the formula gives us:
F = H / P = $44 / 0.01 = $4400
So using these assumptions, the appropriate fine for buying a pack of cigarettes for a minor should be $4400. This is 22 times the actual fine in my jurisdiction.
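The optimal-fine arithmetic above can be sketched in a few lines of Python. It just implements the F = H / P logic with the numbers assumed in this article.

```python
# The optimal-fine formula F = H / P, with the window-smashing examples and
# the tobacco numbers assumed in this article.

def optimal_fine(harm, p_caught):
    """Fine that makes the expected penalty equal the harm caused."""
    return harm / p_caught

# window smashing: $100 of harm, 1-in-10 chance of getting caught
print(optimal_fine(100, 0.10))

# harder to catch (1 in 20), or twice the harm: the fine doubles
print(optimal_fine(100, 0.05))
print(optimal_fine(200, 0.10))

# cigarettes for a minor: $40 per pack * 1.1 packs, 1-in-100 chance caught
print(optimal_fine(40 * 1.1, 0.01))
```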
However, this is not a real study; it is largely based on numbers I pulled out of my imagination, and on an academic paper that, to be honest, I only read the abstract of. But I wouldn't be surprised if the probability of getting caught for this crime is a lot lower than 1 out of 100 (thereby increasing F), or if $40 turns out to be a good estimate of the total societal cost of a pack of cigarettes.
So for the sake of argument let's now assume that this estimate of F is close to reality. What could be the reason for the big difference between F and the actual $200 fine? Perhaps, because of the legal and social acceptance of smoking as an adult, the law only considers the damages to society incurred while these smokers are minors, while the actual costs of smoking (e.g. addiction, lung disease) are heavily back-loaded to times long after the young smokers have grown up and their habit has been accepted by the society it damages.
So who would be harmed by increased fines for this crime? Just the enabling adults and the tobacco industry.


Donald Wittman, Economic Foundations of Law and Organization, Cambridge University Press, 2006

Sunday, April 4, 2010

Innovation and Value Creation

Let's say that a person, Person A, is living in an apartment she finds sufficient at the rate she is paying, and that there are other identical apartments available for the same price. Now what would she do if her lease were up and the rent at her current apartment was going to increase? The answer to this question depends largely on one very important factor: the transaction costs of moving. These could be the costs of van rentals, moving supply purchases, and the opportunity cost of the time Person A spends searching for other housing. (To make things simple, let's ignore the time value of money.) Assuming she expects to face $1000 in total costs to find and move into a sufficient replacement, what would happen if her rent were to increase by $50 per month? This would amount to a $600 increase over the duration of the next yearlong lease. Clearly she would rather face the cost of higher rent ($600) than bear the costs of moving ($1000). But if the proposed rent increase were $100 per month, for a total of $1200, she would rather face the costs of moving than the higher rent.
The transaction costs of uprooting yourself to obtain new housing are particularly high, relative to other goods. This is not the same as making a choice among buying from different vendors of apples, or electronics.
But in a situation like the one Person A faces, with high transaction costs, there is an opportunity for an innovator to create value for Person A and profit from it. Let's assume that someone else, Person B, invents an amazing new moving van that reduces Person A's transaction costs of moving to $500. Then even with the smaller $50 rent increase, it would be worthwhile for Person A to move. Person B could then charge Person A a certain price (less than the $100 difference between the $600 yearly rent increase and the new $500 moving costs), and both Person A and Person B would benefit.
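Person A's move-or-stay decision is really just a cost comparison, which can be sketched in Python. The function name and numbers below are mine, chosen to match the example above.

```python
# Person A's decision as a cost comparison: move only if the total rent
# increase over the lease exceeds the transaction costs of moving.

def should_move(monthly_increase, moving_costs, lease_months=12):
    """True if moving is cheaper than absorbing the rent increase."""
    return monthly_increase * lease_months > moving_costs

print(should_move(50, 1000))   # False: $600 increase < $1000 to move
print(should_move(100, 1000))  # True:  $1200 increase > $1000 to move
print(should_move(50, 500))    # True:  Person B's van changes the answer
```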
The facilitation of transactions that would otherwise not happen is one important way that innovators can create value for society. For example websites make it easier to shop around for housing, which reduces the opportunity costs for housing seekers. (The same goes for Star Trek collectibles.)
Increasing benefits and reducing costs is the essence of innovation. And in the end, innovation is the only reliable way for individuals to get abnormally rich. They must find ways to abnormally reduce the costs or increase the benefits for others, and demand a price for it. Finding ways to allow mutually beneficial trades to happen, that would not otherwise happen due to large transaction costs, is one way to achieve this goal and make a buck.

Sunday, February 21, 2010

The Economics of Perception (wow, I don't often get this polemical)

"Lisa, I'd like to buy your rock."
-Homer Simpson

Imagine two people, Person A and Person B, in a world where there are only two goods: pumpkins for eating, and yo-yos for entertainment.
Person A is really good at growing pumpkins. His pumpkin growing skills are such that he can grow and harvest a pumpkin for a marginal cost of $1 each. He is less skilled at manufacturing yo-yos, however, and if he were to try, would incur a marginal cost of $5 per yo-yo.
Person B is not so good at growing pumpkins. If he were to try to grow a pumpkin, he would incur a marginal cost of $5. But he's a yo-yo manufacturing maniac, incurring only $1 of marginal cost per yo-yo.
Person B is hungry one day, so he goes to Person A and says, "I'd like to buy a pumpkin." Person A says, "That'll be $6," and Person B responds: "Six dollars! Are you kidding? I can grow a pumpkin myself for cheaper than that!" Before Person B can storm off, Person A says, "OK, I'll give it to you for $4," and B says, "You've got yourself a deal." Person B gets his pumpkin, and Person A gets a margin of $3 to use to buy more yo-yos, and produce more pumpkins in the future.
Person A is bored one day and wants to add a new yo-yo to his collection, so he goes to Person B and says, "I'd like to buy a yo-yo." Person B says, "That'll be $6," and Person A responds: "Six dollars! Are you kidding? I can manufacture a yo-yo myself for cheaper than that!" Before Person A can storm off, Person B says, "OK, I'll give it to you for $4," and A says, "You've got yourself a deal." Person A gets his yo-yo, and Person B gets a margin of $3 to use to buy more pumpkins, and produce more yo-yos in the future.
As in this example, in market economies a buyer will pay a seller a price to do something that is, (taking all costs into account, including opportunity costs) too costly for the buyer to do himself. The price will be greater than or equal to the seller's marginal cost of production (or procurement), and less than or equal to the buyer's marginal cost of obtaining that object through any other means, either through self-production or buying from a different seller. Often, because of the complexity of products, self-production is not an option, so in this case the upper limit on the price a seller can charge will be the price offered by competing producers.
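The bargaining range from the pumpkin story can be sketched as a tiny Python function: a mutually beneficial price must lie between the seller's marginal cost and the buyer's cheapest alternative.

```python
# Any price in this range makes both parties better off: at least the
# seller's marginal cost, at most the buyer's next-best alternative.

def price_range(seller_marginal_cost, buyer_alternative_cost):
    """Return the (low, high) bounds of mutually beneficial prices,
    or None if no beneficial trade exists."""
    if seller_marginal_cost > buyer_alternative_cost:
        return None
    return (seller_marginal_cost, buyer_alternative_cost)

# Person A grows a pumpkin for $1; Person B's alternative costs him $5.
print(price_range(1, 5))  # (1, 5) -- the $4 deal falls inside this range
```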
Another factor in determining the price is harder, if not impossible to quantify. This factor is the purely psychological valuation of different goods and services. This value varies from person to person, and within the mind of each individual person, can vary from time to time. If there were a magical brain measuring device to give readings of exactly what people are willing to pay for different items, we could find a measurement of the mean dollar value that a group of people are willing to pay for a good or service. Let's denote this unknowable value with the letter V.
So, if the upper limit of prices is determined by the prices of competitors, what can an individual seller do to make abnormal profits? One method is to try to increase the unknowable value, V, to a level higher than the price of competitors. This can be done through marketing efforts that improve people's perception of your product, without actually changing your product. On an individual level, this can take the form of person to person salesmanship, or on a wider level through mass media advertising. The more the collective perception of value can be shifted upwards through marketing efforts, the higher the price can go.
This is how companies can sell junk for high profits, like the snake-oil diet drugs and herbal "neuroboosters" that can be seen advertised on TV. You'll probably see some zero value junk advertised at the sidebar of this very blog as you are reading this article.
Prohibitions against false advertising are a precondition for a well functioning market. Purely deception based industries are at best zero-sum activities, and at worst they can be truly harmful scams that drag people into economically dangerous situations, as in cases where deceptive contracts tie people to huge outlays of money for nothing valuable in return.
I strongly believe that all purely deceptive industries need to be shut down by regulators, not because I feel a sense of outrage on behalf of people who get duped, but because there are opportunity costs society faces from the existence of such industries. If people can make a living through deception, they will turn away from, (and turn their victims' dollars away from) useful industries. To give a functional definition of "useful", I say that a product is useful to people, if knowing all necessary information about the product, they would still want to use it.
What if Person B comes to Person A looking to buy a pumpkin, and person A says: "I don't grow pumpkins anymore, I produce and sell Neurobooster Pills" and with great salesmanship, sells person B the pills, but they're really just tic-tacs. Is this economic efficiency? In an economy where regulators allow this to happen, resources will be wasted as consumers engage in trial and error to verify the truth. And the capabilities of the information age do not make things better. Today, there is an unprecedented access to information, both true and false. This is the age of tremendously useful online encyclopedias and journals, as well as the age of fake Yelp reviews and SEO tricks. The internet can be both a font of wisdom, and an echo chamber of lies.
Regulators must shut down scam artists, not just to protect their potential victims, but also to lead business people down the path to the real, not just perceived, creation of value, not wasteful industries of subterfuge, salesmanship and fine print.

Remember, if you get scammed, don't just sit there, report it to the Federal Trade Commission.
Check out the FTC website:
There's lots of information on different scams, from job scams (a truly booming industry) to phony diet pills, and using the website it's really easy to report a scam to the FTC. There's even a little cartoon to explain how to report. Don't keep quiet, it's your civic duty to our economy!

Tuesday, January 12, 2010

The Saw Movies, and Diminishing Marginal Returns, Pt 2

Well, my econometric predictions were way off. Saw 6 only grossed $27,693,292. However, this still supports the broader point that the marginal returns are diminishing. It just looks like they are diminishing at a faster rate than my simplistic model could predict. :(
But this was far from a box office failure. The production budget for Saw 6 was $11 million, so that's a pretty good gross margin. There will continue to be new Saw movies until the expected marginal revenue falls below the necessary cost of production and distribution. Eventually, they might start releasing them direct to DVD (or the internet), which would be a lower cost/lower revenue alternative.