December 28, 2011

Beautiful Spectrum of Today's Worker-Types

It used to be that we had only two types of workers: blue-collar workers, who perform manual labor, and white-collar workers, who perform professional, managerial, or administrative work. These days, labor scholars have become more creative and come up with more colors than these two:

  1. Green-collar workers are those employed in the environmental sectors of the economy
  2. Pink-collar workers perform work said to be stereotypical women's work and are typically in the service industry
  3. Gold-collar workers, who are classified into two groups: young, low-wage workers who spend on conspicuous luxury; or highly skilled knowledge workers, traditionally classified as white-collar, who have recently become essential enough to business operations to warrant a new classification
Finally, we have another type whose color itself signifies the ambiguity of its definition: gray-collar workers, whose occupations incorporate elements of both blue- and white-collar work, or are completely different from both categories. So ambiguous, in fact, that they can come from a wide range of industries (farming, fishing, forestry, and other forms of agribusiness; health care, aged care, child care, and the personal service sector; protective services and security; food preparation and the catering industry; high-tech technicians; skilled trades; typists and stenographers).

Gray-collar workers often have associate degrees from a community college in a particular field. They are unlike blue-collar workers in that blue-collar workers can often be trained on the job within several weeks, whereas gray-collar workers already come with a specific skill set.

Anyway, much of the color spectrum hasn't been taken yet. Considering the ever-expanding range of worker-types, expect more color-workers to come out in the future.

Maybe I'll try calling one.

Source: Wikipedia

November 16, 2011

When economists and anthropologists debate

Bloomberg Businessweek posted an article about David Graeber, "a 50-year-old anthropologist—among the brightest, some argue, of his generation—who made his name with innovative theories on exchange and value" and his role in the ongoing "Occupy Wall Street Process."

While I respect what he is trying to do to advance his beliefs, I have to disagree with his ideas about the nature and origins of money and, consequently, how it relates to debt (two things the article considers central to what protesters are angry about):


"Economics textbooks tell a story in which money and markets arise out of the human tendency to “truck and barter,” as Adam Smith put it. Before there was money, Smith argued, people would trade seven chickens for a goat, or a bag of grain for a pair of sandals. Then some enterprising merchant realized it would be easier to just price all of them in a common medium of exchange, like silver or wampum. The problem with this story, anthropologists have been arguing for decades, is that it doesn’t seem ever to have happened. 'No example of a barter economy, pure and simple, has ever been described, let alone the emergence from it of money,' writes anthropologist Caroline Humphrey, in a passage Graeber quotes.

People in societies without money don’t barter, not unless they’re dealing with a total stranger or an enemy. Instead they give things to each other, sometimes as a form of tribute, sometimes to get something later in return, and sometimes as an outright gift. Money, therefore, wasn’t created by traders trying to make it easier to barter, it was created by states like ancient Egypt or massive temple bureaucracies in Sumer so that people had a more efficient way of paying taxes, or simply to measure property holdings. In the process, they introduced the concept of price and of an impersonal market, and that ate away at all those organic webs of mutual support that had existed before."

Here we go again.

Let me answer with a famous philosophical thought experiment: if a tree falls in a forest and no one is around to hear it, does it make a sound? I know we are scientists, and it is our prerogative to base our findings on facts that are observable and backed by proof. But the absence of proof does not mean a fact is untrue; it only means we don't have enough evidence to prove it. Just as in statistical hypothesis testing, for higher p-values you don't say you accept the null hypothesis: you say you fail to reject the null hypothesis.
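The fail-to-reject logic can be illustrated with a small sketch (a hypothetical coin-flip test of my own, nothing from the article): a high p-value leaves the null standing, but we never "accept" it.

```python
import math

def two_sided_p_value(heads, flips):
    """Two-sided z-test p-value for H0: the coin is fair (p = 0.5)."""
    expected = flips * 0.5
    se = math.sqrt(flips * 0.25)      # standard error under the null
    z = (heads - expected) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

# 52 heads in 100 flips: nowhere near enough evidence against fairness
p = two_sided_p_value(52, 100)
alpha = 0.05
if p < alpha:
    verdict = "reject the null hypothesis"
else:
    # We never "accept" the null; we merely lack evidence against it.
    verdict = "fail to reject the null hypothesis"
```

The asymmetry matters: the same verdict would hold whether the coin is truly fair or only slightly biased, which is exactly the "absence of evidence is not evidence of absence" point.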

Because if this is not the case, I can argue against Graeber using the same reasoning: is there proof that it was governments/states that created money? I don't think there is. So now we have to base our conclusions on two conjectures. Which one is stronger?

That money was created by the state is a weak conjecture, based on a simple fact, true then and still true today: no single entity (individual or government) is omniscient--no such entity will ever have all the information. How did governments choose the right form of money to create? Did the emperor wake up one morning and think, "Ooo, I like them shiny things. Let's use gold as our money!"? Rome wasn't built in a day, and neither were Ancient Egypt and Sumer, so one wonders at what point the state decided what type of money to use and when to implement it. Surely the state would have to be extremely knowledgeable about the workings of markets and transactions to come up with the perfect medium of exchange. But this is unlikely. As Friedrich August von Hayek argues in one of his famous works, "The Use of Knowledge in Society" (1945), a centrally planned economy could never match the efficiency of the open market, because any individual knows only a small fraction of all that is known collectively:


"The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate 'given' resources-if 'given' is taken to mean given to a single mind which deliberately solves the problem set by these 'data.' It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge not given to anyone in its totality."

Now let's turn to the other conjecture. Is it possible that money originated in an evolutionary process that began with barter trade and ultimately arrived at a more manageable and saleable form that greatly improved exchanges and transactions? I see this as the stronger conjecture. Carl Menger was one of the first to point this out, in his famous work "On the Origin of Money" (1892). In it, he argues that money was not created by state edict; money evolved in the marketplace. It was individuals who decided which was the most marketable good to use as a medium of exchange. It was the market that chose which commodity served best as a medium of exchange in terms of saleability, durability, and transportability. Money did not appear all at once but evolved through time as the market discovered new commodities that were more saleable, more durable, and more transportable. For Menger, the state only came in to perfect what the market already considered money, by recognizing and regulating the medium:


"Money has not been generated by law. In its origin it is a social, and not a state-institution. Sanction by the authority of the state is a notion alien to it. On the other hand, however, by state recognition and state regulation, this social institution of money has been perfected and adjusted to the manifold and varying needs of an evolving commerce, just as customary rights have been perfected and adjusted by statute law."

The development of money is one example of Menger's broader theory of social institutions. For Menger, social institutions arise from individuals interacting with each other, each with his or her own subjective knowledge and experiences. Together and through time, these human actions spontaneously evolve into institutions. Money is one such institution: individuals discover patterns of behavior, such as the use of gold coins as a medium of exchange, that help them attain their goals (such as transacting) more efficiently, and then adopt those behaviors.

The bottom line is that money itself has desirable qualities, among them the three Menger points out: saleability, durability, and transportability. It's highly unlikely that any one person, in a very short time, could come up with one commodity that has all these properties. The creation of money is evolutionary, and it involves more than one person accepting a commodity as the medium of exchange.
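Menger's evolutionary story can be caricatured in a few lines of code (a deterministic toy of my own with made-up initial shares, not anything from Menger): give one commodity a slight edge in acceptance and let adoption feed on itself, and the market converges on a single money without anyone decreeing it.

```python
# Toy model of money emerging by convergence (hypothetical shares, not data):
# each period, agents gravitate toward whichever commodity is already most
# widely accepted, so acceptance feeds on itself (a simple network effect).

commodities = {"gold": 0.4, "grain": 0.3, "cattle": 0.3}  # initial adoption shares

for _ in range(50):
    # Squaring each share and renormalizing rewards the already-popular good.
    total = sum(share ** 2 for share in commodities.values())
    commodities = {c: share ** 2 / total for c, share in commodities.items()}

money = max(commodities, key=commodities.get)  # the emergent medium of exchange
```

The initial 0.4 edge stands in for gold's superior saleability, durability, and transportability; no single agent chooses the outcome, yet the outcome is decisive.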

So let's give up on this no-proof-of-barter-economy argument, shall we?

November 13, 2011

Institutions as Capital?

In the recent Liberty Forum held in New York, sponsored by the Atlas Economic Network, Michael Fairbanks, philanthropist and author of a couple of books, talked about the elements that characterize the process of creating prosperity. His talk, heavily based on his framework "Changing the Mind of a Nation: Elements in a Process of Creating Prosperity," advocates moving away from economists' common view of prosperity as a "flow" concept toward a "set of stocks" concept. This basically considers prosperity as "the enabling environment that improves productivity." It is connected to Mr. Fairbanks's work of finding ways to improve people's lives through enterprise development and technological innovation.

I took great interest in how he enumerates seven kinds of capital (the "prosperity as a stock" he refers to, pages 1 to 2):


  1. Natural endowments such as location, subsoil assets, forests, beaches, and climate.
  2. Financial resources of a nation, such as savings and international reserves.
  3. Humanly made capital, such as buildings, bridges, roads, and telecommunications assets.
  4. Institutional capital, such as legal protections of tangible and intangible property, efficient government departments, and firms that maximize value to shareholders and compensate and train workers.
  5. Knowledge resources such as international patents, and university and think tank capacities.
  6. Human capital, which represents skills, insights, capabilities.
  7. Culture capital, which means not only the explicit articulations of culture like music, language, and ritualistic tradition but also attitudes and values that are linked to innovation.

I take exception to how he considers the last four as social capital. Unless he has a broader concept of social capital, the term is typically used to refer to the value of networks of trusting relationships between individuals in an economy. “Social capital” can be considered capital in the sense that strengthening the network helps an individual achieve his or her productive endeavors. However, by categorizing institutional capital and knowledge resources as social capital, the analytical discourse gets confused. The former pertains to economic and political institutions and the latter to technology or knowledge resources, both of which differ from “social capital” as commonly used in the literature. For instance, can institutions be considered a form of capital? One of the most basic things you learn in economics is that capital is a factor of production that is not wanted for itself but for its ability to help produce other goods. Institutions are therefore not capital, because they are not factors of production. Institutions are, in a sense, the rules of the game--they can either enhance or diminish productivity. Good institutions are productivity-augmenting, much like technology in a standard neoclassical growth model.

Mr. Fairbanks's reference to Nobel Laureate Douglass C. North under the section "Institutionalize the Change” also needs some clarification. He cites: "Douglass North writes that institutions are norms. Change needs to create new norms of behavior. We look not to creating new institutions but to upgrading existing institutions that have reached their functional limits due to globalization, changes in how prosperity is created, and worldwide shifts in values and attitudes. This means improving the rule of law and building democracy to upgrading schools, private firms, and civic organizations."

From this citation, it can be gleaned that North is referring to institutions as the “rules” of the game. Categorizing them as capital makes them more like a "tool" of the game, which is far from how North describes them in his famous treatise on institutions (from his 1991 JEP article, "Institutions"):

"Institutions are the humanly devised constraints that structure political, economic and social interaction. They consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights). Throughout history, institutions have been devised by human beings to create order and reduce uncertainty in exchange. Together with the standard constraints of economics they define the choice set and therefore determine transaction and production costs and hence the profitability and feasibility of engaging in economic activity... Institutions provide the incentive structure of an economy; as that structure evolves, it shapes the direction of economic change towards growth, stagnation, or decline."

Institutions are hardly factors of production used by an economy as it pursues economic growth. Institutions' contribution can be thought of as similar to technology's contribution to economic growth--they augment production and expand an economy's production possibility frontier. But institutions appear on neither the x-axis nor the y-axis.

Institutions can instead be construed as the framework or environment under which economic players--labor, consumers, and capitalists--interact. Being a framework, institutions cannot be individually controlled by economic players the way machinery is operated to turn raw resources into tradable goods, which is what players do with capital.

The view that institutions augment the typical neoclassical growth model is hardly new. There are papers, such as Daron Acemoglu, Simon Johnson, and James Robinson's 2001 paper "The Colonial Origins of Comparative Development," where institutions are analyzed within the framework of the neoclassical model. The literature, however, has yet to come up with a comprehensive theory that captures how institutions contribute to economic growth. Some papers, such as Jose Aixala and Gema Fabro's "A Model of Growth Augmented with Institutions" and Edinaldo Tebaldi and Ramesh Mohan's "Institutions-augmented Solow Model and Club Convergence," attempt to do this. But there is still much that needs to be done.
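To make the productivity-augmenting idea concrete, here is a minimal sketch (my own toy parameterization, not from any of the papers cited) of a Solow-style model in which institutional quality scales total factor productivity rather than entering as a factor of production:

```python
def steady_state_income(q, s=0.25, n=0.01, delta=0.05, alpha=1/3, A0=1.0):
    """Steady-state output per worker for Y = A K^a L^(1-a):
    y* = A^(1/(1-a)) * (s/(n+delta))^(a/(1-a)),
    where A(q) = A0*(1+q) lets institutional quality q in [0, 1]
    shift productivity, the same way technology does."""
    A = A0 * (1 + q)  # institutions augment TFP; they are not a factor input
    return A ** (1 / (1 - alpha)) * (s / (n + delta)) ** (alpha / (1 - alpha))

weak = steady_state_income(q=0.1)
strong = steady_state_income(q=0.9)
# Same savings rate, depreciation, and population growth; only institutional
# quality differs, yet steady-state income per worker is higher under q=0.9.
```

Nothing here treats institutions as an accumulable input like machines; they only shift the whole production frontier, which is the distinction being drawn against Fairbanks's "institutional capital."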

I may take a crack at this frontier.

November 6, 2011

Class Attendance Among Students: A Transactions Cost Perspective

We are in the middle of an academic semester, and we are in a somewhat troublesome predicament. While students are attending the lecture classes, few are attending the individual discussion sessions. Most of the time, fewer than ten of twenty students attend each lab session.

It's not their fault--attending the discussion sessions is not required. It never has been. But in previous semesters attendance was perfect, and the reason was the weekly quizzes. We used the quizzes as what Oliver E. Williamson might call a "hostage": students had to attend if they wanted to take the quiz, and the quiz, of course, matters for a student's final grade.

It is different this semester. The quizzes are administered online and during the weekend. So for students who may not really care about attending discussion sessions, or who think they can breeze through the semester by attending the lectures and studying the textbook on their own, there is really no incentive to attend.

What is the transaction here? You may say it's the fact that the student decided to pursue their respective chosen degree. But we can look at a more micro level. We can consider the pursuit of a degree as consisting of further individual transactions, namely the enrollment in different subjects. Students enroll in a subject with the sole purpose of obtaining a benefit--passing and getting a good grade (and hopefully learning as well).

Ex ante costs aside (such as the monetary and non-monetary costs involved in enrolling), let's concentrate on ex post transaction costs. Among these are the typical day-to-day activities students undertake: going to classes and studying for tests, all with the ultimate goal of eventually passing the course and getting a grade.

If attendance is not required in either lecture or discussion sessions, and if students feel attending only lecture classes is sufficient to get a good grade, then for some students there is really no point in attending discussion labs. So we are in an institutional arrangement where not attending discussion labs is a rational choice for students.

Now, if we change the arrangement, we should expect a change in the behavior of the students. This turned out to be the case when we instituted a policy that gives students an incentive to attend lab classes. For the remaining lab classes until the end of the semester, we announced, we will check attendance; at the end of the semester, we will randomly select two sessions and award 5 points of credit toward the final grade to those who were present. While the effect was not perfect attendance, there was indeed a significant increase in the number who started to attend.
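The incentive's expected value is easy to work out (the number of remaining sessions below is hypothetical; the post doesn't say how many were left):

```python
def expected_bonus(attended, total_sessions=10, drawn=2, points=5):
    """Two of the remaining sessions are drawn at random, each worth 5 points
    to those present, so each attended session pays off with probability
    drawn / total_sessions."""
    return points * drawn * attended / total_sessions

full = expected_bonus(10)  # attend every remaining session
none = expected_bonus(0)   # attend nothing
# A student weighs this expected bonus against the transaction costs of
# actually showing up each week.
```

The linearity is the point: every additional session attended buys the same expected increment, so any student whose per-session cost falls below that increment should start attending.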

We can certainly make the case that there are three types of students here. The first type are those who regularly attend; these are the students who are really interested in learning. The second type are those who started to attend after the new incentive scheme was instituted; for these students, a higher grade is valuable and the transaction costs of attending the lab sessions are low compared to the benefits of getting that higher grade. The third type are those who still do not attend; for them, the transaction costs of attending outweigh even the added benefit.

In retrospect, you wonder whether the differences in student behavior reflect the fact that the benefit a student gains is subjective, or whether it is the transaction costs that are subjective in nature. Let's face it: some students are constantly weighing whether that extra incentive is really worth all the hassle of going to lab sessions. Or maybe some students are just plain lazy.

October 17, 2011

Legalize drugs?

An upcoming policy forum sponsored by the Cato Institute discusses "Mexico and the War on Drugs." The main speaker is now singing a different tune: if you can't beat them, join them:

Mexico is paying a high price for fighting a war on drugs that are consumed in the United States. More than 40,000 people have died in drug-related violence since the end of 2006 when Mexico began an aggressive campaign against narco-trafficking. The drug war has led to a rise in corruption and gruesome criminality that is weakening democratic institutions, the press, law enforcement, and other elements of a free society. Former Mexican president Vicente Fox will explain that prohibition is not working and that the legalization of the sale, use, and production of drugs in Mexico and beyond offers a superior way of dealing with the problem of drug abuse.

I hope Mr. Fox is not considering completely legalizing drugs, because that would mean abandoning the central reason we ban them in the first place--the negative externalities associated with drug abuse. Studies abound pointing to how drug abuse can lead to problems in society such as increases in murders, rapes, and vehicular accidents, among others.

On the other hand, if Mr. Fox is suggesting legalization with some sort of regulation still in place, I don't think the same problems would go away. Suppose, for example, the regulation placed a limit on how much of a drug one could consume. That would be very difficult to monitor. In addition, opportunism would surely take place: some individuals would take advantage of such regulations and set up another black market.

And the song would continue to keep playing...

October 9, 2011

Institutions and Development

There is already a rich literature on the relationship between institutions and economic growth, which Pande and Udry (hereafter referred to as PU) enumerate in their 2006 paper. Shirley (2005) and Straub (2008) also provide excellent surveys. The majority of the papers strongly conclude that good-quality institutions lead to better economic performance for a given country.

Most of the literature uses cross-country data. PU, however, point to three limitations of doing so when identifying the channels through which institutions affect economic growth.

First, PU argue that the measures used for institutional quality are “necessarily coarse”:

"The cross-country literature has largely relied on broad indices of institutional quality. The use of coarse institutional measures implies that cross-country regressions are typically unable to isolate the causal effect of any single institution."

PU also point to the possibility of omitted variable bias arising in the econometric methods used:

"[T]he inability to include the entire array of institutions which impinge on, say, growth as independent variables (often due to the small set of available instruments) raises the possibility of omitted variable bias. For example, some indices of institutions used in the cross-country literature are very strongly biased towards measuring the institutional environment facing urban and/or formal sector agents."
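PU's omitted-variable worry is easy to reproduce in a toy simulation (an entirely hypothetical data-generating process of my own, not their data): let growth depend equally on urban and rural institutional quality, correlate the two, and then regress growth on the urban index alone.

```python
import random

random.seed(0)

# Hypothetical DGP: growth depends on urban and rural institutional quality,
# and the two quality measures are positively correlated across countries.
n = 5000
urban = [random.gauss(0, 1) for _ in range(n)]
rural = [0.8 * u + random.gauss(0, 0.6) for u in urban]   # correlated with urban
growth = [1.0 * u + 1.0 * r + random.gauss(0, 1.0) for u, r in zip(urban, rural)]

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

biased = ols_slope(urban, growth)
# The true direct effect of urban quality is 1.0, but omitting the correlated
# rural index pushes the estimate toward 1.0 + 0.8 * 1.0 = 1.8.
```

The regression attributes the rural index's effect to the urban one, which is exactly what PU warn happens when an index measures only the urban/formal-sector institutional environment.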

PU’s final concern deals with the heterogeneity of institutions and of the actors affected by them, even within a given country. One of the many examples they give is that mechanisms of contract enforcement in urban sectors may differ significantly from those in rural sectors.

All in all, the authors conclude that:

"[T]he extraordinary diversity of institutional practices across and within countries places natural constraints on the usefulness of cross-country analyses for understanding the specific channels through which institutions affect economic outcomes, and how these institutions, in turn, respond to economic, demographic, political and social forces."

Given the limitations to current macro-centric efforts in the literature, PU suggest that future research in the relationship between institutions and growth is “best furthered by the analysis of much more micro-data than has typically been the norm in the literature.” Specifically, PU suggest two research programs that have significant potential:

  1. Using “policy-induced variation in specific institutions within a country to examine how specific institutions influence economic outcomes.”
  2. Exploiting the fact that “incentives provided by a given institutional context often vary with individuals’ economic and political status, and so close examinations can be done of the economic choices of individuals in a specific institutional context.” Identifying the specific channels through which institutions affect economic behavior, and hence economic outcomes, can be achieved by analyzing how individuals respond to the same institution. Likewise, doing so would also help us understand “how institutional change arises in response to changing economic and demographic pressures.”

Pande and Udry's paper goes into the heart of the new institutional economics (NIE) school. NIE has always been about institutions, and how institutions affect the behavior of agents in an economy. North (1991) states that “institutions provide the incentive structure of an economy; as that structure evolves, it shapes the direction of economic change towards growth, stagnation, or decline.”

The paper also gives another push to what I would consider an ongoing paradigm shift in how economists study the relationship between institutions and economic growth: moving away from the macroeconomic view of how institutions affect economic outcomes and toward the microeconomic view. PU may not be pioneers in this research (Gibson and Rozelle, 2003, for example, looked at how improved road access leads to lower poverty in Papua New Guinea), but where the paper is revolutionary is in laying down a firm conceptual framework for further research at the micro level.

One of the strengths of the paper is the methodical way PU present their ideas. They start with a review of the rich “macro” literature and proceed to identify the limitations of how existing papers establish “a causal link between a cluster of ‘good’ institutions and more rapid long run growth.” They focus not only on technical limitations, such as omitted variable bias in econometric specifications, but also on one obvious drawback—these studies never really specify the exact channels through which institutions affect economic growth. They then proceed to their main thesis, which addresses these limitations: looking at the micro level in two different ways. The solutions they suggest also seem practical, as they close the paper with an application of their suggested research framework to the land tenure system in Ghana.

PU’s suggestion to analyze the relationship between institutions and economic growth at the micro level, from two research perspectives, is the paper’s biggest contribution to the literature. I mentioned earlier that the paper suggests a solution to one obvious limitation of the current literature—the failure to specify the exact channels through which institutions affect economic growth. Analyzing the exact channels has important policy implications. North (1993) points out that “successful development policy entails an understanding of the dynamics of economic change if the policies pursued are to have the desired consequences. And a dynamic model of economic change entails as an integral part of that model analysis of the polity since it is the polity that specifies and enforces the formal rules.” The paper provides two frameworks for researchers and policymakers on how to go about understanding these dynamics.

Even if the current empirical literature is strongly based on the theory that institutions affect growth through their effect on individual agents, it would still greatly advance the tenets of NIE if authors could provide an empirical basis for what is happening down below—at the micro level. PU have definitely helped in that effort with this paper.

References

Gibson, John and Scott Rozelle (2003), “Poverty and Access to Roads in Papua New Guinea”, Economic Development and Cultural Change, 52(1):159-85.
North, Douglass C. (1991), “Institutions”, The Journal of Economic Perspectives, 5(1):97-112.
North, Douglass C. (1993), “The New Institutional Economics and Development”, Working paper, Washington University in St. Louis.
Pande, Rohini and Christopher Udry (2006), “Institutions and Development: A View from Below”, in Richard Blundell, Whitney Newey, and Torsten Persson, eds., Advances in Economics and Econometrics: Theory and Applications (New York: Cambridge University Press), 981-1022.
Shirley, Mary M. (2005), “Institutions and Development”, in Claude Menard and Mary M. Shirley, eds., Handbook of New Institutional Economics (Dordrecht: Springer), 611-38.
Straub, Stephane (2008), “Infrastructure and Growth in Developing Countries: Recent Advances and Research Challenges”, World Bank Policy Research Working Paper 4460.

October 4, 2011

Reversal of Fortune

There have already been studies documenting the phenomenon called “reversal of fortune” (RF), or reversal of relative income: areas that were relatively prosperous in the past are relatively poor now, and areas that were relatively poor then are now relatively rich. The authors (Daron Acemoglu, Simon Johnson, and James A. Robinson, hereafter AJR) add to this literature by documenting the reversal among former European colonies. Using urbanization rates and population density to represent prosperity in 1500, the authors find that there is indeed “a negative association between economic prosperity in 1500 and 1995.” They show through different econometric specifications that this relationship is robust. Even after controlling for continent dummies, the identity of the colonial power, religion, distance from the equator, temperature, humidity, resources, and whether the country is landlocked, and after excluding the “neo-Europes” (the United States, Canada, New Zealand, and Australia) from the sample, the negative relationship still holds.

In trying to explain this pattern, the authors dismiss some existing theories of RF, such as the “geography hypothesis,” which “explains most of the differences in economic prosperity by geographic, climatic, or ecological differences across countries.” AJR provide evidence that weighs against this simple version of the geography hypothesis, and they claim there is little evidence to support even its more sophisticated versions, such as the “temperate drift hypothesis.”

For AJR, the more plausible explanation for RF is what they termed “institutions hypothesis.” AJR’s main hypothesis is that:

[A] cluster of institutions ensuring secure property rights for a broad cross section of society, referred to as “institutions of private property,” are essential for investment incentives and successful economic performance. In contrast, “extractive institutions,” which concentrate power in the hands of a small elite and create a high risk of expropriation for the majority of the population, are likely to discourage investment and economic development.

AJR find historical and econometric evidence suggesting that:

European colonialism led to the development of institutions of private property in previously poor areas, while introducing extractive institutions or maintaining existing extractive institutions in previously prosperous places. The expansion of European overseas empires, combined with the institutional reversal, is consistent with the reversal in relative incomes since 1500.

Finally, AJR were able to document that the reversal in relative incomes among the former colonies was related to industrialization. They surmised that societies with institutions of private property “take advantage of the opportunity to industrialize, while societies with extractive institutions fail to do so.” They concluded that this difference in industrialization, which took place in the nineteenth century, played a central role in the long-run development of the former colonies.

The main finding of AJR in this paper is that the reversal of relative income among former colonies that we observe today is primarily due to “European colonialism that led to the development of institutions of private property in previously poor areas, while introducing extractive institutions or maintaining existing extractive institutions in previously prosperous places.” This fits perfectly within the major tenets of the new institutional economics (NIE) school. Coase (1998) established that “it is the institutions that govern the performance of an economy.” North (1991) explained that “institutions are the humanly devised constraints that structure political, economic and social interaction” and that institutions “consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights).” Using this framework, Williamson (2000) states that the NIE “has been concerned principally with levels 2 and 3” of what he considers the four levels of social analysis. He refers to the second level as the “institutional environment,” where “much of the economics of property rights is.”

The main strength of the paper is how AJR overcame one of the limitations of comparative analysis over a long timeframe. Data from 1500 are imperfect at best, if not absent altogether. The authors instead got creative and used urbanization rates and population density as proxies for the level of prosperity. Their effort is likewise not without theoretical precedent: they cite numerous papers in the literature that support this choice. Furthermore, they backed up the claim by conducting regression analysis to support the use of said proxies.

While I agree with how the paper describes “equilibrium institutions” (“when extractive institutions were more profitable, Europeans were more likely to opt for them” versus “Europeans were more likely to develop institutions of private property when they settled in large numbers”), it must be that the reasons for the persistence of institutions are not limited to these two cases. Further theoretical development is needed to identify other reasons the two types of institutions persist. Austin (2008), for example, points to this oversimplification of European colonies into “settlers” and “non-settlers.” Each colony may have followed a different kind of “historical path” than the one described by AJR.

Huillery (2011), looking at French West Africa, for example, finds that some colonies with extractive institutions performed better simply because they received more European settlers than other extractive colonies. In other words, colonized areas that received more European settlers performed better than colonized areas that received fewer. Acemoglu et al. actually have the same idea (the more settlers, the better), but according to Huillery, this also applies even among extractive colonies.

Furthermore, the time period between 1500 and 1995 is too long to disregard other factors that might help explain the reversal. If we refer to Williamson’s (2000) four levels of social analysis, AJR’s analysis may very well be operating within the second level, where “the definition and enforcement of property rights and of contract laws are important features.” In the 500 years since 1500, however, things may have been happening at the third level, where “governance of contractual relations becomes the focus of analysis.” For instance, we might find episodes in history where a dictator rules a country but nonetheless enforces good property rights.

In spite of these weaknesses, this paper is another great addition to the literature of how institutions, particularly those involved in securing property rights, explain economic outcomes. The paper adds empirical support to the finding that countries with institutions that secure property rights are more developed than countries that do not have such institutions.

In addition, and more importantly, this paper provides an explanation of how these institutions came to be. In a sense, this paper lends support to the work of La Porta et al. (1998), which looks at how institutions themselves are a product of history, i.e., colonialism. Their paper looks at the “legal rules covering protection of corporate shareholders and creditors.” They find that “common law countries generally have the strongest, and French civil-law the weakest, legal protections of investors.” Protection of investors’ rights is important because, according to La Porta et al., “without these rights, investors would not be able to get paid, and therefore firms would find it harder to raise external finance.” This lack of free-flowing capital would be detrimental to the productivity, and hence welfare, of a country. Much like in AJR’s paper, the laws of a country, especially one that is a former colony, are received through “colonial transplantation.” Both papers talk about how different property-securing institutions led to different outcomes. Whereas AJR looks at the current reversal-of-fortune phenomenon, La Porta et al. look at the current state of different legal rules covering the protection of investors.

References

Acemoglu, Daron, Simon Johnson, and James A. Robinson (2002), “Reversal of fortune: Geography and institutions in the making of the modern world income distribution”, Quarterly Journal of Economics, 117(4):1231-94.
Austin, Gareth (2008), “The ‘reversal of fortune’ thesis and the compression of history: Perspectives from African and comparative economic history”, Journal of International Development, 20:996-1027.
Coase, Ronald (1998), “The new institutional economics”, The American Economic Review, 88(2):72-4.
La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert W. Vishny (1998), “Law and finance”, Journal of Political Economy, 106(6):1113-55.
North, Douglass C. (1991), “Institutions”, The Journal of Economic Perspectives, 5(1):97-112.
Williamson, Oliver E. (2000), “The new institutional economics: Taking stock, looking ahead”, Journal of Economic Literature, 38(3):595-613.

April 19, 2011

Trends in dissertation topics

Speaking of the road I'm traveling right now, it would be interesting to know if there's any trend in the subfields of economics chosen by PhD students for their dissertations. Well, one study finds that there is, and the factor influencing this trend is the set of topics being published in the top-notch research journals.

Sheng Guo (Florida International University) and Jungmin Lee (Sogang University) conducted an analysis of recent trends in the subfields of study that doctoral students in economics chose for their dissertations between 1991 and 2007:

"[W]e find that the trends in the subfields of study of doctoral dissertations follow those of articles published at five major general-interest journals (American Economic Review, Quarterly Journal of Economics, Journal of Political Economy, Review of Economic Studies, and Review of Economics and Statistics)... Our findings show that the subfield trends in dissertations are in accordance with the research trends in journal articles. The relationships hold strong even after we control for the job openings for the various subfields."

Guo and Lee used some simple regression methods to arrive at their conclusion. They emphasized, however, that they are not claiming any causality. The beauty of the results, for them, is that they may give some sense of what is really going on. I for one would not be surprised if this were really the case. If I needed suggestions for a dissertation topic, the journals would be among the first places I would look. The journals are a great way to learn where the frontiers of economics are and where you can try to contribute. Things you learn in the university are not enough for that; a professor can only tell you as much as his or her own fields of interest allow.

As for the subfields in which the co-movements were more pronounced:

"In particular, we find strong relationships between the dissertation topics and the published article topics in the subfields of Microeconomics; Health, Education and Welfare; and, Economic Development and Growth. It is interesting to note that each of these subfields have undergone substantial changes during the last twenty years."

So it would be interesting to see if there are new trends among the top journals with regards to the topic or subfields being published. I could certainly use a topic or two to start thinking about my own dissertation.

By the way, Guo and Lee's paper is also useful for one of their tables. Appendix Table 1 shows the annual average of economics PhD degrees granted from 1991 to 2007. Here are some universities included in the list:


Harvard University (32.9)
University of California, Berkeley (29.8)
University of Illinois (28.8)
University of Chicago (28.2)
Stanford University (25.9)
Massachusetts Institute of Technology (25.5)
University of Wisconsin (22.5)
Indiana University (9.8)
University of Missouri (8.2)
University of Massachusetts (7.7)

April 1, 2011

Forecast Error Variance Decomposition in STATA

Closely related to impulse response functions (IRFs) are the forecast error variance (FEV) and the forecast error variance decomposition (FEVD). To understand these two terms, let's take them word by word.

Let Γt denote an information set containing yt, as well as earlier values of y. The forecast of yt+1 made at time t is Ε[yt+1|Γt]. This is called a one-step-ahead forecast. The forecast error is the difference between the forecasted value and the value that eventually materializes:

εt+1 = yt+1 - Ε[yt+1t]

Generally speaking, and if we consider an autoregression (AR) process, the k-step-ahead forecast error is:

εt+k = Φ0et+k + Φ1et+k-1 + ... + Φk-1et+1
where Φi is, if you remember from a previous post, a matrix that contains the effects of a one-unit increase in innovation on the value of the y variable.

We have to treat positive and negative forecast errors symmetrically, so we square them. The result is none other than the FEV:

σ²(k) = Ε[(εt+k)²] = Ψ0Ψ0′ + Ψ1Ψ1′ + ... + Ψk-1Ψk-1′
To illustrate, let's go back to the example we used in our impulse response analysis. The resulting IRF's of up to 3 periods ahead were:


If we look only at yt up to 3 periods ahead, the FEV's are:


If you notice, since we're only looking at yt, calculating the FEV is just a matter of adding up the squares of the elements of the first rows of the matrices (the first rows correspond to yt in each period). Just remember that as we move further ahead in time, the sum is cumulative--the FEV at period t includes the contributions from all previous periods as well.

Now, as the FEV captures the effects on yt of impulse shocks from all sources, the FEVD basically separates the FEV into components attributed to each of these sources. In our example, since we have a bivariate VAR system, impulse shocks come from two sources, (ety,etx):


Of course, it is much easier to understand the FEVD if we express it in ratios. So, for example, the contribution of x's structural innovation to the FEV of y in t = 1 is 3.75 ÷ (6.25 + 3.75) = 0.375 or 37.5%. The contribution of y's structural innovation to its own FEV in t = 2 is 6.5 ÷ (6.5 + 3.75) = 0.63414 or 63.414%.
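The ratio arithmetic above is easy to replicate. Here is a minimal Python sketch; the IRF coefficients are hypothetical values I picked so that the squared contributions match the numbers in the text (6.25, 3.75, and a cumulative 6.5):

```python
import math

def fevd_share(own, cross, k):
    """Share of y's FEV attributable to its own shock at horizon k,
    given y's orthogonalized IRF coefficients to each shock source."""
    own_fev = sum(c * c for c in own[:k])
    cross_fev = sum(c * c for c in cross[:k])
    return own_fev / (own_fev + cross_fev)

irf_y_to_ey = [2.5, 0.5]               # y's responses to its own shock (hypothetical)
irf_y_to_ex = [math.sqrt(3.75), 0.0]   # y's responses to x's shock (hypothetical)

print(1 - fevd_share(irf_y_to_ey, irf_y_to_ex, 1))  # x's contribution at t = 1 (approx. 0.375)
print(fevd_share(irf_y_to_ey, irf_y_to_ex, 2))      # y's own contribution at t = 2 (approx. 0.634)
```

Note how the numerator and denominator both accumulate squared IRF elements across horizons, which is exactly the cumulative-sum property described above.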

Much like the IRF, the FEVD is easy to implement in STATA. Just use the IRF TABLE command with the FEVD option. So if we use the real GDP and real oil price data we had before, the commands and results are as follows:


Again, the NOCI option is there to suppress reporting of the confidence intervals. So, similar to the IRF table results, you use the footnotes as a guide to identify which variables are the impulse sources and which variables are the affected ones. In this bivariate system, for each row (which corresponds to a time period), columns (1) and (3) should add up to 1, and columns (2) and (4) should add up to 1 as well.

March 31, 2011

Information Criterion in STATA

As illustrated by yesterday's exercise, you might find yourself in a situation where you wonder how many lags to use when you set up an autoregression (AR) model. This is an important issue in economic modeling because, as much as we would like to put more variables in a model to capture the behavior of the dependent variable realistically, introducing more variables also introduces more estimation error. This modeling philosophy of parsimony was popularized by Box and Jenkins (1976, Time Series Analysis: Forecasting and Control, Holden-Day), who advocated using as few parameters as possible in modeling (the popularity of the parsimony philosophy at the time also coincided with Robert Lucas' famous critique).

There are many lag-order selection statistics, or information criteria, out there. The four most famous are:

1. Final prediction error (FPE) created by Hirotsugu Akaike (1969, "Fitting autoregressive models for prediction," Annals of the Institute of Statistical Mathematics, 21:243-47):


2. Akaike information criterion (AIC) also created by Akaike (1974, "A new look at the statistical model identification," IEEE Transactions on Automatic Control, 19(6):716-23):

AIC = -2 ln L + 2(p + q)
3. Bayesian information criterion (BIC) created by Gideon E. Schwarz (1978):

BIC = -2 ln L + (p + q) ln n
4. Hannan-Quinn information criterion (HQC) created by Edward J. Hannan and B. G. Quinn (1979):

HQC = -2 ln L + 2(p + q) ln(ln n)
For all the formulas above, p is the number of AR lags, q is the number of moving average (MA) lags (yes, these statistics are applicable to ARMA models), n is the number of observations, and ln L is the maximized value of the log-likelihood function.

The FPE is used primarily for AR models whereas the last three are for general ARMA models. As you can see, these three are similar in the sense that they contain two terms: the first term captures the advantage of having more variables, in that the model's fit goes up; the second term captures the disadvantage--this is where the philosophy of parsimony kicks in. In all four, the lowest value indicates the most appropriate number of lags.
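The three general-ARMA criteria are one-liners to compute by hand. Here is a sketch in Python (the log-likelihood value in the example call is hypothetical):

```python
import math

def aic(loglik, p, q):
    """Akaike information criterion: -2 lnL + 2(p+q)."""
    return -2 * loglik + 2 * (p + q)

def bic(loglik, p, q, n):
    """Bayesian information criterion: -2 lnL + (p+q) ln n."""
    return -2 * loglik + (p + q) * math.log(n)

def hqc(loglik, p, q, n):
    """Hannan-Quinn criterion: -2 lnL + 2(p+q) ln(ln n)."""
    return -2 * loglik + 2 * (p + q) * math.log(math.log(n))

# A hypothetical ARMA(2,2) fit with log-likelihood -850 on 600 observations:
print(aic(-850, 2, 2))       # 1708
print(bic(-850, 2, 2, 600))  # heavier penalty per parameter once ln n > 2
print(hqc(-850, 2, 2, 600))
```

You would compute each criterion for every candidate specification and keep the one with the lowest value, just as described above.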

As I showed yesterday, you can easily calculate these statistics in STATA after estimation with the command VARSOC. But if you want to calculate these statistics directly in STATA, read on. To illustrate, suppose we simulate an ARMA(2,2) process exactly as we did before:


set seed 1
sim_arma y, arcoef(.66666667 -.11111111) macoef(.25 .25) et(e) nobs(600) sigma(1) time(time)


Since we simulated the data this way, we know that the correct model has two AR lags and two MA lags. So let's check three variants of this model: the correct specification; a specification with only one MA lag; and a specification with three MA lags. Throughout, we use the correct number of AR lags (that is, two).

We first estimate using the ARIMA command, which uses the maximum likelihood method:


arima y, ar(1/2) ma(1/2) nocons


You just replace ma(1/2) with ma(1/1) for one lag or ma(1/3) for three lags. Then, after each estimation, we calculate the information criteria. To calculate the AIC:


di ((-2)*e(ll))+(2*(e(ar_max)+e(ma_max)))


To calculate BIC:


di ((-2)*e(ll))+((e(ar_max)+e(ma_max))*(ln(e(N))))


Finally, to calculate HQC:


di ((-2)*e(ll))+(2*(e(ar_max)+e(ma_max))*(ln(ln(e(N)))))


These commands simply display results stored by STATA after estimation: e(ll) is the maximized log-likelihood value, e(ar_max) is the number of AR lags, e(ma_max) is the number of MA lags, and e(N) is the number of observations. The resulting statistics are compiled in the table below:


Based on the results above, the model with two MA lags has the lowest value under all three criteria, which means we should use two MA lags. This is not unexpected, as the data we simulated do follow a two-lag AR/two-lag MA process.

In closing, it's also easy to create an information criterion of your own, provided that your proposed formula captures two things: the advantage of having more variables, and a penalty for having more of them. I even created one of my own. I call it the Newey-Akaike information criterion, or NAIC. It's a cool name since the acronym also looks like an anagram of my last name. The other reason for the name is that the criterion I propose is what I think is a mix of the AIC and a formula for the lag selection parameter I adopted from Newey and West (1994)--so the "N" is for Newey and West and the "A" is for Akaike:

NAIC = -2 ln L + 2(p + q) ln[3(n/100)^(2/25)]
NAIC captures the advantage of having more variables (the first term, which is no different from the others) and the disadvantage of having them (the second term). We can go through the same process again, using the following command to calculate NAIC:


di ((-2)*e(ll))+(2*(e(ar_max)+e(ma_max))*ln(3*((e(N)/100)^(2/25))))


The result of using this proposed criterion is as follows:


As it turns out, the results above show that NAIC is also a valid criterion. NAIC indicates that the appropriate number of MA lags is two, that specification having the lowest value--as it should be.
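For completeness, here is the proposed NAIC transcribed from the STATA command above into Python (the log-likelihood value in the example is hypothetical):

```python
import math

def naic(loglik, p, q, n):
    """Proposed NAIC: -2 lnL + 2(p+q) * ln(3 * (n/100)^(2/25))."""
    return -2 * loglik + 2 * (p + q) * math.log(3 * (n / 100) ** (2 / 25))

# Hypothetical ARMA(2,2) fit with log-likelihood -850 on 600 observations:
print(naic(-850, 2, 2, 600))
```

Since 3(n/100)^(2/25) exceeds 1 for any reasonable sample size, the second term is a genuine penalty that grows with the number of parameters, as a criterion of this form requires.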

March 30, 2011

Impulse Response Function in STATA

Impulse response analysis is important in time series work for determining the effects of external shocks on the variables of a system. Simply put, an impulse response function (IRF) shows how an unexpected change in one variable at the beginning affects another variable through time. It is so widely applicable that we can use it in our previous analysis of the relationship between GDP and oil prices.

It should be emphasized that we are not looking at how one variable (oil prices, for example) affects another variable (GDP, for example). We can easily look at the coefficients to know that. What we are looking at is how unexpected changes that directly affect oil prices go on to affect GDP. In a sense, we are looking at shocks coming from the error term related to oil prices, and how such shocks change GDP.

Now, we're not going to discuss impulse response functions the easy way. Before we go into using STATA to compute the impulse response functions, we're going to look at the econometrics behind it. The formula for an IRF is:

Ψi = ΦiB-1Λ½

where B-1 is the inverse of the matrix of coefficients of all the variables at time t; Λ½ is the lower Cholesky decomposition of the variance-covariance matrix of et (both Λ and Λ½ are diagonal matrices); and Φi is another matrix that contains the effects of a one-unit increase in innovation at date t (et) on the value of the y variable at time t+s:


For example, if we have two variables, (yt, xt), and we're looking at how the error terms, (eyt, ext), affect each of the two variables, the IRF can be summarized as:


Of course, the elements of the matrix are different at each point in time, as we will see shortly.
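As a purely hypothetical numerical sketch of the formula Ψi = ΦiB-1Λ½, here is the impact-period response Ψ0 computed in Python for a bivariate system, using Φ0 = I, an assumed coefficient matrix B, and the diagonal Λ that appears later in this post:

```python
import math

def mat2_inv(b):
    """Inverse of a 2x2 matrix (requires a nonzero determinant)."""
    det = b[0][0] * b[1][1] - b[0][1] * b[1][0]
    return [[ b[1][1] / det, -b[0][1] / det],
            [-b[1][0] / det,  b[0][0] / det]]

def mat2_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[1.0, 0.5], [0.5, 1.0]]                     # hypothetical contemporaneous coefficients
lam_half = [[2.0, 0.0], [0.0, math.sqrt(3.75)]]  # Cholesky factor of the diagonal Λ

psi0 = mat2_mul(mat2_inv(B), lam_half)           # Ψ0 = Φ0 B^{-1} Λ^{1/2}, with Φ0 = I
print(psi0)
```

For later horizons you would pre-multiply by the corresponding Φi instead of the identity; the point of the sketch is just the order of the matrix operations.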

Now, let's put numbers behind the IRF. Suppose we are analyzing a vector autoregression (VAR) system. We are interested in the following structural vector autoregression (SVAR):


where zt = (yt,xt)'.

But of course, if we want to estimate an SVAR, we instead estimate an observationally equivalent reduced-form vector autoregression (RFVAR), simply because it's easier to estimate:


where Σ is the variance-covariance matrix of the RFVAR error term (ε).

Now, since both forms of VAR are equivalent, it should be that:



Assuming invertibility holds (the matrix of coefficients is nonsingular--it has a nonzero determinant), we can derive the series of Φi by looking at the MA(∞) representation of zt:


Finally, for the last element of the IRF (Λ½), we make use of the following formula:


then we apply a Cholesky decomposition to get Λ½. In STATA, we use the CHOLESKY function to derive the Cholesky decomposition of a matrix. For example, given that Λ is:


To derive the Cholesky decomposition in STATA, we simply use the following commands:


matrix a=(4,0\0,3.75)
matrix b=cholesky(a)


The first line is where I input the 2×2 matrix and name it a; b is the resulting Cholesky decomposition.
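The same decomposition is easy to verify by hand. Here is a minimal Python sketch for the 2×2 case, applied to the same matrix as above (for a diagonal matrix, the factor is just the square roots of the diagonal entries):

```python
import math

def cholesky2(a):
    """Lower Cholesky factor L of a symmetric positive-definite 2x2 matrix (a = LL')."""
    l11 = math.sqrt(a[0][0])
    l21 = a[1][0] / l11
    l22 = math.sqrt(a[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

L = cholesky2([[4.0, 0.0], [0.0, 3.75]])
print(L)  # diagonal entries are 2 and the square root of 3.75
```

Multiplying the result by its own transpose recovers the original matrix, which is the defining property of the decomposition.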

Alternatively, we can get Λ½ directly by applying another formula:


Σ½ is the lower Cholesky decomposition of the variance-covariance matrix of the RFVAR error term (εt). Applying the formula, we get:


Now, we can apply the formula to get the IRF. Suppose we want to compute the responses for t = 0, 1, 2:


For example, if we want to know the response of yt to a one-standard-deviation shock in ext, we get 0 for the first period, -(1/2)(15)½ for the second period, and 0 for the third period.

Now that we have the algebra and the econometrics out of the way, let's look at implementing these in STATA. It's much simpler than the procedure above.

Let us use the data from our previous GDP-oil price analysis. Using that data (already in first-difference log forms), we run the original VAR command with 4 lags:


Before we proceed, we can check whether we really need four lags by obtaining lag-order selection statistics. The STATA command VARSOC shows four information criteria (I'll discuss these more tomorrow) that indicate how many lags are most appropriate:


It seems that we only need a single lag (check the lag marked with an *). So, rerunning the VAR with only one lag, we get:


Then as a post-estimation command, we run STATA's IRF command after the VAR estimation:


The first line is needed because STATA needs an active file where the results of the impulse response analysis are kept.

As the footnotes indicate, the first column displays the response of GDP to a one-standard-deviation shock in eGDP. The second column shows the response of oil to a shock in eGDP. The third and fourth columns show the effects of a shock in eoil on GDP and oil, respectively. The table shows up to nine time periods (quarters in this case).

The NOCI option is there to suppress reporting of the confidence interval. Of course, you can show the intervals by not including this option. Another option that might be useful for you is STDERROR, which shows the standard errors.

STATA provides a very convenient tool for impulse response analysis. The IRF command can also create graphs, which is useful if you prefer a visual look instead of poring over the numbers.

March 29, 2011

Exclusion Test Using STATA

I haven't been traveling lately as I have been very, very busy. Let me make up for it by sharing some of the things that have kept me busy--STATA stuff. We started yesterday with how to simulate an ARMA sequence. Now, for a more practical use: how to do an exclusion test using STATA. An exclusion test is basically an F-test of whether one or more variables are significant in explaining the dependent variable.

For example, there's an issue of whether oil price shocks have a symmetric impact on GDP growth--that is, do both oil price increases and oil price decreases affect real GDP growth, or is the relationship only significant for an increase in oil prices? Lee, Ni, and Ratti (1995) found that positive normalized shocks have a powerful effect on growth while negative normalized shocks do not. Their results, however, are based on the premise that an oil price change is likely to have a greater impact on real GNP in an environment where oil prices have been stable than in an environment where oil price movements have been frequent and erratic. On the other side of the spectrum, we find works such as Kilian and Vigfusson (2009). Using an alternative approach, they find that impulse responses are actually of roughly the same magnitude in either direction of the oil price change, a result consistent with formal tests of symmetric responses.

We can do a simple test of symmetry on our own with the use of STATA. All we need first is data, which I got from the excellent Economic Research Department of the Federal Reserve Bank of St. Louis. We just need quarterly data on real GDP, West Texas Intermediate (WTI) crude oil prices, and the producer price index (PPI). We calculate real crude oil prices by dividing WTI by PPI. We then take the natural log and the first difference of these two variables to approximate growth rates (DLRGDP and DLROIL). Finally, to test for symmetry, we create one series consisting of only the positive elements of the oil price changes, with negative changes set to zero (DLROILP), and another consisting of only the negative elements, with positive changes set to zero (DLROILN).
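The variable construction just described (log first differences, then a positive part and a negative part) can be sketched in a few lines of Python; the price series here is made up purely for illustration:

```python
import math

def dlog(series):
    """First difference of natural logs -- approximate growth rates."""
    return [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

def split_pos_neg(changes):
    """The DLROILP/DLROILN construction: keep one sign, set the other to zero."""
    pos = [max(c, 0.0) for c in changes]
    neg = [min(c, 0.0) for c in changes]
    return pos, neg

roil = [0.30, 0.33, 0.31, 0.31, 0.36]  # hypothetical real oil prices (WTI / PPI)
dlroil = dlog(roil)
dlroilp, dlroiln = split_pos_neg(dlroil)
print([round(c, 3) for c in dlroilp])  # only the increases survive
print([round(c, 3) for c in dlroiln])  # only the decreases survive
```

By construction, the two new series sum back to the original change series, so nothing is lost in the split; the regression simply gets to assign separate coefficients to each sign.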

We use a bivariate VAR with 4 quarterly lags for our model. The exclusion test then proceeds as follows: (1) estimate the whole model--DLRGDP, DLROILP and DLROILN; (2) save the results in STATA; (3) run a second estimation, this time excluding either DLROILP or DLROILN; (4) save the second set of results in STATA; and (5) run the F-test that the excluded lagged variables of DLROILP/DLROILN are indeed not significantly different from zero. The following STATA commands are used:


reg dlrgdp l.dlrgdp l2.dlrgdp l3.dlrgdp l4.dlrgdp l.dlroilp l2.dlroilp l3.dlroilp l4.dlroilp l.dlroiln l2.dlroiln l3.dlroiln l4.dlroiln

est store a

reg dlrgdp l.dlrgdp l2.dlrgdp l3.dlrgdp l4.dlrgdp l.dlroiln l2.dlroiln l3.dlroiln l4.dlroiln

est store b

ftest a b


a and b are arbitrary names I assigned to the two sets of estimates. The result of the F-test is as follows:


The exclusion test for real oil price increases is highly significant while that for real oil price decreases is not. These results confirm an effect of oil price increases on real GDP growth but none for real oil price decreases--there is asymmetry in this case. Although I don't present it here, the coefficients of the lagged positive oil price shocks are all negative across the four quarters (significant in the second and fourth), indicating that positive oil price shocks have negative effects on real GDP growth.

We could also check the symmetry of oil price shocks with respect to the overall price level. Data on the CPI can also be obtained from the St. Louis Fed website. The hypothesis is that establishments are quick to raise the prices of the commodities they sell when they see that the world oil price has increased, but if the world oil price decreases, the adjustments in their prices are slow, if prices change at all. We can check this empirically by going through the same procedure as above--this time with the first difference of log CPI as the dependent variable (and using nominal oil prices instead of real oil prices). The results are:


Well, there's also asymmetry in the effects of oil price shocks on the consumer price index. Increases in oil prices are significant, but decreases are not. Again, I did not show it here, but the coefficients of the lagged positive oil price shocks are positive (except in the fourth quarter). This indicates that increases in oil prices are associated with increases in overall consumer prices.

March 28, 2011

Simulating an ARMA Process using STATA

If you need to simulate an ARMA process (provided you already know the coefficients of both the AR component and the MA component), you can use STATA to do so. What you need is the SIM_ARMA program created by Jeff Pitblado. You can download this program from the Boston College STATA program repository.

For example, suppose you want to simulate an ARMA(2,2) process with:

α(z) = 1 - 2/3 z + 1/9 z²
β(z) = 1 + ¼ z + ¼ z²
εt ∼ WN(0,1)
n = 600

You use the following STATA command:

sim_arma y, arcoef(.66666667 -.11111111) macoef(.25 .25) et(e) nobs(600) sigma(1) time(time)

y is the resulting simulated series. The numbers inside the parentheses of arcoef and macoef assign the coefficients of the AR component and the MA component, respectively. Here, I use decimal numbers since the program does not seem to read fractions (STATA seems to read numbers with the "/" sign as an interval). et assigns the name of the error term while time assigns the name of the time variable. Finally, sigma assigns the standard deviation of the error term.
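If you don't have SIM_ARMA handy, the same process is easy to simulate by hand. Here is a minimal Python sketch with the same coefficients; the burn-in length is my own choice, added so that start-up effects wash out (SIM_ARMA's internal handling of start-up values may differ):

```python
import random

def sim_arma(ar, ma, n, sigma=1.0, burn=100, seed=1):
    """Simulate y_t = sum_i ar[i]*y_{t-1-i} + e_t + sum_j ma[j]*e_{t-1-j}."""
    random.seed(seed)
    p, q = len(ar), len(ma)
    total = n + burn
    e = [random.gauss(0.0, sigma) for _ in range(total)]
    y = [0.0] * total
    for t in range(total):
        y[t] = e[t]
        y[t] += sum(ar[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        y[t] += sum(ma[j] * e[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
    return y[burn:]  # drop the burn-in

y = sim_arma([2/3, -1/9], [0.25, 0.25], n=600)
print(len(y))  # 600
```

Note the AR coefficients are 2/3 and -1/9: the AR polynomial α(z) = 1 - 2/3 z + 1/9 z² enters the recursion with its signs flipped, which is also why the sim_arma command takes -.11111111 as its second AR coefficient.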

February 3, 2011

Most Influential Economists

According to the Economist magazine, when it asked the experts at its Economics by Invitation forum which economists were most influential over the past decade, these were the top ten:

1. Ben Bernanke
2. John Maynard Keynes
3. Jeffrey Sachs
4. Hyman Minsky
5. Paul Krugman
6. Adam Smith
7. Robert Lucas
8. Joseph Stiglitz
9. Friedrich Hayek
10. Alan Greenspan

There was a follow-up question: which economists have the most important ideas in a post-crisis world? The votes were:

1. Raghuram Rajan
2. Robert Shiller
3. Kenneth Rogoff
4. Barry Eichengreen
5. Nouriel Roubini

There were other nominations, but you'll have to go to the Economist's website to see them. Of course, they admitted a caveat to this poll:

"This obviously isn't a scientific poll of the profession, but it is interesting to get some sense of from where the profession sees its influence emanating."

Most of them, I would agree with; some, I am not aware of. But, to reiterate what the Economist mentioned, they all seem to have had strong economic ideas with which they managed to influence the voters.

January 25, 2011

Revolutionizing tests

Suppose there's a test, and it's made extremely difficult. It's natural to expect that this kind of screening mechanism would result in high-quality passers. I would expect so too. Then again, we could be wrong.

In their latest Economics E-Journal paper entitled "Tougher Educational Exam Leading to Worse Selection," Eduardo Andrade and Luciano de Castro show us instead the possibility of a counterintuitive result: an increase in exam difficulty may reduce the average quality of the selected individuals. Their study seems more focused on the labor market, but the authors also applied their analysis to teachers and students:

"This apparently counterintuitive fact arises because tests do not emphasize all abilities that are important for job performance. A large number of papers show that noncognitive skills not tested in exams are important determinants of the performance in the labor market. When the standard rises, at the margin candidates with relatively low cognitive skills but high noncognitive skills decide not to make the effort to meet the new standard. Candidates who succeed display more cognitive skills but the average level of noncognitive skills falls. As all skills contribute the workers' productivity in the market, the net effect may be a reduction on the average quality (productivity) of those individuals who pass the standard."

The bottom line, I think, is that Andrade and de Castro's main message is that these tests have to take non-cognitive aspects into consideration, since those are likewise important--not just in the labor market but in a school setting as well. I tend to agree more with the point that tests have to be customized to what they are intended for in the first place--and this seems to be one of their recommendations:

"Our results also offer a testable implication: the test is more effective in enhancing productivity when the mix of skills tested is closer to the set of skills needed in the job... it is more important to design the exam in order to test skills directly relevant to the jobs than to raise the standard."

January 24, 2011

Taller people are smart too

It has long been a classic finding (not referring to the school of thought) that taller workers receive a substantial wage premium, and this premium is attributed to non-cognitive abilities. But what about the accepted standard that intelligence (and hence cognitive ability) explains the often-cited skill-biased wage premium?

Well, in their latest NBER working paper entitled "Height as a Proxy for Cognitive and Non-Cognitive Ability," Andreas Schick and Richard Steckel may have managed to bridge that gap. They recognize that nutrition, which is a determinant of adult height, is also important to cognitive and non-cognitive development:

"Using data from Britain’s National Childhood Development Study (NCDS), we show that taller children have higher average cognitive and non-cognitive test scores, and that each aptitude accounts for a substantial and roughly equal portion of the stature premium."

So, the bottom line is, it shouldn't be surprising that taller people generally have higher wages. Taller people have higher wages because they're smart enough to land a high-paying job. Or at least the firms hiring them think they are smart enough to be given high-paying jobs.

This also explains why most athletic people I know are very intelligent. I mean think about it. You have to be very smart to also have very good hand-eye coordination (basketball players, quarterbacks, receivers, etc.).

January 17, 2011

How effective is the reserve requirement as a monetary tool? Another historical perspective

Maybe not as effective as the Federal Reserve System might think. At least not in the 1930s. In their latest NBER working paper, Charles Calomiris, Joseph Mason, and David Wheelock adopt a microeconomic approach to answering the question, "Did Doubling Reserve Requirements Cause the Recession of 1937-1938?" Their answer: no.

"[W]e find that despite being doubled, reserve requirements were not binding on bank reserve demand in 1936 and 1937, and therefore could not have produced a significant contraction in the money multiplier. To the extent that increases in reserve demand occurred from 1935 to 1937, they reflected fundamental changes in the determinants of reserve demand and not changes in reserve requirements."

There is a growing consensus that reserve requirements are no longer very effective, as they have become less binding for most banks. This is one case where government intervention is unnecessary: the market (the banks) will adjust its reserve holdings to its changing environment on its own, subtly building a safety net against bankruptcy.

Such a safety net is what the Fed's reserve requirements are there for in the first place, so there is no need for the government to add redundancy here. By the same token, using reserve requirements to control the money supply is also unnecessary, since the market inadvertently renders that policy tool non-binding.
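To see why a non-binding requirement leaves the money multiplier untouched, here is a minimal sketch (my own illustration, not from the paper) using the textbook multiplier m = 1/r, where the effective reserve ratio is whichever is higher: the required ratio or the ratio banks choose to hold anyway.

```python
# Illustrative sketch (not from the paper): the textbook money multiplier
# is m = 1 / r, where r is the ratio of reserves banks actually hold.
# If banks voluntarily hold more reserves than required, the effective
# ratio is their desired ratio, and raising the requirement to any level
# below that changes nothing.

def money_multiplier(required_ratio, desired_ratio):
    """Effective multiplier when banks hold max(required, desired) reserves."""
    effective = max(required_ratio, desired_ratio)
    return 1 / effective

# Suppose banks want to hold 30% reserves regardless of the rule.
before = money_multiplier(required_ratio=0.10, desired_ratio=0.30)
after = money_multiplier(required_ratio=0.20, desired_ratio=0.30)  # requirement doubled
print(before, after)  # identical: the doubled requirement was not binding
```

Only if the requirement were raised above the banks' desired ratio (say, to 50%) would the multiplier actually contract.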

January 15, 2011

Choose your children's classmates

It seems it matters that you enroll your kid in a class of his peers, and when I say his peers, I mean children of your kid's own age. In his latest Policy Research Working Paper, Liang Choon Wang of the World Bank finds that mixing different ages in one class has a negative effect on that class's academic achievement.

After analyzing exogenous variation in the classroom variance of student age in 14 developing countries to examine its effects on student achievement, Wang finds that:

"[G]reater classroom age variance leads to lower fourth graders’ achievement in mathematics and science. For every one month increase in the classroom standard deviation of student age, average achievement falls by 0.03 standard deviations for both math and science."

It should also be noted that the detrimental effect is confined to academic performance; there are no significant negative effects on student behavior. Expectedly, Wang recommends age grouping rather than age mixing in schools in order to achieve higher average academic achievement.
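As a back-of-the-envelope check, the reported effect size works out as follows (a hypothetical illustration of the linear estimate quoted above, not Wang's actual code):

```python
# Hypothetical illustration of the reported linear effect: average
# achievement falls 0.03 standard deviations for every one-month
# increase in the classroom standard deviation of student age.
EFFECT_PER_MONTH = -0.03  # in standard deviations of achievement

def predicted_achievement_change(age_sd_months):
    """Predicted change in average achievement (in SD units of the test)."""
    return EFFECT_PER_MONTH * age_sd_months

# Compare a mixed-age class whose age SD is 12 months with a
# single-cohort class whose age SD is 4 months:
diff = predicted_achievement_change(12) - predicted_achievement_change(4)
print(round(diff, 2))  # -0.24: about a quarter of a standard deviation lower
```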

It would be interesting to analyze further the distribution of grades among the students affected, since Wang's results are based only on the average grade. Some students may have benefited from the age-sorting scheme. In particular, it would not be surprising to find skewness in the distribution, especially if the older students turn out to be the ones getting the higher grades.

January 14, 2011

Demand for household maids might go up (or why it's okay for husbands if their wives are not working)

That is, so that wives can concentrate on doing housework. According to the latest paper by Mark Bryan and Almudena Sevilla-Sanz in the journal Oxford Economic Papers, it seems that husbands, and even their wives, will have lower wages if they engage in housework while holding full-time jobs:

"[H]ousework has a negative impact on the wages of men and women, both married and single, who work full-time. Among women working part-time, only single women suffer a housework penalty. The housework penalty is uniform across occupations within full-time jobs but some part-time jobs appear to be more compatible with housework than others."

It's even worse if they have children.

Now, the authors are careful to point out that the causality may not run the one way you would normally infer from this finding: "[I]ndividuals with more housework responsibilities may be less career oriented and thus earn lower wages." In other words, they choose lower-paying jobs because they know they will spend some of their productive time on household chores.

As the world becomes more and more sophisticated, the preference of husbands that their wives concentrate on the home and the kids is no longer just a traditional view. There is now a bigger economic reason behind it.

Either that or just simply hire a maid.

By the way, today is my father's birthday, and I find this paper very appropriate for today's post. You see, my mother stayed home and became a housewife. It was a mutual decision between my parents, so that my father could concentrate on work while my mother took care of the household and the children. Fortunately, such an arrangement turned out well: I find myself and my siblings in good standing. Now, some would say my parents are the traditional type, but then again, my father kept up with the times (clever one that he is), and it was primarily an economic reason that drove my parents' decision.

So, Happy Birthday Pa. Thanks for being an economic thinker.

January 13, 2011

Patience is indeed a virtue

Happy New Year to all! To commemorate the new year, I found this interesting paper that is surprisingly appropriate for yours truly. You see, one of my new year's resolutions to my family is to become more patient. On many occasions, I find myself so busy with many things at the same time that I regrettably lose my temper at a family member. Discovering this excellent paper is a good way to start keeping my resolution for the new year.

The old proverb "Patience is a virtue" is one of those ageless quotes that seems to ring true every now and then. Experimental economics has taken it even further.

In their latest paper for the Institute for the Study of Labor, Matthias Sutter, Martin Kocher, Daniela Rützler, and Stefan Trautmann studied 661 children and adolescents in an experiment and found a link between impatience on the one hand and health and savings decisions on the other:

"More impatient children and adolescents are more likely to spend money on alcohol and cigarettes, have a higher body mass index (BMI) and are less likely to save money."

I knew there was wisdom behind this old proverb. So it seems it's not just that impatience causes stress, which is eventually detrimental to your health (then again, maybe it is stress that causes one to resort to alcohol and cigarettes).

In the same way, for other old proverbs, modern economics can give us tools to check which old quotes hold up and which do not. Sutter and his co-authors have given proof of how true this one is.

Now, what do old quotations say about the cause of impatience? That could help me too.