This article originally appeared on the BeyeNETWORK.
With elections coming up in November and a new Congress – the 110th – scheduled to be elected, it seems like an appropriate time to look at how seats in the House of Representatives come to be apportioned among the states.
A couple of years ago, I was invited to lecture at a symposium marking the role the Census Bureau has played in America’s growth. At that event, I heard Dr. Peyton Young, Co-Director of the Center on Social and Economic Dynamics at the Brookings Institution, give a fascinating lecture on the history of Congressional apportionment. It introduced me to a classic example of data collection, algorithms and debate being brought to bear in the public policy arena to solve a key challenge in the process of governing ourselves. In effect, parliamentary apportionment in a representative democracy is arguably the original application of business intelligence in the public sector.
The founding fathers realized that for true democracy to work, there would have to be a fair way for each citizen to have his or her say in the affairs of government. Because it was both impractical as well as inefficient for everyone to vote on every issue, the concept of representative democracy took shape. One key question the first Congress had to answer was how to implement representation in order to comply in the best way possible with the concept of “one man, one vote” implicit in the Declaration of Independence and shaped by multiple Supreme Court decisions underpinning this democratic ideal.
The Constitution kicked off the debate by stating in Article I, Section 2, Clause 3:
Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct. The Number of Representatives shall not exceed one for every thirty Thousand, but each State shall have at Least one Representative; and until such enumeration shall be made, the State of New Hampshire shall be entitled to chuse three, Massachusetts eight, Rhode-Island and Providence Plantations one, Connecticut five, New-York six, New Jersey four, Pennsylvania eight, Delaware one, Maryland six, Virginia ten, North Carolina five, South Carolina five, and Georgia three.
Leaving aside the issues of morality raised by who was counted as a full person, who was counted as a fraction and who was just not counted, this clause set the wheels in motion for an answer to the question: How are citizens to be fairly represented?
Let us remember first that our Congress is made up of two chambers: the Senate and the House of Representatives. The Constitution stipulates that every state, independent of size, will be represented by two senators, each elected to a six-year term. This was the way the founding fathers recognized the concept of the confederation of states that is at the heart of our “federal” system. But the House of Representatives, where members serve for terms of only two years, was intended to be composed of delegates from each state in direct proportion to their population. Hence, it was always expected that the more populous states would have more power in the House, through a greater number of votes, than the less populous states.
The data collection, in a sense, is the easiest part. It is mandated by the Constitution, as we have seen, and it happens every ten years. It has become such a monumental effort that almost one million people were needed to complete the 2000 census. Furthermore, there is ongoing debate over whether the more accurate count is obtained by actually enumerating – counting one by one – all the people that live in the United States, or by using sophisticated sampling techniques. (While expert demographers prefer the latter, Congress has insisted on the former.)
The apportionment population of the United States according to the 2000 decennial census was 281,424,177. That is not the total population of the U.S., but it is close. The apportionment calculation totals all citizens and noncitizens of the 50 states, including the military and federal civilian employees stationed outside the United States (and their dependents living with them) that can be allocated back to a home state. It notably excludes the population of the District of Columbia.
So how are the representatives apportioned based on this total population? This is where the algorithms come in and the fun really starts. First, let’s mention the constraints. We have seen that the Constitution grants each state at least one Representative independent of the state’s size. Furthermore, Congress fixes, through statute, the size of the House of Representatives. Currently that number is 435, corresponding to that same number of Congressional districts nationwide.
Congress likewise decides the procedure for apportioning the number of seats among the states to determine how many members of the House each state will have. This happens after every decennial census. (Incidentally, the Constitution leaves redistricting, or the design of the boundaries of Congressional districts, to each state. When those boundaries are drawn to favor a particular party or incumbent, the practice is known as “gerrymandering,” given some of the gyrations that state politicians have gone through in order to shape the political geographies in their favor.)
When the results from the census are delivered, it typically renews the debate over how to apportion the seats in Congress. Currently, we know that California has 53 representatives; Texas, 32; New York, 29; Florida, 25; and so on with Arkansas, Delaware, Montana, North and South Dakota, Vermont and Wyoming each having only one seat. So how exactly does the allocation get done? What algorithms do they use for the process?
Well, let’s talk about the algorithms. Some of the most prominent names in our history have taken a stab at what they thought were the best algorithms. John Adams, Thomas Jefferson, Alexander Hamilton and Daniel Webster have all suggested approaches. The devilish challenge they all had to grapple with is that a mathematically perfect allocation of House seats, in exact proportion to population, would require fractions of a representative serving in Congress, which is not possible. According to the 2000 census, California, for example, has an apportionment population of 33,930,798 and hence 12.06% of the total; Texas, New York and Florida have 7.43%, 6.75% and 5.70%, respectively. That means that their quotas of the 435 available seats in the House should be in direct proportion to these percentages, and thus 52.447, 32.312, 29.376 and 24.776. But since each state can only have a whole number of members, a decision had to be made on what to do with the fractions or remainders. Jefferson suggested rounding down; Adams suggested rounding up; Webster suggested rounding at the mid-point (0.5); and on and on and on.
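The arithmetic behind these quotas, and the rounding rule each man proposed, can be sketched in a few lines of Python. (This is an illustration of the rounding rules only; the full Jefferson, Adams and Webster methods also adjust the divisor until the rounded quotas sum to exactly 435.)

```python
import math

TOTAL = 281_424_177   # 2000 apportionment population of the U.S.
SEATS = 435           # statutory size of the House
CA = 33_930_798       # California's 2000 apportionment population

# California's "fair share" of the 435 seats
quota = CA / TOTAL * SEATS
print(round(quota, 3))              # ~52.447, as quoted above

# The three classic rounding rules applied to that fractional quota:
jefferson = math.floor(quota)       # round down          -> 52
adams     = math.ceil(quota)        # round up            -> 53
webster   = round(quota)            # round at mid-point  -> 52
```

Even on this single state, the three rules already disagree by a seat, which is exactly why the choice of method mattered so much politically.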
That is where the debate starts. Michel Balinski and Peyton Young (Fair Representation, Brookings Press, 2001) do a fantastic job of narrating the action, analyzing its political significance and describing the underlying math. The controversy began with the very first apportionment bill: in 1792, President Washington vetoed a bill embodying Hamilton’s method, in the first exercise of the presidential veto power.
One element of the debate was the early rivalry between the small states and the large states. The Jefferson method favored the large states – his home state, Virginia, being the largest at the time – while Adams’ seemed to favor the small states. Webster’s algorithm was the most unbiased, but only if there were no fixed number of seats for the House. Obviously, there was a significant amount of acrimony over the issue, because each state’s ability to influence Federal decisions hangs in the balance. Congress has used different methods over the centuries and is the sole arbiter of which one shall be used each time the results of a new decennial census are delivered.
Most apportionment algorithms depend on establishing a target number of persons that should be represented by one member of the House. This becomes the ideal size of a Congressional district, and the number becomes the divisor used to establish a state’s quota. But depending on the actual divisor and which “divisor” method was used, the results often left a lot to be desired. In one case, a state would actually lose a seat when the total number of representatives in the House was increased by one. “How can one state lose a seat when there are more to go around?” were the shouts heard in Congress when the so-called “Alabama Paradox” reared its ugly head. In another, the so-called “population paradox” emerged, whereby a state growing faster than another in the period between two decennial censuses would nonetheless lose a seat to the state whose population had decreased. A third dilemma, deemed not only unfair but nonsensical, was the “New States Paradox.” When Oklahoma became a state in 1907, its population proportionally entitled it to 5 seats; and depending on the apportionment method used, adding those seats would have forced New York to give up a seat and Maine to gain one, even though the populations of the latter two states had not changed relative to each other. In fact, Balinski and Young proved that no apportionment method can avoid all of these paradoxes while keeping each state within its fair quota – in that precise sense, perfect “one man, one vote” representation is a mathematical impossibility.
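The Alabama paradox is easy to reproduce. Here is a minimal sketch of Hamilton’s largest-remainder method, applied to a tiny made-up three-state example (the populations 6, 6 and 2 are a textbook illustration, not census figures). Growing the house from 10 to 11 seats costs state C a seat:

```python
import math

def hamilton(populations, house_size):
    """Largest-remainder (Hamilton) apportionment."""
    total = sum(populations)
    quotas = [p / total * house_size for p in populations]
    seats = [math.floor(q) for q in quotas]        # start at the lower quota
    leftover = house_size - sum(seats)
    # hand out the remaining seats by largest fractional remainder
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in by_remainder[:leftover]:
        seats[i] += 1
    return seats

pops = [6, 6, 2]                    # states A, B, C (illustrative)
print(hamilton(pops, 10))           # [4, 4, 2]
print(hamilton(pops, 11))           # [5, 5, 1]  <- C loses a seat
```

With 10 seats, C’s large remainder (0.43) wins it the spare seat; with 11 seats, the remainders of A and B grow faster than C’s, and the two spare seats go to them instead.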
Hence, there evolved what Balinski and Young call a theory of apportionment, aspiring simply to steer a course of proportionality and fairness in the distribution of Congressional seats. From this theory came a steady progression of algorithms with arithmetical monikers such as the methods of “largest divisors,” “smallest divisors,” “major fractions,” “harmonic means” and “largest fractional remainders.”
So how is it done today? Again, recalling that after each decennial census the debate begins anew, the current approach is still the so-called Hill method or Method of Equal Proportions, named after Joseph A. Hill, Chief Statistician of the Census Bureau, who developed the approach in 1911. The Hill algorithm starts with the House fixing the total number of seats to be apportioned and then “gives to each state a number of seats so that no transfer of any one seat can reduce the percentage difference in representation between those states.” This is mathematically accomplished by rounding at the geometric mean – the square root of the product of two numbers. That means that the rounding point between 1 and 2 is not 1.5, as the arithmetic mean would give, but √(1×2) = √2, or about 1.414; and between 10 and 11 it is √(10×11) = √110, or about 10.488.
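The Census Bureau computes the Hill apportionment as a sequence of “priority values”: every state starts with its constitutionally guaranteed seat, and each remaining seat goes to whichever state has the highest value of population divided by √(n(n+1)), where n is the number of seats that state already holds. A minimal sketch of that procedure, using made-up populations rather than real census figures:

```python
import math

def equal_proportions(populations, house_size):
    """Hill (equal proportions) apportionment via priority values."""
    seats = {state: 1 for state in populations}    # each state's minimum seat
    for _ in range(house_size - len(populations)):
        # priority value: population over the geometric mean of the
        # current seat count and the next one
        state = max(populations, key=lambda s: populations[s]
                    / math.sqrt(seats[s] * (seats[s] + 1)))
        seats[state] += 1
    return seats

# The geometric-mean rounding points mentioned above:
print(math.sqrt(1 * 2))     # ~1.414, the break point between 1 and 2
print(math.sqrt(10 * 11))   # ~10.488, the break point between 10 and 11

# Hypothetical three-state example (populations are illustrative):
print(equal_proportions({"A": 21000, "B": 10000, "C": 3000}, 8))
# {'A': 5, 'B': 2, 'C': 1}
```

Dividing by the geometric mean is what makes this equivalent to rounding quotas at √(n(n+1)) rather than at n + 0.5: a state crosses the threshold for its next seat at exactly the point Hill’s rule prescribes.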
The Census Bureau actually describes the calculation and full process very clearly on their Web site.
The Hill method is certainly not perfect, and there have been a number of calls to change it. Surely, after the 2010 Census is delivered, when the population of our United States will be well over 300 million, some states will be winners and some will be losers in this business intelligence exercise, and you can bet there will be substantial debate over data collection, public policy and the algorithms that we use.
Dr. Barquin has been the President of Barquin International, a
consulting firm, since 1994. He specializes in developing information systems strategies,
particularly data warehousing, customer relationship management, business intelligence and
knowledge management, for public and private sector enterprises. He has consulted for the U.S.
Military, many government agencies and international governments and corporations.
Dr. Barquin is a member of the E-Gov (Electronic Government) Advisory Board, and chair of its knowledge management conference series; member of the Digital Government Institute Advisory Board; and has been the Program Chair for E-Government and Knowledge Management programs at the Brookings Institution. He was also the co-founder and first president of The Data Warehousing Institute, and president of the Computer Ethics Institute. His PhD is from MIT. Dr. Barquin can be reached at firstname.lastname@example.org.