That curious American style of regulation
The American system of regulation, particularly when applied to telecommunications, is based on a curious level of indirection. Rather than answer a question directly, we find the issue framed in terms of categorization, first trying to figure out what general category the thing at hand belongs to, and then applying the norm for that category. That approach sounds logical until one realizes that it is far less accurate than facing an issue head-on and applying first principles to it.

This system shows up most clearly when deciding just what it is that needs to be regulated. For some odd reason, deregulation is a popular term. People remember when airlines and banks were deregulated, allowing competition to flourish. But that was deregulation of markets that were already highly competitive, where the need for price regulation was highly questionable all along. Monopolies are regulated for a whole different reason -- because there is no free market, and an unregulated monopoly has the power to gouge consumers. So before something is deregulated, it helps to show that it is part of a competitive marketplace. A straightforward approach might be, for instance, to examine the item or service in question (electricity supply, digital subscriber loop, television broadcast delivery, malpractice insurance, etc.), determine who buys it, determine what alternatives exist, and thus determine the impact of deregulation.

Significant Market Power versus categorization

Europeans and Canadians usually follow that model. Deregulation, like its cousin antitrust, is based on the notion of significant market power (SMP). If a vendor has a large enough market share to substantially influence the price -- typically 25% -- then it has SMP and is subjected to some degree of regulation. Perhaps it is simply limited in its ability to make major acquisitions of competitors (a common antitrust issue). If it has stronger SMP, it may be subject to some kind of price regulation.
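The SMP test can be reduced to a short sketch. The 25% threshold comes from the discussion above; the vendor names, revenue figures, and the pass/fail remedy are hypothetical, purely for illustration:

```python
# Minimal sketch of an SMP determination. The 25% threshold is from the
# text; the market data below is invented for illustration.

def market_share(vendor_revenue: float, total_market_revenue: float) -> float:
    """A vendor's share of the relevant market, as a fraction."""
    return vendor_revenue / total_market_revenue

def has_smp(share: float, threshold: float = 0.25) -> bool:
    """Significant Market Power: a share large enough to influence price."""
    return share >= threshold

# Hypothetical market: four suppliers of local loops in one region.
revenues = {"IncumbentCo": 70.0, "CableCo": 20.0, "CLEC-A": 6.0, "CLEC-B": 4.0}
total = sum(revenues.values())

for vendor, rev in revenues.items():
    share = market_share(rev, total)
    print(f"{vendor}: {share:.0%} share, SMP={has_smp(share)}")
```

The point of the sketch is what the trigger is: a measured share of an actual market, not the regulatory category the service happens to be filed under.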
But American regulators don't really like to do that. Instead, the American style of regulation is to look at broad categories, Platonically decide how that category is likely to operate in the marketplace, determine if that category merits deregulation, and then assign things to the category based on some characteristic or other that often ends up having nothing to do with the need for regulation. This often works well enough in the short term, when the category's market characteristics are obvious, but it breaks down over time.

Take, for instance, the FCC's Computer Decisions. These were landmarks in the history of regulation. (See Bob Cannon's The Legacy of the Federal Communications Commission's Computer Inquiries for more details.) The Computer I decision, in 1970, was made at a time when AT&T had an iron-clad monopoly over telecommunications, but the computer industry was highly competitive. However, the two industries weren't totally separable. Computers were beginning to be embedded within the telecom industry (for instance, the Western Electric 1ESS computerized central office switch came out in 1965), while data processing systems were beginning to be hooked up to the network. What should be regulated? The FCC decided that there was "pure communications" subject to regulation, unregulated "pure data processing" outside of the network, and "hybrids" subject to case-by-case determination. That turned out to be unwieldy -- too many hybrids led to too much work!

Computer II was decided in 1980, after the ARPAnet had demonstrated packet-switched data networking and remote terminals were everywhere. It divided telecommunications into "basic" and "enhanced" services, with only the former subject to regulation. Of course in 1980, there was little competition for basic services -- long distance competition had been authorized, but AT&T still had extreme market power and the local telephone companies still had de jure monopolies.
Computer II required Ma Bell to split its unregulated operations into a fully separate subsidiary subject to strict separation requirements. This allowed the competitive market for enhanced services to flourish. And at the time, it was the right answer.

But now, 25 years later, the competitive boundary has shifted. What's most surprising, perhaps, is how well the Computer II rules have held up. The long distance market is fully competitive. Some local routes are competitive, and a few large buildings have multiple suppliers of local service. Cable television companies provide an information service (cable modem ISPs) without depending on a telecommunications service. But most ISPs still depend on the local telephone monopoly, the ILEC, for the local loop to their subscribers. The ILEC share of the ordinary voice dial tone business remains dominant, and a large share of their supposed "competition" comes from CLEC resale of their lines, either as Total Service Resale or the dying Unbundled Network Element Platform.

Within the telephone network, compensation between carriers is based on a complex system of call classification, wherein the rate one carrier pays another for its share of a call depends on whether it is "local", foreign exchange, intrastate toll, interstate toll, ISP-bound, or "computer to phone" VoIP -- and perhaps some other categories which show up now and then. So how are these rates arrived at? Naturally, not by looking at the actual costs, as might be done in a monopoly environment, nor by the workings of a truly competitive market -- which this isn't.
Instead, we're treated to a hermeneutic debate about whether a telephone call becomes "information" if one leg of it is encapsulated in the Internet Protocol, or some variation thereof. And there's another debate about whether a monopoly local loop becomes a competitive "information service" when the payload it carries includes Internet access traffic -- making the loop's owner, the ILEC, free to cut off access to competing ISPs. The categories have taken on a life of their own, even as the rationale for the categorization is nearly lost to history.

Legislation is not the answer

Some, such as Michael Powell, shortly before he resigned his FCC chairmanship, have suggested that the Telecom Act itself is obsolete in a "broadband" world. Again, the American game is afoot: create a category -- "broadband" -- and assume it needs new legislative help because it isn't explicitly described in the Telecom Act of 1996. But the Telecom Act is a flexible framework that doesn't describe bit rates or line protocols. It calls for the demonopolization of telecommunications. ILECs pretend that it was limited to voice service, and that broadband, whatever it is, can only exist in a propertarian environment, in which all facility owners have total control over their plant, regardless of monopoly power, SMP, or the advantages of incumbency. While the Telecom Act is a poster child for the ills of overly ambiguous lawmaking, its pro-competitive framework doesn't need to be restricted. If anything, the necessary reform is one that brings the United States, once a beacon of competition, more in line with the rest of the world, where monopolies are reined in based on SMP, not on their lawyers' ability to cast their services in the terminology of an obsolete framework.