
September 16, 2007

Comments


claudia

Poor puppy. Let me send you get-well vibes. Doug is sick too - as he tends to become insomniac when he's sick (like someone else I know), I forced him to take an Ambien and he was snoozing before the boys were out.

Well done, wife.

You get better. We'll call tomorrow.

Carlos

I feel a little better. Green Bay 35, Giants 13. Old man's still got it.

Maybe I can sleep now.

Noel Maurer

Man, I hope you get better soon. I'm not that worried, but I am thinking about bennies. Gotta be a way, y'know, short of moving to Boston, but I haven't figured it out yet.

As for Blanchard, Goodman, and Razborov & Rudich, well, let me take them in turn.

Blanchard: this I get. His theory is pretty simple. It's a riff off Hart, actually.

Centrally-planned economies had a lot of monopolies and monopsonies, meaning that firms generally had one supplier for their inputs and one buyer for their outputs. Such relationships are generally prone to opportunism, for obvious reasons. In the West, a whole range of institutions have arisen to prevent such abuse, from contract law to trade regulation to vertical integration. (My favorite example would be pre-1959 Cuban sugar contracts.)

In the old East, the central planner prevented abuse by cracking heads. This worked okay (depending on the country and period), but once the central planner went away, there weren't any other institutions to prevent inter-firm opportunism. No contract law, no impartial regulators, no markets to finance mergers, and no corporate governance to ensure that a merged firm could actually control its management. Result? Economic decline until the inter-firm relationships could be unwound or alternative institutions created.

Goodman: this I remember from a very painful discussion in high school with a good friend who now teaches in the Baltimore public schools. We were exposed to the idea in a high school science class (we went to Stuyvesant), and our weekend conversation about the class ended with the following interchange, more or less:

Me: "Fine. The f-----r was blue right until I looked at it. So what?"

Joe: "No, you f-----n' d-----s r---n, it's always green, you just haven't looked at it yet. So it's also grue. Kapeesh?"

Me: "What f-----y f--k are you talking about?"

At which point we did something else, involving either beer or Chinese handball. (Does anyone play Chinese handball anymore? I feel old.) Or maybe Missile Command. Like I said, I'm old, I can't remember.

As for Razborov and Rudich, well, this is the first time I've heard of them. Wikipedia is not helpful, either.

Meaning that this is either a stroke of genius, or the fever has put you in a state not unlike a good marijuana high.

lala

how terrible you are ill, sir. digressions, etc = most excellent queeny dish of the tallest order.

Dave MB

Razborov and Rudich's "natural proofs" paper argues that a particular line of attack will be unable to solve the P vs. NP question. Briefly, there is this set of problems called "NP-complete" for which we know no significantly better method than exhaustive search -- the classic example is finding out which of the n! possible ways to tour n cities is the shortest, given the distances for each pair of cities. One of the seven Clay Institute "Millennium Problems" (hence by that measure one of the seven most famous open problems in mathematics, not just theoretical CS) is to prove that there is no polynomial-time algorithm for any of these problems.
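To make "exhaustive search" concrete, here is a minimal brute-force tour search sketch (not from the original comment; the four-city distance matrix is invented for illustration). The only point is that the loop runs over (n-1)! candidate tours, which is fine for a handful of cities and hopeless once n gets into the dozens.

```python
# Minimal sketch of exhaustive search for the touring problem: try every
# ordering of the cities and keep the shortest tour. Distances are made up.
from itertools import permutations

def shortest_tour(dist):
    """dist[i][j] = distance between city i and city j (symmetric)."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):      # fix city 0; (n-1)! orderings
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Four cities with invented symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(shortest_tour(dist))   # fine for n = 4; astronomically slow for n = 30
```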

A program from the 80's to attack this problem, popularized by my thesis advisor Mike Sipser, is to find steadily larger subclasses of the set of polynomial-time algorithms and prove that each one does not include any NP-complete problems. The last major progress in this program was in the mid-80's (Smolensky's Theorem). Razborov and Rudich in the early 90's formalized the kind of attacks that were being made -- you take a class of algorithms, find a property that's shared by every problem they can solve but not shared by random problems, then show that some NP-complete problem is "random enough" not to have the property.

The difficulty that R&R pointed out is that we have pseudorandom generators that we believe to be very good -- so good that no polynomial-time algorithm can reliably tell pseudorandom output from random output. And some of these generators operate in small subclasses of polynomial time. In order to get a property that random problems have and easily computable ones don't, you have to be able to distinguish random problems from the computable pseudorandom ones, i.e., "break" the pseudorandom generator -- something we believe to be impossible.
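As a purely illustrative sketch (not R&R's construction, and every name in it is made up), here is what "distinguishing pseudorandom from random" means operationally: a distinguisher is just a test that outputs a 0/1 guess, and its advantage is how differently it behaves on true randomness versus generator output. Against a deliberately terrible generator the advantage is huge; a good pseudorandom generator is one where no efficient test gets noticeable advantage.

```python
# Toy distinguisher sketch (illustrative only; not the R&R setup).
# The "generator" below is deliberately awful -- it just repeats a 16-bit
# seed -- so a crude periodicity test tells it apart from true randomness
# almost every time. A cryptographically good generator is designed so
# that no efficient test achieves noticeable advantage.
import os
import random

N_BITS, TRIALS = 256, 2000

def truly_random_bits(n):
    raw = os.urandom((n + 7) // 8)                  # stand-in for a true random source
    return [(raw[i // 8] >> (i % 8)) & 1 for i in range(n)]

def weak_prg_bits(seed, n):
    pattern = [(seed >> i) & 1 for i in range(16)]  # period-16 "pseudorandomness"
    return [pattern[i % 16] for i in range(n)]

def distinguisher(bits):
    # Guess "pseudorandom" (1) if the string looks periodic with period 16.
    half = len(bits) // 2
    matches = sum(bits[i] == bits[i + 16] for i in range(half))
    return 1 if matches > 0.75 * half else 0

hits_prg = sum(distinguisher(weak_prg_bits(random.getrandbits(16), N_BITS)) for _ in range(TRIALS))
hits_rnd = sum(distinguisher(truly_random_bits(N_BITS)) for _ in range(TRIALS))
print("advantage against the weak generator:", abs(hits_prg - hits_rnd) / TRIALS)
```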

I think the common thread in Carlos' fever dreams may be the sort of Catch-22 idea behind this -- that if breaking the generator is impossible, that very impossibility prevents us from proving that it is impossible. (Proving it by this method, of course -- there could well be others. Razborov followed up the R&R paper by showing that a particular fragment of formal number theory is unable to prove P different from NP. It's conceivable that "P != NP" is a true but unprovable statement, but I'm not up to date on how likely anyone thinks this is.)

Wikipedia is pretty weak in my area -- it's possible I should do something about this, since the work would probably persist better than work on more popular areas of Wikipedia.

Carlos

Let me see if I can reconstruct my chain of thought.

Blanchard's mathematical model for the post-transition depression in production -- the bottom of the U curve -- is a little unusual for him. It's not a thermodynamics-looking partial-derivatives-of-state-variables thing, and it's not a discretized continuous function sort of thing. I think its ancestry comes from neural network models.

Okay, you have an economy characterized by firms in monopoly/monopsony relations with each other. It's a fixed network. There's a probabilistic price threshold (not sigmoidal, but linear, I assume because Blanchard isn't crazy) after transition where a feeder firm will start producing for the market.

Blanchard simplifies the whole thing to one firm with n inputs, and shows how the expected post-transition dip in production becomes worse with increasing n -- analogous to the increasing complexity of the command economy. He claims it's a contrary but complementary result to Murphy, Shleifer, and Vishny's probably too influential 1992 paper on partial price liberalization in command economies, since in his model, all prices are liberalized, and you still get the drop. That's the takeaway result for post-transition development people (hi Doug).
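Just to pin down the intuition (this is not Blanchard's actual specification, only a back-of-the-envelope sketch): if the downstream firm needs every one of its n inputs, and after liberalization each dedicated supplier independently keeps delivering with some probability p instead of defecting to the market, expected output is p^n, so the dip deepens geometrically as n grows.

```python
# Back-of-the-envelope sketch of the n-inputs intuition (not Blanchard's
# actual model): the firm produces only if all n suppliers deliver, and
# each delivers independently with probability p after transition, so
# expected output is p**n -- worse and worse as n grows.
import random

def expected_output(n_inputs, p_deliver, trials=100_000):
    produced = sum(
        all(random.random() < p_deliver for _ in range(n_inputs))
        for _ in range(trials)
    )
    return produced / trials

for n in (1, 2, 5, 10, 20):
    print(n, round(expected_output(n, 0.9), 3), "vs analytic", round(0.9 ** n, 3))
```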

But stepping back to the model for a moment, Blanchard created this apparatus of n firms in a network with each other, which *does* act as a productive command economy under certain conditions.

In other words, it's the calculation problem. It's empirically known to be hard. Blanchard has recast it -- in a toy model, to be sure -- as a property of how networks of firms are organized. There's the lead-in to R&R.

The grue paradox... in analogy to an R&R natural property, "grue" is not a "natural" quale. For some reason I kept on thinking of a Fourier decomposition of a square wave, with those little Gibbs doodles. This would also suggest that apprehension is something computable in polynomial time.
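For anyone who wants to see the Gibbs doodles rather than hallucinate them (a quick numerical aside, not part of the original comment): the partial Fourier sums of a square wave overshoot the jump by roughly 9%, and the overshoot doesn't shrink as you add terms.

```python
# The "Gibbs doodles": partial Fourier sums of a unit square wave.
# On (0, pi) the square wave equals 1 and its series is
# (4/pi) * sum over odd k of sin(k*x)/k; truncations overshoot the jump
# by about 9% no matter how many terms are kept.
import numpy as np

x = np.linspace(1e-4, np.pi - 1e-4, 20_000)

def partial_sum(x, n_terms):
    k = np.arange(1, 2 * n_terms, 2)                    # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(np.outer(k, x)) / k[:, None], axis=0)

for n_terms in (5, 25, 125):
    overshoot = partial_sum(x, n_terms).max() - 1.0
    print(f"{n_terms:4d} terms: overshoot = {overshoot:.3f}")   # about 0.09 each time
```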

Hey, it beats waking up in a cold sweat over some nightmare you don't want to remember. I hate being sick.
