I saw an astonishing book at Borders today:
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, by Judea Pearl
If I gathered correctly, under the rubric of AI, Pearl essentially provides a rigorous analysis of how to reduce a system of weakly held beliefs to an optimal decision. I am not sure I still have the brain cells aligned right to absorb it very quickly, but it seemed to me potentially very important, and all my plausible inference circuits were telling me the man did it right.
Of course it is heavily Bayesian. It's all about a complex set of relationships between prior beliefs and their consequences.
If my cursory reading is correct, he claims he can systematically reduce a sufficiently formally stated set of beliefs to its most plausible mutually consistent subset, and use that subset to put formal likelihood ranges on a decision. Exclamation point.
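(For the curious: the machinery underneath is, at bottom, Bayes' rule applied across a network of related beliefs. Here is a minimal sketch in Python of a single prior-to-posterior update; the hypothesis and all the numbers are invented purely for illustration, and this is not Pearl's network algorithm, only the elementary bookkeeping that his networks scale up.)

# Toy, single-link "belief network": how plausible is hypothesis H
# after observing evidence E? Hypothesis and numbers are invented
# for illustration only.

prior_h = 0.7            # prior plausibility of H before seeing E
p_e_given_h = 0.9        # how likely the evidence is if H is true
p_e_given_not_h = 0.3    # how likely the evidence is if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"posterior plausibility of H given E: {posterior_h:.3f}")  # 0.875 with these numbers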
In the end, I left it there. Maybe if it had been $20 instead of $85 I'd have picked it up on the spot. I hope someone with their math neurons fully engaged who's interested in policy will give it a thorough reading. It's in a sense quite miscast as an AI book.
I don't think the perfectly reasoned response to uncertain information matters as much as it should, but I also don't think the methodology is practicable for very large problems like climate policy. It's the estimate of prior belief that is the problem. I am sure the monkeywrenchers, corporate and anticorporate, will be doing their best to prevent our coming to sensible conclusions anyway.
I often see my civilized, calm and safe European colleagues (Annan and Gerhauser in particular come to mind) talking about optimal paths and controllable risks. This all stuns me. Of course the optimum policy exists. It appears there are better developed tools for obtaining that optimum than I knew about. Nevertheless, people will not concede their sovereignty to a formula, no matter how cleverly construed, certainly not in all countries and certainly not at all times, on any known precedent. They will cling to their illusions, and some of those illusions will be dangerous, as we ought to have learned from the Easter Islanders.
What good is plausible inference when it is based on demonstrably inconsistent and yet strongly held views? How can a democracy take account of such an optimization when easily half the population believes things that are impossible?
This is why we will be very lucky to avoid a great global population crash sometime in the relatively near future. It's not that we can't see things coming, it's that we don't always find ourselves in societies with the capacity to react to the balance of evidence.
What's optimal is very much in the eye of the beholder. The beauty of economic models is that at least some of the value decisions can be quantified (value of species, small risk of Earth becoming uninhabitable, relative value of damage in developed and developing countries - see Stern).
On the other hand, as Rumsfeld says, what do you do about unknown unknowns?
How do you know that an action intended, say, to prevent terrible things from happening doesn't aggravate another problem?
I am not terribly convinced that the default option of "anything that gets us closer to the state of the world before civilisation should be fine" necessarily makes sense, when we've got 6 to 9 billion people on this planet.
Er, may I point out something about your last link there?
Put these two paragraphs together:
“All of the old-timers knew that subprime mortgages were what we called neutron loans — they killed the people and left the houses,” said Louis S. Barnes, 58, a partner at Boulder West, a mortgage banking firm in Lafayette, Colo. “The deals made in 2005 and 2006 were going to run into trouble because the credit pendulum at the time was stuck at easy.”
http://www.nytimes.com/2007/08/19/business/19credit.html
Two years ago, William Stout lost his home in Allentown, Pa., to foreclosure when he could no longer make the payments on his $106,000 mortgage. Wells Fargo offered the two-bedroom house for sale on the courthouse steps. No bidders came forward. So Wells Fargo bought it for $1, county records show.
http://www.nytimes.com/2007/08/20/business/20taxes.html?em&ex=1187755200&en=f7a1269ba284eef5&ei=5087%0A
The catch with formal systems is that you cannot reduce complex problems to a formal system. Certainly no algorithm for doing so exists, which is why formal proofs of software are not universal and are usually restricted to small, carefully designed systems.
> it seemed to me potentially very important, and all my plausible inference circuits were telling me the man did it right
My guilt-by-association inference circuits point the other way - a coauthor of Pearl's is a local denialist, who writes impenetrable prose in (what seems to me to be) an attempt to snow his audience.
Caution is advised.
...and here is a sample of his denialist writing.
Rebane and Pearl coauthored at least 2 papers:
George Rebane, Judea Pearl: The Recovery of Causal Poly-Trees from Statistical Data. Int. J. Approx. Reasoning 2(3): 341 (1988)
George Rebane, Judea Pearl: The Recovery of Causal Poly-Trees from Statistical Data. UAI 1987: 175-182
I wonder if Pearl is a denialist as well. I hear there's good money in it...