Nevertheless we see an overvaluing of the sagacity of the marketplace all over the site. I might tend to call that a bias, but that's just me.
What's really interesting and terrifying (and why this is news, at least to me) is the rationalist or pseudo-rationalist equivalent of last-days-fundamentalism. It's easy to see why people who literally expect the Rapture don't care much about preserving the environment. You wonder how and why their libertarian brethren on the right manage to go along with this, though.
Well, it appears there's a materialist rapture in the offing, as well. We, or perhaps the robots we appoint as our successor species, will have intellectual superpowers, by means of which we can recover from any damage we might incur. So don't worry! We'll be so much smarter later that none of this will matter!
Lest you think I'm exaggerating, here are some quotes, starting with a response as to whether one could set a low dollar value on a guaranteed human extinction centuries into the future.
me:
Prof. David Archer of the University of Chicago department of Geosciences is of the opinion that contemporary global warming left unchecked is in fact likely to set off a series of events leading to the relatively sudden release of seabed methane clathrates some thousands of years hence, possibly enough to trigger a much larger global warming event. He does raise the ethical implications of this scenario when he discusses it. So the 630 year question is not entirely a hypothetical.
Tim Tyler:
We'll have AI and nanotechnology within 50 years. That will make climate change into an irrelevant storm in a teacup.
Mitchell Porter:
I wrote about this issue recently. It could even be the subject of a post here: what is the rational way to approach problems of unsustainability if you expect a Singularity? The answer I proposed is essentially to compartmentalize: treat sustainability as a matter of mundane quantifiable governance, like macroeconomics, and treat the Singularity as a highly important contingency that can't be timed, like a natural disaster. I would still defend that as a first approximation, but clearly the interaction can be more complex: if you really do think that the Singularity will almost certainly happen within 50 years, then you won't care about environmental changes "thousands of years hence", or even those slated for the second half of this century. In general, expectation of a near-term Singularity should skew preferences towards adaptation rather than mitigation.

I am definitely rooting against the Singularity in question. We have plenty enough Singularities to deal with as is. I think turning the planet over to machines of our devising is every bit as stupid an idea as boiling the ocean, but I suppose that's just me and my biases again.
Anyway, the end of time as we know it is nigh; I suppose on this model the messiah will return as a cute pet puppy robot from Sony soon enough. So if you feel like boiling the ocean and burning the forests meanwhile, well, that is the least of your sins, compared to supporting public transportation or universal medicine, I suppose.
Reality is going to be replaced by a throwaway science fiction pulp. Is Phil Dick really dead, or is he still alive and we're just part of his dream? An excellent basis for rational planning, I must say.
I guess this wouldn't be worth noting at all except that the site itself shows such intense intelligence along with this bafflingly lunatic wishful thinking.
16 comments:
Singularitarianism sounds crazy, to be sure, and no doubt for many (most?) the belief is just wishful thinking, but I wouldn't write them all off too quickly. (The Yudkowsky school, anyway.)
The entire sweep of human history has taken place subject to the design constraints of the human brain. If people can figure out how intelligence really works, and actually improve upon the process--that's a huge development, "hard takeoff" or no.
I think the standard response to the charge of Rapture-ism is "Rapture of the Nerds, Not."
Wait... are you talking about a potential problem 500+ years hence, as though that was predictable? Singularity or not, prediction more than 100 years out seems pretty speculative. Environmental prediction from 1908 doesn't seem like it would have been very accurate, don't you think?
I recommend the recent discussion at The Oil Drum called The Singularity vs Resource Depletion.
The "singularity" would provide one thing that is greatly sought after: escape from responsibility. Imagine moving from having movie stars pretend to live life for us, to having The Machine actually live life for us. We would move from Joe & Jane Sixpack Nirvana to Nerd Nirvana.
At what cost, I don't know. The Machine might treat us as pets, and not treat us very well. There is a nasty Harlan Ellison story about that, which I can't pin down just now.
The rational kernel of Singularity futurism is a sober appraisal of where artificial intelligence, in particular, is headed, and the technological power that implies.
If one views technological development as naturally proceeding from the medieval, through the industrial, to the informational, with the latter stage involving the instrumentalization of powers formerly only possessed by brains and genes, then it is actually not so amazing that the early informational period should be beset with problems arising from the late industrial.
While people may choose to be optimistic about the prospects of machine intelligence coupled to nanotechnology, really all that implies is a tremendous concentration of power (possibly autonomous, or possibly under human control). So the nature of the outcome really depends on the ends to which that power is deployed, which is why Eliezer goes on about "Friendliness" being such a central issue for AI.
With respect to geoengineering, it seems overwhelmingly likely that at a sufficiently advanced level of technology, you can build atmospheric scrubbers which will extract whatever you want out of the atmosphere, to the degree that you wish, and that at a sufficiently advanced level of science, you will know where and how to deploy them in order to get precisely the effects you want and nothing else. So if a Friendly Singularity occurred and we were still in overshoot, it truly should have ceased to be a serious problem.
The harder question is, what is tactically and strategically appropriate pre-Singularity? I am prepared to err on the side of caution - not that my opinions have any political weight! - and endorse plans which aim at CO2 stabilization at 450 or 350 ppm. I think that is all around healthier than Pollyanna cornucopianism, and it is clearly where the mainstream world is headed in any case, especially after 2009. But the problems specific to the informational era also loom in the very near future. Comprehensive luddism really is one way to deal with them, but the number of independent power centers that now exist in the world renders it unlikely to succeed, in my opinion. So I think it better to ask what we would want out of all that power, under ideal circumstances, and then aim to bring those circumstances about.
Seriously, I fail to see how Singularitarianism (wow, that's a big word) has anything to do with the obsession with coal and oil, nuclear power, and boiling polar bears.
-- bi, International Journal of Inactivism
So, Mitchell, your position appears to be that while we are not smart enough to overcome the second law of thermodynamics ourselves, we are smart enough to build machines that are smart enough to build other machines to do it for us?
Forgive me if I am insufficiently reassured to change my opinion about the rational approach to our current situation.
Ian, welcome.
The horizon of predictability of a phenomenon depends on the nature of the phenomenon.
Methane clathrates are icelike substances found at cold temperatures and within a specific pressure range. There are huge deposits of the stuff on the sea floor. There are two reasons to think about the stuff. First, it's a huge potential source of fossil fuel/additional greenhouse forcing.
Second, if the ocean warms enough then the substance becomes unstable and massive releases of methane gas ensue.
A sudden release of clathrate-bound methane is a strong hypothesis for the mechanism of the Paleocene-Eocene Thermal Maximum, one of the great abrupt warming events of the past, which was accompanied by a significant deep-sea extinction.
Could this happen again? The prediction is based on a few physical systems with long time scales. First, while everybody is all worked up about the onset transient of global warming, the eventual extent and duration of the warming is a relatively well-constrained and relatively simple function of the total emissions.

So, taking a plausibly bad emission scenario (and assuming the clathrates are not entirely tapped out for their own energy in the process), the long-range warming of the atmosphere is quite well specified. Then it is a matter of oceanography to determine whether and when that signal propagates deep enough to affect the clathrate deposits.
Reference here.
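To put rough numbers on "a relatively simple function of the total emissions," here is a back-of-envelope sketch. The coefficient (1.5 C of committed warming per 1000 GtC) is a round illustrative number of my choosing, not a figure from Archer:

```python
# Back-of-envelope: eventual (committed) warming scales roughly linearly
# with cumulative carbon emitted. The coefficient is an assumed round
# number for illustration, not a value taken from Archer's work.
TCRE = 1.5 / 1000.0  # degrees C of committed warming per GtC emitted

for cumulative_gtc in (500, 1000, 2000, 5000):
    warming = cumulative_gtc * TCRE
    print(f"{cumulative_gtc:5d} GtC emitted -> ~{warming:.1f} C of committed warming")
```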
To be precise about the stability conditions: methane clathrates are, I believe, unstable at all temperatures at 1 atmosphere pressure or less.
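For concreteness, here is a crude sketch of where that stability window sits. The Clausius-Clapeyron-style fit is a rough calibration of my own against approximate published dissociation pressures, for illustration only:

```python
import math

# Crude fit to the methane hydrate dissociation boundary, ln P = A - B/T,
# roughly calibrated to ~2.7 MPa at 273 K and ~7 MPa at 283 K.
# These coefficients are my own ballpark, not a reference curve.
A, B = 28.7, 7576.0  # P in MPa, T in kelvin

def dissociation_pressure_mpa(temp_k):
    """Minimum pressure at which hydrate is stable at this temperature."""
    return math.exp(A - B / temp_k)

def min_stable_depth_m(temp_k):
    # Seawater adds roughly 0.01 MPa per meter of depth, plus ~0.1 MPa
    # of atmosphere at the surface.
    return (dissociation_pressure_mpa(temp_k) - 0.1) / 0.01

for temp_c in (0, 2, 4, 6):
    depth = min_stable_depth_m(273.15 + temp_c)
    print(f"{temp_c} C bottom water: hydrate stable below ~{depth:.0f} m")
```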
If you take it as a premise that human activity has some significant role in these climate changes, then long term predictions have to include long term predictions of human activity. I don't believe that kind of prediction is possible.
If you don't believe that human activity is a substantial part of these trends, then long term predictions make more sense. You only have to assume that human activity will continue not to play a part in these trends, and you can extrapolate off for hundreds of years. But you just can't extrapolate based on sudden changes among humans. Malthus did that and he was wrong, and turned his name into a phrase. Doesn't it seem likely you would be making the same error?
First, I'm not sure what you are accusing Malthus of. It's not entirely certain that he was wrong. My belief is that there is a maximum tolerably sustainable human population of the earth. I think that was Malthus's main point.
Beyond the Malthus part, I see your point.
Archer's prediction is contingent on 1) continuing releases of greenhouse gases without major restraint until the carbon is basically used up, 2) no subsequent human influences on climate of comparable magnitude, and 3) much of the clathrate remaining buried in the ocean floor.
The first is sadly plausible. The second is a plausible consequence of the first: we may be much reduced in number and capacity.
The third? Well, if we just dig up the clathrate now, we get the second warming immediately on top of the first one, without all that inconvenient waiting. So that would be even stupider, but it would at least solve the moral dilemma as to whether we have a right to inflict a rerun of our difficulties on our descendants.
You raise a very good point in the abstract. There is a big push to couple economic modeling to climate modeling in a predictive sense. I think this is crazy. As Paul Baer said to me once, "you don't predict whether you are going to the movies, you decide whether you are going to the movies".
The idea that human behavior is part of the system we are trying to control, as opposed to being the controller (i.e., that the problem is properly viewed as a branch of science rather than a branch of engineering), is, I think, a key to our present difficulties.
In the specific case at hand, though, the chain of events is simple enough and compelling enough that it genuinely raises the question of whether we have obligations to our very distant successors.
I am not sure whether the story that some native tribes would always consider the interest of the seventh generation is factual; I suspect a romantic fallacy. It's a nice story, though. I wonder whether, as our skills increase, our obligations should extend deeper into time rather than shallower.
Archer's scenario makes this more than a hypothetical.
Well, Malthus was wrong in that he thought the population limit was not so far away, and we are far beyond the population numbers of his time and still haven't reached a population limit.
People are speculating that climate change and other environmental factors will introduce a limit -- but that's exactly the same argument Malthus was making (his argument was also mostly environmental), and we still don't see any limit on the horizon.
Maybe the increase in food prices indicates some limit, but I'm not really sure of that interpretation. Increased food prices certainly don't seem on the verge of causing a global population decline. Not to say that everything is rosy, just that there's little empirical reason to predict global disaster. All the predictions I've seen are speculative, not based on currently visible phenomena. The speculations seem to involve two lines which you can extrapolate will cross, causing disaster, but that's a hard extrapolation to make. People don't respond to the speculation, but they do respond to actual hunger, and once they respond the lines won't be straight anymore.
Those lines have crossed in some places. Will the world all start to look like Chad? I doubt it. I don't think the responses to these localized problems are very similar to the response to a globally analogous problem. Chad lost the ability to respond constructively to problems long before the environmental problem became acute. As much as people like to talk down the citizenry, we're nothing like Chad and predicting our non-response seems silly.
And anyway, I'm arguing against worrying about far-away problems, not the near-term ones. Do you really believe the near-term problems will be ignored? If science fiction has taught us anything, it's that social change is always slower than predicted. So I think your pessimism on that is unfounded (and unconstructive too). And maybe the slowness of social change will cause pain, but it's more like inertia than indifference.
For other readers I'll point out that Ian is someone for whom I have a great deal of respect as a very productive, insightful and generous member of the Python community.
It is interesting to have someone show up who isn't a veteran of these discussions. To some extent this provides an opportunity to weigh how effective our arguments are.
Ian, I'm not sure this is responsive to your point here, but this is what you brought to my mind.
I agree that we don't need to focus on the very distant future as in the clathrate problem. It's an interesting sidelight and not the main story. The question is very much about how much we should be concerned about our impacts in the future though, where the time scale is long compared to political and economic time scales (but very short compared to the time scales of natural changes).
In addition to violating the usual understanding of what does or doesn't constitute a crisis, our problem has a few features not broadly understood: first, its cumulative nature (it's not the rate of emissions that matters most, it's primarily the total emissions over a time shorter than the time it takes to form limestone); second, its intrinsic delay (the ocean as a thermal buffer). Third, one of the most serious possible consequences, a thermal/mechanical failure of a major ice sheet leading to meters of sea level rise, intrinsically has a delay built into it, wherein the moment when it is inevitable may predate the moment when the consequences are felt by decades.
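On the intrinsic delay, a minimal one-box energy balance sketch (C dT/dt = F - lambda*T) illustrates the ocean as a thermal buffer. The parameter values are generic assumptions of mine, and a mixed-layer-only toy like this actually understates the delay the deep ocean adds:

```python
# One-box energy balance: C * dT/dt = F - lam * T.
# All parameter values are generic assumptions for illustration,
# not a calibrated climate model.
C = 4.2e8    # J/m^2/K: heat capacity of a ~100 m ocean mixed layer
lam = 1.2    # W/m^2/K: net feedback parameter
F = 3.7      # W/m^2: sustained forcing from a CO2 doubling

dt = 86400.0 * 30  # one-month Euler step, in seconds
T = 0.0            # warming so far, K
for month in range(1, 12 * 100 + 1):
    T += dt * (F - lam * T) / C
    if month % (12 * 20) == 0:
        print(f"year {month // 12:3d}: {T:.2f} of {F / lam:.2f} K equilibrium warming")
```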
Admittedly the consequences of climate change so far are marginal, except possibly for polar bears, though the detectability question isn't open any more. You'd think this advance from barely observable to casually observable would have strengthened the argument for action more than it has.
It's not THAT speculative. As James Hansen puts it, there's a gap between what's known (by specialists) and what's understood (by the public). We aren't successful at conveying the weight of evidence, partly because we aren't very good at it, partly because we aren't paid to do it, and partly because others ARE paid to undermine our arguments.
Consequently, there are real questions about the morality of present benefits and future costs that don't seem to be handled well either by economic formalisms or by the intuitions of the general public.
I am and have always been pessimistic that this particular problem will be handled well, but I have always felt some obligation to try to overcome it.
And I admit it's a hard sell, but planets aren't shoes or cars or software packages. I don't get to work for a different planet if this one's marketing department isn't up to snuff. The product we've got is the product we are going to have to stick with.
We have gotten good at worming our way out of short-term threats. Now long term threats loom. We have no experience with them, so we apply the wrong intuitions. It's not a pretty picture.
Probably the best brief presentation is Hansen's 2005 one, the one which really got him in trouble with the Administration-shaped-object in DC.
Mildly off-topic:
I'm not so sure about this singularity thing as a concept in and of itself, but that aside...
Am I the only one bothered by the hijacking of the term singularity? I'd prefer it stay in astrophysical science where it belongs.
I appreciate the ability to make analogies from biology and physics as a tool for understanding complex issues. But I get nervous when those metaphors and analogies become permanent fixed vocabulary.
This is frequently the case with crystal healing and other sorts of new-age "alternative medicine" quackery. They glom onto some term like "free radicals", or "quantum states" and sell magnet bracelets to people in trailer parks.
A singularity literally belongs anywhere there is a zero in a denominator, or in similarly locally ill-behaved mathematical functions.
I'm not sure that helps the new millennialists all that much, but there it is. It's not strictly an astrophysical quantity, though it seems to me that in mundane physical situations such singularities find a way not to come up.
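Incidentally, the zero-in-the-denominator reading maps directly onto the forecasting usage: any self-reinforcing growth of the form dx/dt = k*x^2 diverges at a finite critical time. A toy sketch, purely to illustrate the mathematics:

```python
# Finite-time singularity: dx/dt = k * x**2 has the exact solution
# x(t) = x0 / (1 - k * x0 * t), which blows up at t_c = 1 / (k * x0):
# literally a zero in the denominator. Constants chosen arbitrarily.
k, x0 = 0.1, 1.0
t_c = 1.0 / (k * x0)  # critical time: 10.0 with these constants

for t in (0.0, 5.0, 9.0, 9.9, 9.99):
    x = x0 / (1.0 - k * x0 * t)
    print(f"t = {t:5.2f} (t_c = {t_c}): x = {x:10.1f}")
```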
I think "singularity" is a reference to a black hole event horizon. In a black hole certain basic ideas of cause and effect break down when the forces reach such a high level. The idea of an AI singularity is that when intelligence can improve intelligence, presumably indefinitely (or well past our own intelligence to comprehend) basic ideas about thought and discovery break down. It's the idea that there's a cusp between incremental improvements and radical change that cannot be predicted or understood.