TECHNOLOGISTS AND FUTURISM

Steve DeAngelis had what was, for me, an extremely interesting post entitled "The Future of Computing". I say "interesting" because the implications of Steve's post, and the issues they in turn raise, range far beyond computers themselves into the evolutionary trajectories of civilization. I would like to elucidate, or speculate about, a number of Steve's points, but I encourage you to first read his post in full, as well as the underlying article from the New York Times.
Some excerpts:

"Lohr then rhetorically asks, "What's next?" Some of the answers to that question, he reports, were discussed last month during a Washington, DC, symposium held by the Computer Science and Telecommunications Board. At the "2016" symposium, Lohr reports: Computer scientists from academia and companies like I.B.M. and Google discussed topics including social networks, digital imaging, online media and the impact on work and employment. But most talks touched on two broad themes: the impact of computing will go deeper into the sciences and spread more into the social sciences, and policy issues will loom large, as the technology becomes more powerful and more pervasive."

A while back, I had a discussion with Dave Schuler regarding "wicked problems" and had this to say on computers and intractable problems:

"First of all, not all "wicked problems" were created equally wicked. We must differentiate those problems that are intractable from those that are merely hard or prohibitively expensive. The latter involve a significant degree of human value choice, while the former are effectively beyond any direct solution within our present power to efface. Many cutting-edge scientific questions are temporarily intractable until, say for example, computing power increases by a given order of magnitude."
At a certain point, quantitative increases in computing power effectively represent qualitative increases, by virtue of shifting the boundaries of intractability in scientific fields, permitting an extended experimental range and new discoveries. These in turn spawn new, previously unanticipated problems for investigation.
Steve went on to write:

"Policy is always racing to keep up with technology, but Lohr implies that policymakers are going to have to move even faster. We're all aware of privacy issues, but ethical issues are also likely to arise about how data is collected and used."

There are dangers here in spurring policymakers to move "faster". While there are exceptions, "policy" people, in the sense of political appointees, generally do not have scientific training, or even a sufficient layman's familiarity with the emerging technologies they are being asked to regulate, and their default position may be to slam the brakes on what they do not understand. Secondly, the new computing power may allow for scientific testing that could possibly invalidate long-held and now sacrosanct shibboleths developed under the standard social science model. As the recent academic "show trial" experience of former Harvard University president Larry Summers demonstrated, there is volcanic ideological hostility in America's finest universities toward some questions even being asked.
The temptation will be to appeal to the government to short-circuit scientific investigation (as indeed the Bush administration has already begun to do in regard to stem cell research, cloning, and other topics of biological science that offend the sensibilities of religious fundamentalists).
Steve continued:

"More and more people are starting to abandon what Robert Frenay refers to as the machine model of the world and adopting a natural model. One thing that all successful natural systems possess is feedback loops, which means information sharing plays a major role in any successful system. Resilient enterprises understand this. Technology has made information sharing easier (so easy, in fact, that many decision makers lament information overload). Natural systems understand what information is critical and they fine tune receptors accordingly. Resilient enterprises do the same thing. Enterra Solutions helps them do this through what we call Transparent Intelligent Interfaces, rich Internet applications that deliver the desired information in the desired format."
Not being a computer guy myself, I'm not sure what a "Transparent Intelligent Interface" is exactly, but I am absolutely certain that Steve has correctly identified the operative societal paradigm shift from a mechanical to a biological metaphor. This is not necessarily a Herbert Spencer/Ayn Rand Social Darwinist ecology (cooperation and altruism appear to be very important variables in complex adaptive systems), but neither is it one without the ruthlessness of survival of the fittest. The part on feedback loops is critical for readers because Steve is describing a dynamic model, not a static one. Unlike abstract static models, life is dynamic.
Steve goes on to discuss the implications of a surveillance society, a subject of enough importance to deserve its own post and one that I will not deal with here. Instead, I'd like to discuss the futurist aspect.
Aside from being the CEO of Enterra, Stephen DeAngelis is also Enterra's Chief Scientist and a Visiting Scientist at Oak Ridge National Laboratory. Technologists tend to make for optimistic, "utopian" futurists, and Steve's company and the work he does with Dr. Barnett are about optimism. "Development-in-a-Box" is about creating a better, more resilient, global system. Ray Kurzweil is another example of a techno-optimist futurist. Optimists are in the minority, however. For every Alvin Toffler you have a dozen intellectual three-card monte men like Jeremy Rifkin. For every Steve DeAngelis you have a Bill Joy saying "the future doesn't need us" (not that it matters, because according to Joy we will be drowning in gray nano-goo anyway).
In terms of futurism, the optimism articulated by DeAngelis, Barnett, Toffler, et al. is going to be more right than wrong, in my view, compared with preaching the advent of dystopia. Here's why:

Being a futurist is an inherently difficult task. As decision trees branch out from a given premise (assuming the premise is correct, and in the "sky is falling" scenarios it usually is not), the probability of a potential scenario coming to pass decreases with each successive generation as dependent contingencies multiply.
Dystopians (like the brilliant John Robb) zero in on the worst-case scenarios in the decision tree. This is important because these scenarios always exist (there is an "eschatological" end to every system), and by a series of missteps you can accelerate your path along the decision tree toward certain disaster. Cassandras are worth listening to because modifying your trajectory alters your subsequent probabilities. Herman Kahn was much reviled for "thinking about the unthinkable", and for doing so with irreverence, but Kahn was seeking to mitigate the worst-case outcomes of nuclear war and to use those horrifying scenarios to advocate pragmatic defensive measures.
That being said, over the long run, smart futurists will bet on nonzero-sum outcomes, like Dr. Barnett, Steve DeAngelis and Robert Wright. Why? For things to go horribly, horribly wrong, all the steps must go exactly "right", so to speak. Chances are, with each step in the decision tree representing a choice (by you, society, etc.), you will choose correctly at least some of the time: enough, at least, to avoid disaster, and perhaps enough to open up a world of new possibilities. A complete civilizational collapse a la Jared Diamond is exceedingly rare.
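The compounding effect behind this argument can be made concrete with a toy calculation. This is a minimal sketch: the per-step probabilities and the assumption that the contingencies are independent are purely illustrative, not forecasts of anything.

```python
# Toy model of the decision-tree argument above. Suppose a dystopian
# scenario requires N dependent contingencies to all break the wrong way.
# The step probabilities here are illustrative assumptions.

def chain_probability(step_probs):
    """Probability that every step in a chain of independent contingencies occurs."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

# Even if each of eight missteps is individually 70% likely,
# the complete chain to disaster remains improbable:
p_disaster = chain_probability([0.7] * 8)
print(f"P(all eight missteps occur): {p_disaster:.3f}")  # 0.058

# The chance that at least one choice along the way goes right,
# derailing the chain, is correspondingly high:
print(f"P(at least one correct choice): {1 - p_disaster:.3f}")  # 0.942
```

The point of the arithmetic: even when each individual misstep is more likely than not, the full chain to disaster is a long shot, which is why worst-case scenarios multiply into improbability while still being worth guarding against.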
To quote a Russian proverb, "That which ceases to grow, rots". If we tend to the resilience of the system, feed its inherent dynamism, and expand rule sets that encourage ordered liberty and the free exchange of information, we will avoid embracing dystopia.

ADDENDUM:

"Learning Evolved, Introduction: The Revolution Against the Standard Social Sciences Model" by Dan of tdaxp