Obama Uses Visit to Hiroshima to Reward Abe for Remilitarising Japan

In order to understand the world, both natural and social, one needs to look beyond appearances. This is especially the case with President Obama, whose uplifting rhetoric is designed to hide a much grubbier reality.

The much-hailed visit to Hiroshima serves as a graphic example of this. Obama’s speech at Hiroshima, as reported in the New York Times, reads in parts as if it were written by Bertrand Russell:

“Technological progress without an equivalent progress in human institutions can doom us,” Mr. Obama said, adding that such technology “requires a moral revolution as well.”

The New York Times report did, to its credit, make the following point:

In a striking example of the gap between Mr. Obama’s vision of a nuclear weapons-free world and the realities of purging them, a new Pentagon census of the American nuclear arsenal shows his administration has reduced the stockpile less than any other post-Cold War presidency

Obama has, and had, no such “vision.” The “vision” of a nuclear-weapons-free world was announced during his first term, in the so-called “Prague speech.” The “vision” basically amounted to public diplomacy at a time when the US sought to achieve its preferred outcomes at the NPT Review Conference. Given that the nuclear weapon states pledge under the NPT to pursue nuclear abolition, it was necessary to offer a “vision” of nuclear disarmament in order to ensure that the Review Conference process would not collapse as it had during the Bush administration.

I could discuss nuclear modernisation in depth here, but I prefer to highlight the following point made in the New York Times article.

Mr. Obama’s decision to visit Hiroshima was in part intended to reward Mr. Abe for his efforts to improve ties and forge a closer military relationship between the two countries.

The Prime Minister of Japan, Shinzo Abe, seeks to remilitarise Japan as a part of Washington’s “pivot to Asia,” which includes the offensive US military doctrine of “AirSea Battle.” China responds by increasing the salience it places on nuclear deterrence in its strategic policy.

Think about that.

Obama has used a visit to Hiroshima to reward Abe for his efforts to remilitarise Japan, directed at the leading victim of Japanese imperialism, which increases nuclear danger in Asia.

I can think of only one word to describe this.

Sick.


Put the Liberals Last, Then Unsheathe the Strike

As Australia heads to the polls it is worth keeping in mind that all the signal advances made toward a socially just Australia have been accompanied by an upsurge of strikes and labour movement militancy.

The great strikes of the 1890s were key events that ushered in the system of conciliation and arbitration, and also much of the “Deakinite Settlement” that saw Australia become a social laboratory admired from afar.

The surge of strikes and militancy in Sunshine led to the Harvester judgement of Justice Higgins, which gave us the minimum wage but also, much, much more importantly, the idea of social citizenship.

The Keynesian social democratic order, what Stuart Macintyre has called “Australia’s boldest experiment,” is often attributed to the sacrifices of World War Two and to the Keynesian economic ideas that spread amongst the political classes in the wake of the Great Depression.

What is ignored is the massive upsurge in working class militancy in the 1930s that saw significant strike action, especially sit-in strikes. In the United States, for instance, sit-in strikes played an important role in ushering in the New Deal. The capital owning classes were especially frightened of these, for a workplace occupation is but a step away from seizing the means of production.

When sit-in strikes occur with sufficient scale and frequency, society starts to move toward a prerevolutionary situation.

The social, cultural and economic reforms of the late 1960s are often attributed to charismatic leaders, such as Gough Whitlam, or, when charitable, to the new social movements that arose at that time, such as the antiwar movement, the antipatriarchal movements and so on. Both of these were surely significant, but almost left out of history is the upsurge in labour movement militancy that also occurred at that time. This provided space and resources for the new social movements to act, and gave these new movements added bite.

That rise in strike action has been dubbed a “flood tide” in a most fascinating historical survey of trade unionism from the 1960s to today by Tom Bramble. In this work, Bramble shows that the Clarrie O’Shea general strike led to a flood tide of industrial action as the erosion of the penal powers freed the working class to strike a better bargain with the capital owning classes.

Prior to that, we ought not neglect, the Communist Party of Australia engaged in numerous actions on indigenous rights, multicultural rights and the like when very few within white Australia were prepared to do so. These too played a catalytic role in the events of the late 1960s and early 1970s.

Since then we have had neoliberalism, and the neoliberal era has been defined by an attack not just on the real, tangible, manifestations of social justice but upon the concept itself. Everywhere, including in Australia, neoliberalism has been accompanied by an “ebb tide” in strikes and labour movement militancy.

This has happened because the capital owning classes have used the structural power of globalisation to engage in moment-by-moment capital strikes; because the coercive powers of the neoliberal state have been used to smash unions (the essence of the neoliberal state is to attack its own population); and, especially in Australia, because class collaborationist peak union bodies have used corporatist industrial relations models to discipline the working class in the interests of neoliberal policy making.

The pattern is clear. Advance Australia Fair is accompanied by strikes and union militancy. Social regression is accompanied by weak, defeated, class collaborationist unions.

Our task as a movement is to defeat neoliberalism; to confine the neoliberal era to the dustbin of history. It is not to stop Tony Abbott. It is not to stop Malcolm Turnbull. It is not to rely on Bill Shorten or any other Labor Party leader to do what they cannot in the absence of a mobilised and determined working class. It is also to be conscious of the structural realities of globalisation.

To end neoliberalism will require an upsurge in strikes. Only when production is threatened or stopped do the capitalists pay attention to “the ignorant and meddlesome outsiders.” Production is the lifeblood of their system. That is why strikes have been, are, and will always remain the most potent weapon of the working class.

Phone banking during election time won’t stop neoliberalism. A purely electoral strategy won’t even come close to doing this. There is no substitute for the strike. The reliance upon electoralism and social movement unionism exhibited by Australian unions arises because unions are either unwilling or unable (perhaps both would be more apt) to build an industrial movement striking at the very core of the neoliberal order.

To revive the strike will require, to no small degree, at least two things.

Firstly, the development of a grassroots insurgency within the Australian trade union movement.

One of the lessons of the Campaign for a General Strike to Stop Tony Abbott is that the union movement lacks participatory forums and participatory decision making bodies. The Australian trade union movement is highly hierarchical, and from this hierarchical arrangement comes class fragmentation, the dissipation of class consciousness, and bureaucratic inertia.

An insurgency within the labour movement for a more grassroots participatory unionism is the first step toward ending neoliberalism. The lessons of history loom large here. The flood tide during the Clarrie O’Shea general strike and after happened because there existed a network of militant unionists pushing matters in more proactive directions.

We need to revive such a network again.

This movement would need to address the penal provisions of the Australian industrial relations system that limit strikes, but it also must be prepared to boldly defy those provisions in the spirit of joint action and solidarity.

Secondly, the labour movement needs to come to grips with globalisation for the reasons stated above. One way that this could be achieved is by developing the highly insightful and thoughtful ideas of “the global picket line” promoted by Australia Asia Worker Links. To defeat neoliberalism is to attack it at its source, namely the globalised system of production.

Neoliberalism is just an ideological fig leaf under which the reality of globalisation rests.

We need not just more strikes. We need more strikes that target multinational corporations in solidarity with workers beyond Australia. We need to bring to consciousness the reality that we are all workers existing in an international division of labour. For the working class to be conscious of its existence as a class today requires knowledge of the fact of the international division of labour and the manner in which one as a worker fits into the global machine.

But this too requires grassroots participatory unions, for ensconced union hierarchies have little interest in looking and acting beyond their local fiefs.

This is how we beat neoliberalism. Remember that when you put the Liberals last. Putting the Liberals last is, at best, the first thing we need to do, not the last.


The Riemann Hypothesis as a Counterexample to Coherentism in Epistemology?

Coherentism is the main alternative account of justification to foundationalism. With the latter, justification is based on beliefs which, much like axioms in mathematics, are held to be basic and self-evidently true. From those beliefs a system of knowledge is built in linear fashion.

With coherentism, by contrast, what matters is how a belief “coheres” with the wider system of knowledge. The image is of a web of knowledge, as it were, or the spoke-like arrangement of a wheel. The thicker the coherence with the wider system of knowledge, the more justified we are in holding that a belief constitutes knowledge.

Although foundationalism takes its inspiration from Euclidean geometry, and is thereby heavily influenced by the manner in which proof is developed in mathematics, it is possible to think of mathematics in coherentist terms.

Many theorems of mathematics are built by moving from theorems already proven. Of course, this fits into the linear foundationalist picture, but it is also possible to imagine all the theorems of mathematics constituting a coherent web of knowledge. Now consider the Riemann hypothesis, perhaps the leading problem of mathematics, in the context of coherentist epistemology.

The Riemann zeta function takes the form, very crudely (arrrrgh!!!!!):

ζ(s) = 1/1^s + 1/2^s + 1/3^s + …

Here s is a complex number, i.e. s = x + iy. The Riemann hypothesis is the conjecture that all non-trivial zeros of the zeta function (that is, all zeros other than the trivial ones on the real line, where y = 0) have a real part, that is x, of ½.
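For readers who want it stated a little less crudely, here is the standard rendering; a minimal sketch only, noting that the series above converges only for x > 1, that the zeta function elsewhere is its analytic continuation, and that ρ below is just my label for a zero.

```latex
% The Dirichlet series definition, valid only for x = Re(s) > 1;
% elsewhere the zeta function is its analytic continuation.
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad s = x + iy.

% The Riemann hypothesis: every non-trivial zero \rho (every zero other
% than the trivial ones at s = -2, -4, -6, \ldots) lies on the critical line.
\zeta(\rho) = 0 \quad \text{and} \quad \rho \notin \{-2, -4, -6, \ldots\}
\;\Longrightarrow\; \operatorname{Re}(\rho) = \tfrac{1}{2}.
```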

One of the most important reasons why the Riemann hypothesis is the leading problem of mathematics is that, if true, it tells us that there is an underlying order to the distribution of the prime numbers, the atoms of number theory. It is not hard to see how a proof of the Riemann hypothesis takes on considerable intellectual significance here.
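As a small numerical illustration of that order (my own sketch, assuming Python with the mpmath library is available; the prime_count helper is mine), the prime counting function π(x) tracks the logarithmic integral li(x) remarkably closely, and the Riemann hypothesis is equivalent to a tight bound on the gap between them.

```python
# A rough numerical illustration, not a proof: pi(x), the number of primes
# up to x, stays close to the logarithmic integral li(x). The Riemann
# hypothesis is equivalent to the error being O(sqrt(x) * log x).
from mpmath import li  # logarithmic integral from the mpmath library

def prime_count(limit):
    """Count the primes <= limit with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = bytearray(len(sieve[n * n::n]))
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    print(x, prime_count(x), float(li(x)))
# e.g. pi(10^6) = 78498, while li(10^6) is roughly 78627.5
```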

Most popular discussion on the Riemann hypothesis and its significance focuses on the distribution of primes. However, many theorems of number theory have been developed by first assuming that the Riemann hypothesis is correct. The two considerations are not unrelated, of course, but the latter is interesting with respect to coherentist theories of knowledge.

The Riemann hypothesis strongly coheres with many theorems of number theory. Given this thick coherence, are we then to say that, even though the hypothesis has not been proved, we are nonetheless justified in saying that all non-trivial zeros of the zeta function have a real part of ½?

The matter of coherence becomes starker when we consider that the Riemann hypothesis plays an important role in other areas of mathematics, not just number theory. Moreover, the Riemann hypothesis is also connected to physics by way of quantum mechanics, for complex numbers play an important role in the theory.

That is thick coherence indeed.

Do we not “know” the Riemann hypothesis to be true, because of its thick coherence with number theory, even though we have not “proven” it to be true? A mathematician would say that this would be an odd way of asserting that we are justified in believing the Riemann hypothesis to be true. Justification can only come after a proof. If we side with the mathematicians on this point it seems we must say that coherentist epistemology is false.

That is, the Riemann hypothesis and the way it relates to number theory (but not just there) forms a counterexample to the coherentist theory of knowledge.

But any epistemologist worth her salt knows, with intended pun, that justification and knowledge might turn out to be two different things altogether. Of course, coherentism *is* offered as a theory of justification based on the standard conception of knowledge as *justified* true belief.

Such epistemological considerations are not important for number theorists as they subject Riemann to assault.

However, what if the Riemann hypothesis should turn out to be undecidable? What then? Well, epistemological considerations will definitely loom large then!!! I suspect coherentism will get a boost should that be the case.


Cooperation and Evolution: The Case of Sympatric Speciation.

I like to keep a tab on current thinking regarding cooperation in biology, especially evolutionary biology.

I do this for two reasons.

Firstly, I find evolutionary biology to be intrinsically fascinating and I am always on the lookout for new types of thinking in evolutionary theory that step beyond the boundaries of standard analysis. I like this because it enriches, indeed deepens, our understanding of evolution.

Secondly, I like to do this because I feel that the advent of a type of crude selfish gene account of evolution in the neoliberal era has been unfortunate, and has been used by ideologues to justify neoliberal policy making. It has even been used to justify a might makes right approach to international relations, which carries with it certain unpleasant historical connotations.

Speciation can be seen as occurring along a spectrum, with Mayr’s allopatric speciation at one end, where species evolve from a common ancestor due to the presence of a geographical barrier that prevents gene flow between populations, and sympatric speciation at the other, where species evolve from a single ancestor whilst inhabiting the same geographic region. With sympatric speciation populations diverge even though there exists no barrier to gene flow or interbreeding.

Two key points of controversy regarding sympatric speciation exist, namely what drives it and how frequent it is. Mayr held that natural selection could not explain sympatric speciation, and that it therefore did not occur in nature. But we know that it does, so Mayr’s eliminativism does not hold.

The “Darwinian fundamentalist” (competition, adaptation and all that) John Maynard Smith held that sympatric speciation occurs when two population groups occupy differing ecological niches in the same geographic region. One group is better adapted to one niche, and so speciation along standard adaptationist lines occurs.

It is surmised, also, that sympatric speciation is not common.

A news article on sympatric speciation caught my attention. The article reviews a paper by Roberto Cazzolla Gatti that criticises the principle of competitive exclusion, which holds that two species competing for the same resources cannot stably coexist in the same region, as slight advantages in fitness will see one species overwhelm the other over time. This is a type of common sense, as it were, but it is puzzling in a theoretical sense, as it is difficult to envision how this principle leads to the rich layers of biodiversity that we observe in nature.
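To see the textbook intuition in miniature, here is a toy sketch of the standard Lotka-Volterra competition model (my own illustration, not anything from Gatti’s paper; the function name and parameter values are invented for the example): strong interspecific competition ends in exclusion, weak competition in coexistence.

```python
# A toy Lotka-Volterra competition model (my own sketch, not Gatti's model):
# alpha12 and alpha21 measure how strongly each species feels the other.
# Strong competition ends in competitive exclusion; weak competition in
# stable coexistence.
def simulate(alpha12, alpha21, r=0.5, K=1000.0, steps=20000, dt=0.01):
    n1, n2 = 10.0, 12.0  # small founding populations
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + alpha12 * n2) / K)
        dn2 = r * n2 * (1 - (n2 + alpha21 * n1) / K)
        n1 = max(n1 + dn1 * dt, 0.0)
        n2 = max(n2 + dn2 * dt, 0.0)
    return round(n1), round(n2)

print(simulate(1.4, 1.3))  # strong competition: one species is driven out
print(simulate(0.6, 0.5))  # weak competition: both species persist
```

The sketch only makes the orthodox point; Gatti’s argument, below, is about why the weak-competition regime, the avoidance of competition, may be far more common in nature than the principle of competitive exclusion suggests.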

Secondly, we don’t actually see the principle of competitive exclusion at work in nature nearly as pervasively as intuition would have it. The article cites Gatti as stating:

“My model predicts that the coexistence of two species in a sympatric way can happen only if there is low competition or weak competitive exclusion between them and a kind of avoidance of competition that leads to a slight shift of the niche of a meta-population, which accumulated a series phenotypic differences due to genomic inclusions coming from other sources of genes. Thus, eventually, it’s the avoidance of competition and the process that I call endo-geno-symbiosis that drives the expansion of the diversity of living beings.”

Empirical evidence also suggests that sympatric speciation is quite common; the article cites David Marques to this effect:

“We cannot know for sure that the Lake Constance sticklebacks will continue evolving until they become two non-interbreeding species. But evidence for sympatric speciation is growing, from mole rats in Israel to palms on Lord Howe Island, Australia, and apple maggots evolved from hawthorn maggots in North America, leading some evolutionary biologists to think it could be surprisingly common.”

The concept of endo-geno-symbiosis, a form of genetic cooperation, is intriguing with respect to a recent book published on the social life of genes.

Pioneers in the nascent field of systems biology, Itai Yanai and Martin Lercher present a compelling new framework to understand how the human genome evolved and why understanding the interactions among our genes shifts the basic paradigm of modern biology. Contrary to what Dawkins’s popular metaphor seems to imply, the genome is not made of individual genes that focus solely on their own survival. Instead, our genomes comprise a society of genes which, like human societies, is composed of members that form alliances and rivalries.

In language accessible to lay readers, The Society of Genes uncovers genetic strategies of cooperation and competition at biological scales ranging from individual cells to entire species. It captures the way the genome works in cancer cells and Neanderthals, in sexual reproduction and the origin of life, always underscoring one critical point: that only by putting the interactions among genes at center stage can we appreciate the logic of life.

At the moment, of course, we are experiencing a significant loss of biodiversity due to human activities, or, better put, a human-induced ecological crisis. When biodiversity declines because of the actions of a single species you can probably guess that competition may well be at work. To cite Gatti:

“These theoretical findings, confirmed by empirical approaches, should motivate our species to think before it is too late about how human competition, for the first time in the history of life on Earth, has been systematically leading to the extinction of animals and plants.”

This is a point to which I shall return.

One of the puzzles of human evolution concerns not the evolution of Homo sapiens per se, but the all too obvious fact that this rather annoying lump of grey matter, as Bertrand Russell once termed it, is the only remaining species of the genus Homo. That is to say, why are we the only remaining hominid species?

This lack-of-biodiversity puzzle may be, to speculate errantly, the result of competition among hominids sharing the African plains. Let us not focus on the different hominid species but on just one, namely our own. A recent article in New Scientist states:

We know that modern humans first arrived in Europe about 45,000 years ago when the continent was still a Neanderthal stronghold. Over the next 30,000 years – archaeological work has revealed – a procession of different cultures, each associated with different artefacts and lifestyles, rose in Europe.

Archaeologists tend to think these sort of cultural shifts reflect the spread of new ideas through an unchanging population. But a new analysis of nuclear DNA taken from 51 ancient Eurasians tells a different story. They actually reflected the spread of different peoples.

Now that’s hardly an example of “sympatric culturation.”

To return to the themes that opened this post: humanity’s relationship with nature is mediated through social structure. Ours is a social system, in both its political and economic aspects, that has competition at its core, and this is accompanied by systems of ideology that celebrate the virtues of competition.

There appear to be ecological limits to competition and ours may well be a species that is starting to hit those limits. A competitive economic system that has no regard for ecology cannot endure, and a competitive system of states that ultimately relies upon nuclear deterrence for its stability is also fraught with danger.

Work on cooperation in evolution is worth thinking about, and keeping an eye out for, as it tends to undermine systems of ideology that celebrate competition.


Why the Efficient Market Hypothesis is Correct

According to neoclassical free market economics, which has it that markets know best, markets are efficient as they incorporate all available information. In particular, financial markets are efficient and are not prone to manias, bubbles, and spectacular crashes. This view of markets has been dominant, and has framed, indeed continues to frame, public policy.

We know the efficient market hypothesis to be false; the third world debt crisis, the savings and loan collapses, the Long Term Capital Management affair, the European financial crisis, indeed the global financial crisis itself, are all, or at least should be viewed as, Kuhnian anomalies that bring economics to paradigmatic crisis. It is a crisis that can only be resolved by a thoroughgoing analysis of the epistemological assumptions of economic theory and a shift of paradigm that incorporates an analysis of market and financial instability.

It is one thing to ruminate on an intellectual plane about Kuhn and the structure of scientific revolutions, but it is quite another to *live through* a Kuhnian anomaly, especially for the victims.

Policy continues to be framed with respect to the neoclassical framework, and, to its utter discredit, Stockholm awarded the founders of the efficient market hypothesis Nobel Prizes right in the midst of the global financial crisis. A more disgraceful kick in the teeth one can hardly imagine; perhaps a Nobel Peace Prize for Eichmann would have done. To be sure, among dissidents and heterodox economists the efficient markets framework, including its underlying epistemological assumptions, has been subjected to fierce criticism, but its hold on economics and policy endures.

This is because, in a very important sense, the efficient market hypothesis is actually correct.

Markets, especially financial markets, are very efficient at distributing power and privilege to the architects of policy and the institutions that the architects serve. The “masters of mankind,” to take from Adam Smith, are animated by a “vile maxim,” namely “all for ourselves and nothing for other people.” It is not hard to see how markets are quite efficient here.

The theorems of free market economics, nice pretty little things they are too, are all based on a prime axiom: we have to make the rich happy, because if the rich are happy wonders will follow.

With respect to this axiom markets are efficient. What more is there for me to say but:

Q.E.D.


The Sarmat ICBM and Hypersonic Warheads

2016 is supposed to see the first flight tests of the Russian replacement for the silo-based SS-18 “heavy” ICBM, and the new missile, known as “Sarmat,” is scheduled to start entering service in 2018.

The SS-18 is the missile that formed the basis of the fraudulent “window of vulnerability” that the Reaganites used in the late 1970s to beat détente into a pulp.

The Sarmat is presented as an approximately 100 tonne liquid fuelled ICBM, so kinda similar to the SS-19, that will have sufficient boost phase speed, it is said, to outrun currently deployed US ballistic missile defense systems. An article published today in the Russian press attracted my attention because of this statement:

“In this sense, the Sarmat missile will not only become the R-36M’s successor, but also to some extent it will determine in which direction nuclear deterrence in the world will develop,”

In late April this year the Russians flight tested the SS-19 ICBM, which the Sarmat will also replace, and a Russian news report carried some information which sheds light on the above statement:

Russian Strategic Missile Forces have conducted a successful intercontinental ballistic missile (ICBM) launch, testing a hypersonic cruise vehicle, Interfax reported, citing a source familiar with the issue…

…All modern nuclear warheads are delivered on targets using ballistic trajectory that can be calculated, therefore such warheads could be intercepted. Hypersonic warheads currently in design would be capable of manoeuvring by yaw and pitch, eventually becoming impossible to intercept, thus making any existing and upcoming missile defense system impotent.

During the 1980s the Soviet response to Star Wars, which never got off the ground as it were, was the MaRV warhead, or Manoeuvrable Reentry Vehicle, which makes programmed manoeuvres in flight as the warhead heads toward its designated target. My understanding is that the Soviets saw the SS-19 missile as being the missile for the MaRV warhead, and that the Soviets used the SS-19 to flight test their MaRV programme.

It must be stressed that, if this report is to be believed (Russian news reports on nuclear matters should be taken with a grain of salt), we are not talking here about a MaRV capability.

The new missile, weighing at least 100 tons, will reportedly be capable of carrying a payload of up to 10 tons on any trajectory. This means an attack on a target could be made from any direction, i.e. RS-28 could start from Russia and fly in the direction of Antarctica, make a circumterrestrial flight and hit targets on the other side of the planet from an unexpected direction

Pavel Podvig, the leading open source analyst of Russian strategic nuclear forces writing in English, if not also in Russian, seems to concur. Of the hypersonic warhead Podvig wrote:

Russia first went public with its “hypersonic weapon” more than ten years ago – in February 2004 it tested a warhead that according to the Kremlin “will fly at hyper-sonic speed and will be able to change trajectory both in terms of altitude and direction, and missile defence systems will be powerless against them.”

What is at issue here is a hypersonic warhead with an all-azimuth attack capability, that is, an ability to attack the target from any direction, and an all-azimuth launch capability, that is, an ability to launch on any azimuth and vary attack approaches. A clear motive driving this capability is US ballistic missile defense.

China and the United States are working on something similar.

The stuff about 40 Mt Texas-busting warheads can be discarded. The Tsar Bomba test was ~50 Mt, and no way will any Sarmat ICBM feature a warhead of such unnecessarily high yield.


Blurring the Threshold Between Nuclear and Conventional War

I will be doing everything that I can to attend the Pine Gap peace convergence, near Alice Springs, in late September – early October.

Pavel Podvig, an American analyst of Russian provenance, writing for The Bulletin of the Atomic Scientists, reminds us why activism of this type remains important. Podvig warns us of a potential danger subtler than the overt military manoeuvres between Russia and the United States/NATO that garner attention:

But there is a subtler, easily overlooked trend as well, which could make the situation even worse: a gradual blurring of the line – particularly in Russia – that separates conventional weapons and their delivery systems from their nuclear counterparts

I am not able to read the entire article, which is a pity as Podvig is a most knowledgeable and insightful analyst.

One of the more annoying aspects of much commentary on nuclear affairs, at least for a seasoned observer such as myself, is the manner in which nuclear modernisation programmes are presented as responses to the latest, post-Crimea, ratcheting up of the geopolitical conflict between Russia and the United States.

There have been proposals for the “modernisation” of nuclear weapons for as long as I can remember: PLWYDs, RNEP and RRW spring to mind. Just about all of the world’s nuclear powers are upgrading their strategic nuclear forces. A related programme, in the US context, has been conventional counterforce, which represents a blurring of the line between conventional and nuclear that has been a Russian concern of long standing.

A key underlying factor at play here is NATO expansion, following on from what the eminent realist international relations theorist John Mearsheimer referred to as a US post-Cold War “imperial foreign policy” waged “by design.” Gorbachev proposed, upon the ending of the Cold War, that a common political and strategic space be created in Europe, incorporating Moscow, that would largely eliminate the need for nuclear deterrence.

This was explicitly rejected in favour of NATO expansion.

NATO expansion takes a number of forms: geographic expansion to the borders of Russia; expansion of mission beyond defence; globalisation of the NATO theatre of operations. For the Russians these are serious matters, and Moscow’s actions in the strategic domain can, to a significant extent, be read as a push back against this geostrategic advance. Russia and NATO, naturally, are nuclear powers, so these geopolitical tensions have a very serious nuclear component to them.

It is not hard to see how strategic ambiguity of the type discussed by Podvig makes sense for the Russians, in a macabre, insane way. It is on a par with that well known conception of deterrence, due to Schelling, namely “the threat that leaves something to chance.” If, say, some Iskander missiles are nuclear armed and others are not, the threat that leaves something to chance becomes very real, one that NATO planners need to grapple with as they implement the mission handed down to them from on high.

Of course, this is all quite insane, but it is an insanity whose underlying causes are ignored by commentators, the media and much of the liberal arms control community, as Euro-Atlantic integration and its neoliberal premises are rarely, if ever, questioned.

The Pine Gap peace convergence is a convergence of activists protesting against the militarisation of international relations, and the role that Australia plays in the strategic nuclear war planning system of the United States. It is a political event of great importance.

That the peace movement is stirring again is a positive development; for too long critical analysis of nuclear affairs has been dominated by D.C.-connected liberal arms controllers, and this newfound activism needs to be nurtured on a continued basis. The peace movement of the early to mid 1980s was an important political force, and it possessed a vision of an alternative conception of world order.

That alternative conception of world order was based on the principles of common security.

These principles need to be dusted off and brought to renewed relevance. The peace movement should not just fight against the insane drive to Armageddon; it must also fight against the underlying forces that propel that drive, and offer alternative conceptions of world order rooted in the principles of common security.

To a first approximation this will require actions that deter state actions, and that encourage public policies based on common security.

Ultimately the continued survival of the species will depend on perpetual peace. Nuclear deterrence in a world based on competing centres of concentrated power, both corporate and state, that agglomerate resources and production in their hands is no basis for continued human survival.

Only when the resources of the world and its productive proceeds are held in common can there be perpetual peace.


The Gettier Problem in the Philosophy of Science.

Philosophy of science has a curious relationship to epistemology, or the theory of knowledge.

There is little doubt that in the 20th century the philosophy of science took on a life of its own, largely independent of epistemology. Interestingly, given that we begin by making an historical observation, some of the most important philosophers of the 20th century were philosophers of science, and some of the most important works of the century were works of philosophy of science.

Philosophy of science has become an autonomous sub discipline of philosophy, and many undergraduate and graduate philosophy courses teach epistemology and philosophy of science autonomously.

This is curious, for scientific knowledge is a species of knowledge. The problems of scientific knowledge pondered by philosophers of science have more than a whiff of traditional epistemology to them. For instance, perhaps the *key* problem of epistemology has been the problem of scepticism. Philosophers of science, to no small degree, are interested in providing some grounding for science given the problem of scepticism.

Scientists, to use the expression of Richard Feynman, stopped worrying about such issues long ago, and simply got on with doing science. However, at times, epistemological angst creeps into the sciences in a serious way. Some of the fiercest debates in theoretical physics today are concerned with the nature of the scientific enterprise itself.

I myself have the impression that philosophy of science became important, in part, because paradigm-shifting advances in scientific knowledge, such as highly abstract pure mathematics, relativity, and quantum theory, provoked epistemological angst, as it were, which contributed to the rise of logical positivism and its associated concern with science.

I think sociological and historical reasons also contributed, such as the second industrial revolution and the much closer relationship that was forged between science and the state.

The fall of logical positivism has not improved matters, at least not in philosophy, whatever the case in the sciences.

It would be interesting to do a historical study of epistemological angst with reference to the Kuhnian framework. Are paradigm shifts accompanied by an upsurge in epistemological work that seeks to put science on a firmer foundation? It is easy to imagine that they might be. Normal science is not a time, one feels, when many are worried by epistemological niceties.

But when the paradigm shifts all is torn asunder. One could, perhaps, fit the rise of logical positivism into such a historical and structural framework.

Anyway, that is not really my concern here.

I am more interested in the Gettier counterexamples. The analysis of the concept of knowledge has been dominated since antiquity by a tripartite conception. That is, to know p is to have a belief that p, for p to be true, and for the belief that p to be justified or warranted.

Much of epistemology has been devoted to accounting for the justification criterion. Theories abound. However, Gettier, in a classic two page paper (which wouldn’t cut the mustard in the neoliberal university), upended this account of knowledge when he showed that it is possible to have a justified true belief yet not be in possession of knowledge. Gettier counterexamples take the form:

Smith justifiably believes that P.
P is false.
Smith correctly infers that if P is true, then Q is true.
So, Smith believes Q, justifiably.
Q is true, but not because of P.
So, Smith has a justified true belief that Q.
Yet, intuitively, Smith does not *know* that Q.

When I read philosophy of science I can’t help but get this hunch, this feeling, this gut intuition, that a lot of it focuses on the justification criterion, like in traditional epistemology, but only in the context of scientific knowledge.

But if knowledge is *not* justified true belief, as Gettier argued, then philosophy of science should be just as infected by the Gettier problem as epistemology is. Yet, because of its autonomous status, philosophy of science is not affected by the Gettier problem to the same degree.

One way of sidestepping the Gettier problem is to adopt one or another stance of epistemology naturalised, to borrow from Quine. Perhaps what is required is a philosophy of science naturalised.

Can you use naturalistic inquiry to defend naturalistic inquiry? Is there, perhaps, a paradox of self-reference here? If there is, could not philosophy of science be incomplete in a fashion similar to the way we say mathematics is incomplete?

We started with epistemology as a general case and then moved on to philosophy of science as a specific case.

Could we now seek to move, by analogy, from the specific to the general? Namely, if philosophy of science is incomplete because of paradoxes of self reference could not epistemology be shown to be incomplete for similar reasons?

Can you come to know what knowledge is without first knowing what knowledge is?

The study of the philosophy of science, from the early 20th century onward, with reference to the Gettier problem would, at a minimum, be a most fascinating study. At the outer edge of speculative fancy, a formal demonstration of the incompleteness of epistemology would be a significant and original contribution to human knowledge.

It could be that we are a species that, through the clear light of reason, can come to *know* that one cannot *know* knowledge.

Now such a theorem ought to be called Socrates’ theorem if any should. One, the tripartite conception comes from him via Plato, and, two, Socrates knew that he knew nothing.


Who Was David Hume? A Rationalist, of course!

One must grant a belated happy birthday to David Hume (May 07, 1711), one of history’s most insightful thinkers and a favoured philosopher of mine despite the conservatism that is often associated with him.

Gottlieb, in a timely review of a new intellectual biography of Hume, observes, “in 2009, he won first place in a large international poll of professors and graduate students who were asked to name the dead thinker with whom they most identified.”

Gottlieb attributes this to Hume’s naturalism:

Still, it is probably the rise of so-called “naturalism” in philosophy that best explains Hume’s newfound appeal. Naturalism has several components, all of which were prominent in his work

Gottlieb further elaborates,

He treated religion as a natural phenomenon, to be explained in psychological and historical terms—which tended to annoy the pious—and he argued that the study of the mind and of morals should be pursued by the same empirical methods that were starting to cast new light on the rest of nature. Philosophy, for Hume, was thus not fundamentally different from science. This outlook is much more common in our time than it was in his

One of the components of naturalism is empiricism. The reason for this is that the naturalism that became dominant in contemporary philosophy has its origins in, and is an outgrowth of, logical positivism. Hume’s sceptical arguments, most famously his “is-ought” distinction, have thereby been used to support empiricist theses.

For this reason, Hume is most often put in the company of the classical, British, empiricists.

Consider the is-ought distinction in Hume’s own words,

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

“Nor is perceived by reason” are critical words here. Let us assume that there are no dedicated faculties of the mind, such as an ethical faculty or language faculty, based on autonomous principles of innate knowledge.

Jerrold Katz observed that, “sophisticated empiricists recognize an autonomous rational faculty as essential for knowledge.” Such an autonomous faculty is an “inferential engine” that furnishes knowledge based on principles of induction and association. The blank slate is not totally blank.

Empiricism is the view that this general faculty of reason provides us with knowledge as we interact with the external world, or through analytic deduction. Hume’s is-ought distinction then becomes a sceptical argument, for if ought statements are not products of reason, that is, of the inferential engine, then they cannot constitute a type of knowledge.

But there is another view possible here, a view that paints the picture of Hume not as an empiricist but rather as a rationalist. Plainly we make ought statements all the time, and we do draw conclusions with reference to a system of moral rules.

The is-ought distinction can be read as an argument from the poverty of the stimulus for an autonomous faculty of moral cognition based on innate knowledge of moral rules or principles. It is the use of this knowledge that provides us with the ought.

No amount of inference based on induction and association as we interact with the world can furnish us with the ought or moral rules. As Hume stated, “the rules of morality are not the product of our reason.” They are the product not of our “reason” but of an autonomous faculty of “moral reason.”

Hume, of course, himself held that because reason does not furnish us with moral principles they arise from moral sentiments, the passions as it were. These, clearly, are innate and handed down to us by nature. Hume’s position, hence, is a type of nativism. But we can update this conclusion in light of advances in the study of the mind and cognition that have occurred since the days of Hume.

By moral sentiments we should mean a system of moral rules that are innate, autonomous, and constitute a system of knowledge based on a natural genetic endowment. This is a type of rationalism, not scepticism or empiricism.

This conclusion can generalise to other famous arguments of Hume that are viewed in a sceptical vein.

To return to Gottlieb’s question; who was David Hume?

David Hume was a rationalist.


Uniqueness and Mediocrity: The Multiverse and the Conflict between the Copernican and Anthropic Principles

The concept of the multiverse takes advantage of two principles, namely the Copernican Principle and the Anthropic Principle. This is intriguing, for Carter introduced the Anthropic Principle as a reaction to the Copernican Principle.

The Copernican Principle states that the Earth, the Sun, humans, and life do not occupy a special or unique vantage point or status. In its cosmological version it states that the universe is isotropic and homogeneous. The Earth was once held to occupy a special place; we know that it does not. We once thought in terms of one special galaxy or nebula, but we now know that our neck of the woods is nothing special. Nor is our local cluster of galaxies anything special, and so on.

The ultimate application of the Copernican Principle is to the universe itself. Our universe is not special, but one of many. One of many, many, many.

The Anthropic Principle is an observation selection effect. It states that the values of the physical constants, such as Planck’s constant, and other fundamental physical parameters are consistent with the evolution of life. If they took any other value there would be no life to observe them, but because we do observe them they must be as they are.

The idea here is to account for why the parameters take the value that they do without positing a fundamental physical mechanism, mainly because we don’t have any viable hypothesis or hypotheses as to why they take the value that they do. They don’t automatically spring from theory. We know the values through experiment, that is, through observation.

The problem with string theory is that there are many solutions to the theory, perhaps as many as 10^500, which is problematic for a theory long seen as providing a unique and total picture of physical reality. One way of looking at this is to say that each solution describes a different universe with different laws of physics. The Anthropic Principle is invoked to assert that a certain subset of the possible solutions is consistent with the existence of observers or cognition, and that the physical parameters take the value that they do in our universe because we are here to observe them.

Much trades on what we mean by “life.” We actually don’t know what we mean by “life” so any theory that invokes the notion without adequately explaining what it is should be viewed with caution.

Say life emerges in any universe governed by physical law of any type: evolution likes to take advantage of what nature puts on its plate. Say there are universal principles of self-organised complexity, reproducibility and evolution at work, so that life emerges wherever there is physical law. The form that life takes will differ as physical law differs, but life and evolution will take advantage of what is given and work their magic therefrom.

In that case there is no special subset of universes that possesses life. One cannot then say that the physical parameters in our universe take the value that they do because life evolved here. Life would have evolved no matter what value the physical parameters take.

Which still leaves the question: why do the values take the form that they do in our universe?

If you ruthlessly apply the Copernican Principle, then you can’t take advantage of Anthropic observation selection effects. If you limit or stop Copernican reasoning at some point, then you can invoke observation selection effects. However, I see no reason why you should, nor do I see why, a priori, it should be limited at one point rather than any other. Say, limit it at the level of our universe and forget about the multiverse altogether, for instance.

It seems to me that there is a contradiction at work here. Unless, of course, I am missing something, which I might well be, as I only thought/speculated about this whilst driving in some pretty heavy rain and fog this evening grrrrrrrh
