The Sarmat ICBM and Hypersonic Warheads

2016 is supposed to see the first flight tests of the Russian replacement for the SS-18, the “heavy” land based, silo housed ICBM, and the missile, known as “Sarmat,” is scheduled to start entering service in 2018.

The SS-18 is the missile that formed the basis of the fraudulent “window of vulnerability” that the Reaganites used in the late 1970s to beat détente into a pulp.

The Sarmat is presented as an approximately 100 tonne liquid fuelled ICBM, kinda similar to the SS-19, that will, it is said, have sufficient boost phase speed to outrun currently deployed US ballistic missile defense systems. An article published today in the Russian press attracted my attention because of this statement.

“In this sense, the Sarmat missile will not only become the R-36M’s successor, but also to some extent it will determine in which direction nuclear deterrence in the world will develop,”

In late April this year the Russians flight tested the SS-19 ICBM, which the Sarmat will also replace, and a Russian news report carried some information which sheds light on the above statement.

Russian Strategic Missile Forces have conducted a successful intercontinental ballistic missile (ICBM) launch, testing a hypersonic cruise vehicle, Interfax reported, citing a source familiar with the issue…

…All modern nuclear warheads are delivered on targets using ballistic trajectory that can be calculated, therefore such warheads could be intercepted. Hypersonic warheads currently in design would be capable of manoeuvring by yaw and pitch, eventually becoming impossible to intercept, thus making any existing and upcoming missile defense system impotent.

During the 1980s the Soviet response to Star Wars, which never got off the ground as it were, was the MaRV, or Manoeuvrable Reentry Vehicle, a warhead which makes programmed manoeuvres in flight as it heads toward its designated target. My understanding is that the Soviets saw the SS-19 as the missile for the MaRV warhead, and that they used the SS-19 to flight test their MaRV programme.

It must be stressed that, if this report is to be believed (Russian news reports on nuclear matters should be taken with a grain of salt), we are not talking here about a MaRV capability.

The new missile, weighing at least 100 tons, will reportedly be capable of carrying a payload of up to 10 tons on any trajectory. This means an attack on a target could be made from any direction, i.e. RS-28 could start from Russia and fly in the direction of Antarctica, make a circumterrestrial flight and hit targets on the other side of the planet from an unexpected direction

Pavel Podvig, the leading open source analyst of Russian strategic nuclear forces writing in English, if not also in Russian, seems to concur. Of the hypersonic warhead Podvig wrote:

Russia first went public with its “hypersonic weapon” more than ten years ago – in February 2004 it tested a warhead that according to the Kremlin “will fly at hyper-sonic speed and will be able to change trajectory both in terms of altitude and direction, and missile defence systems will be powerless against them.”

What is at issue here is a hypersonic warhead with an all azimuth attack capability, that is, the ability to attack a target from any direction, and an all azimuth launch capability, that is, the ability to launch on any azimuth and thereby vary attack approaches. A clear motive driving this capability is US ballistic missile defense.

China and the United States are working on something similar.

The stuff about 40 Mt Texas busting warheads can be discarded. The Tsar Bomba test was ~50 Mt and no way will any Sarmat ICBM feature a warhead of such unnecessarily high yield.

Posted in International Relations and Global Security

Blurring the Threshold Between Nuclear and Conventional War

I will be doing everything that I can to attend the Pine Gap peace convergence, near Alice Springs, in late September – early October.

Pavel Podvig, an American analyst of Russian provenance, writing for The Bulletin of the Atomic Scientists, reminds us why activism of this type remains important. Podvig warns of a potential danger subtler than the overt military manoeuvres between Russia and the United States/NATO that garner attention:

But there is a subtler, easily overlooked trend as well, which could make the situation even worse: a gradual blurring of the line – particularly in Russia – that separates conventional weapons and their delivery systems from their nuclear counterparts

I am not able to read the entire article, which is a pity as Podvig is a most knowledgeable and insightful analyst.

One of the more annoying aspects of much commentary on nuclear affairs, at least for a seasoned observer such as myself, is the manner in which nuclear modernisation programmes are presented as responses to the latest, post Crimea, ratcheting up of the geopolitical conflict between Russia and the United States.

There have been proposals for the “modernisation” of nuclear weapons for as long as I can remember; PLWYDs, RNEP, RRW spring to mind. Just about all of the world’s nuclear powers are upgrading their strategic nuclear forces. A related programme, in the US context, has been conventional counterforce, which represents a blurring of the line between conventional and nuclear that has been a Russian concern of long standing.

A key underlying factor at play here is NATO expansion, following on from what the eminent realist international relations theorist, John Mearsheimer, referred to as a US post cold war “imperial foreign policy” waged “by design.” Gorbachev proposed, upon the ending of the cold war, that a common political and strategic space be created in Europe, incorporating Moscow, that would largely eliminate the need for nuclear deterrence.

This was explicitly rejected in favour of NATO expansion.

NATO expansion takes a number of forms; geographic expansion to the borders of Russia; expansion of mission beyond defence; globalisation of the NATO theatre of operations. For the Russians these are serious matters, and Moscow’s actions in the strategic domain, to a significant extent, can be read as a push back against this geostrategic advance. Russia, and NATO, naturally, are nuclear powers so these geopolitical tensions have a very serious nuclear component to them.

It is not hard to see how strategic ambiguity of the type discussed by Podvig makes, in a macabre, insane way, sense for the Russians. It is on a par with that well known conception of deterrence, due to Schelling, namely “the threat that leaves something to chance.” If, say, some Iskander missiles are nuclear armed and others are not, the threat that leaves something to chance becomes very real, one that NATO planners need to grapple with as they implement the mission handed down to them from on high.

Of course, this is all quite insane, but it is an insanity whose underlying causes are ignored by commentators, the media and much of the liberal arms control community, as Euro-Atlantic integration and its neoliberal premises are rarely, if ever, questioned.

The Pine Gap peace convergence is a convergence of activists protesting against the militarisation of international relations, and the role that Australia plays in the strategic nuclear war planning system of the United States. It is a political event of great importance.

That the peace movement is stirring again is a positive development; for too long critical analysis of nuclear affairs has been dominated by D.C. connected liberal arms controllers, and this newfound activism needs to be nurtured on a continued basis. The peace movement of the early to mid 1980s was an important political force, and it possessed a vision of an alternative conception of world order.

That alternative conception of world order was based on the principles of common security.

These principles need to be taken down from the shelves, dusted off, and brought to renewed relevance. The peace movement should not just fight against the insane drive to Armageddon; it must also fight against the underlying forces that propel that drive, and offer alternative conceptions of world order rooted in the principles of common security.

To a first approximation this will require actions that deter state actions, and that encourage public policies based on common security.

Ultimately the continued survival of the species will depend on perpetual peace. Nuclear deterrence in a world based on competing centres of concentrated power, both corporate and state, that agglomerate in their hands resources and production is no basis for continued human survival.

Only when the resources of the world and its productive proceeds are held in common can there be perpetual peace.

Posted in International Relations and Global Security

The Gettier Problem in the Philosophy of Science

Philosophy of science has a curious relationship to epistemology, or the theory of knowledge.

There is little doubt that in the 20th century the philosophy of science took on a life of its own, largely independent of epistemology. Interestingly, given that we begin with an historical observation, some of the most important philosophers of the 20th century were philosophers of science, and some of the most important philosophical works of the 20th century were works in the philosophy of science.

Philosophy of science has become an autonomous sub discipline of philosophy, and many undergraduate and graduate philosophy courses teach epistemology and philosophy of science autonomously.

This is curious, for scientific knowledge is a species of knowledge. The problems of scientific knowledge pondered by philosophers of science have more than a whiff of traditional epistemology to them. For instance, perhaps the *key* problem of epistemology has been the problem of scepticism. Philosophers of science, to no small degree, are interested in providing some grounding for science given the problem of scepticism.

Scientists, to use the expression of Richard Feynman, stopped worrying about such issues long ago, and simply got on with doing science. However, at times, epistemological angst creeps into the sciences in a serious way. Some of the fiercest debates in theoretical physics today are concerned with the nature of the scientific enterprise itself.

I myself have the impression that philosophy of science, in part, became important because paradigm shifting advances in scientific knowledge, such as highly abstract pure mathematics, relativity, and quantum theory, provoked epistemological angst, as it were, which contributed to the rise of logical positivism and its associated concern with science.

I think sociological and historical reasons also contributed, such as the second industrial revolution and the much closer relationship that was forged between science and the state.

The fall of logical positivism has not improved matters, at least not in philosophy, whatever the case may be in the sciences.

It would be interesting to do a historical study of epistemological angst with reference to the Kuhnian framework. Are paradigm shifts accompanied by an upsurge in epistemological work that seeks to put science on a firmer epistemological foundation? It is easy to imagine how it could do so.
Normal science is not a time, one feels, when many are worried by epistemological niceties.

But when the paradigm shifts all is torn asunder. One could, perhaps, fit the rise of logical positivism into such a historical and structural framework.

Anyway, that is not really my concern here.

I am more interested in the Gettier counterexamples. The analysis of the concept of knowledge has been dominated since antiquity by a tripartite conception. That is, to know that p is to believe that p, for p to be true, and for that belief to be justified or warranted.

Much of epistemology has been devoted to accounting for the justification criterion. Theories abound. However, Gettier, in a classic two page paper (which wouldn’t cut the mustard in the neoliberal university), upended this account of knowledge when he showed that it is possible to have a justified true belief yet not be in possession of knowledge. Gettier counterexamples take the form:

Smith justifiably believes that P.
P is false.
Smith correctly infers that if P is true, then Q is true.
So, Smith believes Q, justifiably.
Q is true, but not because of P.
So, Smith has a justified true belief that Q; yet, intuitively, Smith does not know that Q.
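The schema above can be set out semi-formally as follows (a sketch only, where \(J_S\) reads “Smith justifiably believes” and \(K_S\) reads “Smith knows”):

```latex
% Gettier schema, semi-formalised.
% J_S(p): Smith justifiably believes that p; K_S(p): Smith knows that p.
\begin{align*}
  & J_S(P)                      && \text{premise: justified belief in } P \\
  & \neg P                      && P \text{ is in fact false} \\
  & J_S(P \rightarrow Q)        && \text{the inference is correctly drawn} \\
  & \therefore\ J_S(Q)          && \text{justification transmits over valid inference} \\
  & Q                           && Q \text{ is true, though not because of } P \\
  & \therefore\ J_S(Q) \land Q  && \text{justified true belief that } Q \\
  & \neg K_S(Q)                 && \text{yet, intuitively, no knowledge that } Q
\end{align*}
```

On the tripartite analysis the last two lines cannot both hold, and that is the whole force of the counterexample.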

When I read philosophy of science I can’t help but get this hunch, this feeling, this gut intuition, that a lot of it focuses on the justification criterion, as in traditional epistemology, but only in the context of scientific knowledge.

But if knowledge is *not* justified true belief, as Gettier held it not to be, then philosophy of science should be just as infected by the Gettier problem as epistemology is. But, because of its autonomous status, philosophy of science is not affected by the Gettier problem to the same degree.

One way of sidestepping the Gettier problem is to adopt one or another stance of epistemology naturalised, to borrow from Quine. Perhaps what is required is a philosophy of science naturalised.

Can you use naturalistic inquiry to defend naturalistic inquiry? Is there, perhaps, a paradox of self reference here? If there is, could not philosophy of science be incomplete in similar fashion to the way we say mathematics is incomplete?

We started with epistemology as a general case and then moved on to philosophy of science as a specific case.

Could we now seek to move, by analogy, from the specific to the general? Namely, if philosophy of science is incomplete because of paradoxes of self reference could not epistemology be shown to be incomplete for similar reasons?

Can you come to know what knowledge is without first knowing what knowledge is?

The study of the philosophy of science, from the early 20th century onward, with reference to the Gettier problem would, at a minimum, be a most fascinating study. At the outer edge of speculative fancy, a formal demonstration of the incompleteness of epistemology would be a significant and original contribution to human knowledge.

It could be that we are a species, through the clear light of reason, that can come to *know* that one cannot *know* knowledge.

Now such a theorem ought to be called Socrates’ theorem if any should. One, the tripartite conception comes from him via Plato, and two, Socrates knew that he knew nothing.

Posted in Philosophy and Science

Who Was David Hume? A Rationalist, of course!

One must grant a belated happy birthday to David Hume (May 07, 1711), one of history’s most insightful thinkers and a favoured philosopher of mine despite the conservatism that is often associated with him.

Gottlieb, in a timely review of a new intellectual biography of Hume, observes, “in 2009, he won first place in a large international poll of professors and graduate students who were asked to name the dead thinker with whom they most identified.”

Gottlieb attributes this to Hume’s naturalism:

Still, it is probably the rise of so-called “naturalism” in philosophy that best explains Hume’s newfound appeal. Naturalism has several components, all of which were prominent in his work

Gottlieb further elaborates,

He treated religion as a natural phenomenon, to be explained in psychological and historical terms—which tended to annoy the pious—and he argued that the study of the mind and of morals should be pursued by the same empirical methods that were starting to cast new light on the rest of nature. Philosophy, for Hume, was thus not fundamentally different from science. This outlook is much more common in our time than it was in his

One of the components of naturalism is empiricism. The reason for this is that the naturalism that became dominant in contemporary philosophy has its origins in, and is an outgrowth of, logical positivism. Hume’s sceptical arguments, most famously his “is-ought” distinction, have thereby been used to support empiricist theses.

For this reason, Hume is most often put in the company of the classical, British, empiricists.

Consider the is-ought distinction in Hume’s own words:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

“Nor is perceived by reason” are critical words here. Let us assume that there are no dedicated faculties of the mind, such as an ethical faculty or language faculty, based on autonomous principles of innate knowledge.

Jerrold Katz observed that, “sophisticated empiricists recognize an autonomous rational faculty as essential for knowledge.” Such an autonomous faculty is an “inferential engine” that furnishes knowledge based on principles of induction and association. The blank slate is not totally blank.

Empiricism is the view that this general faculty of reason provides us with knowledge as we interact with the external world, or which is able to engage in deduction analytically. Hume’s is-ought distinction then becomes a sceptical argument, for if ought statements are not products of reason, that is the inferential engine, then they cannot constitute a type of knowledge.

But there is another view possible here, a view that paints the picture of Hume not as an empiricist but rather as a rationalist. Plainly we make ought statements all the time, and we do draw conclusions with reference to a system of moral rules.

The is-ought distinction can be read as an argument from the poverty of the stimulus for an autonomous faculty of moral cognition based on innate knowledge of moral rules or principles. It is the use of this knowledge that provides us with the ought.

No amount of inference based on induction and association as we interact with the world can furnish us with the ought or moral rules. As Hume stated, “the rules of morality are not the product of our reason.” They are the product not of our “reason” but of an autonomous faculty of “moral reason.”

Hume, of course, himself held that because reason does not furnish us with moral principles they arise from moral sentiments, the passions as it were. These, clearly, are innate and handed down to us by nature. Hume’s position, hence, is a type of nativism. But we can update this conclusion in light of advances in the study of the mind and cognition that have occurred since the days of Hume.

By moral sentiments we should mean a system of moral rules that are innate, autonomous, and constitute a system of knowledge based on a natural genetic endowment. This is a type of rationalism, not scepticism or empiricism.

This conclusion can generalise to other famous arguments of Hume that are viewed in a sceptical vein.

To return to Gottlieb’s question; who was David Hume?

David Hume was a rationalist.

Posted in Philosophy and Science

Uniqueness and Mediocrity: The Multiverse and the Conflict between the Copernican and Anthropic Principles

The concept of the multiverse takes advantage of two principles, namely the Copernican Principle and the Anthropic Principle. This is intriguing, for Carter introduced the Anthropic Principle as a reaction to the Copernican Principle.

The Copernican Principle states that the Earth, Sun, humans, life, do not occupy a special or unique vantage point or status. In the cosmological version it states that the universe is isotropic and homogeneous. The Earth was held once to have occupied a special place; we know that it does not. We thought in terms of one special galaxy or nebula, but we now know that our neck of the woods is nothing special. Nor is our local cluster of galaxies anything special, and so on.

The ultimate application of the Copernican Principle is to the universe itself. Our universe is not special, but one of many. One of many, many, many.

The Anthropic Principle is an observation selection effect. It states that the values of the physical constants, such as Planck’s constant, and other fundamental physical parameters are consistent with the evolution of life. If they took any other value there would be no life to observe them, but because we do observe them they must be as they are.

The idea here is to account for why the parameters take the value that they do without positing a fundamental physical mechanism, mainly because we don’t have any viable hypothesis or hypotheses as to why they take the value that they do. They don’t automatically spring from theory. We know the values through experiment, that is observation.
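The logic of an observation selection effect can be illustrated with a toy simulation (the numbers and the "life band" are entirely hypothetical, chosen purely for illustration): universes draw a constant at random, but observers only ever arise in, and so only ever sample, the life-permitting subset.

```python
import random

random.seed(0)

# Toy ensemble: each universe draws a dimensionless "constant" uniformly
# from [0, 1). Purely for illustration, assume life requires the constant
# to fall in a narrow band.
LIFE_BAND = (0.45, 0.55)

def permits_life(constant):
    return LIFE_BAND[0] <= constant < LIFE_BAND[1]

universes = [random.random() for _ in range(100_000)]

# Observers only arise in, and so can only ever sample from, the
# life-permitting universes: the observation selection effect.
observed = [c for c in universes if permits_life(c)]

print(f"life-permitting fraction of the ensemble: {len(observed) / len(universes):.3f}")
print(f"range of constants any observer ever measures: "
      f"{min(observed):.3f} to {max(observed):.3f}")
```

However rare the band is across the full ensemble, no observer ever measures a constant outside it; on this construal that is all the Anthropic Principle asserts.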

The problem with string theory is that there are many solutions to the theory, perhaps as many as 10^500, which is a problem for a theory long seen as providing a unique and total picture of physical reality. One way of looking at this is to say that each solution describes a different universe with different laws of physics. The Anthropic Principle is invoked to assert that a certain subset of the possible solutions is consistent with the existence of observers or cognition, and that the physical parameters take the values that they do in our universe because we are here to observe them.

Much trades on what we mean by “life.” We actually don’t know what we mean by “life” so any theory that invokes the notion without adequately explaining what it is should be viewed with caution.

Say there are universal principles of self organised complexity, reproducibility and evolution at work, so that life emerges in any universe governed by physical law, of any type. Evolution likes to take advantage of what nature puts on its plate. The form that life takes will differ as physical law differs, but life and evolution will take advantage of what is given and work their magic therefrom.

In that case there is no special subset of universes that possesses life. One cannot then say that the physical parameters in our universe take the value that they do because life evolved here. Life would have evolved no matter what value the physical parameters take.

Which still leaves the question: why do the values take the form that they do in our universe?

If you ruthlessly apply the Copernican Principle, then you can’t take advantage of Anthropic observation selection effects. If you limit or stop Copernican reasoning at some point, then you can invoke observation selection effects. However, I see no reason why you should, nor do I see why, a priori, it should be limited at one point rather than any other. One could, for instance, limit it at the level of the universe and forget about the multiverse altogether.

It seems to me that there is a contradiction at work here. Unless, of course, I am missing something, which I might well be, as I only thought/speculated about this whilst driving in some pretty heavy rain and fog this evening, grrrrrrrh.

Posted in Philosophy and Science

Secular Stagnation and Neoliberalism: Turnbull Sets the Stage for Post Election Slash and Burn

I myself, after the handing down of a federal budget, like to focus on the macroeconomic discussion and projections that appear in the budget papers, of which I will have more to say in the coming days.

I like to do this because the macroeconomic discussion and projections can provide useful insight into the framework or structure that underpins current and future fiscal policy, indeed economic policy more broadly.

The previous year’s budget forecast that GDP growth this year would be 3.25%, which was the trend rate of growth during the pre GFC “mining boom” years. At the time critics, myself included, argued that the global economy was in a period of “secular stagnation” and that it was unlikely that GDP growth would return to the pre GFC trend rate.

The government now expects GDP growth for this year to be 2.5%, a significant downgrade. For 2018-2020 it is expected that the economy will grow at 3% per year.

So long as secular stagnation obtains among the core economies we should be sceptical about such optimistic growth forecasts for Australia. It is telling that, on the same day the budget was released, the Reserve Bank lowered interest rates to a record low level in an attempt to stimulate more growth.

This means it is not likely that economic growth on its own will boost the budget bottom line.

These are important considerations because the Turnbull government is a neoliberal government committed to bringing the budget into surplus, as soon as possible, and adhering to the strictures of global credit rating agencies whose actions helped bring about the GFC and thereby the very deficit itself.

Because of its neoliberal nature the government won’t be putting into place measures that would significantly boost revenue, such as higher corporate taxes (in fact it is cutting these), mining profit taxes, a carbon tax and the like.

The government, upon reelection, would most likely move to cut spending most especially spending directed toward the broader population such as social welfare. Furthermore, pressure would exist to revisit the raising and broadening of the GST, which was a Turnbull policy commitment.

Secular stagnation could also be invoked, following the election, to support supply side arguments for boosting productivity and economic growth through further neoliberal labour market reforms.

One can see in the macroeconomic framework and the reality of secular stagnation the contours of future policy. This is a budget designed to offend as few people as possible pre election, but after a July election the reality of secular stagnation, in the absence of a decisive break from neoliberal policy, almost mandates a return to Abbott era policies.

Posted in Politics and Economics

What’s in a Missile Engine? Heaps, if it’s North Korean

Hitherto, among serious analysts, two major technical limitations have been seen as characterising North Korea’s strategic programme.

The first is the drive to develop a nuclear warhead of relatively low weight, light enough for an ICBM programme and even for use against South Korea and Japan. Until recently, North Korea could by no means be said to have demonstrated such a capability.

The last North Korean nuclear test, hailed by Pyongyang as a test of a hydrogen bomb (a claim which only obscured the real issues), indicated that North Korea is making progress toward developing more compact nuclear warheads for ballistic missile re-entry vehicles. The development of this capability would give North Korea a strategically significant arsenal of nuclear warheads.

The second relates to North Korea’s ballistic missile programme. The dominant assumption has been that North Korea’s missile programme has been based on extrapolations or scaling up of SCUD missile technology. This has inherent technical limitations, and the reliable clustering of SCUD liquid fuelled engines for multi stage missiles has been a difficult nut for the North Koreans to crack.

The compact nuclear warheads are of limited use if they cannot be delivered to their designated targets, especially beyond the Korean peninsula.

North Korea’s April 23 test of a submarine launched missile, although ultimately a failure, attracted attention because the plume emanating from the missile strongly suggests that it is solid fuelled. The test was thereby of some significance. Use of a solid fuelled engine is a step beyond the SCUD based nature of the North Korean missile programme. Solid fuelled missiles would give North Korea a more mobile strategic nuclear capability, both on land and at sea.

At the outer edge of speculation, the successful development of solid fuelled engines might lead to a change of tack on the long range ballistic missile front. Instead of clustering SCUD liquid fuelled engines to develop sufficient thrust the North Koreans might now use solid fuelled engines, which might prove a more fruitful means to proceed.

That is to say, the development of more powerful and less complex engines could more readily provide North Korea with missiles of sufficient range and throw weight to place at risk targets of strategic significance than the prevailing strategy based on the scaling up of SCUD technology.

The danger here is that as North Korean capabilities improve the harder it will be for Pyongyang to trade them away as part of any deal designed to demilitarise the Korean peninsula. The North has used its strategic programme as part of a strategy to “coercively bandwagon” with the United States.

Washington, by and large (excluding the tail end of the Clinton administration), has not seriously pursued diplomatic initiatives to end the stalemate on the Korean peninsula, and in fact has scuttled promising agreements. As the North’s capabilities grow so it may drive a harder bargain, limiting the prospects for successful negotiations.

I myself tend to think that we are sleep walking toward some sort of catastrophe on the Korean peninsula. The most likely prospects appear to be, at least to me:

1.) The emergence of some North Korean Gorbachev who will capitulate to the demands of the US as part of a reform programme.

2.) A Cuba style rapprochement by the United States in recognition of the growing dangers.

3.) Some blood bath as the current “neither peace nor war” condition breaks down into open hostilities.

What is happening in Korea is serious stuff, and it is amazing that it does not exercise our minds anywhere near to the extent that it should. In fact, we seem determined to push the envelope right to the edge of war. That we use the march to the abyss as a rationale for ballistic missile defense, which only increases even more significant nuclear dangers elsewhere, is just sheer insanity. The insanity is made that much worse when we consider that developing countermeasures to defeat ballistic missile defense is a breeze compared to moving beyond SCUD technology and manufacturing more compact warheads.

Nobody writes much about North Korea, but I fear the time will come when everybody will.

Posted in International Relations and Global Security

The Social Contract and the Neoliberal State: Problems of Obligation and Resistance

The problem of political obligation is a, if not the, central issue of political philosophy. The problem is to offer rational justifications for the state in the face of the philosophical anarchist charge that authority is not self justified. If philosophical anarchism is correct, as it surely is, any system of authority, including that of the state, which cannot meet the burden of justification must be construed as being illegitimate.

Hence the problem of political obligation, for states are the dominant form of political authority.

In classical liberal thought social contract theory was one means by which the state was said to be justified: the existence of a social contract demonstrated that the ruled consented to the authority of the rulers. David Hume famously provided a number of powerful criticisms of social contract theory, one of which was that a supposed contract between the governed and the governor was a historical fiction.

Through one bloody artifice after another the state came into being and only later did the philosophers get on to developing theories of political obligation.

The only state that I can think of that could be said to be based on some social contract is, or better still was, the social welfare state. Critical to the advent of the social welfare state was a social contract between capital and labour. The existence of this contract was critical to the reconciliation of the working class to the state, with which it historically had a most hostile relationship as working class communities had a visceral daily understanding that the state was not theirs. They understood that the state existed, to a significant extent, to keep them corralled into their proper place within the overall framework of order.

A bit like the daily understanding with which the poorest and most marginalised communities today regard the state. Whatever may be said within the sandstone seminar rooms of philosophy departments regarding the state, those for whom #blacklivesmatter understand that whatever the state is, it is not for them.

My remarks above should not be equated with the view that the social welfare state was justified, for it was quite exclusionary and imperial. The arrangement that existed between the citizens of democratic Athens was admirable, but it hardly justified the violent and slave-dependent Athenian state. Furthermore, the social contract that underpinned the social welfare state was not drawn up by free and equal social agents existing in something akin to a state of nature but, rather, was reflective of a certain prevailing balance of power. Hence the use of the term “reconciliation” above.

The neoliberal state is based on the abrogation, including through the coercive power of the state, of the social contract between capital and labour that underlay the social welfare state. The neoliberal state is not based on a social contract but, rather, is based on its conscious dismantlement in the interests of corporate power and profit making.

One could at this point invoke the classical liberal defence of property rights, but such an invocation would be based on a fallacy. The classical liberal position here revolved around rights to property, not rights of property. As can be seen with Citizens United, so-called free trade agreements, corporate personhood and the like, the neoliberal state enforces not the right to property but the right of property.

If one is true to the contractarian basis of classical liberalism, one would have to be opposed to the neoliberal state. A consistent upholder of classical liberal thinking must regard the neoliberal state as illegitimate, and thus as one to which we owe no political obligation. Furthermore, adopting the classical liberal or Lockean position on resistance leads us to the conclusion that resistance to the neoliberal state is justified and is itself an obligation.

The neoliberal state in both its internal configuration and its external behaviour is fundamentally illegitimate. You have the right to rebel against it. In fact, rebellion against the neoliberal state is an obligation.

That all follows from standard classical liberal thought of the type that underpins the Declaration of Independence.

Posted in Politics and Economics | Comments Off on The Social Contract and the Neoliberal State: Problems of Obligation and Resistance

Kuhnian Anomalies and Evolutionary Thought

One thing that interests me is the question of anomalies and the structure of scientific revolutions as per Thomas Kuhn.

We tend to associate anomalies and revolutionary science with the physical sciences, most especially physics. This is unsurprising because ours is a society where physics has become the queen of the sciences, so that “paradigm shifts,” as it were, have the greatest revolutionary force when they occur in physics.

When physics sits augustly upon our cognitive throne revolutions in physics become not just revolutions in science but revolutions of science.

Although we tend to focus on the case of the physical sciences, it is possible to see anomalies that could be said to have presaged, presage, or require a revolutionary upheaval of thought elsewhere. For instance, consider the case of philosophy. Russell’s paradox was a paradox, but to follow Kuhn one could argue that in its essence it was a Kuhnian anomaly plunging Fregean analysis into crisis, a crisis ultimately resolved through revolutionary work in mathematics and logic.
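For readers who want the anomaly itself rather than a gloss, the paradox can be stated in two lines. Frege’s system licensed forming the set of all sets that are not members of themselves, from which a contradiction follows immediately:

```latex
% Russell's paradox: define R as the set of all sets
% that are not members of themselves.
R = \{\, x \mid x \notin x \,\}

% Asking whether R belongs to itself yields a contradiction
% either way:
R \in R \iff R \notin R
```

It was precisely because Frege’s Basic Law V permitted the formation of such a set that the paradox struck at the foundations of his logicist programme, which is what gives it the flavour of a Kuhnian crisis rather than a mere puzzle.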

The paradox of the one-to-one correspondence between the analysandum and the analysans might best be seen as an anomaly for analytical philosophy, one requiring a revolutionary new philosophical methodology beyond analytical philosophy, hitherto the dominant methodology (I ignore continental philosophy, true, and for reasons of charity will leave aside further discussion as to why). We might say that the Gettier counterexamples function as a species of Kuhnian anomaly necessitating a revolution in epistemology.

I don’t want to focus on the philosophical cases, nor that of physics, at least not today.

I would like to consider the case of evolution. Christian de Duve likes to use the term “singularity” to describe events in the history of life that had a singular origin such as, to follow his lead, the emergence of the eukaryotic cell, the origin of multicellular life and the like. We can include in this what is often referred to as “the mind’s big bang,” such as the origin of language in humans.

Say we impute to these a genuine singular origin. It is easy to see how the evolution of biological singularities can be construed as an anomaly for our traditional conception of evolutionary theory.

How might this be resolved?

We might argue by analogy with the first revolutionary process in evolutionary biology, namely the development of the modern synthesis. The mechanism of heredity was an anomaly for evolutionary thought right from the get-go. The anomaly was resolved when a physical mechanism of heredity, Mendelian genetics, was found and subsequently unified with evolutionary theory.

Perhaps this unification between evolution and more basic physical considerations needs to be made deeper still to resolve outstanding anomalies such as that posed by singularities. I suspect that to crack the problem of singularities we need to consider how physical entities emerge and subsequently evolve through processes of self-organisation upon reaching certain critical levels of complexity.

In this case what is being developed is a general theory of self-organisation under conditions of complexity that applies to all physical processes or arrangements of matter, and the biological case of life merely forms a special subset of this general theory of physical evolution.

That way biology would lose, at a deep fundamental or basic level, much like chemistry, its sui generis status as an intellectual discipline.

This would be a revolutionary act in the history of thought.

Should such a state of affairs come to pass, notice that it would entrench physics even more firmly upon her throne, although the mathematical Indignados with their Platonic conception of reality always lurk in the wings, ready to stoke the insurrectionist embers of revolution.

Posted in Philosophy and Science | Comments Off on Kuhnian Anomalies and Evolutionary Thought

Peak Knowledge and Social Complexity: Are They Linked?

The New York Times carries a thought provoking op-ed by William Gail on what he refers to as a looming new scientific dark age.

Let us put aside the quibble that the Dark Ages might not have been as dark as commonly supposed.

Gail makes the argument that as global warming proceeds, much of the knowledge that we possess of dynamical Earth processes, that is, knowledge which is largely inductive in nature, will “peak,” posing serious problems for civilisation.

Gail observes,

Our foundation of Earth knowledge, largely derived from historically observed patterns, has been central to society’s progress. Early cultures kept track of nature’s ebb and flow, passing improved knowledge about hunting and agriculture to each new generation. Science has accelerated this learning process through advanced observation methods and pattern discovery techniques. These allow us to anticipate the future with a consistency unimaginable to our ancestors.

But as Earth warms, our historical understanding will turn obsolete faster than we can replace it with new knowledge.

Which leads him to contend

Historians of the next century will grasp the importance of this decline in our ability to predict the future. They may mark the coming decades of this century as the period during which humanity, despite rapid technological and scientific advances, achieved “peak knowledge” about the planet it occupies. They will note that many decades may pass before society again attains the same level…
… As the patterns that we have come to expect are disrupted by warming temperatures, we will face huge challenges feeding a growing population and prospering within our planet’s finite resources.

The complexity of civilisation, in part, is based on a stock of knowledge, and that stock of knowledge, in part, is inductive knowledge of dynamical Earth processes. If that knowledge should peak, and then decline, as a consequence of global warming, then one would expect there to be a concomitant decline in the complexity of society or societies in the sense discussed by Joseph Tainter in the context of ancient societies.

You would get a collapse back into simpler social forms. Notice that this is strongly implied in Gail’s observations regarding the challenges of population growth and economic growth.

To be quite speculative, imagine the stock of knowledge grows again over time. If society remains, in its essence, dedicated to the pursuit of capital accumulation and pervaded by externalities then, it is easy to imagine, we would again reach “peak knowledge” as we enter upon another ecological crisis. The growing stock of knowledge will serve capital accumulation and will lead to global externalities.

That is to assume that knowledge of dynamical Earth processes of the type outlined by Gail remains inductive.

Here lies another little ditty for those interested in the relationship between knowledge and society. One thing that has long interested me is the relationship between (a) the stock of knowledge, (b) economic growth and (c) ecology. There exists a problem of knowledge here.

Given our rich systems of knowledge how ought we to live?

Posted in Philosophy and Science, Politics and Economics | Comments Off on Peak Knowledge and Social Complexity: Are They Linked?