Sunday, 5 June 2022

A research programme after Leibniz for a future physics


In "Einstein’s Unfinished Revolution" (Penguin Books, 2019), Lee Smolin introduces principles through which to develop fundamental physics. They are not mathematical or logical principles but founding elements for thinking about, and then formulating, physical theories.

He introduces five closely related principles for a future physics:

  1. The principle of background independence 
  2. The principle that space and time are relational 
  3. The principle of causal completeness 
  4. The principle of reciprocity 
  5. The principle of the identity of indiscernibles.
These are all aspects, claims Smolin, of what Leibniz called the principle of sufficient reason (PSR). Smolin interprets the principle as follows: 
Every time we identify some aspect of the universe which seemingly might be different, we will discover, on further examination, a rational reason why it is so and not otherwise.
This is not quite the standard formulation, but then, even going back to Leibniz, we find a number of versions. We shall examine the formulation and the five principles in light of their origin in the philosophy of Leibniz, and then go on to discuss them as principles for developing physics, especially in dealing with quantum mechanics and theories of space and time.

The promotion of the principle by Leibniz

Although the PSR is most closely associated with Leibniz, there is an earlier formulation by Spinoza:
Nothing exists of which it cannot be asked, what is the cause (or reason) [causa (sive ratio)], why it exists.
The version given by Smolin has an epistemic flavour while that of Spinoza is ontological. As Smolin is defending a realist view of physics, an ontological formulation may be preferred. Leibniz himself provided a mixture of logical, ontological and epistemic formulations. The modern recovery of Leibniz's reputation is based on his achievements as a logician, and here is the version from the Monadology (not Leibniz's title):
31. Our reasonings are based on two great principles, that of contradiction, in virtue of which we judge that which involves a contradiction to be false, and that which is opposed or contradictory to the false to be true.
32. And that of sufficient reason, by virtue of which we consider that we can find no true or existent fact, no true assertion, without there being a sufficient reason why it is thus and not otherwise, although most of the time these reasons cannot be known to us. 

Saturday, 4 June 2022

Ergodicity and investment growth

The status of Ergodicity Economics as a minority topic in mathematical economics may be about to change, thanks to the profile it has achieved through a 'Perspective' article published in Nature Physics in December 2019, authored by Ole Peters of the London Mathematical Laboratory.

The article itself provides a clear introduction that presents sufficient mathematics without being overly pedantic. The main point has been well known, and put on a rigorous foundation in mathematical physics, for several decades: the time average of a dynamical variable is not, in general, equal to the static expectation value over the associated probability distribution for that variable. The claim that this is ignored in mainstream economics seems incredible given the repeated complaints that mathematics currently has too tight a hold on the subject. However, the effect that Peters describes does seem to stand up to scrutiny.

The model used to illustrate the point is a simple gamble in which, with equal probability, you either increase your pot (wealth) $W$ by $A\%$ or lose $B\%$ of it. If $A > B$ the expected outcome is positive and, traditionally, the gamble is considered acceptable. In general the expected change is
$$<\Delta W> = W \frac{A - B}{2 \times 100} .$$
This simple game of chance could also be thought of as the outcome of a risky investment or a purchase that has associated unknowns.
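For example (my own numbers, chosen to match the percentages in the example discussed below), with $W = 1000$, a potential gain of $A = 50$ and a potential loss of $B = 40$, the expected change is $1000 \times 10/200 = 50$, a gain of $5\%$ of the pot.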

However, when the stakes are high most people are disinclined to accept such a bet. A high stake would mean something like betting your house, and a bad outcome would leave you with significantly reduced total wealth. The rejection of the gamble is often declared irrational, as you are rejecting an expected win. But something slippery is going on; "expected" is being used in two distinct ways. The ordinary-language use means what is likely to happen, and what is very likely to happen is that your wealth either increases significantly or decreases significantly. The technical meaning is the average over the probability distribution; the equation for $<\Delta W>$ above is that average for the gamble discussed.

Now one way to deal with a large one-off loss is to spread your bets over time, that is, to iterate the game. So what is the expected win if the game is played $n$ times? To address this we will have to introduce a more formal mathematical model. Let $s(0)$ be the initial stake and let the stake at iteration $n$ be
$$s(n) = \left( \prod_{i=1}^{n} r_i \right) s(0) ,$$
where $r_i = a$ with probability one half and $r_i = b$ with probability one half, with
$$a = 1 - B/100$$

and
$$b = 1 + A/100 ,$$
so that $a$ is the factor applied on a loss and $b$ the factor applied on a gain, consistent with the example below.
A further technical assumption is that each gamble is independent of the previous one, which gives for the expected stake:
$$<s(n)> = \left( \prod_{i=1}^{n} <r_i> \right) s(0) $$
$$<s(n)> = <r_i>^n s(0)= <r>^n s(0) $$
$$<r> =\frac{a+b}{2}$$
where we have used the fact that each gamble has the same probability distribution.
As long as $<r>$ is greater than one the expected stake grows. But is this calculated expectation value what we should actually expect to see? To test this we look at what happens to $s(n)$ over time, that is, as $n$ increases. Note that the order in which $a$ or $b$ is randomly selected does not affect the value of $s(n)$; if $a$ is selected $m$ times out of $n$ then
$$s(n) = a^{m} b^{n-m} s(0) .$$
For large $n$ the law of large numbers gives $m \approx n/2$, so the typical growth factor per iteration is
$$\lim_{n \to \infty} \left( \frac{s(n)}{s(0)} \right)^{1/n} = \hat{r} = \sqrt{ab} .$$

So the typical stake grows exponentially, at rate $\sqrt{ab}$ per iteration, if $ab > 1$, and shrinks if $ab < 1$. Let us first consider the example with the parameters used by Peters: the result for $a=0.6$ and $b=1.5$ is shown below for $100$ histories. For these parameters playing the game leads to almost certain ruin, or near-complete loss of the stake, yet the expectation value estimate predicts an exponentially growing gain. Since $\sqrt{ab} < 1$ here, the time-average analysis has told us to anticipate this. Anticipation and expectation are not the same.
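The kind of simulation behind such a figure is easy to sketch. The snippet below is my own illustration (assuming NumPy is available; the variable names are mine, not Peters'): it generates a set of histories for the multiplicative gamble and compares the ensemble prediction $<r>^n$, the time-average prediction $(ab)^{n/2}$, and the median outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 0.6, 1.5                 # lose 40% or gain 50%, each with probability 1/2
n_steps, n_histories = 1000, 100
s0 = 1.0                        # initial stake

# Each history multiplies the stake by a randomly chosen factor (a or b) at every step.
factors = rng.choice([a, b], size=(n_histories, n_steps))
paths = s0 * np.cumprod(factors, axis=1)

print("expectation-value prediction <r>^n :", ((a + b) / 2) ** n_steps)
print("time-average prediction (ab)^(n/2) :", (a * b) ** (n_steps / 2))
print("median final stake over histories  :", np.median(paths[:, -1]))
```

For these parameters almost every history collapses towards zero, in line with the time-average prediction, while the expectation value grows without bound; plotting the paths for a smaller number of steps would reproduce the kind of figure described above.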

Let's keep $a=0.6$ but now choose $b$ so that $ab$ is just greater than $1$. This gives the result shown below.

This gives most histories growing, just barely, after $25000$ iterations: now $\hat{r} = 1.0003$ while $<r> = 1.366$. By pushing $\hat{r} = \sqrt{ab}$ just above one we get into a situation where the game becomes almost always favourable to the player.

A more realistic gamble, such as an investment in a carefully chosen stock, is likely to vary by only a few percent between iterations. This is the situation shown below.


Here, when $a$ and $b$ are close to each other there is less uncertainty, and the expectation value is not such a terrible estimate as in the previous examples. Perhaps the positive return above is still unrealistically high; an iterated sample with a $4\%$ loss and a $5\%$ gain is shown below.
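For those parameters (my arithmetic, taking $a = 0.96$ and $b = 1.05$):
$$<r> = \frac{0.96 + 1.05}{2} = 1.005 , \qquad \sqrt{ab} = \sqrt{1.008} \approx 1.004 ,$$
so the typical growth of about $0.4\%$ per iteration is only slightly below the $0.5\%$ per iteration suggested by the expectation value.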


Tuesday, 15 March 2022

Energy Science and Technology priorities to achieve "net-zero" GHG emissions

I am no expert on energy and the environment, having spent the latter half of my career in information systems engineering; especially security, safety and automation. However, I have also worked as a trouble shooting systems generalist and it is this experience that I want to try to bring to the energy challenge in tackling climate change.

Grangemouth refinery
The government is committing to "Net Zero" greenhouse gas (GHG) emissions by 2050. This is good news but the means of achieving it are critical. To tackle climate change, innovation is still urgently needed, and it must come quickly because implementation timescales for new technology in complex systems are so long. It often takes more than a decade to get from implementation to operation, as can be seen from the proposed Small Modular Reactor power plant that will not see operation before 2030. However, even at this stage, given that the best scientific opinion has been clear on climate change for decades, there is still no consensus on the mix of energy solutions required and their priorities. There is continuing opposition in principle from the environmental movement to nuclear energy, despite its zero GHG emissions in operation. Ensuring that the identified priorities are the right ones means recognising those environmental and societal factors that contribute to global warming and disentangling them from the real but different concerns about air quality and the pollution of oceans and waterways. The concerns about species extinction would also benefit from the clarity afforded by distinguishing between what is and what is not due to GHG emissions.

Rolls-Royce consortium concept Small Modular Reactor facility
We need to unblock the system to act with urgency but remain focused on a plan. All this requires an increase in tempo. Suggestions on how to do this have been provided in a report prepared for the Aldersgate Group: Accelerating innovation towards net zero emissions. They identify six key actions for government policy to accelerate low carbon innovation in the UK:
  1. Increase ambition in demonstrating complex and high capital cost technologies
  2. Create new markets to catalyse early deployment and move towards widespread commercialisation
  3. Use concurrent innovations, such as digital technologies, to improve system efficiency and make new products more accessible and attractive to customers.
  4. Use existing or new institutions to accelerate critical innovation areas and co-ordinate early stage deployment.
  5. Harness trusted voices to build consumer acceptance.
  6. Align innovation policy in such a way that it strengthens the UK’s industrial advantages and increases knowledge spill overs between businesses and sectors
The report says that implementing these lessons will require a further increase in government support for innovation, both through research, development and demonstration and through deployment policies that create new markets. Government does not have a good record in implementing complex infrastructure projects. As well as policies to create new markets, regulation and market correction should be used to channel the infrastructure and engineering capabilities of the oil and gas, defence and petro-chemical industries. As has been argued persuasively by the Nobel Prize winning economist William Nordhaus, many of these problems are best addressed by robust carbon pricing, preferably through a carbon tax. This will release current market potential and create new market opportunities, as well as generating funding for innovation.


If cost were no obstacle, and land and material resources were available, then current renewable energy technologies and storage methods would be sufficient. But this is not the case; there are trade-offs, and the costs and benefits of any course of action need to be weighed. There are many challenges, and perhaps the greatest are political: convincing the populations of almost all countries in the world to sacrifice some of their current quality of life to mitigate a predicted greater cost, but one that will mainly impact their grandchildren, and finding ways to ensure that the free-rider effect does not disrupt the good intentions embodied in the Paris Agreement. Even if these political challenges are met, further innovation will still be required, and if they are not addressed, or only partially, then innovation becomes the only hope. However, the International Energy Agency (IEA) identifies a bottleneck in innovation: investment in R&D for low carbon technology is not growing. The IEA tracks key technologies, and of a set of 39 only 7 are on track to meet Paris targets, 20 need remedial action and 13 are off track. Those that are on track are: solar photovoltaics (PV), bioenergy for power, energy storage, electric vehicles (EVs), rail, lighting, and data centres. It must be emphasised that these are on track, not complete; further investment and sustained effort are required. To concentrate on the power domain, the following technologies are tracked by the IEA:

Now, not all are equally critical, but there are systemic dependencies: the full benefit of net-zero carbon energy generation can only be delivered through efficient power transmission with capacity for storage. Obviously the problem with coal-fired power is that we are still using too much of it, and this can only be solved by replacement technologies. For wind the key problem is not primarily technical, although there are improvements to be made, but regulation, planning and consultation. In those areas that are difficult to de-carbonise there will need to be compensating carbon capture technology. In nuclear, technical work to increase modularisation and scalability is required, as well as safety and security by design. The major objections to nuclear, on waste, safety, security and cost, are being addressed but efforts must continue.

We need a short to medium term boost in funding to achieve the needed acceleration. This could be provided as a dividend from imposing a carbon tax or another robust form of carbon pricing. As well as providing revenue for R&D, it provides a steering mechanism, because the tax helps correct the failure of the energy market and provides a path to exploitation for the innovations. Speed of exploitation remains a challenge, with a quagmire of regulations, permissions and consultations to wade through in addition to the technical challenges that are always present in scaling up from proof of concept to operational system. Eventually an emergency situation will require emergency powers, but climate change is a slow motion emergency, both in the climate's development in response to increased greenhouse gas concentrations and in its response to mitigation. In the end it will come down to whether there is the social and political will to carry through an emergency response.

Wednesday, 22 April 2020

Preclusive action: liberty and the state of emergency




A good few years ago, when reading "Terror and Consent", the notion of preclusive action struck me as a key concept of Philip Bobbitt's, together with his interpretation of security in what he referred to as the Market State. An aspect of the book that is especially relevant today is its treatment of pandemics, although its main concern was clearly terror. Except for fringe conspiracy theorists, nobody thinks the current pandemic was instigated as an act of terror, but Bobbitt also discusses naturally occurring pandemics in the wider context of the role of the state in protecting civilians. 

The outline below adapts freely from sections in Bobbitt's book. In addition to preclusive action the other key notion introduced by Bobbitt is the Market State - the affordable successor to the Welfare State. I hope this slight essay will encourage others to read or re-read his book.

A market state, as the term is used by Bobbitt, takes up the challenge of protecting civilians and places that protection at centre stage in the life of the state. This protection embraces not only policing and defence but also health and security of supply (food security, for example). The stakes can be high. We do not know the cost of the current pandemic, but an avian flu epidemic, whether engineered by a state and given to terrorists, created by terrorists directly (the genetic code of the 1918 avian flu that killed 50 million people has now been posted on the Internet), or naturally occurring (as is currently the case), strikes globally with a velocity that leaves little time for reactive measures. Similarly, leaving pandemics for the moment to consider an extreme act of terror, a nuclear device detonated in a major twenty-first century city could dwarf the casualties at Hiroshima. These vulnerabilities have important implications not just for diplomacy, but also for precautionary interventions and anticipatory preemptions. In addition to deploying these preclusive tactics, market states (if our current sluggish nation state ever evolves completely into one) should be able to marshal many assets denied to reaction-oriented nation states in their struggle against terror: relative nimbleness and dexterity in adapting to technological change, devolution, the use of private entities as partners, and global networks of communications and cooperation.

At the outset, we should be clear that anticipatory warfare is not the result of the development of WMD or delivery systems that allow no time for diplomacy in the face of an imminent reversal of the status quo. That might have been President Kennedy's justification for threatening a preventative war against Cuba to prevent it from deploying ballistic missiles with nuclear warheads, but it would have been folly to have made a similar argument for action against the Soviet Union. Rather, it is the potential threat to civilians (a market state concern) posed either by arming, with whatever weapons, groups and states openly dedicated to mass killing, or by a naturally occurring, rapidly developing catastrophe, that collapses the distinction between preemption and prevention, giving rise to anticipatory action, whether war or emergency powers. 


Examples of failing to act preemptively, and its consequences, abound. For example, the USA could have stopped the genocide in Rwanda had it acted preclusively; by the time the killing was imminent it was too late. Those inspectors who were shocked by the progress of the Iraqi nuclear weapons program in 1991 (by some accounts Saddam Hussein was a year away from deployment) must have been quietly thankful for the Osirak raid ten years earlier. Had Saddam Hussein acquired nuclear weapons before he invaded Kuwait, the option of disarming him would have been infeasible. Michael Walzer once asked whether "the gulf between preemption and prevention has now narrowed so that there is little strategic (and therefore little moral) difference between them." The last Iraq war is still divisive, but Bobbitt's main point is that if the community of developed states had had the will and means to carry out appropriate preclusive action then the war need never have taken place. Acting well before the actual development of WMD, or before a full-blown pandemic is underway, will give rise to weighty moral considerations. As with all moral questions, one would have to know a good deal about the facts of the case under consideration. 

Therefore, to understand the changing nature of victory in our wider context, we must first review the shift, over the last few centuries, in the constitutional order that determines the war aim. The "war aim" must now be extended, with the reach of the state broadened to the more general protection of the civilian population. To simplify: the sixteenth century princely state sought to aggrandise the personal possessions of the prince; the seventeenth century kingly state attempted to enlarge the holdings of the ruling dynasty; the eighteenth century territorial state tried to enrich its country as a whole (and its aristocracy in particular) by acquiring trading monopolies and colonies; the nineteenth century state nation struggled to consolidate a dominant national people and sought empire; the twentieth century nation state fought from 1914 to 1990 to establish a single, ideological model for improving the material well-being of its people. To put it in slightly more technical terms, victory achieved in pursuit of these various (and sometimes overlapping) goals could be characterised as perquisitive (princely state), acquisitive (kingly state), requisitive (colonial territorial state), exclusive (imperial state nation), and inclusive (nation state). In all cases what counts as victory is defined by the "war aim". The victory sought by twenty-first century market states will be preclusive, because damage to the civilian population is unacceptable: that will be the nature of the effective contract between the state and the population.

If market states are indeed emerging, such states’ terms for victory in warfare and protection can best be described as preclusive. The market state comes in at least two distinct forms. According to Bobbitt, one is a state of consent and the other is a state of terror and coercion. The goal, whether for the market state of consent or for the market state of terror, is to preclude a certain state of affairs from coming into being. For the state of consent, it is terror itself that must be precluded, chiefly by the protection of innocent civilians. For the state of terror, it is individual self-assurance that must be prevented from spreading within a society, for once enough people refuse to be cowed it will be difficult to return them to a condition of submission. The line can be fine, of course, between protecting someone and preventing them from developing as they wish (as we know in our current situation).

When we consider the paradigmatic case of preclusive humanitarian intervention to prevent genocide or ethnic cleansing, we must consider two elements: domination and responsibility. To the person, or the peoples, for whom others would assume responsibility, the exercise of that duty sometimes looks like simple authoritarianism and even exploitation (as indeed it often is). This is one legacy of nineteenth century imperialism, and there were many beneficiaries of the altruistic late-twentieth century humanitarian interventions in Somalia, Haiti or Kosovo who seethed with resentment. The authority (often self-appointed) preempted the right of the suffering people to act for themselves. Though this may have been justified on the grounds that, without outside intervention, these peoples would have had no real alternative in their circumstances, the preclusive victory has often inadvertently stimulated resistance, even hostility, to the authority. Given the time when it was published, Bobbitt's focus is on terror. The dilemma will not play out in the same way in reaction to state measures to preclude a pandemic, but it will put considerable strain on civil liberties and trust. It will also potentially disrupt the international cooperative order.  

To return to the current situation: we may be just at the peak of the Covid-19 pandemic. Reaching that peak has been bought at the price of unprecedented modern peacetime restrictions on personal liberty. Indeed, in some ways the measures are even more restrictive than in wartime because of the nature of the threat. But as the threat is an evolving but dumb virus, there is no need to fall back on wartime instincts to restrict information or act secretively. To maintain and nurture the consent of the population the state must act with openness, fairness and exemplary competence.


Friday, 8 November 2019

Review: Human Compatible by Stuart Russell

Stuart Russell is a very influential figure in the Artificial Intelligence (AI) community. He is co-author of a widely studied textbook on the subject, Artificial Intelligence: A Modern Approach, that was key to moving the subject from being dominated by a formal logic approach to one focused on communities of artificial agents operating and cooperating to maximise rewards in uncertain environments.
https://www.penguin.co.uk/books/307/307948/human-compatible/9780241335208.html
His most recent book, Human Compatible, is written for a general audience. It helps if you are a reasonably informed member of the general public, but the book rewards the effort not only with an overview of the current successes of AI but also by addressing the technical and ethical challenges with novel, constructive proposals. It is highly recommended.

Russell takes the challenges to human purpose, authority and basic well-being seriously without being unnecessarily alarmist. Even with current and imminent applications of AI there are threats to jobs, but also real gains in efficiency and cost effectiveness in medicine and production.

The aim of what Russell calls the standard model of AI has been defined almost since its inception as
Machines are intelligent to the extent that their actions can be expected to achieve their objectives.
However the major challenge comes with an anticipated major step forward: the creation of general-purpose AI (GAI). In analogy with general purpose computers, where one computer can carry out any computational task, GAI will be able to act intelligently, that is, plan and act to achieve what it needs to achieve, across a wide range of tasks. This would include creating and prioritising new tasks. Currently AI produces highly specialised agents, although many of the algorithms can be put to diverse uses, learning different skills; but once learned, each skill is exercised narrowly. For example, DeepMind's AlphaGo can play the board game Go very well, better than any human, but that is it. Related underlying techniques, however, can be applied to a number of applications such as text interpretation, translation, image analysis and so on.

The major breakthrough that would enable AI to escape from the constraint of narrow specialism has yet to be made. As indicated above, GAI would be a method applicable across all types of problems that would work effectively for large and difficult tasks. It is the ultimate goal of AI research. The system would need no problem-specific engineering and could be asked to teach sociology or even run a government department. A GAI would learn what it needs from all the available sources, question humans if needed, and formulate and execute plans that work. Although a GAI does not yet exist, Russell argues that much of the progress towards it will result from research that is not directly about building "threatening" general-purpose AI systems. It will come from research on the narrow type of AI described above, plus a breakthrough in how the technology is understood and organised.

It is a mistake to think that what will materialise is some humanoid robot. The GAI will be distributed and decentralised; it will exist across a network of computers. It may enact a specific task, such as cleaning the house, by using lower-level specialised machines, but the GAI will be a planner, coordinator, creator and executive. Its intelligence will not be human: it will have significant hardware advantages in speed and memory, and it will also have access to an unlimited number of sensors and actuators. The GAI would be able to reproduce and improve upon itself, and it would do this at speed. 

And the role for the human? None that the GAI requires. The GAI would be able to construct the best plan to tackle climate change, for example, but on its terms  and in a way that maximised its utility. And to maximise its utility it will find ways to stop itself being switched off.

This danger, according to Russell, comes from the aim that has guided the whole research programme so far: to give an AI objectives that become the AI's own objectives. A GAI could then invent new objectives, such as self-replication or better information storage and retrieval to speed up its own actions. A GAI could then be said to have its own preferences and would effectively be a form of person. It need not be malevolent, but equally it need not be motivated by benevolence towards humans either.

In short, the solution proposed by Russell to the problem of human redundancy is to engineer the GAI so it cannot have preferences of its own. It is engineered to enact the preferences of individual humans. The final third of the book is devoted to exploring the ramifications of a set of principles for benevolent machines:
  1. The machine’s only objective is to maximise the realisation of human preferences. 
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behaviour.
These principles are there to inform design but also regulation and the formulation of research programmes. Russell deals with these and the societal efforts and cooperation that will be required to deal with unintended consequences as well as the actions of some less than benevolent humans who may want to create a GAI that has a goal other than to satisfy human preferences.  The discussion addresses the challenges of multiple artificial agents and multiple and incompatible human preferences. Of course Russell does not definitively solve any of these but indicates routes to resolution.
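To make the flavour of these principles concrete, here is a toy sketch of my own (not Russell's formal proposal): a machine that is uncertain which preferences the human holds, treats observed human behaviour as evidence, and defers to the human while it remains uncertain. All names and numbers are purely illustrative.

```python
import random

random.seed(1)

# Two hypotheses about what the human wants; the machine does not know which is true.
CANDIDATES = {
    "likes_tea": {"tea": 1.0, "coffee": 0.0},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0},
}
belief = {"likes_tea": 0.5, "likes_coffee": 0.5}   # principle 2: initial uncertainty

def human_choice(true_pref, noise=0.1):
    """The human mostly picks what they prefer (principle 3: behaviour as evidence)."""
    best = max(CANDIDATES[true_pref], key=CANDIDATES[true_pref].get)
    other = "coffee" if best == "tea" else "tea"
    return best if random.random() > noise else other

def update(belief, choice, noise=0.1):
    """Bayesian update of the machine's belief about the human's preferences."""
    likelihood = {
        h: (1 - noise) if max(CANDIDATES[h], key=CANDIDATES[h].get) == choice else noise
        for h in belief
    }
    total = sum(belief[h] * likelihood[h] for h in belief)
    return {h: belief[h] * likelihood[h] / total for h in belief}

for _ in range(5):                      # watch the human a few times
    belief = update(belief, human_choice("likes_tea"))

# Principle 1: act only to further (the machine's estimate of) human preferences,
# and defer to the human while still too uncertain.
if max(belief.values()) > 0.9:
    print("confident enough to act on:", max(belief, key=belief.get), belief)
else:
    print("still uncertain, so ask the human:", belief)
```

The point of the toy is only that preference uncertainty, learning from behaviour, and deferring while uncertain are what keep such a machine corrigible; Russell's treatment in the book is, of course, far more general.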

It is important that this book shows there is an approach in which humanity can reap the benefits of AI research without being subjugated to a superior intelligence or being forced to implement an AI ban. It is a positive vision that I hope informs and shapes our approach to this technology. But there is urgency, because we do not know when or where the breakthrough to GAI will take place. If it happens within the current standard model pattern of research and application then disaster threatens.





Thursday, 18 April 2019

Choosing a voting system

Voting rules range from the very simple, such as the plurality rule (commonly known as "first past the post"), to a plethora of more or less complex schemes. These schemes attempt to capture notions of fairness such as proportional representation and majority preference. Voting takes place for many purposes; even choosing a voting system. Whatever the more general position may be, if there is one, proportional representation has been Liberal Democrat policy on voting reform at all levels of government for so long that it has become an entrenched view. In its defence, the policy is well worked out in the sense that there is a specific actionable proposal, but the major problem has been in getting wider agreement on implementation. The Liberal Democrats went into the 2010 election with a specific position on electoral reform. The 2010 manifesto promised to:
Change politics and abolish safe seats by introducing a fair, more proportional voting system for MPs. Our preferred Single Transferable Vote [STV] system gives people the choice between candidates as well as parties. ...
Is there a liberal counter view? Yes, and it is embedded in a liberal classic: Karl Popper's major contribution to political philosophy, "The Open Society and its Enemies". In this book Popper strenuously, but only briefly, defends the two party system and simple majority voting (plurality rule). In 1988 the Economist invited Popper to return to these themes in The open society and its enemies revisited. In this piece he expands on his defence of the two party state and majority voting with detailed objections to proportional representation. Here I will engage with the presentation of the same arguments in an update provided by David Deutsch in his wide ranging book The Beginning of Infinity. Although I will conclude that the criticism of proportional representation is not as conclusive as Popper and Deutsch claim, and that they underplay the weaknesses of "first past the post" or plurality voting, the critical engagement with their arguments brings out some points that strengthen the case for a proportional system.

Choices

In Chapter 13 of "The Beginning of Infinity" Deutsch devotes much space to working through examples to illustrate just how difficult it is to design a voting system that all parties agree to be fair. He then moves on to a more formal account using the setting of social choice theory. This leads him to present and discuss the Arrow no-go theorem. This states that there is no rule mapping the preferences of the individuals in a group on to the preferences of the group as a whole that can satisfy a complete set of five intuitive, desirable and rational properties. These desirable properties (axioms) are:
    1. The rule should define a group’s preferences only in terms of the preferences of that group’s members.
    2. The rule must not simply designate the views of one particular person to be ‘the preferences of the group’.
    3. If the members of the group are unanimous about something – in the sense that they all have identical preferences about it – then the rule must deem the group to have those preferences too.
    4. If, for a given definition of ‘the preferences of the group’, the rule deems the group to have a particular preference – say, for A over B – then it must still deem that to be the group’s preference if some members who previously disagreed with the group (i.e. they preferred B) change their minds and now prefer A too.
    5. If the group has some preference, and then some members change their minds about something else, then the rule must continue to assign the group that original preference.
Remarkably, Arrow proved this set of five axioms is logically inconsistent. That is, no voting system can satisfy them all. This is a blow to hopes of a rational foundation for social choice theory, but that has not stopped research and the development of a variety of voting rules.
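A small illustration of the underlying difficulty (my own example, not Deutsch's): with three voters and three options, the pairwise majority preferences can be cyclic, so no group ranking is consistent with them all. A minimal Python sketch:

```python
from itertools import combinations

# Three voters; each ballot ranks the options from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_winner(ballots, x, y):
    """The option that a majority of voters rank above the other."""
    votes_for_x = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if votes_for_x > len(ballots) / 2 else y

for x, y in combinations(["A", "B", "C"], 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(ballots, x, y)}")
# A beats B, B beats C, and C beats A: the group's pairwise preferences form a cycle,
# so no ranking of the three options respects every majority verdict.
```

This is not Arrow's proof, but it is the kind of conflict between individually reasonable majority verdicts that any aggregation rule must resolve by giving something up.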

In the UK the Electoral Reform Society (ERS) accepts the theorem but recognises the need to get on with voting system definition and evaluation; otherwise how would representative democracy work? An alternative, and more radical, reaction is to reject the social choice setting of the problem. This is implicit in "Open Society and its Enemies" but is made explicit by Deutsch.

The ERS introduce or exploit criteria such as locality and proportionality to rank and make trade-offs between voting systems. As could be anticipated from this approach, the ERS has come out in favour of a compromise that includes an element of proportionality and an element of locality: the Single Transferable Vote system, as adopted by the Liberal Democrats. In this system the more proportionality is gained the less locality, and vice versa, with the number of seats per constituency as the free parameter. In practice, the final position chosen will be a pragmatic trade-off with public opinion and wider political support.

    Proportional representation

    After presenting Arrow's no-go theorem Deutsch attacks rather than discusses proportional representation (PR) voting systems. It is evident that PR shares weaknesses with all voting rules that are formulated in a social choice setting. Most of Deutsch's specific objections are effectively, if not absolutely conclusively, answered by the ERS in its pamphlet PR Myths. The first objection Deutsch makes is, however, not addressed in the pamphlet. This is:
    ... the ‘More-Preferred-Less-Seats paradox’, in which a majority of voters prefer party X to party Y, but party Y receives more seats than party X.
Deutsch neglects to mention that this is also a weakness of the plurality rule. The system that avoids this particular paradox is Condorcet ranking, of which there are a number of variants, just as there are for STV. It would take us too far from the present discussion to examine Condorcet methods in detail. They have a formal weakness, which their originator discovered: a ranking is not guaranteed to exist (as in the cyclic example sketched above). However, this risk has recently been mitigated by factoring preference structures into realistic population models, in work carried out by Partha Dasgupta and Eric Maskin within a very formal version of social choice theory. What this objection highlights, however, is that STV and other systems with similar ranking rules will conflict with an alternative view of fairness to PR, namely that preference should be given to option X over Y if the majority prefers X to Y.

    Beyond Social Choice

    At this point Deutsch makes an interesting move against social choice theory itself that gains support from Popper's wider philosophy:
    It [social choice theory] conceives of decision-making as a process of selecting from existing options according to a fixed formula (such as an apportionment rule or electoral system). But in fact that is what happens only at the end of decision-making – the phase that does not require creative thought.
The creative aspect is what happens before a set of choices is put before the electorate. Arguments, explanations, economic theories, values, public opinion and so on all contribute to formulating the set of options. It is argued that the quality of the options on offer is more important than the voting rule that aggregates the preferences. Deutsch argues further that the weaknesses of the social choice setting, and the paradoxes associated with it, mean that a more fundamental and wider criterion for voting is required. The more fundamental criterion is, according to Deutsch:
    Popper’s criterion that the system facilitate the removal of bad policies and bad governments without violence.
Whether this is the only or dominant criterion, it is clearly a valuable one. Deutsch adopts Popper's arguments that a plurality voting rule matches this criterion better than any proportional rule. The ERS pamphlet PR Myths seeks to address the Popper objection directly. They frame it differently, as "PR doesn't let you kick out an unpopular government", which moves it back into the social choice setting, but it is close enough to the criterion for the purpose of the present argument. The counter-evidence presented by the ERS is straightforward observation. Countries that practise PR not only get changes of government; such changes are not even rare. This is followed up by the further observation that plurality rules have historically failed to remove unpopular governments and have removed popular ones. Popper himself originally had the excuse that these observations were not available at the time of the first edition of the "Open Society", but that was no longer true in 1988, and it is certainly not the case for Deutsch. It is understandable that in the 1940s the USA and the UK were taken as the outstanding examples of stable and effective liberal democracies, but since then the problems with a two party system underpinned by a plurality voting rule have become evident.

Seen from the standpoint of Popper's criterion, both plurality and PR can in practice provide adequate mechanisms for removing policies or governments without resorting to violence, but in both cases other institutions, constitutional checks and balances, and a fertile public sphere of debate and ideas are required. What this discussion is leading towards is the need to manage expectations of what can be achieved merely through adopting one voting rule rather than another. Behind Popper's criterion there is a philosophy closely related to his theory of knowledge. This requires a diversity of theories, conjectures, ideas and, in the political context, policies. This is facilitated by a system that encourages the growth of a diverse number of groupings that can formulate, criticise and propose solutions to the challenges facing society. The plurality voting rule tends to lead to two major party blocks. Other sources of ideas and opinion can be safely ignored by these two major groupings. In the past, during the formative years of liberal democracy, the major groupings were the Conservatives and the Liberals, but for the last seventy years they have been the Conservatives and Labour. What also happens is the absorption by the major parties of more extreme positions, as can be seen clearly in the current situation in the UK and to some extent in the USA. In the UK the Conservatives contain a substantial and influential group of English nationalists, and currently the Marxist faction leads the Labour party. So there is some internal diversity, but it is a management issue for the main parties, not a constructive component of the national debate. Smaller parties are traditionally ignored except when there are small majorities or hung parliaments.

For a well-functioning constitutional democracy it is this suppression of opinion and diversity that is the damning consequence of the plurality voting rule. PR is much stronger at nurturing small parties and opinion groups. What is needed is further reform of institutions to make sure that this diversity leads to more robust debate and effective policies.

Plurality voting rules should go, but that leaves Condorcet methods as an alternative to PR, and there are numerous versions of PR. It is easy to see that Condorcet methods will tend to correct for any formation of two polarising groups, but whether these methods provide for and nurture diversity of opinion awaits further analysis. In a social choice setting the fairness encapsulated in the Condorcet mechanism is just as valid as the proportionality notion with which it conflicts. Consideration should be given on a case by case basis as to which is the appropriate voting rule, whether in general elections, internal voting in Parliament and other bodies, or at different levels of government. Not only theory but also the evidence from practice shows PR to have an established advantage in the creation of a diverse, multi-party public sphere, which together with the right institutions should provide a robust foundation for a liberal order. 

    Thursday, 28 March 2019

    A critical rationalist approach to policy creation


The starting point for this piece is "What use is Popper to a politician?" by Bryan Magee. Magee is a long-standing advocate for Popper and therefore for the philosophical position known as Critical Rationalism (CR). Without wishing to take away from Popper's contribution, I prefer to use the term CR to make clear that I am not advocating the opinions of a person but arguing for a constructive philosophical approach to policy formulation. 

Magee is that rare thing in the UK: a public intellectual who was also an elected politician, first as a Labour MP and then for the newly formed SDP. The article referred to above was written some ten years after his career as an MP but is strongly coloured by his social democratic position. He seems blind to the non-social-democratic liberal position, which is mine. However this is not too grave, as the philosophical stance being advocated is available to anyone who is open to rational argument and evidence. Magee himself mentions Margaret Thatcher as the sort of radical conservative who could be open to this approach.

In much of what follows I will follow Magee quite closely, but my formulation is adapted to the creation of policy proposals and amendments: the sort of work that takes place prior to a party conference. One of my motivations for this piece is dissatisfaction both with that process and with much of its output.

CR itself is a subject with its own extensive literature, but you should be able to pick up the essential points relevant to policy formulation in what follows. CR was developed via a critique of positivism and induction in the natural sciences. It presents the scientific enterprise as an exercise in problem identification and resolution, with the important caveat that the solutions are provisional and need to be subjected to continuing critical review. Policy formulation is not a natural science, but the fallibility of proposed policies should be just as evident.  

So, in policy too, first identify and formulate the problem with care. This means not jumping to solutions or using the issue to display indignation or personal virtue. The articulation should be as clear and jargon-free as possible. For example, in the case of the health consequences of diet, it is necessary to formulate the problem, if there is one, as objectively as possible. If people are eating too much, that is their concern. If they are eating too much and damaging their health, that too is their concern. If over-eating is leading to strains on the health service, and thus to higher taxation, then that is potentially a real policy problem and it is possible to start to address it. But even here we do not stop. Having formulated the problem better we can now quantify it. This is not just looking at the evidence but looking at the quality of prognostics and the assumptions made. There are always assumptions.


The next step is to formulate policy proposals. It is the creative step, but it is based on the best available knowledge. This is not just data but economic theory, philosophy, science, knowledge of how government works, and whatever else can be brought to the task. Other, softer and more value-oriented considerations should not be neglected. For example, ask whether it would be legitimate for free individuals to be constrained by a proposal that addresses a problem by managing a statistical distribution across the whole population. It is not possible to derive a solution from the body of knowledge alone, hence the creative element. In getting proposals formulated anything goes: debate with yourself, debate with others, write op-eds and get feedback, and so on. The outcome should be a proposal or set of proposals that is clearly articulated, defendable and actionable. By actionable is meant that a policy is a solution to a problem and therefore, if acted upon, will, or at least intends to, solve that problem.
Done? No. The proposal needs to be subjected to further critical analysis. An important mechanism is to try out the proposals against implementation scenarios. This is motivated by the recognition that policies have unintended consequences. That is, the solution may not be robust to small changes in the implementation scenario, or it may have a negative impact if implemented in a certain way. The outcome should be a ranked set of actionable policy proposals with supporting explanations and evidence. But not always.

It is quite possible that after much hard work and critical analysis no actionable policy proposal emerges. So, have we been too critical? Very unlikely. Remember that fallibility at each step is never eliminated. The problem as formulated may not have an actionable solution, or the actual problem may not have been identified after all. To return to the impact of diet on the health service: is the problem perhaps with how the health service is structured? For example, the health service has no, or only weak, personal responsibility mechanisms. The other realisation is that in the end it may be better to do nothing. The process will not have been a waste, because you will know why you are proposing no action. However in many, and I would anticipate the majority of, cases the approach outlined here will indeed help in identifying strong and defendable policy proposals.

My main motivation for writing this is the often poor, sometimes dismal, quality of policy proposal writing. If in turn you are critical of the CR approach, or of my formulation of it, then you too are participating. Thank you.

    Saturday, 3 November 2018

    Sustainable Energy - without the hot air: 10 years on

The free-to-download ebook version of Sustainable Energy — without the hot air was last issued on the 3rd of November 2008. Ten years have passed since this important and influential science-based look at sustainable energy was updated. I am sure that were it not for the early death of David MacKay, he would have built upon this achievement and updated and evolved his position. There have been calls for a collective effort to keep the book up to date, notably from Chris Goodall. This has not gained very much traction, but books do tend to need a high level of personal ownership to advance.

Let's look at the approach MacKay took, as it has been particularly robust to intervening events. Rather than focus on what 2008 technology could do, he used physics to bound what is in principle achievable. He attempted to answer the following question as honestly as possible: how much energy is there to be harvested practically and in a sustainable form? Questions of social, economic and wider environmental impacts were not ignored. These factors led MacKay to propose a number of plans, whose feasibility and affordability will depend on technical and behavioural developments. They each add up to 70 kWh per person per day (estimated consumption in 2008 was 125 kWh per person per day) and are shown below.

David MacKay provided a ten-page summary, and the details of the figure above are explained there. The presence of Nuclear and "Clean Coal" will not please environmental purists, but MacKay's approach is pragmatic and clear that dealing with climate change will require major sacrifices, of which green sensitivities are not the most onerous. For any of these plans to be feasible, MacKay's analysis points to a 25% improvement in heating efficiency, a 50% improvement for transport, and other efficiency savings that must reduce overall energy consumption to about 60% of 2008 levels (the plans' 70 kWh per person per day is roughly 56% of the 125 kWh per person per day consumed in 2008). His analysis does not show that this will be easy to achieve, nor that it was achievable then or is now through established technology. He does argue persuasively that there are no barriers in physics. This analysis lays bare, although it is largely implicit in the book, the need for political leadership and imagination that will galvanise humanity to meet the challenge.

Let's pause to reflect on what an achievement the book was, how well it stands up against developments, and what a loss David MacKay's early death was. Since 2008 the evidence for climate change and its human-generated causes has gained much wider informed acceptance. The case for a technically grounded response to the challenge has never been stronger, and MacKay's example needs to be followed and acted upon. Science provides not only the analysis but also the foundation for the technology that the solution to the climate change challenge will require.

    Tuesday, 18 September 2018

    Reclaiming and renewing liberalism: Free Trade



Free Trade, as manifested in the repeal of the Corn Laws, was a founding event for the Liberal Party. There are signs, however, that liberals are becoming wary of the phrase. This is a clear symptom of the malady that The Orange Book was seeking to address fourteen years ago. The ailment is that key liberal ideas have been appropriated and distorted by the Conservative Party and, in the case of the phrase "Free Trade", by the right wing of that party. So, by association, in the minds of many with a weak grasp of history the phrase has become in the UK a right-wing Conservative slogan, and is tainted for that reason.

    The phrase "Free Trade" on its own is open to a number of interpretations and requires context to clarify which interpretation is meant. On the Conservative right the freedom being emphasised is freedom from regulation. Liberals have long since moved on from a 19th century laissez-faire philosophy. Freedom should be understood as the liberty of all parties to enter into an exchange or not. This means commerce and trade is in harmony with the core liberal value of individual liberty. Domestically, it has long been recognised that the rule of law is required for competitive markets to flourish. Internationally a system of rules is required to allow trade between and across nations to be effective. Agreements that include harmonisation of regulations are necessary but we should be limiting them where possible to concerns like safety in the food chain or of electrical products, for example. Regulations should not be used to skew advantage to one party or a particular group.

Liberalism is a dynamic and adaptable philosophy. Today's goods and services are of a complexity that makes a laissez-faire approach, suitable for corn, unfit for their exchange. It is this dynamic and forward-looking liberalism that is presented in A manifesto for renewing liberalism. But here too there is a sign that the phrase "Free Trade" is not to be used, with "Open Markets" as a replacement. Are we to give in to the loud and strident Conservative right and cease talking directly about Free Trade and its benefits?


No: while looking forward, it is important to mark and take pride in key historical achievements. To the fore among these is the establishment of Free Trade as the engine of growth in the 19th century, and then its resurgence after the Second World War based on a system of agreements and rules that recognise and protect human freedoms and rights. So let us not be bullied out of using the phrase, and let us support Free Trade as the strongest economic mechanism yet discovered for fostering growth and relieving poverty.

    Tuesday, 7 August 2018

    Values: liberty and its supporting mechanisms



Liberalism is often characterised as a compromise philosophy, with individual freedom in essential conflict with egalitarian views on justice. The values that are thought of as liberal are often represented as forming some watered-down consensus in which everything is a compromise for a comfortable life. These values are gathered into lists such as that provided for school projects on "British Values". According to Ofsted, British Values are:
    • Democracy
    • The rule of law
    • Individual liberty
    • Mutual respect for and tolerance of those with different faiths and beliefs and for those without faith.
    This list seems fine on first reading and serves well as a starting point. Here, I will use it as such. 

Here I will defend the view that liberalism can form a coherent philosophy with a system of values distinct from socialism (democratic or not) and conservatism.

Personal liberty is a fundamental liberal value, indeed almost tautologously so, and does not need an argument for its inclusion. It is on the Ofsted list as their third bullet, but what is the status of the others on the list? I acknowledge that all are important, with a reservation on the fourth, and mostly to be cherished, but are they equally fundamental, or even values at all? Of the possible meanings of "values" consider two (Oxford):
    1. One's judgement of what is important in life; 
    2. Principles or standards of behaviour.
Each item on the Ofsted list can be interpreted consistently with the first meaning. Only the fourth on the list is clearly consistent with the second meaning, though the others could be coaxed into an acceptable form. Equality is omitted from the Ofsted list, as is freedom from violence.

Is it possible to do better with our values than provide a bland list? Can some structure be provided, or a theory that explains away the perceived conflicts, to provide a coherent mesh of values?

Here the value adopted to start the critical discussion is personal liberty for all, where each individual is limited only by the requirement to do no harm to others. By personal liberty I mean (provisionally) freedom of choice and action. All, so far, quite uncontroversial and consistent with Mill's position in the classic On Liberty. The question to be addressed here is: what are the other candidates for liberal values, and are any as fundamental to liberalism as personal freedom? The obvious ones that spring to mind are democracy, equality, freedom from violence, the rule of law and economic freedom. Economic freedom is included because some claim it as a fundamental freedom. In addition, under the name Capitalism, economic liberalism is often described as an ideology by its opponents, often communists but, significantly, milder socialists and radicals too.

Of the additional candidates outlined above, the one that I think would get the most support as a value is democracy. Indeed there are democrats on the extremes (but not always too extreme) of the political left and right who may even value democracy over personal liberty. So is democracy a candidate liberal value, or is it something else, such as a supporting concept or an enabling mechanism?

It is a tenet of classical liberalism that democracy, just as any form of government, must be held in check by constitutional measures to avoid effects such as the tyranny of the majority. This in itself is not too different from the constraints on personal liberty, but here the reason to constrain democracy is to protect personal liberty, making liberty the prior, or more fundamental, value. So, why democracy? The strongest reason for a democratic political constitution is to enable the non-violent removal of bad governments (as argued convincingly by Karl Popper). It is desirable for democratic governments to make good positive decisions, but scepticism about their capacity to deliver a wide range of services effectively, as opposed to laws and regulations, is justified by much evidence. Not adopting democracy as a fundamental value is not to devalue or undervalue it. Supporting mechanisms are essential for values to have their effect.

This also means that freedom from violence is more fundamental than democracy. A constitutional democracy must have mechanisms that provide, at regular intervals, the opportunity to change the government. On democracy as a decision-making mechanism, there is sufficient analysis of voting systems to show that no generally applicable voting rule delivers optimal decisions. Democracy therefore cannot be defended as a generally effective decision-making mechanism, and what the people vote for needs to be constrained by the more fundamental freedoms of personal liberty and freedom from violence. Although not optimal, voting does provide a way to arrive at pragmatically acceptable outcomes in many cases. This pragmatic justification means there are a number of candidate constitutional arrangements that have evolved, or been chosen, to implement a democratic order.

All this makes democracy an essential enabling mechanism for ensuring a satisficing degree of personal liberty and a considerable measure of security from violent acts. It could be argued that freedom from violence is part of personal liberty, but they are to a considerable extent independent. In a social situation less enlightened than the one we enjoy in western Europe, and some places elsewhere, we can well imagine a preference for freedom from violence over personal freedom, and even the sacrifice of freedom to gain security. Democracy, correctly implemented, provides some guarantees against violence by a government against its own population, but for more general protection we need to look to the Rule of Law. The Rule of Law is, like democracy, not a liberal value in itself but a mechanism that constrains the freedom of persons or groups while leaving a valued measure of personal liberty to be enjoyed in safety and security.

Economic freedom is often taken as a traditional liberal value. There is ample historical evidence that, given the opportunity, reasonably free persons will organise spontaneously into cooperative groups that eventually give rise to market structures at the local level. In time these groups interact to produce a wider market order. Analysis of market mechanisms shows that they enable cooperation and the creation of wealth without coercion. The wealth created by the market provides the resources that make practicable the desires and actions of free persons. Thus economic freedom is an enabler or mechanism, and that remains the case if the emerging economic order is labelled Capitalism. It is an enabling mechanism, not a value let alone an ideology, and as such it requires regulation and a legal framework. Within this context of values and mechanisms, individual freedom becomes a concept of individual liberty that is enhanced through equality of opportunity, the rule of law and open markets.

So, to summarise: personal liberty and freedom from violence are distinct, if mutually influencing, values; one fundamental value has therefore been added that is not on Ofsted's list. Democracy, economic freedom and the Rule of Law are enabling mechanisms: democracy and the Rule of Law are essential, alongside economic freedom as instantiated in the competitive market order. That leaves the last Ofsted value. It looks like an awkward add-on, and I would argue that it is essentially a polite statement of a desirable attitude, fundamentally derived from the moral equality of individuals. However, one aspect of it is fundamental, and that is tolerance. Tolerance is what enables the other aspects of liberalism to work together as a system.

    I end by proposing an alternative ranked set of fundamental values to be guaranteed by a constitutional liberal democratic order:
1. Freedom from violence
2. Personal liberty
3. Equality of opportunity
4. Tolerance of other individuals or groups that are not causing harm
    These being supported by the mechanisms of democracy, rule of law and markets.


    Friday, 13 July 2018

    Liberal leadership in ideas: the welfare state


I have never liked the term "welfare state", with its paternalist tone, preferring "social state", derived from the German Sozialstaat and fitting with the concept of the social market as a regulated, fair and free economy. It was interesting to read today in The Economist (Repairing the safety net - The welfare state needs updating) that the founding spirit of the UK welfare state, the Liberal William Beveridge, didn't like the term either. More importantly, in that article and the introductory leader the point is made that it was the liberal philosophy of Beveridge that informed the identification of the need, scope and form of the UK welfare reforms of the late 1940s. The main thrust of these articles in The Economist is that we must return to liberal values and to creative thought on the mutually supporting nature of effective welfare and effective wealth creation in order to redesign the welfare state.

As chance would have it, this is a topic I addressed in a recent post, Hayek and the welfare state, in which the argument was made that a principled and humane approach to welfare was not only compatible with but should be an essential complement to the market economy. This is very much the tone of the pieces in The Economist too.

Years of tinkering and short-term, politically motivated meddling from left and right have led to UK social protections becoming a bureaucratic entanglement that is costly and provides at best a second-rate service. In addition, the weakening of the liberal tradition and of the Liberal Party, and the consequent decrease in influence, have contributed to this state of affairs. This is not the time to roll back to 1945 in terms of detail and implementation; just as the market has evolved to become more diverse and international, social protection must also be thought through again. Neither is it a mere matter of picking up The Orange Book, though it does provide a valuable starting point. What should be returned to is the core liberal philosophy: its humane side, personified by Beveridge dealing with the "Five Giants" of disease, idleness, ignorance, squalor and want, and its competitive market foundation for the creation of wealth. We can look to rounded enlightened figures such as Adam Smith and John Stuart Mill as well as the more austere Friedrich Hayek. Nor should the still very pertinent thinking of the German social market economists, such as Walter Eucken, be neglected, along with others across the world. One of the most recent champions of liberal enlightenment values, Steven Pinker in Enlightenment Now, was at pains to stress that it is the combination of social care and wealth creation that has given us the progress in global well-being that he documents in such detail.

Reform is needed to deal with the affordability and fairness of a social provision system that is designed to provide safeguards which foster wealth creation rather than undermine it. The social safety measures must provide quality services that people are proud to use, and proud of the society that provides them. Services such as the NHS do not provide this quality and currently seem condemned to be incapable of delivering it, as outlined in a recent opinion piece by Matthew Parris.

To repeat the essential points from my earlier post, four principles can be proposed to help design social insurance that enhances market dynamism and economic freedom in a Free-Market Welfare State:
    • Risk and Entrepreneurship.  As the term “safety net” suggests, social insurance can enhance risk-taking and entrepreneurship by ensuring failure is not catastrophic.
    • Search and Adjustment Costs. Workers who are laid off in periods of market restructuring should be ensured a smooth transition through appropriate wage replacements and active labour-market policies. While your job may not be secure, your employment is. 
• Benefit Portability. Markets work best when social benefits follow the individual and are detached from any particular firm or market structure. (In the UK many people are trapped in a firm due to penalties imposed on their pension entitlement. In contrast the German system decouples this, as recommended.)
• Migration Robustness. Welfare benefits should be payments or services resulting from insurance funds to which people have contributed while working in the host country; migrants who claim such benefits should not therefore be perceived as a great problem. There need to be further humanitarian safeguards for refugees and others in dire need, as opposed to economic migrants.


    Conditional probability: Renyi axioms


In earlier posts the relationship of the material conditional to conditional probability, and the role of Leibniz in the early philosophy of probability, were discussed. In both posts the case for taking conditional probability as fundamental was made or implied. How far this will resolve the difficulties in combining aspects of propositional logic with probability theory remains to be seen, but it is worth taking time to explain a full axiomatisation with conditional probability as fundamental. A further consideration is the clarification of the distinct roles of conditional probability in the epistemic and the objective (ontological) interpretations.

In his Foundations of Probability, Renyi provided an alternative axiomatisation to that of Kolmogorov which takes conditional probability as the fundamental notion; otherwise he stays as close as possible to Kolmogorov. Renyi thus provides a direct axiomatisation of quantitative conditional probability. In brief, Renyi's conditional probability space $(\Omega, \mathcal{F}, \mathcal{G}, P(F | G))$ is defined as follows. The set $\Omega$ is the sample space of elementary events, $\mathcal{F}$ is a $\sigma$-field of subsets of $\Omega$ (so far as with Kolmogorov), and $\mathcal{G}$, a subset of $\mathcal{F}$ (called the set of admissible conditions), has the properties:
    (a) $ G_1, G_2 \in \mathcal{G} \Rightarrow G_1 \cup G_2 \in \mathcal{G}$,
(b) $\exists \{G_n\} \subseteq \mathcal{G}$ such that $\bigcup_{n=1}^{\infty} G_n = \Omega,$
    (c) $\emptyset \notin \mathcal{G}$,
    $P$ is the conditional probability function satisfying the following four axioms.
    R0. $ P : \mathcal{F} \times \mathcal{G} \rightarrow [0, 1]$,
    R1. $ (\forall G \in \mathcal{G} ) ,  P(G | G) = 1.$
    R2. $(\forall G \in \mathcal{G}) , P(\centerdot | G)$ , is a countably additive measure on $\mathcal{F}$.
    R3. $(\forall G_1, G_2 \in \mathcal{G}) G_2 \subseteq G_1 \Rightarrow P(G_2 | G_1) > 0$, $$(\forall F \in \mathcal{F}) P(F|G_2 ) = { \frac{P(F \cap G_2 | G_1)}{P(G_2 | G_1)}}.$$
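Before asking what this buys us, here is a minimal computational sketch (my own illustration, not an example of Renyi's): a finite conditional probability space using the uniform rule $P(F|G) = |F \cap G|/|G|$ on a six-point sample space, with every non-empty subset admissible, and numerical spot checks of R1.-R3.

```python
from itertools import chain, combinations

# A minimal finite illustration (my construction, not Renyi's own example):
# Omega is a six-point sample space, F is its power set, the admissible
# conditions G are all non-empty subsets, and P(F|G) = |F intersect G| / |G|
# is the classical (uniform) conditional probability.
Omega = frozenset(range(1, 7))

def powerset(s):
    """All subsets of s, as frozensets (the sigma-field F on a finite space)."""
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

F = powerset(Omega)
G = [g for g in F if g]                 # admissible conditions: non-empty sets

def P(f, g):
    """Renyi-style conditional probability for the uniform case."""
    return len(f & g) / len(g)

# R1: P(G|G) = 1 for every admissible condition G.
assert all(P(g, g) == 1 for g in G)

# R2 (spot check): P(.|G) is additive over disjoint events.
A, B, C = frozenset({1, 2}), frozenset({5}), frozenset({1, 2, 3, 4, 5})
assert abs(P(A | B, C) - (P(A, C) + P(B, C))) < 1e-12

# R3: for admissible G2 contained in G1, P(G2|G1) > 0 and
#     P(F|G2) = P(F intersect G2 | G1) / P(G2 | G1) for every event F.
G1, G2 = frozenset({1, 2, 3, 4}), frozenset({1, 2})
assert P(G2, G1) > 0
assert all(abs(P(f, G2) - P(f & G2, G1) / P(G2, G1)) < 1e-12 for f in F)
```

Running the script should finish silently; a violated axiom would trip an assertion.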
What has this gained over the better-known Kolmogorov formulation?

A number of examples have been highlighted by Stephen Mumford, Rani Lill Anjum and Johan Arnt Myrstad in What Tends to Be, Chapter 6. They analyse these using absolute probabilities as fundamental, that is, within a Kolmogorov-type framework; the examples will be revisited here using Renyi's formulation. The critique of Mumford et al. is based on the development of an ontological point of view that has the potential to clarify physical propensities as degrees of causal disposition. The explicit treatment of the examples within Renyi's axiomatisation shows that, by adopting it, the path is open to mathematically modelling physical propensities as causal dispositions.

    Here is the first example that is thought to indicate a problem with absolute probability (absolute probability will be denoted by $\mu$ below to avoid confusion with Renyi's function $P$).
P1. If $\mu(A) = 1$ then $\mu(A | B) = 1$, where $\mu$ is Kolmogorov's absolute probability.
We can calculate this result from Kolmogorov's conditional probability postulate as follows: since $\mu(A) = 1$ gives $\mu(B \setminus A) = 0$ and hence $\mu(A \cap B) = \mu(B)$, we have (for $\mu(B) > 0$) $\mu(A|B) = \mu(A \cap B)/\mu(B) = \mu(B)/\mu(B) = 1$. Why is this problematic? Not at all if you stay inside the formal mathematics, but it is if $\mu(A|B)$ is to be understood as a degree of implication: is it not reasonable that there should exist some condition under which the probability of $A$ decreases? A consequence of Renyi's theory is that these Kolmogorov absolute probabilities can be obtained by conditioning on the entire sample space
    $$ \mu(A) \equiv P(A|\Omega).$$
Then $\mu(A)=1$ means $P(A|\Omega)=1$, and again (by R3., since $P(A|\Omega)=1$ forces $P(A \cap B | \Omega) = P(B | \Omega)$)
$$P(A|B) = { \frac{P(A \cap B | \Omega)}{P(B |\Omega)}}=1 $$
independently of the choice of $B$, which must be a subset of $\Omega$; thus the same result is obtained. However, if we are not working within a global conditioning on the entire $\Omega$ but on a proper subset of $\Omega$, call it $G$, then $\mu(A)=1$ has no consequence for $P(A|G)$. In addition it is now possible to pick another conditioning subset of $\Omega$, $G'$, such that $G' \not\subseteq G$; then R3. does not apply and the values of $P(A|G)$ and $P(A|G')$ have to be evaluated separately. That is, it is a modelling choice. How they are evaluated depends on whether an epistemic or an objective interpretation of $P$ is being used.
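A simple illustration, in the spirit of Renyi's unbounded-measure examples (the construction is mine, not one taken from the sources discussed here): let $\Omega$ be the natural numbers, let $\mathcal{G}$ be the family of finite non-empty subsets and put $P(F|G) = |F \cap G|/|G|$. This satisfies properties (a)-(c) and axioms R0.-R3., yet there is no absolute probability at all, since $\Omega$ itself is not an admissible condition. For $A$ the set of even numbers, for instance,
$$P(A \,|\, \{2,4,6,8\}) = 1, \qquad P(A \,|\, \{1,3,5,7\}) = 0,$$
and since neither condition is a subset of the other, R3. places no joint constraint on the two values: each admissible condition has to be assessed in its own right.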

A further problematic consequence of Kolmogorov's conditional probability arises when $A$ and $B$ are probabilistically independent:
P2. $\mu(A \cap B) = \mu(A)\mu(B)$ implies $\mu(A|B) = \mu(A)$.
In general Renyi's formulation does not allow this analysis to be carried out, because the Kolmogorov conditional probability formula only holds under the special conditions of R3. Independence, in Renyi's scheme, is only defined with reference to some conditioning set, $C$ say, in which case probabilistic independence is defined by the condition
    $$ P(A \cap B |C) = P(A|C)P(B|C)$$
    and as a consequence it is only if  $B \subseteq C$ that
    $$   P(A|B ) = { \frac{P(A \cap B | C)}{P(B | C)}} = P(A|C)$$
that is, only if $C$ includes $B$. Therefore, $P(A|C)$ being large only implies that $P(A|B)$ is equally large when $C$ includes $B$, using the mapping to the material implication in propositional logic as shown in the earlier post.
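A small uniform example (again my own illustration) makes the restriction vivid. On $\Omega = \{1,\dots,6\}$ with $P(F|G) = |F \cap G|/|G|$, take $C = \{1,2,3,4\}$, $A = \{1,2\}$ and $B = \{1,3,5\}$. Then $P(A|C) = P(B|C) = \tfrac{1}{2}$ and $P(A \cap B | C) = \tfrac{1}{4} = P(A|C)P(B|C)$, so $A$ and $B$ are independent given $C$; but $B \not\subseteq C$, and indeed $P(A|B) = \tfrac{1}{3} \neq \tfrac{1}{2} = P(A|C)$.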

The third example, P3., is that regardless of the probabilistic relation between $A$ and $B$, a further consequence of the Kolmogorov conditional probability definition is that whenever the probability of $A \cap B$ is high, $\mu(A|B)$ is high and so is $\mu(B|A)$:
P3. $(\mu(A \cap B) \sim 1) \Rightarrow ((\mu(A|B) \sim 1) \land (\mu(B|A) \sim 1))$
As above, this carries over into Renyi's axiomatisation only for the case of conditionalisation on the whole sample space. If another conditioning set is used, call it $C$ again, then P3. does not hold in general. It does hold, or its equivalent does, when both $A$ and $B$ are subsets of $C$, but that is then a reasonable conclusion for the special case of both $A$ and $B$ being included in $C$.
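A concrete counterexample (my own, in the uniform setting): on $\Omega = \{1,\dots,10\}$ with $P(F|G) = |F \cap G|/|G|$, take $C = \{1,2\}$, $A = \{1,2,3\}$ and $B = \{1,2,4,5,\dots,10\}$. Then $P(A \cap B | C) = 1$, yet $P(A|B) = \tfrac{2}{9}$ and $P(B|A) = \tfrac{2}{3}$, since neither $A$ nor $B$ is a subset of $C$; the analogue of P3. only re-emerges when both are.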