Tuesday 15 March 2022

Energy Science and Technology priorities to achieve "net-zero" GHG emissions

I am no expert on energy and the environment, having spent the latter half of my career in information systems engineering; especially security, safety and automation. However, I have also worked as a trouble shooting systems generalist and it is this experience that I want to try to bring to the energy challenge in tackling climate change.

Grangemouth refinery
The government is committing to “Net Zero” greenhouse gas (GHG) emissions by 2050. This is good news but the means of achieving it are critical. To tackle climate change innovation is still urgently needed and it must come quickly, because implementation time scales for new technology in complex systems are so slow. It often takes more than a decade to get from implementation to operation, as can be seen from the proposed Small Modular Reactor power plant that will not see operation before 2030. However, even at this stage, given that the best scientific opinion has been clear on climate change for decades, there is still no consensus on the mix of energy solutions required or on priorities. There is continuing environmental movement opposition in principle to nuclear energy, despite its zero GHG emissions when in operation. Ensuring that the identified priorities are the right ones means recognising those environmental and societal factors that contribute to global warming and disentangling them from the real but different concerns about air quality and the pollution of oceans and waterways. The concerns about species extinction would also benefit from the clarity afforded by distinguishing between what is and what is not due to GHG emissions.

Rolls-Royce consortium concept Small Modular Reactor facility
We need to unblock the system to act with urgency but remain focused on a plan. All this requires an increase in tempo. Suggestions on how to do this have been provided in a report prepared for the Aldersgate Group: Accelerating innovation towards net zero emissions. It identifies six key actions for government policy to accelerate low carbon innovation in the UK:
  1. Increase ambition in demonstrating complex and high capital cost technologies
  2. Create new markets to catalyse early deployment and move towards widespread commercialisation
  3. Use concurrent innovations, such as digital technologies, to improve system efficiency and make new products more accessible and attractive to customers.
  4. Use existing or new institutions to accelerate critical innovation areas and co-ordinate early stage deployment.
  5. Harness trusted voices to build consumer acceptance.
  6. Align innovation policy in such a way that it strengthens the UK’s industrial advantages and increases knowledge spill overs between businesses and sectors
The report says that implementing these lessons will require a further increase in government support for innovation – through research, development and demonstration and through deployment policies to create new markets. Government does not have a good record in implementing complex infrastructure projects. As well as policies to create new markets, regulation and market correction should be implemented to channel the infrastructure and engineering capabilities of the oil and gas, defence and petro-chemical industries. As has been argued persuasively by the Nobel Prize winning economist William Nordhaus, many problems should be solved by robust carbon pricing, preferably through a carbon tax. This will release current market potential and create new market opportunities, as well as generating funding for innovation.

If cost were no obstacle and land and material resources were available, then current renewable energy technologies and storage methods would be sufficient. But this is not the case; there are trade-offs, and the costs and benefits of any course of action need to be weighed. There are many challenges and perhaps the greatest are political: convincing the populations of almost all countries in the world to sacrifice some of their current quality of life to mitigate a predicted greater cost, but one that will impact their grandchildren, and finding ways to ensure that the free-rider effect does not disrupt the good intentions achieved through the Paris Agreement. Even if these political challenges are met, further innovation will still be required; if they are not addressed, or only partially, then innovation becomes the only hope. However, the International Energy Agency (IEA) identifies a bottleneck in innovation: investment in R&D for low carbon technology is not growing. The IEA tracks key technologies and, of a set of 39, only 7 are on track to meet Paris targets, 20 need remedial action and 13 are off track. Those that are on track are: solar photovoltaics (PV), bioenergy for power, energy storage, electric vehicles (EVs), rail, lighting, and data centres. It must be emphasised that these are on track, not complete; further investment and maintained effort are required. To concentrate on the power domain, the following technologies are tracked by the IEA:

Now, not all are equally critical but there are systemic dependencies, so that the full benefit of net-zero carbon energy generation can only be delivered through efficient power transmission with capacity for storage. Obviously the problem with coal fired power is that we are still using too much of it, and this can only be solved by replacement technologies. For wind the key problem is not primarily technical, although there are improvements to be made, but regulation, planning and consultation. In those areas that are difficult to de-carbonise there will need to be compensating carbon capture technology. In nuclear, technical work to increase modularisation and scalability is required, as well as safety and security by design. The major objections to nuclear – waste, safety, security and cost – are being addressed but efforts must continue.

We need a short to medium term boost in funding to achieve the needed acceleration. This could be provided as a dividend from imposing a carbon tax or other robust form of carbon pricing. As well as providing revenue for R&D, it also provides a steering mechanism, because the tax will help correct for the failure of the energy market and provide a path to exploitation for the innovations. Speed of exploitation remains a challenge, with a quagmire of regulations, permissions and consultations to wade through in addition to the technical challenges that are always present in scaling up from proof of concept to operational system. Eventually an emergency situation will require emergency powers, but climate change is a slow motion emergency, both in the climatic development in response to increased greenhouse gas concentrations and in the climate's response to mitigation. In the end it will come down to whether there is the social and political will to carry through an emergency response.

Wednesday 22 April 2020

Preclusive action: liberty and the state of emergency

Corona, Covid-19, London, Locked, Cancellation

A good few years ago, when reading "Terror and Consent", the notion of preclusive action struck me as a key concept of Philip Bobbitt's, together with his interpretation of security in what he referred to as the Market State. An aspect of the book that is especially relevant today is its treatment of pandemics, although its main concern was clearly terror. Except for fringe conspiracists, nobody thinks the current pandemic was terror-instigated, but Bobbitt also discusses naturally occurring pandemics in the wider context of the role of the state in protecting civilians.

The outline below adapts freely from sections in Bobbitt's book. In addition to preclusive action the other key notion introduced by Bobbitt is the Market State - the affordable successor to the Welfare State. I hope this slight essay will encourage others to read or re-read his book.

A market state, as the term is used by Bobbitt, takes up the challenge of protecting civilians and places that protection at centre stage in the life of the State. This protection embraces not only policing and defence but also health and security of supply (food security, for example). The stakes can be high. We do not know the cost of the current pandemic, but an avian flu epidemic—whether engineered by a state and given to terrorists, or created by them directly (the genetic code of the 1918 avian flu that killed 50 million persons has now been posted on the Internet), or naturally occurring (as is currently the case)—strikes globally with a velocity that leaves little time for reactive measures. Similarly, leaving pandemics for the moment to consider an extreme terror act, a nuclear device detonated in a major twenty-first century city could dwarf the casualties at Hiroshima. These vulnerabilities have important implications not just for diplomacy, but also for precautionary interventions and anticipatory preemptions. In addition to deploying these preclusive tactics, market states (if our current sluggish nation state ever evolves completely into one) should be able to marshal many assets—relative nimbleness and dexterity in adapting to technological change, devolution, the use of private entities as partners, and global networks of communications and cooperation—that are denied to reaction-oriented nation states in their struggle against terror.

At the outset, we should be clear that anticipatory warfare is not the result of the development of WMD or delivery 
systems that allow no time for diplomacy in the face of an imminent reversal of the status quo. That might have been President Kennedy’s justification for threatening a preventative war against Cuba to prevent it from deploying ballistic missiles with nuclear warheads, but it would have been folly to have made a similar argument for action against the Soviet Union. Rather it is the potential threat to civilians — a market state concern — posed by either arming, with whatever weapons, groups and states openly dedicated to mass killing or a naturally occurring rapidly developing catastrophe that collapses the distinction between preemption and prevention, giving rise to anticipatory action whether war or emergency powers. 

Examples of failing to act preemptively and its consequences abound. For example, the USA could have stopped the genocide in Rwanda had it acted preclusively; by the time the killing was imminent it was too late. Those inspectors who were shocked by the progress of the Iraqi nuclear weapons program in 1991—by some accounts Saddam Hussein was a year away from deployment—must have been quietly thankful for the Osirak raid ten years earlier. Had Saddam Hussein acquired nuclear weapons before he invaded Kuwait, the option of disarming him would have been infeasible. Michael Walzer once asked whether “the gulf between preemption and prevention has now narrowed so that there is little strategic (and therefore little moral) difference between them.” The last Iraq war is still divisive, but Bobbitt's main point is that if the community of developed states had had the will and means to carry out appropriate preclusive action then the war need never have taken place. Acting well prior to the actual development of WMD, or before a full blown pandemic is underway, will give rise to weighty moral considerations. As with all moral questions, one would have to know a good deal about the facts of the case under consideration.

Therefore, to understand the changing nature of victory in our wider context we must first review the shift, over the last few centuries, in the constitutional order that determines the war aim. The "war aim" must now be extended, as the reach of the state extends to the more general protection of the civilian population. To simplify: the sixteenth century princely state sought to aggrandise the personal possessions of the prince; the seventeenth century kingly state attempted to enlarge the holdings of the ruling dynasty; the eighteenth century territorial state tried to enrich its country as a whole (and its aristocracy in particular) by acquiring trading monopolies and colonies; the nineteenth century state nation struggled to consolidate a dominant national people and sought empire; the twentieth century nation state fought from 1914 to 1990 to establish a single, ideological model for improving the material well-being of its people. To put it in slightly more technical terms, victory achieved in pursuit of these various (and sometimes overlapping) goals could be characterised as perquisitive (princely state), acquisitive (kingly state), requisitive (colonial territorial state), exclusive (imperial state nation), and inclusive (nation state). In all cases what is considered victory is defined by the "war aim". The victory sought by twenty-first century market states will be preclusive, because damage to the civilian population is unacceptable: that will be the nature of the effective contract between the state and the population.

If market states are indeed emerging, such states’ terms for victory in warfare and protection can best be described as preclusive. The market state comes in at least two distinct forms. According to Bobbitt, one is a state of consent and the other is a state of terror and coercion. The goal, whether for the market state of consent or for the market state of terror, is to preclude a certain state of affairs from coming into being. For the state of consent, it is terror itself that must be precluded, chiefly by the protection of innocent civilians. For the state of terror, it is individual self-assurance that must be prevented from spreading within a society, for once enough people refuse to be cowed it will be difficult to return them to a condition of submission. The line can be fine, of course, between protecting someone and preventing them from developing as they wish (as we know in our current situation).

When we consider the paradigmatic case of preclusive humanitarian intervention to prevent genocide or ethnic cleansing, we must consider these two elements, domination and responsibility. To the person, or the peoples, for whom others would assume responsibility, the exercise of that duty sometimes looks like simple authoritarianism and even exploitation (as indeed it often is). This is one legacy of nineteenth century imperialism; and there were many beneficiaries of the altruistic late-twentieth century humanitarian interventions in Somalia or Haiti or Kosovo who seethed with resentment. The authority (often self-appointed) preempted the right of the suffering people to act for themselves. Though this may have been justified on the grounds that without outside intervention these peoples would have had no real alternative in their circumstances, a preclusive victory has inadvertently stimulated resistance, even hostility, to the authority. Given the time when it was published, Bobbitt's focus is on terror. The dilemma will not play out in the same way in reaction to state measures to preclude a pandemic, but it will put considerable strain on civil liberties and trust. It will also potentially disrupt the international cooperative order.

To return to the current situation: we may be just at the peak of the Covid-19 pandemic. Reaching that peak has been bought at the price of unprecedented modern peacetime restrictions on personal liberty. Indeed, in some ways the measures are even more restrictive than in wartime because of the nature of the threat. But as the threat is an evolving but dumb virus, there is no need to fall back on wartime instincts to restrict information or act secretively. To maintain and nurture the consent of the population the state must act with openness, fairness and exemplary competence.

Friday 8 November 2019

Review: Human Compatible by Stuart Russell

Stuart Russell is a very influential figure in the Artificial Intelligence (AI) community. He is co-author of a widely studied text book on the subject, Artificial Intelligence: A Modern Approach, that was key to moving the subject from being dominated by a formal logic approach to one focused on communities of artificial agents operating and cooperating to maximise rewards in uncertain environments.
His most recent book, Human Compatible, is written for a general audience. It helps if you are a reasonably informed member of the general public, but the book rewards the effort with not only an overview of the current successes of AI but also novel, constructive proposals addressing the technical and ethical challenges. It is highly recommended.

Russell takes the challenges to human purpose, authority and basic well-being seriously without being unnecessarily alarmist. Even with current and imminent applications of AI there are threats to jobs, but also real gains in efficiency and cost effectiveness in medicine and production.

The aim of what Russell calls the standard model of AI has been defined almost since its inception as
Machines are intelligent to the extent that their actions can be expected to achieve their objectives.
However, the major challenge comes with an anticipated major step forward: the creation of general-purpose AI (GAI). In analogy with general-purpose computers, where one computer can carry out any computational task, GAI will be able to act intelligently – that is, plan and act to achieve what it needs to achieve – on a wide range of tasks. This would include creating and prioritising new tasks. Currently AI produces highly specialised agents, but many of the algorithms can be put to diverse uses, learning different skills; once learned, each skill is exercised narrowly. For example, DeepMind's AlphaGo can play the board game Go very well, better than any human, but that is it. Related underlying techniques, however, can be applied to a number of applications such as text interpretation, translation, image analysis and so on.

The major breakthrough that would enable AI to escape from the constraint of narrow specialism has yet to be made. As indicated above, GAI would be a method applicable across all types of problems, one that would work effectively for large and difficult tasks. It is the ultimate goal of AI research. The system would need no problem-specific engineering and could be asked to teach sociology or even run a government department. A GAI would learn what it needs from all the available sources, question humans if needed, and formulate and execute plans that work. Although a GAI does not yet exist, Russell argues that a lot of progress will result from research that is not directly about building "threatening" general-purpose AI systems. It will come from research on the narrow type of AI described above plus a breakthrough in how the technology is understood and organised.

It is a mistake to think that what will materialise is some humanoid robot. The GAI will be distributed and decentralised; it will exist across a network of computers. It may enact a specific task such as cleaning the house by using lower level specialised machines, but the GAI will be a planner, coordinator, creator and executive. Its intelligence would not be human: it will have significant hardware advantages in speed and memory, and it will also have access to an unlimited number of sensors and actuators. The GAI would be able to reproduce and improve upon itself. It would do this at speed.

And the role for the human? None that the GAI requires. The GAI would be able to construct the best plan to tackle climate change, for example, but on its terms  and in a way that maximised its utility. And to maximise its utility it will find ways to stop itself being switched off.

This danger, according to Russell, comes from the aim that has guided the whole research programme so far. That is, to give an AI objectives that become the AI's own objectives. A GAI could then invent new objectives such as self replication or inventing better information storage and retrieval to speed up its own actions. A GAI could then be said to have its own preferences and effectively is a form of person. It need not be malevolent but equally it need not be motivated by benevolence to humans  either.

In short, the solution proposed by Russell to the problem of human redundancy is to engineer the GAI so it cannot have preferences of its own. It is engineered to enact the preferences of individual humans. The final third of the book is devoted to exploring the ramifications around a set of principles for benevolent machines:
  1. The machine’s only objective is to maximise the realisation of human preferences. 
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behaviour.
These principles are there to inform design but also regulation and the formulation of research programmes. Russell deals with these and the societal efforts and cooperation that will be required to deal with unintended consequences as well as the actions of some less than benevolent humans who may want to create a GAI that has a goal other than to satisfy human preferences.  The discussion addresses the challenges of multiple artificial agents and multiple and incompatible human preferences. Of course Russell does not definitively solve any of these but indicates routes to resolution.
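The first two principles can be given a toy numerical reading, inspired by Russell's "off-switch" argument. The functions and numbers below are my own construction, not taken from the book: a machine that is uncertain whether its proposed action is what the human wants does at least as well, in expected satisfaction of human preferences, by deferring to a human who can switch it off than by acting unilaterally.

```python
# Toy expected-utility sketch of why preference uncertainty makes a
# machine defer. p is the machine's belief that its proposed action is
# what the human actually wants.
def expected_value_act(p, gain=1.0, loss=-1.0):
    """Act without asking: collect `gain` if the action was wanted,
    `loss` if it was not."""
    return p * gain + (1 - p) * loss

def expected_value_defer(p, gain=1.0):
    """Ask first: the human permits the action only when it is wanted,
    and otherwise switches the machine off (outcome 0, not `loss`)."""
    return p * gain

# Deferring weakly dominates acting for every belief p, and strictly
# dominates whenever the machine is not certain (p < 1).
for p in (0.0, 0.2, 0.5, 0.9, 1.0):
    assert expected_value_defer(p) >= expected_value_act(p)
```

A machine already certain of the human's preferences (p = 1) gains nothing from deferring, which is why the second principle, initial uncertainty, is doing the real work.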

It is important that this book shows there is an approach by which humanity can reap the benefits of AI research without being subjugated to a superior intelligence or being forced to implement an AI ban. It is a positive vision that I hope informs and shapes our approach to this technology. But there is urgency, because we do not know when or where the breakthrough to GAI will take place. If it happens within the current standard model pattern of research and application then disaster threatens.

Thursday 18 April 2019

Choosing a voting system

Voting rules range from the very simple, such as the plurality rule (commonly known as "first past the post"), to a plethora of more or less complex schemes. These schemes attempt to capture notions of fairness such as proportional representation and majority preference. Voting takes place for many purposes; even choosing a voting system. Whatever the more general position may be, if there is one, proportional representation has been Liberal Democrat policy on voting reform at all levels of government for so long it has become an entrenched view. In its defence, the policy is well worked out in the sense that there is a specific actionable proposal, but the major problem has been in getting wider agreement on implementation. The Liberal Democrats went into the 2010 election with a specific position on electoral reform. The 2010 manifesto promised to:
Change politics and abolish safe seats by introducing a fair, more proportional voting system for MPs. Our preferred Single Transferable Vote [STV] system gives people the choice between candidates as well as parties. ...
Is there a liberal counter view? Yes, and it is embedded in a liberal classic: Karl Popper's major contribution to political philosophy, "The Open Society and its Enemies". In this book Popper strenuously, but only briefly, defends the two party system and simple majority voting (plurality rule). In 1988 the Economist invited Popper to return to these themes in The open society and its enemies revisited. In this piece he expands on his defence of the two party state and majority voting with detailed objections to proportional representation. Here I will engage with the presentation of the same arguments in an update provided by David Deutsch in his wide ranging book The Beginning of Infinity. Although I will conclude that the criticism of proportional representation is not as conclusive as Popper and Deutsch claim, and that they underplay the weaknesses of "first past the post" or plurality voting, the critical engagement with their arguments brings out some points that strengthen the case for a proportional system.


In Chapter 13 of "The Beginning of Infinity" Deutsch devotes much space to working through examples to illustrate just how difficult it is to design a voting system that all parties agree to be fair. He then moves on to a more formal account using the setting of social choice theory. This leads him to present and discuss Arrow's no-go theorem. This states that there is no rule mapping the preferences of the individuals in a group on to the preferences of the group as a whole that can satisfy a complete set of five intuitive, desirable and rational properties. These desirable properties (axioms) are:
    1. The rule should define a group’s preferences only in terms of the preferences of that group’s members.
    2. The rule must not simply designate the views of one particular person to be ‘the preferences of the group’
    3. If the members of the group are unanimous about something – in the sense that they all have identical preferences about it – then the rule must deem the group to have those preferences too.
    4. If, under a given definition of ‘the preferences of the group’, the rule deems the group to have a particular preference – say, for A over B – then it must still deem that to be the group’s preference if some members who previously disagreed with the group (i.e. they preferred B) change their minds and now prefer A too.
    5. If the group has some preference, and then some members change their minds about something else, then the rule must continue to assign the group that original preference.
Remarkably, Arrow proved this set of five axioms is logically inconsistent. That is, no voting system can satisfy them all. This is a blow to a rational foundation for social choice theory, but that has not stopped research and the development of a variety of voting rules.
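Arrow's result is abstract, but the tension it formalises shows up in miniature in the classic Condorcet paradox: with as few as three voters, pairwise majority preferences can form a cycle, so no consistent group ranking exists. The following sketch (my own illustration, not drawn from Deutsch's book) checks this directly:

```python
# The classic Condorcet paradox: three voters whose individual rankings
# are each perfectly rational, yet whose pairwise majority preferences
# form the cycle A > B > C > A, so no consistent group ranking exists.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return 2 * wins > len(ballots)

# Each pairwise contest is won 2-1, but the wins chase each other in a cycle.
assert majority_prefers("A", "B", ballots)
assert majority_prefers("B", "C", ballots)
assert majority_prefers("C", "A", ballots)
```

A rule that simply adopts the pairwise majority verdicts is thus asked to rank A above B, B above C, and C above A all at once; Arrow's theorem generalises exactly this kind of inconsistency.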

In the UK the Electoral Reform Society (ERS) accepts the theorem but recognises the need to get on with voting system definition and evaluation; otherwise, how would representational democracy work? An alternative, and more radical, reaction is to reject the social choice setting of the problem. This is implicit in "Open Society and its Enemies" but is made explicit by Deutsch.

The ERS introduces or exploits criteria such as locality and proportionality to rank and make trade-offs between voting systems. As could be anticipated from this approach, the ERS has come out in favour of a compromise that includes an element of proportionality and an element of locality: the Single Transferable Vote system, as adopted by the Liberal Democrats. In this system the more proportionality gained the less locality, and vice versa, with the number of seats per constituency as the free parameter. In practice, the final position chosen will be a pragmatic trade-off with public opinion and wider political support.
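As a concrete sketch of the transfer mechanism, here is the single-winner special case of STV (the Alternative Vote) in a few lines of Python. This is my own simplified illustration, not ERS or Liberal Democrat material; real multi-seat STV adds a quota and surplus transfers, and real implementations need careful tie-breaking rules.

```python
from collections import Counter

def instant_runoff(ballots):
    """Single-winner STV (the Alternative Vote): repeatedly eliminate
    the candidate with the fewest first preferences, transferring those
    ballots to their next surviving preference, until some candidate
    holds a majority of the remaining first preferences."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if 2 * votes > sum(tallies.values()):
            return leader
        loser = min(tallies, key=tallies.get)  # naive tie-breaking
        ballots = [[c for c in b if c != loser] for b in ballots]

# 4 voters: A > C > B; 3 voters: B > C > A; 2 voters: C > B > A.
# A leads on first preferences (4 of 9), but C's elimination transfers
# both C ballots to B, who then wins 5-4.
ballots = 4 * [["A", "C", "B"]] + 3 * [["B", "C", "A"]] + 2 * [["C", "B", "A"]]
assert instant_runoff(ballots) == "B"
```

The example also shows why supporters call the transfer step fairer than plurality: the plurality leader A would have won outright despite a majority preferring someone else.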

Proportional representation

After presenting Arrow's no-go theorem Deutsch attacks rather than discusses proportional representation (PR) voting systems. It is evident that PR shares weaknesses with all voting rules that are formulated in a social choice setting. Most of Deutsch's specific objections are effectively, if not absolutely conclusively, answered by the ERS in its pamphlet PR Myths. The first objection Deutsch makes is, however, not addressed in the pamphlet. This is:
... the ‘More-Preferred-Less-Seats paradox’, in which a majority of voters prefer party X to party Y, but party Y receives more seats than party X.
Deutsch neglects to mention that this is also a weakness of the plurality rule. The system that avoids this particular paradox is Condorcet ranking, of which there are a number of variants, just as there are for STV. It would take us too far from the present discussion to examine Condorcet methods in detail. They have a formal weakness that their originator discovered: a ranking is not guaranteed to exist. However, this risk has recently been mitigated by factoring preference structures into realistic population models, in work carried out by Partha Dasgupta and Eric Maskin within a very formal version of social choice theory. What this objection highlights is that STV and other systems with similar ranking rules will be in conflict with an alternative view of fairness to PR, which is that preference should be given to option X over Y if the majority prefers X to Y.
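The divergence between plurality and pairwise-majority fairness is easy to exhibit. In the sketch below (my own example, not Deutsch's or the ERS's), candidate A tops the most ballots and so wins under plurality, while B beats every rival head-to-head and is therefore the Condorcet winner:

```python
from collections import Counter

def plurality_winner(ballots):
    """Winner by first preferences alone ("first past the post")."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def condorcet_winner(ballots, candidates):
    """The candidate (if any) who beats every other in pairwise
    majority contests; None when no such candidate exists (a cycle)."""
    def beats(x, y):
        return 2 * sum(1 for b in ballots if b.index(x) < b.index(y)) > len(ballots)
    for x in candidates:
        if all(beats(x, y) for y in candidates if y != x):
            return x
    return None

# 4 voters: A > B > C; 3 voters: B > C > A; 2 voters: C > B > A.
ballots = 4 * [["A", "B", "C"]] + 3 * [["B", "C", "A"]] + 2 * [["C", "B", "A"]]
assert plurality_winner(ballots) == "A"        # most first preferences (4 of 9)
assert condorcet_winner(ballots, "ABC") == "B"  # B wins 5-4 vs A and 7-2 vs C
```

So the "more preferred but fewer seats" complaint can be levelled at plurality with equal force: the majority here prefers B to A, yet A takes the seat.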

Beyond Social Choice

At this point Deutsch makes an interesting move against social choice theory itself that gains support from Popper's wider philosophy:
It [social choice theory] conceives of decision-making as a process of selecting from existing options according to a fixed formula (such as an apportionment rule or electoral system). But in fact that is what happens only at the end of decision-making – the phase that does not require creative thought.
The creative aspect is what happens before a set of choices is put before the electorate. Arguments, explanations, economic theories, values, public opinion and so on, all contribute to formulating the set of options. It is argued that the quality of the options on offer is more important than the voting rule that provides the preferences. Deutsch argues further that the weakness of the social choice setting and the paradoxes associated with it mean that a more fundamental and wider criterion for voting is required. The more fundamental criterion is, according to Deutsch:
Popper’s criterion that the system facilitate the removal of bad policies and bad governments without violence.
Whether or not this is the only or dominant criterion, it is clearly a valuable one. Deutsch adopts Popper's argument that a plurality voting rule matches this criterion better than any proportional rule. The ERS pamphlet PR Myths seeks to address the Popper objection directly. It frames it differently, as "PR doesn't let you kick out an unpopular government", which moves it back into the social choice setting, but it is close enough to the criterion for the purpose of the present argument. The counter evidence presented by the ERS is straightforward observation: countries that practise PR not only get changes of government, such changes are not even rare. This is followed up by the further observation that plurality rules have historically failed to remove unpopular governments and have removed popular ones. Popper himself originally had the excuse that these observations were not available at the time of the first edition of the "Open Society", but that was not so in 1988, and it is certainly not the case for Deutsch. It is understandable that in the 1940s the USA and UK were taken as the outstanding examples of stable and effective liberal democracies, but since then the problems with the two party system underpinned by a plurality voting rule have become evident.

Seen from the standpoint of Popper's criterion, both plurality and PR can in practice provide adequate mechanisms for removing policies or governments without resorting to violence, but in both cases other institutions, constitutional checks and balances, and a fertile public sphere of debate and ideas are required. What this discussion is leading towards is the need to manage expectations of what can be achieved merely through adopting one voting rule rather than another. Behind Popper's criterion there is a philosophy closely related to his theory of knowledge. It requires a diversity of theories, conjectures, ideas and, in the political context, policies. This is facilitated by a system that encourages the growth of a diverse number of groupings that can formulate, criticise and propose solutions to the challenges in society. The plurality voting rule tends to lead to two major party blocks. Other sources of ideas and opinion can be safely ignored by these two major groupings. In the past, during the formative years of liberal democracy, the major groupings were the Conservatives and the Liberals, but for the last seventy years they have been the Conservatives and Labour. What also happens is the absorption by the major parties of more extreme positions, as can be seen clearly in the current situation in the UK and to some extent in the USA. In the UK the Conservatives contain a substantial and influential group of English nationalists, and currently the Marxist faction leads the Labour party. So there is some internal diversity, but it is a management issue for the main parties, not a constructive component of the national debate. Smaller parties are traditionally ignored except when there are small majorities or hung parliaments.

    For a well-functioning constitutional democracy it is this suppression of opinion and diversity that is the damning consequence of the plurality voting rule. PR is much stronger at nurturing small parties and opinion groups. What is needed is the further reform of institutions to make sure that this diversity leads to more robust debate and effective policies.

    Plurality voting rules should go, but this leaves Condorcet methods as an alternative to PR, and there are numerous versions of PR. It is easy to see that Condorcet methods will tend to correct for any formation of two polarising groups, but whether these methods provide for and nurture diversity of opinion awaits further analysis. In a social choice setting the fairness encapsulated in the Condorcet mechanism is just as valid as the proportionality notion with which it is in conflict. Consideration should be given on a case-by-case basis to which is the appropriate voting rule, whether in general elections, in internal voting in Parliament and other bodies, or at different levels of government. Not only theory but the evidence from practice shows PR to have an established advantage in the creation of a diverse, multi-party public sphere, which, together with the right institutions, should provide a robust foundation for a liberal order.
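    The corrective tendency of the Condorcet mechanism mentioned above can be shown with a minimal sketch (the candidate names and ballot counts here are invented for illustration): a candidate is a Condorcet winner if it beats every rival in a pairwise majority contest, so a compromise candidate ranked second by two polarised blocks can still defeat both of them.

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate in a
    pairwise majority contest, or None if a Condorcet cycle occurs.
    Each ballot is a full ranking (a list ordered best-first)."""
    def beats(a, b):
        # a beats b if a strict majority of ballots rank a above b
        wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # no candidate beats all others: a Condorcet cycle

# A polarised electorate with a centrist compromise candidate:
# 4 voters prefer Left, 3 prefer Right, only 2 put Centre first,
# yet Centre wins every pairwise contest.
ballots = (
    [["Left", "Centre", "Right"]] * 4 +
    [["Right", "Centre", "Left"]] * 3 +
    [["Centre", "Left", "Right"]] * 2
)
print(condorcet_winner(ballots, ["Left", "Centre", "Right"]))  # Centre
```

    Note that under a plurality rule the same ballots elect Left on first preferences alone, which is exactly the polarising effect the Condorcet mechanism corrects for.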

    Thursday 28 March 2019

    A critical rationalist approach to policy creation

    The starting point for this piece is "What use is Popper to a politician?" by Bryan Magee. Magee is a long-standing advocate for Popper and therefore for the philosophical position known as Critical Rationalism (CR). Without wishing to take away from Popper's contribution, I prefer to use "CR" to make clear that I am not advocating the opinions of a person but arguing for a constructive philosophical approach to policy formulation.

    Magee is that rare thing in the UK: a public intellectual who was also an elected politician, first as a Labour MP and then for the newly formed SDP. The article referred to above was written some ten years after his career as an MP but is strongly coloured by his social democratic position. He seems blind to the non-social-democratic liberal position, which is mine. However, this is not too grave, as the philosophical stance being advocated is available to anyone who is open to rational argument and evidence. Magee himself mentions Margaret Thatcher as the sort of radical conservative who could be open to this approach.

    In much of what follows I will follow Magee quite closely, but my formulation is adapted to the creation of policy proposals and amendments: the sort of work that takes place prior to a party conference. One of my motivations for this piece is dissatisfaction both with the process and with much of its output.

    CR itself is a subject with its own extensive literature, but you should be able to pick up the essential points relevant to policy formulation in what follows. CR was developed via a critique of positivism and induction in the natural sciences. It presents the scientific enterprise as an exercise in problem identification and resolution, with the important caveat that the solutions are provisional and need to be subjected to continuing critical review. Policy formulation is not a natural science, but the fallibility of proposed policies should be just as evident.

    So, in policy too, first identify and formulate the problem with care. This means not jumping to solutions or using the issue to display indignation or personal virtue. The articulation should be as clear and jargon-free as possible. For example, in the case of the health consequences of diet, it is necessary to formulate the problem, if there is one, as objectively as possible. If people are eating too much, that is their concern. If they are eating too much and damaging their health, that too is their concern. If overeating is leading to strains on the health service, leading in turn to higher taxation, then that is potentially a real policy problem and it is possible to start to address it. But even here we do not stop. Having formulated the problem better, we can now quantify it. This is not just looking at the evidence but looking at the quality of the prognostics and the assumptions made. There are always assumptions.

    The next step is to formulate policy proposals. It is the creative step, but one based on the best available knowledge. This is not just data but economic theory, philosophy, science, knowledge of how government works, and whatever else can be brought to the task. Other softer and more value-oriented considerations should not be neglected. For example, ask whether it would be legitimate for free individuals to be constrained through a proposal that addresses a problem by managing a statistical distribution across the whole population. It is not possible to derive a solution from the body of knowledge, hence the creative element. In getting proposals formulated anything goes: debate with yourself, debate with others, write Op-Eds and get feedback, and so on. The outcome should be a proposal or set of proposals that is clearly articulated, defendable and actionable. By actionable is meant that a policy is a solution to a problem and therefore, if acted upon, will, or at least intends to, solve that problem.
    Done? No. The proposal needs to be subjected to further critical analysis. An important mechanism is to try out the proposals against implementation scenarios. This is motivated by the recognition that policies have unintended consequences: the solution may not be robust to small changes in the implementation scenario, or it may have a negative impact if implemented in a certain way. The outcome should be a ranked set of actionable policy proposals with supporting explanations and evidence. But not always.

    It is quite possible that after much hard work and critical analysis no actionable policy proposal emerges. So, have we been too critical? Very unlikely. Remember that fallibility at each step is never eliminated. The problem as formulated may not have an actionable solution, or the actual problem may not have been identified after all. To return to the impact of diet on the health service: is the problem perhaps with how the health service is structured? For example, the health service has no, or only weak, personal responsibility mechanisms. The other realisation is that in the end it may be better to do nothing. The process will not have been a waste, because you will know why you are proposing no action. However, in many, and I would anticipate the majority of, cases the approach outlined here will indeed help in identifying strong and defendable policy proposals.

    My main motivation for writing this is the often dismal quality of policy proposal writing. If in turn you are critical of the CR approach, or of my formulation of it, then you too are participating. Thank you.