Monday, 9 July 2018

Material implication and conditional probability

A simple argument shows that, in general, the ratio formula for conditional probability cannot be the probability of the material conditional. But there is still controversy over both notions.


Despite the undoubted success of probability theory in providing tools for inference, statistical analysis and decision making, there remain concerns about its foundations. A major concern is the status of conditional probability and its relationship with logical implication (the indicative conditional). In propositional logic, material implication provides the formal concept. Although this is often glossed over in standard texts, it is taken seriously by E. W. Adams. However, his solution, which gives primacy to conditional probability, is also open to criticism. These points are of practical importance, as the status of inference and its foundations in logic, probability and set theory is fundamental to the development of Artificial Intelligence.

Adams’ thesis is that the assertability of the indicative conditional $A \to B$ is given by the conditional probability of $B$ given $A$. For example, he writes: “Take a conditional which is highly assertible, say, ‘If unemployment drops sharply, the unions will be pleased’; it will invariably be one whose consequent is highly probable given the antecedent. And, indeed, the probability that the unions will be pleased given unemployment drops sharply is very high”.

The standard foundation of probability theory is the axiomatisation of A. N. Kolmogorov. This takes as one of its primitives a function giving the probability of a set, and these sets are called random events. An event is something that happens or has the potential to happen.

In Kolmogorov's theory a probability space $\left( \Omega, \Sigma,\mu \right)$ consists of a set $\Omega$ (called the sample space), a $\sigma$-algebra $\Sigma$ of subsets of $\Omega$ (i.e., a set of subsets of $\Omega$ containing $\Omega$ and closed under complementation and countable union, but not necessarily consisting of all subsets of $\Omega$) whose elements are called measurable sets, and a probability measure $\mu:\Sigma \rightarrow [0,1]$ satisfying the following properties:
P1. $\mu(X) \geq 0$ for all $X \in \Sigma$
P2. $\mu(\Omega) = 1$
P3. $\mu\left( \bigcup_{i = 1}^{\infty} X_{i} \right) = \sum_{i = 1}^{\infty}\mu(X_{i})$, if the $X_{i}$'s are pairwise disjoint members of $\Sigma$.
P4. $\mu(A | B) = \frac{\mu(A \cap B)}{\mu(B)}$, provided $\mu(B) > 0$
Postulate P4 provides an analysis of conditional probability. It is more often referred to as the definition. However, conditional probability was current as a concept prior to the axiomatisation, in the sense of:
  • the probability of $A$ given $B$,
  • the probability of "if $B$ then $A$", or
  • the probability that $B$ implies $A$.
In the usage prior to the formalisation of probability, $A$ and $B$ are not sets but usually statements or propositions. So a relationship between propositions and sets is needed.
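
As a concrete illustration of how P1–P4 play out on a finite sample space, here is a minimal Python sketch (the sample space, events and weights are chosen purely for illustration): an event is a set of outcomes, its probability is the sum of the outcome weights, and conditional probability is computed by the ratio formula of P4.

```python
from fractions import Fraction

# Illustrative finite sample space: two tosses of a fair coin.
omega = {"HH", "HT", "TH", "TT"}
weight = {w: Fraction(1, 4) for w in omega}   # uniform weights summing to 1 (P2)

def mu(event):
    """Probability of an event (a subset of omega): the sum of its outcome weights."""
    return sum(weight[w] for w in event)

def conditional(a, b):
    """Ratio formula (P4): mu(a | b) = mu(a & b) / mu(b), defined only when mu(b) > 0."""
    assert mu(b) > 0
    return mu(a & b) / mu(b)

first_heads = {"HH", "HT"}   # event: the first toss lands heads
both_heads = {"HH"}          # event: both tosses land heads

print(mu(first_heads))                       # 1/2
print(conditional(both_heads, first_heads))  # 1/2
```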

In propositional logic, material implication is a rule of replacement that allows for a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that $P$ implies $Q$ is logically equivalent to not-$P$ (in symbols $\neg P$) or $Q$.
$$ P \to Q \Leftrightarrow \neg P\lor Q$$
where $\Leftrightarrow$ denotes logical equivalence.
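
The equivalence can be checked mechanically; a small Python sketch (purely illustrative) runs through all four valuations of $P$ and $Q$:

```python
from itertools import product

def material_conditional(p: bool, q: bool) -> bool:
    """P -> Q by its truth table: false only when P is true and Q is false."""
    return not (p and not q)

# Verify the equivalence  P -> Q  <=>  (not P) or Q  on every valuation.
for p, q in product([True, False], repeat=2):
    assert material_conditional(p, q) == ((not p) or q)
    print(f"P={p!s:<6} Q={q!s:<6} P->Q={material_conditional(p, q)}")
```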

There is a straightforward mapping between the sets and connectives in the set-based axiomatisation and the propositions and connectives in propositional logic. The correspondence of connectives is:
  • $\cup$ corresponds to $\lor $
  • $\cap$ corresponds to $\land$
  • $\Omega$ corresponds to $\mathbf{t}$ (the single extension of all tautologies)
  • $\emptyset$ corresponds to $\mathbf{f}$ (the single extension of all falsehoods)
  • The set complement ($\bar{A}$ for any $A \in \Sigma$) corresponds to $\neg$ (negation). 
This would mean
  • $\bar{A} \cup B$ corresponds to $P \to Q$, where proposition $P$ pertains to the event represented by $A$ and $Q$ pertains to the event represented by $B$.
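Carried over to the finite sample space used above, this correspondence lets the event standing in for $P \to Q$ be built directly from complement and union (again a purely illustrative sketch; the particular events are assumptions):

```python
omega = frozenset({"HH", "HT", "TH", "TT"})   # same illustrative sample space as above

def complement(event):
    """Set complement relative to omega: the counterpart of negation."""
    return omega - event

A = frozenset({"HH", "HT"})   # event for P: "the first toss lands heads"
B = frozenset({"HH", "TH"})   # event for Q: "the second toss lands heads"

# The set counterpart of the material conditional P -> Q is (not-A) union B.
if_P_then_Q = complement(A) | B
print(sorted(if_P_then_Q))   # ['HH', 'TH', 'TT']
```
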
So, what is the relationship between $\mu(B | A)$ and $\mu(\bar{A} \cup B)$? A simple analysis shows that they are only equal in a very special case. Consider the partition of $\Omega$ shown in the diagram below, with $a = \mu(A \cap \bar{B})$, $b = \mu(\bar{A} \cap B)$, $c = \mu(A \cap B)$ and $d = \mu(\bar{A} \cap \bar{B})$.

 From this it follows:
$$ \mu(B|A) = \frac{c}{a+c}$$
and
$$ \mu(\bar{A} \cup B) = b+c+d = 1-a$$
Therefore, in this case, equality of the two quantities requires
$$ \frac{c}{a+c} = 1-a,$$
which rearranges to $a\left(1-(a+c)\right) = 0$, i.e. $a\left(1-\mu(A)\right) = 0$.

So the two terms are only equal in degenerate cases: when $A$ is the certain event ($\mu(A)=1$), or when $\mu(B|A)=1$ (i.e. $a=0$), in which case both sides equal 1. In general, the ratio formula for conditional probability cannot be the probability of the material conditional. The general relationship is
$$\mu(\bar{A} \cup B) = 1 - \mu(A \cap \bar{B}) = 1 - \mu(A)\left(1- \mu(B|A)\right)$$
This is equivalent to the result stated by E. W. Adams in "The Logic of Conditionals: An Application of Probability to Deductive Logic" page 3.
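
To make the gap concrete, here is a small Python check with invented values for $a$, $b$, $c$ and $d$; it compares the two quantities and confirms the identity above:

```python
from fractions import Fraction

# Probabilities of the four cells of the partition (illustrative values only).
a = Fraction(3, 10)   # mu(A and not-B)
b = Fraction(2, 10)   # mu(not-A and B)
c = Fraction(1, 10)   # mu(A and B)
d = Fraction(4, 10)   # mu(not-A and not-B)
assert a + b + c + d == 1

mu_A = a + c
mu_B_given_A = c / mu_A          # ratio formula
mu_material = b + c + d          # mu(not-A or B) = 1 - a

print(mu_B_given_A)   # 1/4
print(mu_material)    # 7/10

# The general relationship stated in the post:
assert mu_material == 1 - mu_A * (1 - mu_B_given_A)
```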

The morphism between propositional logic and set theory is used extensively in interpreting theories of probability. It preserves structure, but it does not extend to implication, and it does not entail that meaning or ontological status is preserved. It is from the direction of metaphysical analysis of the ontological status of conditionals, both logical and probabilistic, that progress may be made. In a recently published book, What Tends to Be, Rani Lill Anjum and Stephen Mumford provide a synthesis of this analysis.

In probability theory, alternative axiom systems that take conditional probability as primitive may be the answer; candidates exist from Rényi and Popper. However, the ontological analysis may indicate that the eventual practical answers lie in something more akin to physics than to logic or mathematics. Future posts will engage critically with this work.

Tuesday, 3 July 2018

Beyond the Evidence Base - the strength of causal explanations

The term "evidence based" is often used in statements in health care or science policy that are intended to indicate respect for a scientific approach. Indeed evidence is essential for testing scientific theories and specific statements but science provides something much more powerful and that is an explanatory theory.

Theories arise in science through a critical process that incorporates much debate and draws on past theories, philosophy (if only implicitly) and, of course, evidence. Having passed several tests, and often having gone through several formulations, a theory will be accepted quite generally as the best current explanation of the facts in the domain to which it applies. The main point to be made here is that the power of the theory goes beyond, and cannot be derived from, the evidence. This provides the ability to understand and predict states of affairs that are not covered by the current evidence base.

The power of explanatory theory can be used to eliminate, provisionally, courses of action and to guide positive proposals. As an example of how philosophical analysis can contribute to clarifying these issues, a recent paper by Rani Lill Anjum, "Evidence Based or Person Centered? An Ontological Debate", uses the example of health care to analyse the limitations of the "evidence based" approach. It critiques the positivist underpinning of Evidence Based Medicine and provides a strong alternative. Lip service is still paid to positivism by some prominent scientists, but following the work of Karl Popper and others its limitations are clear. The work of Anjum and her colleague Stephen Mumford is developing philosophical tools that provide a conceptual framework for comprehensive causal explanations founded on a dispositional ontology.

Because scientific theories provide explanations that go beyond the evidence base, they can make strong statements about situations where the evidence is missing and would be too difficult or expensive to generate. However, there is a risk that "evidence base" arguments will be used to undermine the power that theory provides to spell out the consequences of misguided actions. Climate change provides a simple example of an area where well-established theory can make statements of global significance. The well-established consequences of adding CO2 to the atmosphere, together with the fact that mankind has indeed added vast quantities of CO2, provide a very strong and simple case for human-driven climate change. That is, human action is a cause of climate change. In greater detail, the same theories can quantify effects and provide testable predictions, with an increased evidence base as the output rather than the input.


Friday, 29 June 2018

Hayek and the welfare state


In the minds of many, the rigorous classical liberalism personified by Friedrich von Hayek is in direct opposition to the notion of the welfare state. But is this necessarily true? Three fundamental pillars to Hayek's thought are:
  1. The price mechanism in the market economy is a decentralised information signalling system.
  2. The competitive market is a spontaneous order (not centrally planned).
  3. The competitive market is the economic order that supports personal liberty.
Are these compatible with a welfare state? Contrary to what is often claimed, it can be argued that the state's welfare provision may not only be compatible with fair and competitive markets but can enhance their effectiveness.
In “The Free-Market Welfare State: Preserving dynamism in a volatile world”, Samuel Hammond, of the Niskanen Center, attempts to bridge the divide between social and economic liberals. Hammond's paper argues for a fundamental change from the conventional economic view of welfare as a cost to one in which, properly implemented, it is an economic benefit. The arguments in the paper are set out in a US context but draw on key mechanisms from northern European market economies; notably not the UK, which would be analysed as getting the relationship mostly wrong, but the Swedish and Danish ones. In terms of income levels provided by cash minimum-income benefits (Figure 1 in the paper), the UK is one of the more generous nations, whereas "the U.S. income security system one of the stingiest in the developed world". So the situation in the latter is fundamentally different from the former. In the US, nervousness about market disruption has led to very low levels of cash benefits and the system is poorly implemented, whereas in the UK the problem is mainly poor implementation. The UK implementation is deficient in at least two ways:
  1. Welfare is poorly implemented as a safety net and worse as a service (and made worse through recent reforms, as well as underfunding).
  2. Welfare, as implemented, is a negative for the economy as measured against all three of Hayek's pillars, above. 
Using statistical evidence that shows Sweden and Denmark scoring highly on personal and economic liberty, and examining the welfare mechanisms they use, Hammond proposes four design principles by which well-implemented social insurance can enhance market dynamism and economic freedom in a free-market welfare state. These can be condensed (with some simplification) as follows:
  1. Risk and Entrepreneurship.  As the term “safety net” suggests, social insurance can enhance risk-taking and entrepreneurship by ensuring failure is not catastrophic.
  2. Search and Adjustment Costs. Workers who are laid off in periods of market restructuring should be ensured a smooth transition through appropriate wage replacements and active labour-market policies. While your job may not be secure, your employment is. 
  3. Benefit Portability. Markets work best when social benefits follow the individual and are detached from any particular firm or market structure. (In the UK many people are trapped in a firm due to penalties imposed on their pension entitlement. In contrast, the German system decouples this, as recommended.)
  4. Migration Robustness. Welfare benefits should be payments or services drawn from insurance funds to which people have contributed while working in the host country, so migrants who claim such benefits should not be perceived as a great problem. Separate humanitarian safeguards are needed for refugees and others in dire need, as opposed to economic migrants.
Hammond's paper shows a refreshing ambition to bridge the divide between economic liberal purism (bordering on the libertarian) and social liberalism with its focus on social safety nets.

The above is obviously at best work in progress from the point of view of policy formulation and implementation. The proposal is that, by adopting the four principles for welfare design, Hayek's three pillars of liberty can be maintained and their effectiveness even enhanced.

A constitution for a confederate Europe - beyond the nation state


The traditional nation state is not enough to ensure prosperity, security and freedom in a world of multi-national interests. The solution is not to grow larger nation states but to move beyond them to associations or confederations where there is a consensus on core values.

Getting this right poses considerable constitutional challenges. The European Union has developed into an unintentional experiment in this process. There is, however, a lack of clarity on the principles and values guiding the process. The Prometheus Institut has proposed a Manifest for a confederate Europe. In this document it argues for a confederation of constitutionally liberal states.

The core principles of a future EU constitution should be the common market for goods and services and free trade with the rest of the world. A confederate Europe will require a liberal monetary policy that does not divide Europe into different classes within a rigid framework of a common currency. Additionally, states that will not, in the foreseeable future, join the EMU must have a place within a flexible European community.

Within the Manifest for a confederate Europe there are points to debate: it is strongly focused on EMU concerns, and not all the items will find a consensus. But it frames the debate on the future of the EU as it should be framed: as how to achieve the extension of constitutional liberalism beyond the nation state.

Privacy: your data

At the Liberal Democrats Autumn conference I helped organise an ALDES (Association of Liberal Democrat Engineers and Scientists) discussion session on privacy and security titled "Your data, your choice". The motivation for the session was that:
Government, business and our personal lives are increasingly driven by our personal data. Credit card transactions, location data, and health records have the potential to improve products, provide insight for policy making, and detect security threats. But they also challenge our notions of privacy, intimacy and autonomy. How can results from privacy research be translated into policy?

The session was chaired by Richard Gomer, a researcher in Meaningful Consent. The panellists brought expertise from the areas of publishing, computer science, artificial intelligence and security:

  • Yves-Alexandre de Montjoye (Data Scientist and privacy researcher at Imperial College London)
  • Luc Moreau (Professor of Computer Science at King’s College London)
  • Yogesh Patel (Chief Scientist at Callsign)
  • Leonie Mueck (Division Editor at PLOS ONE)

There are steps that individuals, as well as local and national governments, could take to protect privacy through regulation and law on the use of personal data, education and digital rights. The intent is to pursue this topic with Liberal Reform and the Liberal Democrat Lawyers Association.
An important aspect of the discussion is the impact of algorithms with access to very large data sets of personal information. This was the topic addressed at the Royal Society discussion held over two days (30-31 October): "The growing ubiquity of algorithms in society: implications, impacts and innovations".

This session was largely dominated by legal considerations but served to underline the challenges to regulation and the rule of law that arise from technical advances in the availability of personal data, computation power and algorithmic innovation. Technology may also provide a defence, especially for those aspects of privacy that are close to data security. The formalisation of secrecy following the seminal work of Claude Shannon during WWII allows an aspect of privacy to be defined in a mathematically precise sense. This can be implemented to give some guarantee that the use of your personal data does not impact (or bounds the impact on) your privacy.
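
As a toy illustration of Shannon-style secrecy (my own example, not something discussed at the session), the one-time pad achieves perfect secrecy: with a uniformly random key used only once, the ciphertext distribution is the same for every message, so observing the ciphertext tells an eavesdropper nothing about the message. A minimal Python sketch:

```python
import secrets

def xor_with_key(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with the corresponding key byte (encrypts and decrypts)."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # fresh, uniformly random, used only once

ciphertext = xor_with_key(message, key)
recovered = xor_with_key(ciphertext, key)
assert recovered == message   # XOR with the same key inverts the encryption
```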