Fifth Workshop on the Philosophy of Information


Abstracts

Wednesday 27th March

  • Sabina Leonelli: Data integration and the management of information in contemporary biology
  • Dr Federica Russo & Dr Phyllis Illari: Information channels and biomarkers of disease
  • Omri Tal: From Shannon Information to a New Sense of Genetic Information
  • Stephen Rainey: The Method of Levels of Abstraction in Pluralism and Governance of Dialogical Interaction
  • Christoph Schulz: The informational strategy of naturalising agency
  • Federico Gobbo & Marco Benini: What Can We Know of Computational Inforgs?
  • David Gamez: Are Information or Data Patterns Correlated with Consciousness?
  • Giuseppe Primiero: Distrust and Mistrust Relations for Privatively and Modally qualified Information Channels
  • Mariarosaria Taddeo: Individual rights in the information age
  • David J. Pym: Towards a Philosophy of Information Security
  • Ignacio Hernández Antón: A scenario where quantitative and qualitative approaches to information modelling converge
  • Yacin Hamami: When is Deduction Informative in Mathematics?
  • Francesco Berto & Jacopo Tagliabue: Either the World is Digital or Not

Thursday 28th March

  • Luciano Floridi: Maker’s Knowledge and the Synthetic Uninformative
  • Nir Fresco, Aditya Ghose, Patrick McGivern: Types of Information Processed by Cognitive Agents
  • George M. Coghill: On Model-based Systems and Qualitative Reasoning with reference to the Philosophy of Information
  • Orlin Vakarelov: Information Qualities – A Structural Perspective
  • Marco Benini & Federico Gobbo: Measuring Computational Complexity: the Qualitative and Quantitative Intertwining of Algorithm Comparison
  • Andrew Iliadis: What is Information Artifact Ontology?

**Sabina Leonelli: Data integration and the management of information in contemporary biology**

Contemporary ‘data science’ comprises both qualitative and quantitative aspects and methods, which are very hard to bring together as they typically operate at different levels of abstraction and within different epistemic cultures. In this paper, I argue that the questions raised by data integration provide a fruitful ground for future collaborations between philosophers of science and philosophers of information. In particular, I reflect on what it means and what it takes to integrate data in order to acquire new knowledge about biological entities and processes, focusing specifically on the role of data-sharing tools, like online databases, in facilitating this process. The scientific work involved in data integration is important and distinct from the work required by other forms of knowledge integration, such as methodological and explanatory integration, which have been more successful in captivating the attention of philosophers of science. I first discuss some of the implications of focusing on data as a unit of philosophical analysis, and how we might understand the relationship between data and knowledge. I then look at the conditions under which the quality and evidential value of data posted online are assessed and interpreted by database curators as well as the biologists wishing to use those data to foster discovery. In particular, I consider how biological data about specific model organisms, such as the plant Arabidopsis thaliana, are added, curated, retrieved and interpreted through ‘community databases’ such as The Arabidopsis Information Resource. Data quality is checked at several steps of these journeys, and curators have to mediate between the requirement to provide overarching principles for what counts as ‘good data’ and the diversity of epistemic cultures within which data are interpreted. Depending on the purpose and context of specific cases of data interpretation - sometimes even including different interpretations of the same dataset - the assessment of data quality may vary considerably. I conclude with a set of open questions concerning the notion of data, data quality and the relationship between data and information.


**Dr Federica Russo & Dr Phyllis Illari: Information channels and biomarkers of disease**

The place of causality in the world and in our knowledge of it is a well-known philosophical problem. Traditionally, in philosophers such as Aristotle or Hume, a theory of causality plays a key role in the interpretation of our everyday and commonsensical perception of reality. More recently, however, attention has shifted to the sciences as the primary focus for a full-fledged theory of causality. This has led to intensive work on causality as integral to our interpretation of the various sciences and of the world they investigate, both in philosophy and in the sciences themselves. In general, science is often interested in identifying the causes and mechanisms of the phenomena under scrutiny. In particular, the need for a better understanding of the nature of causality, in connection with successful methods of causal inference from observable information, has become particularly acute in many methodologically challenging areas stretching right across the sciences and technology.

This paper uses biomarkers research as a test case for an informational account of causality, illustrating how even in a messy case of complex interacting causal factors, in a context where so much of the science is still new, the idea of tracing a causal link can still be vital to the scientific practice. This idea, it will be argued, can be illuminated informationally.

Traditional thinking about causality focused on simple examples from the fundamental sciences. But increasingly in recent years philosophers of science have sought to understand causality in the full complexity of sciences such as the life sciences. This paper seeks to go even further and examine causality in cutting-edge science, where a view cannot be rationally reconstructed after the fact, as the research is still in progress. The challenge is to find an approach to causality that can embrace the full complexity of research underway and illuminate causal thinking therein.

Current research in molecular epidemiology uses biomarkers to understand the different phases of disease from exposure, to early clinical changes, to development of disease. The hope of projects such as the current FP7 project ‘Exposomics’ is to get a better understanding of the causal impact of a number of pollutants and chemicals on several diseases, including cancer and allergies. In a recent paper Russo and Williamson (2011) addressed the question of what evidential elements enter the conceptualisation and modelling stages of this type of biomarkers research, analysing the modelling strategy of the FP7 pilot project ‘Envirogenomarkers’. This paper follows up and investigates the nature of the causal link. Traditional metaphysical accounts (physical processes, mechanisms, powers and dispositions) are considered and it will be explained why they are all unable to provide a sensible account of the nature of the causal link. One problem is that these causal metaphysics are ‘tailor-made’ for some specific scientific contexts (e.g., physics); another problem is that they still don’t specify what does the linking (e.g., mechanisms). It will be argued that an informational account of causality can provide a causal metaphysics that works across different scientific domains, and that biomarkers research is an excellent test case.


**Omri Tal: From Shannon Information to a New Sense of Genetic Information**

Shannon famously remarked that a single concept of information could not satisfactorily account for the numerous possible applications of the general field of communication theory. I employ some basic principles from Shannon’s work on information theory (Shannon 1948) to develop measures of information for quantifying ‘population structure’ from genetic data. This sense of information is somewhat less abstract than entropy or Kolmogorov Complexity and is utility-oriented. Specifically, I wish to formulate a measure of the internal structure of a collection of genotypes sampled from multiple populations – describing the potential for correct classification of genotypes of unknown origin. Motivated by Shannon’s axiomatic approach in deriving a unique information measure for communication, I first identify a set of intuitively justifiable criteria that any such quantitative information measure should satisfy. I will show that standard information-theoretic measures such as mutual information or relative entropy cannot satisfactorily account for this sense of information, necessitating a decision theoretic approach.
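
As a rough, invented illustration of the contrast drawn above (and not Tal's actual measure), the sketch below compares, for a single biallelic locus under assumed Hardy–Weinberg proportions, the standard mutual information between population label and genotype with a utility-oriented, decision-theoretic quantity: the expected probability that a maximum-a-posteriori classifier assigns a genotype of unknown origin to the correct source population. All frequencies are hypothetical.

```python
import math

def genotype_dist(p):
    """Hardy-Weinberg genotype frequencies for allele frequency p."""
    return {"AA": p * p, "Aa": 2 * p * (1 - p), "aa": (1 - p) * (1 - p)}

def mutual_information(pops, priors):
    """I(population; genotype) in bits for a single biallelic locus."""
    # marginal genotype distribution across populations
    marg = {g: sum(pr * d[g] for pr, d in zip(priors, pops)) for g in pops[0]}
    mi = 0.0
    for pr, d in zip(priors, pops):
        for g, q in d.items():
            if q > 0:
                mi += pr * q * math.log2(q / marg[g])
    return mi

def correct_assignment_prob(pops, priors):
    """Expected probability that a MAP classifier assigns a random
    genotype of unknown origin to the right source population."""
    return sum(max(pr * d[g] for pr, d in zip(priors, pops)) for g in pops[0])

# Two hypothetical populations differing in allele frequency at one locus.
popA, popB = genotype_dist(0.7), genotype_dist(0.3)
pops, priors = [popA, popB], [0.5, 0.5]

print(f"mutual information   : {mutual_information(pops, priors):.3f} bits")
print(f"P(correct assignment) : {correct_assignment_prob(pops, priors):.3f}")
```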


**Stephen Rainey: The Method of Levels of Abstraction in Pluralism and Governance of Dialogical Interaction**

The European Union is faced with challenges in terms of at least long-term demographic issues, the ongoing financial and economic crisis, and how identity is made, retained and can change in a context of pluralism. In the Europe 2020 strategy, the President of the European Commission outlines how these challenges are going to be addressed. This will be done by following three mutually reinforcing priorities: smart growth, sustainable growth and inclusive growth. These three priorities can only be achieved by relying on scientific and technical research, in dialogue with ‘stakeholders’ in broader society.

A particular limit of such dialogical interaction, as yet unexplored, is that of information quality. Owing to the plural nature of the European polity and the increasingly complex and specialised faces of scientific research, governance of dialogical interactions in general recedes to formalism. It does this in the hope that at a suitable level of formality, differences of opinion, perspective, viewpoint, value etc. among a diverse group will become practically irrelevant in the face of overriding solidarities of interests. Owing to this formalism and this hope, the content of many interactions is assumed to ‘work itself out’ according to rational principles: this has roots in enlightenment views of the self, and in Habermasian analyses of communicative action. This reliance upon the inevitable convergence of differing views upon a common end is not tenable. This limit has its roots in an insufficiently textured view of information quality that no amount of formal, structural tinkering can remedy. The notion of ‘levels of abstraction’ and cognate ideas (Floridi, 2008) are used here to ground practicable means of carrying out normative analyses using high-grade, relevant information.

On the one hand, this requires a structural-critical enquiry (assessing forms of interaction), but it will also require an account of information quality in cases of conflicting viewpoints, in multi-stakeholder scenarios in particular (assessing contents of interactions). Dialogical approaches in particular remain wedded to conceptual schemes that deal with participants as unitary individuals who are the bearers of ideas, thoughts, beliefs, tradition and so on. These schemes fail to adequately account for the particular self-conceptions, understandings and perspectives of dialogical participants. As a result, the quality of information in these dialogues is compromised: where participants are not conceived of in terms of their role, knowledge, aims and so on, they are conceived of too generally. There is at present no particular way to differentiate among the different levels, types, levels of abstraction or kinds of experience in dialogical approaches. Present thinking, at least within recent decades of European-funded research, has focussed on how to account for some kind of complete representation of stakeholders’ values, history, beliefs, knowledge and so on at all stages of dialogue (cf. the work of the so-called ‘Louvain School’, and the ETICA and EGAIS projects in the FP7 funding stream). This seems an utterly wrong-headed approach that can only result in muddied waters. Rather than attempting to account for some kind of ‘res loquens’, the idea of ‘levels of abstraction’ (Floridi, ibid.) can be used to deepen the analysis of stakeholder input.

We cannot rely upon an assumed nature of the participants in dialogue, nor upon a generalised form of interaction itself. Among participants, we can imagine the various descriptions under which different aspects of their being become salient. Participants are persons, citizens, employees of companies, activists, parents, art-lovers, and so on. At these different levels of abstraction, different things matter. Within the same individual, these levels can compete, stand in tension, complement or reinforce one another, and they can remain inactive, untapped and maybe unexamined. This can have, at least, epistemological, political, social, learning and personal consequences of different types. Left undifferentiated, a term such as ‘stakeholder’ melds these levels together. This makes it unclear, from the perspective of the participant herself as much as anyone else, what information is being deployed in an interaction, thereby obscuring completely what ought to be deployed. Dealing with the form and content of interactions in terms of information unties the knots that create these problems.

Within Habermas’ thought, there is scope for understanding different levels of abstraction insofar as there is an account of ‘spheres of validity’ (Habermas, 2004). In different spheres, different types of claims are raised, calling for different responses from interlocutors. For instance, claims concerning personal sincerity aren’t well met by responses concerning moral goodness. However, these spheres in Habermas occupy an analytic space not quite part of an active discussion: in medias res it is impractical to ascend to a meta-level discussion. Habermas’ account remains distinctly analytical and concerned with explaining dialogue in ways that, in reality, are most apt for post hoc evaluations. In actual dialogical procedures aimed at coming to pressing conclusions on matters of import to diverse groups, something entirely more actionable is required.

This issue of appropriate understanding of perspective in interactions is not new. Anscombe’s (1958) “problem of relevant description” foreshadows the particular interests here in the sense that it points to the idea of any situation as potentially multi-dimensional, unsettled in advance, in need of elaboration and emergent. Using the method of levels of abstraction, the point of an interaction can be laid out in advance in the manner of a gradient of abstraction, with different labels for different groups of levels of abstraction, drawing upon different values, concepts, ideas and priorities.

| Interaction quality | Important aspects |
|---|---|
| As an event | Adequate seating, good acoustics, accessible location… |
| As a consultation | Views are heard, noted well, a breadth of participants are present… |
| As a debate | Issues are well spelt out, the agenda is open-ended, non-domination of minority views… |

Similarly, the perspectives of participants can be parsed according to the various senses in which they are representing themselves at different points of the dialogue.

| Participant perspectives | Important aspects |
|---|---|
| As a lobbyist | Networking opportunities, opportunities to address audience(s), opportunities to assess participants… |
| As a policy-maker | Viewpoints are simply expressed, law is always considered, volume of opinion is respected… |
| As a citizen | Information is presented ‘un-spun,’ citizen rights are uppermost in considerations, citizen views are heard and evaluated in an unvarnished manner… |

Rather than leaving these potentially subtle, but vital, distinctions implicit, the thrust here is to make explicit what is always already present within any given dialogical interaction. In so doing, the dialogue is made transparent and hitherto obscure outcomes become practicable (such as dis/agreement, negotiation, re-conceptualisation of issues). Where problems, disagreements or synergies arise, they can arise on the basis of clearly tagged information: participants can agree on P with respect to x in the matter of Q. This is a world away from the assumption of a formal convergence of opinion based on a taken-as-read solidarity of interests. The difference is in terms of information and the pay-off is in terms of practicality.

References (in order of reference)

  • Floridi, L., ‘The Method of Levels of Abstraction’, Minds and Machines, 2008
  • Floridi, L., ‘Distributed Morality in an Information Society’, Science and Engineering Ethics, November 2012
  • Habermas, J., The Theory of Communicative Action, Polity, 2004
  • Anscombe, E., ‘Modern Moral Philosophy’, Philosophy, January 1958

**Christoph Schulz: The informational strategy of naturalising agency **

In this workshop contribution I’m going to outline a cybernetic framework in which the naturalisation of information and the naturalisation of agency can be described in an analogous way. In Fred Dretske’s version of information theory (Dretske 1981) the process of informing consists of the transfer of information and its digitalisation. A digital piece of information is produced by the extraction of parts of an analogue piece of information that refers to a contingent matter of fact. The process is irreversible since information is lost during digitalisation. The intentional stance of aboutness can be further upgraded into a propositional stance that is characterised by the use of concepts. Dretske’s approach is thus similar to the solution of the Copenhagen interpretation (e.g. in von Weizsäcker (2006)) to the problem of measurement in quantum mechanics. The result of a measurement is described in classical terms rather than according to the (reversible) Schrödinger equation, and therefore produces an irreversible piece of information. The agency theory of causation can be interpreted in an analogous way: the formula “an event A is a cause of a distinct event B just in case bringing about the occurrence of A would be an effective means by which a free agent could bring about the occurrence of B“ (Price and Menzies (1993)) can easily be misunderstood by interpreting the two instances of “bringing about” in the same way, although they are to be distinguished. The basic experience of an agent is that it has to intervene in its environment to bring about states of affairs that are beneficial and that would not happen without an intervention (non-spontaneous processes). Similar to the role of the process of transferring information through a channel and its digitalisation, which does not require a channel, the “bringing about A” is an immediate action that does not require a mechanism, whereas the transfer of the causal influence from A to B is subject to that physical constraint. In this interpretation the agency theory of causation solves the problem of conceptual circularity or regress that other interventionist theories of causation struggle with (see for example Woodward (2008) for an appreciation of the problem for his objectivist intervention-based account). I will also argue that the similarity between information and causation is no coincidence, since both are basically just different aspects of the relation between an agent and its environment, as a cybernetic outlook shows. Learning about the contingent state of its environment is a prerequisite for an agent to interact successfully with that environment. This holds true for increasing the number of effective strategies in a game-theoretic context when playing for resources against other players, in Norbert Wiener’s so-called “Manichean environment” (Wiener (1988)). E.g., we have more strategies to choose from while bidding in a poker game if we happen to know parts of our opponent’s hand, compared to the bidding strategy based on the default assumption (or see, for a similar description of the game “matching pennies”, Werner (1991)). In the so-called “Augustinian environment”, an agent plays against a natural environment that is subject to the rise of entropy.
E.g., the demon in Maxwell’s thought experiment first has to measure the microstate of a distribution of gas molecules in a container to subsequently bring about a physical entropy reduction, which, in this case, is a temperature gradient that can later be exploited for doing work. In both cases, reducing informational entropy (i.e. uncertainty about a contingent state of affairs of the environment, which is transferred through a channel and then digitalised) enables the increase of utility, be it in the context of a rule-based or natural interaction.
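
The following toy calculation (our framing, not the author’s) illustrates the closing point for the rule-based case, using the ‘matching pennies’ game mentioned via Werner (1991): if the matching player learns the opponent’s uniformly random choice with probability q before moving, the residual uncertainty about that choice falls and the expected payoff rises from the game value of 0 towards 1.

```python
def matching_pennies_value(q):
    """Expected payoff to the matching player when, with probability q,
    the opponent's (uniformly random) choice is revealed before moving.
    Without the leak the game's value is 0; with full knowledge it is +1."""
    informed = 1.0      # matcher copies the revealed coin and wins
    uninformed = 0.0    # best response to a fair coin averages out to 0
    return q * informed + (1 - q) * uninformed

def residual_entropy(q):
    """Expected uncertainty (bits) left about the opponent's coin."""
    return (1 - q) * 1.0   # a fair coin carries exactly 1 bit

for q in (0.0, 0.25, 0.5, 1.0):
    print(f"P(peek)={q:.2f}  residual entropy={residual_entropy(q):.2f} bits"
          f"  expected payoff={matching_pennies_value(q):.2f}")
```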

References

  • Dretske, F. (1981). Knowledge and the Flow of Information , MIT Press.
  • Price, H. and P. Menzies (1993). “Causation as a Secondary Quality.” British Journal for the Philosophy of Science 44 : 187-203.
  • von Weizsäcker, C. (2006). The Structure of Physics (Fundamental Theories of Physics) , Springer.
  • Werner, E. (1991). “A Unified View of Information, Intention and Ability.” Proceedings of the Second European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds.
  • Wiener, N. (1988). The Human Use of Human Beings: Cybernetics and Society, Da Capo Press.
  • Woodward, J. (2008). Invariance, Modularity, and All That: Cartwright on Causation. Nancy Cartwright’s Philosophy of Science. L. Bovens, C. Hoefer and S. Hartmann, Routledge Studies in the Philosophy of Science.

**Federico Gobbo & Marco Benini: What Can We Know of Computational Inforgs?**

Within the ontology of Informational Structural Realism, an informational organism (inforg) carries a minimal ontological commitment in favour of structural properties of reality, which answers the question ‘what can we know?’ (Floridi, 2011, ch. 15). In this paper, we deal with the answer to this question in the case of computational inforgs (c-inforgs), whose engineered artifacts are based on some kind of computing device, typically a Von Neumann machine (VNM) (Gobbo and Benini, 2013, for details). The system-level-model-structure (SLMS) scheme relies on the method of levels of abstraction (LoAs), where a LoA is individuated according to the range of observables analysed within a system, then producing a model that identifies a structure (Floridi, 2011, 15.2.3). When applying the SLMS scheme to the case study of c-inforgs, we should distinguish between two different classes. The first class is populated by open c-inforgs. Members of this class show an important property in their VNM-based counterpart: the openness of the source code—a fundamental Level of Organisation (LoO) in c-inforgs, see Gobbo and Benini (forthcoming). Openness can be loosely defined as the ability of anyone to inspect the VNM-based counterpart of the c-inforg—extending the usual definition of availability of the source code to anybody without barriers. Thus, we may have direct knowledge of the artifact under investigation, although directness does not imply that knowledge is transparent, because it is mediated by the appropriate LoA(s). Following the well-known metaphor of boxes, open c-inforgs are grey to some degree. The second class is the dual of the first one, being populated by closed c-inforgs. Here, access to the VNM counterpart, especially the source code, is not granted. However, we can still access inputs and outputs: even if information is hidden inside a black box, the logical structure of the VNM-based artifact is still open to inspection. As Franklin (1999, 721) puts it effectively: ‘despite the best efforts of Microsoft to make all computer programs so large as to be incomprehensible, small surveyable programs are still common items’. In the case of closed c-inforgs we can only infer their structural properties indirectly, from the observation of how behaviours change according to inputs, which is an eminently qualitative activity. According to the method of LoAs, in the case of closed c-inforgs we can only observe the feedback given by the system: the model generated in the SLMS scheme is at the 2nd order, in terms of ontological commitment (Floridi, 2011, 15.2.4).

Evidently, the analyses of open c-inforgs can be more fine-grained than those made on closed c-inforgs: openness lets analysts use quantitative methods to measure the VNM counterpart, in particular the source code, which can be inspected using ‘direct metrics’ coming from Software Engineering. Direct metrics do not depend upon a measure of any other attribute, and therefore they are presumed valid per se, according to IEEE Standard 1061 (IEEE, 1998). It is important to notice that direct metrics are necessary but not sufficient, and so other metrics have been proposed and validated in terms of the first ones: it is not by chance that the IEEE Standard 1061 presents a methodology for software quality metrics. We can now instantiate the SLMS scheme for software quality metrics: the ontological commitment at the 1st order is represented by LoAs representing direct metrics, which are inherently quantitative; on the other hand, indirect metrics are more qualitatively oriented LoAs, as they strictly depend on the first ones by definition, and so they are at the 2nd order in terms of ontological commitment.
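
As a small illustrative sketch (not the IEEE 1061 methodology itself), the snippet below contrasts a direct metric, lines of code, measured on the artifact alone, with an indirect metric, defect density per KLOC, defined in terms of it; the sample module and defect count are invented.

```python
def lines_of_code(source: str) -> int:
    """Direct metric: measured on the artifact itself (1st-order commitment)."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def defect_density(defects: int, source: str) -> float:
    """Indirect metric: defined in terms of a direct one (2nd-order commitment),
    here defects per thousand lines of code."""
    loc = lines_of_code(source)
    return 1000.0 * defects / loc if loc else float("inf")

sample = """\
# toy module
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""

print("LOC (direct)           :", lines_of_code(sample))
print("defects/KLOC (indirect) :", round(defect_density(2, sample), 1))
```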

Software Engineering can give us a concrete field of application. Let us take a complex information system as a closed c-inforg: the VNM-based artifact is made up of the complex of hardware, software (source and running code), and network infrastructure, while its biological counterpart is made up of software programmers, system administrators, end-users, etc. It is well known in the literature that even a focused, goal-oriented engineering practice like the risk assessment of a given complex information system is ‘a subjective process’ (Redmill, 2002), as it depends on the subjective views of the expert. Nevertheless, a formalisation of the risk assessment procedure was recently proposed, introducing the notion of compatible metrics in terms of morphisms between partial orders (Benini and Sicari, 2009).

A future direction of work is to generalise this formalisation, originally found in Software Engineering, to compare qualitative properties of c-inforgs. For example, Facebook is a very complex c-inforg, which is closed to its end-users. We consider the relation between an end-user (the biological agent) and his/her account in the social network (the VNM-based counterpart considered here). We notice that some users prefer to access their Facebook account through a smartphone, while others prefer their favourite web browser on their own laptop. Now, the possible actions a user can perform on Facebook through a smartphone are not identical to the actions available through the laptop web browser—for instance, options to share one’s posts, or the presence of a camera. As a result, we can describe two different LoAs (in terms of graphical end-user interface), two corresponding LoOs (the mobile application of Facebook is one application software, while the web site for standard web browsers is another) and finally two corresponding Levels of Explanation (LoEs), expressed in terms of goals or users’ habits: e.g., users keen on smartphones may prefer to give and receive GPS-based information, while users keen on desktops perhaps prefer to write longer texts in posts. Order theory may be used to formalise subjective judgements like ‘x is better than y’ or ‘A prefers z to w’. Our aim is eventually to describe a user’s preference profile in terms of partial orders, so that we can algebraically measure the distance between the propensities of different biological users within the same c-inforg (in our example, end-users in Facebook).
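
A minimal sketch of this last idea, under a deliberately naive encoding: each preference profile is a set of ‘x is preferred to y’ pairs (a strict partial order), and the distance between two profiles is the number of pairs of actions on which they disagree. This is an illustrative stand-in for, not a rendering of, the compatible-metrics construction of Benini and Sicari; all names are invented.

```python
from itertools import combinations

# Each profile: a set of (x, y) pairs meaning "x is preferred to y".
mobile_user  = {("share_gps", "write_post"), ("share_photo", "write_post"),
                ("share_gps", "share_photo")}
desktop_user = {("write_post", "share_gps"), ("share_photo", "share_gps"),
                ("write_post", "share_photo")}

def disagreement(p1, p2, actions):
    """Count unordered pairs of actions on which the two partial orders
    disagree: ordered oppositely, or ordered by one and not the other."""
    d = 0
    for x, y in combinations(sorted(actions), 2):
        r1 = "<" if (x, y) in p1 else ">" if (y, x) in p1 else "~"
        r2 = "<" if (x, y) in p2 else ">" if (y, x) in p2 else "~"
        d += r1 != r2
    return d

acts = {"share_gps", "share_photo", "write_post"}
print("pairs of actions ranked differently:",
      disagreement(mobile_user, desktop_user, acts), "of", 3)
```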

In conclusion, the test case of c-inforgs shows us that there is always the mediation of at least one LoA, which presents at least a minimal degree of quality even if expressed in quantitative terms (liminal realism). Therefore, the working philosopher of information cannot avoid the continuous interplay between qualitative and quantitative points of view on information, especially after the computational turn, when informational organisms become more and more complex.

References

  • Benini, M. and Sicari, S. (2007), ‘Risk assessment via partial orders’, Advances in Computer Science and Engineering 3(1), 19–46.
  • Floridi, L. (2011), The Philosophy of Information, Oxford University Press, Oxford.
  • Franklin, J. (1999), ‘Structure and domain-independence in the formal sciences’, Stud. Hist. Phil. Sci. 30(4), 721–723.
  • Gobbo, F. and Benini, M. (2013), ‘The Minimal Levels of Abstraction in the History of Modern Computing’, Philosophy & Technology, pp. 1–17. URL: http://dx.doi.org/10.1007/s13347-012-0097-0
  • Gobbo, F. and Benini, M. (forthcoming), ‘Why zombies can’t write significant source code: The knowledge game and the art of computer programming’, Journal of Experimental & Theoretical Artificial Intelligence .
  • IEEE (1998), ‘Standard for a software quality metrics methodology’, IEEE Standards Dept.Std. 1061-1998.
  • Redmill, F. (2002), ‘Risk analysis: A subjective process’, Engineering Management Journal 12(2), 91–96.

**David Gamez: Are Information or Data Patterns Correlated with Consciousness?**

1. Introduction

Experimental work on the correlates of consciousness is attempting to identify the relationship between phenomenal and physical states without making a premature commitment to any particular metaphysical theory of consciousness. A number of potential correlates have been identified, including neural synchronization, recurrent connections, quantum features and electromagnetic waves, and Tononi [1] has proposed that the pattern of integration and differentiation in the brain’s information states, known as ‘information integration’, could be linked with consciousness.¹ A number of algorithms have been developed to measure information integration [3-5] and the theory has been tested in some preliminary experimental work [6-8].

  1. See Tononi and Koch [2] for a review of some of these potential correlates.
  2. Some suggestions about this are made in [10].

While information integration is currently the only explicitly informational theory of consciousness, other algorithms could be used to identify information patterns in the brain that could be correlated with consciousness. This paper will look at the philosophical and experimental issues that need to be addressed by any information-based approach to the correlates of consciousness.
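
One elementary ingredient of such measures, offered here only as an illustration and not as any of the Φ algorithms cited above, is the ‘integration’ (multi-information) quantity used by Tononi and Sporns [5]: the sum of the entropies of the parts minus their joint entropy, which is zero exactly when the parts are statistically independent. The binary ‘recordings’ below are invented.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution over tuples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(records):
    """Multi-information: sum of single-unit entropies minus joint entropy.
    Zero iff the recorded units are statistically independent."""
    joint = entropy([tuple(r) for r in records])
    parts = sum(entropy([(r[i],) for r in records])
                for i in range(len(records[0])))
    return parts - joint

# Invented binary 'firing' data for three units at one level of abstraction.
coupled = [(0, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]          # units co-vary
independent = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

print("integration, coupled units     :", round(integration(coupled), 3))
print("integration, independent units :", round(integration(independent), 3))
```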

2. Dedomena, Data and Information

Information patterns that are correlated with consciousness can only be identified when we have a clear definition of information. Floridi’s [9] distinction between dedomena, data and information is a promising way of addressing this problem:

  1. Dedomena. Changes or patterns in the world that exist prior to human measurements.
  2. Data. A lack of uniformity in the world that is measured by defining a level of abstraction - for example, neuron firing events are one level of abstraction in the brain.
  3. Information. Well-formed meaningful data. The question of what makes data meaningful is difficult; one of Floridi’s (2009) suggestions is that meaningful data is a combination of data and queries. For example, the proposition “The earth only has one moon” can be interpreted as a piece of meaningful data in which the semantic content is the question “Does the Earth only have one moon?” and the answer “yes” is a single bit of data.

Dedomena are the most objective aspect of a physical system and the most plausible candidate for a correlates-based approach to consciousness. However, since dedomena cannot be directly measured, we have to look instead for data patterns in the brain that might be correlated with consciousness. These include the patterns identified by Tononi’s information integration approach, which is more accurately described as a theory of data integration [10]. The question about whether information patterns could be correlates of consciousness will be set aside in this paper because of the difficulty of measuring meaningful data in the brain.

3. Levels of Analysis

While there appears to be an objective fact of the matter about whether a person is in a particular conscious state, the data that is measured in a system is the outcome of an experimenter’s subjective choice of a level of abstraction. There are a number of ways of reconciling the subjectivity of data sets with the fact that consciousness must be correlated with an objective property of the physical brain:

  1. Consciousness could be correlated with data at one particular level of the brain, for example, with patterns of neuron firing events, but not with patterns at any other level of abstraction. In this case the correlate of consciousness would be a pattern of neuron firing events and there would not be data correlates of consciousness.
  2. Data patterns at different levels of abstraction could coincide – for example the pattern of differentiation and integration at the level of atoms might map onto the pattern of differentiation and integration at the level of neurons. In this case, it would not matter which level of abstraction was selected because they would all lead to the same result.
  3. A data algorithm could be defined that applies across all possible levels of the system (Tononi [11] suggests this approach). This has the problem that the number of levels is potentially infinite, and the lower levels of the system cannot be accurately measured because of Heisenberg’s uncertainty principle.

4. Data Pattern or Physical Correlate?

There is potential ambiguity between these two claims:

  1. A pattern of data is correlated with consciousness.
  2. A pattern of an aspect of the physical world is correlated with consciousness.

One way of demonstrating that data patterns are the actual correlates is to use an algorithm that finds a maximum across multiple levels of abstraction. If one level correlated with consciousness at one time and another level correlated with consciousness at another time, then it could be claimed that the data patterns are correlates of consciousness, and not the physical patterns at one particular level. This would only work if the levels did not coincide.

5. Causal Powers of Data Patterns

When we describe our consciousness it seems reasonable to claim that the report is about consciousness because consciousness caused the report. If consciousness is a particular pattern of data, then this data pattern must be capable of causing reports about consciousness. This constrains the types of data pattern that are candidate correlates of consciousness.

6. Unconscious Data Patterns

If the current information/data algorithms [3, 5] were applied to the unconscious brain, they would be likely to return a positive result, which would contradict the observation that we apparently have no consciousness when we are unconscious. Algorithms measuring data patterns that are candidate correlates of consciousness should return zero when they are applied to the unconscious brain.

7. Experimental Issues

A major problem with a data-based approach to consciousness is that we have very limited access to the brain. Furthermore, even if we could measure the brain’s 80 billion neurons with high spatial and temporal resolution, we would be unable to analyze this data with our current computer power.

References

  • [1] G. Tononi, “Consciousness as integrated information: a provisional manifesto,” Biol Bull, vol. 215, pp. 216-42, Dec 2008.
  • [2] G. Tononi and C. Koch, “The neural correlates of consciousness: an update,” Ann N Y Acad Sci, vol. 1124, pp. 239-61, Mar 2008.
  • [3] D. Balduzzi and G. Tononi, “Integrated information in discrete dynamical systems: motivation and theoretical framework,” PLoS Comput Biol, vol. 4, p. e1000091, Jun 2008.
  • [4] D. Gamez and I. Aleksander, “Accuracy and performance of the state-based Φ and liveliness measures of information integration,” Consciousness and Cognition, vol. 20, pp. 1403-24, Dec 2011.
  • [5] G. Tononi and O. Sporns, “Measuring information integration,” BMC Neurosci, vol. 4, p. 31, Dec 2 2003.
  • [6] M. Massimini , et al. , “A perturbational approach for evaluating the brain’s capacity for consciousness,” Prog Brain Res, vol. 177, pp. 201-14, 2009.
  • [7] F. Ferrarelli , et al. , “Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness,” Proc Natl Acad Sci U S A, vol. 107, pp. 2681-6, 2010.
  • [8] U. Lee , et al. , “Propofol induction reduces the capacity for neural information integration: implications for the mechanism of consciousness and general anesthesia,” Consciousness and Cognition, vol. 18, pp. 56-64, Mar 2009.
  • [9] L. Floridi, “Philosophical Conceptions of Information,” Lecture Notes in Computer Science, vol. 5363, pp. 13-53, 2009.
  • [10] D. Gamez, “Information and Consciousness,” Etica & Politica / Ethics & Politics, vol. XIII, pp. 215-234, 2011.
  • [11] G. Tononi, “Information integration: its relevance to brain function and consciousness,” Arch Ital Biol, vol. 148, pp. 299-322, Sep 2010.

**Giuseppe Primiero: Distrust and Mistrust Relations for Privatively and Modally qualified Information Channels**

The literature on the characterization of trust relations between interactive agents is growing, in terms of both conceptual ([19], [8], [1], [5]; for an overview see also [20]) and formal analyses ([10], [4], [13], [2], [12]). Contexts of application range from distributed systems in digital domains to decision-making processes requiring physical relations and belief assessment. In [22] and [21] trust is defined as a second-order relation, characterizing first-order ones among agents. In particular, for the epistemic context generated by an information channel, trust qualifies the communication between the receiver and the source of a certain information content. In [18] this understanding of trusted communication is formalized by a modal type theory which accounts for the two epistemic states involved: verification-terms on propositions for directly known contents; partial-terms for communicated but not verified (hence, to be trusted) contents.

In the present paper, the model of trust as a second-order relation grounds the conceptual analysis of two important counterpart notions: distrust and mistrust. Also in this case, attention is increasing towards both conceptual and formal analyses, see e.g. [7], [3], [9] and [14]. A number of these approaches rely on quantitative measures of successful previous communications to establish propagation functions of both trust and distrust, while mistrust is often a neglected notion. We start instead from the context of a semantic theory of information, which offers a qualitative approach where misinformation can be understood as unintentionally false information and disinformation as intentionally false information, see [6, p. 260]. We exploit such definitions to offer a qualitative understanding of both mistrust and distrust and lay the basis for a formal model extending the one for trust offered in [18].

We understand a (complete) communication act about A as the expression of the information procedure P functional to the achievement of a goal G establishing that A is valid for the current information channel. This corresponds to upgrading information content A to knowledge, provided no node of the current channel can falsify it, see [15]. Our analysis then starts by establishing a property shared by distrust and mistrust: a communication channel characterized by mistrust or distrust determines uncertainty in the receiver R’s epistemic state, in view of source S inducing (either intentionally or unintentionally) an error state in R with respect to information content A. On the basis of the taxonomy presented in [16], we can formulate a detailed analysis of the possible error conditions under which the communication of ⟨P, G⟩ can produce an uncertain epistemic state. We shall then define distrust as a second-order modified relation over an information channel between S and R such that whenever S communicates ⟨P, G⟩, R applies a privative operator to obtain one of the possibilities to categorically falsify the pair ⟨P, G⟩. The key point here is that distrust is R’s epistemic attitude about communications from an S thought to generate intentionally false information, or disinformation. The intentional element is reflected by the use of a privative operator to falsify the received information. The semantics of such an operator has been studied in [17] and can easily be applied to the trust model at hand. On the other hand, we shall define mistrust as a second-order modified relation over an information channel between S and R such that whenever S communicates ⟨P, G⟩, R applies a modal operator to obtain one of the possibilities to modally falsify the pair ⟨P, G⟩. The key point here is that mistrust is R’s epistemic attitude about communications from an S thought to generate unintentionally false information, or misinformation. The unintentional element is reflected by the use of a modal operator to weaken the received information to a contingent falsity. The semantics of such an operator has been studied in [11] and can easily be adapted to the trust model at hand. We will also explore some additional properties of mistrust and distrust as second-order modified relations.
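
The toy encoding below (plain Python with invented names, not the modal type theory of [18]) simply renders the taxonomy just described: trust holds the communicated pair ⟨P, G⟩ as to-be-trusted content awaiting verification, distrust applies a privative, categorical falsification, and mistrust applies a modal weakening to a contingent falsity.

```python
from dataclasses import dataclass
from enum import Enum

class Attitude(Enum):
    TRUST = "trust"        # content accepted as to-be-trusted (partial term)
    DISTRUST = "distrust"  # source assumed to disinform (intentional falsity)
    MISTRUST = "mistrust"  # source assumed to misinform (unintentional falsity)

@dataclass
class Communication:
    procedure: str   # P: the information procedure
    goal: str        # G: the goal it is functional to

def receive(comm: Communication, attitude: Attitude) -> str:
    """Toy rendering of the receiver's epistemic state for <P, G>."""
    pg = f"<{comm.procedure}, {comm.goal}>"
    if attitude is Attitude.TRUST:
        return f"{pg} held as to-be-trusted (awaiting verification)"
    if attitude is Attitude.DISTRUST:
        # privative operator: categorical falsification of the pair
        return f"not-{pg}: categorically falsified"
    # modal operator: weakened to a contingent falsity
    return f"possibly-not-{pg}: contingently falsified"

msg = Communication("sensor_report", "establish_A")
for att in Attitude:
    print(att.value.ljust(8), "->", receive(msg, att))
```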

References

  • [1] A. Baier. Sustaining trust. In A. Baier, editor, Moral Prejudices. Harvard University Press, 1994.
  • [2] Antonis Bikakis and Grigoris Antoniou. Distributed defeasible contextual reasoning in ambient computing. In Emile Aarts, James L. Crowley, Boris de Ruyter, Heinz Gerhäuser, Alexander Pflaum, Janina Schmidt, and Reiner Wichert, editors, Ambient Intelligence, volume 5355 of Lecture Notes in Computer Science, pages 308–325. Springer Berlin Heidelberg, 2008.
  • [3] Christian Borgs, Jennifer Chayes, Adam Tauman Kalai, Azarakhsh Malekian, and Moshe Tennenholtz. A novel approach to propagating distrust. In Amin Saberi, editor, Internet and Network Economics, volume 6484 of Lecture Notes in Computer Science, pages 87–105. Springer Berlin Heidelberg, 2010.
  • [4] Jan Broersen, Mehdi Dastani, Zhisheng Huang, and Leendert W. N. van der Torre. Trust and commitment in dynamic logic. In Proceedings of the First EurAsian Conference on Information and Communication Technology, EurAsia-ICT ‘02, pages 677–684, London, UK, 2002. Springer-Verlag.
  • [5] C. Castelfranchi and R. Falcone. Trust Theory. A Socio-Cognitive and Computational Model. Wiley, 2010.
  • [6] L. Floridi. The Philosophy of Information. Oxford University Press, Oxford, 2011.
  • [7] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In WWW2004 – Proceedings of the 13th international conference on World Wide Web, pages 403–412, 2004.
  • [8] J. Hardwig. The role of trust in knowledge. The Journal of Philosophy, 88:693–708, 1991.
  • [9] W. T. Harwood, J. A. Clark, and J. L. Jacob. Networks of trust and distrust: Towards logical reputation systems. In Dov M. Gabbay and Leendert van der Torre, editors, Logics in Security, 2010.
  • [10] Wesley H. Holliday. Dynamic testimonial logic. In Xiangdong He, John Horty, and Eric Pacuit, editors, Logic, Rationality, and Interaction, volume 5834 of Lecture Notes in Computer Science, pages 161–179. Springer Berlin Heidelberg, 2009.
  • [11] B. Jespersen and G. Primiero. Alleged assassins: Realist and constructivist semantics for modal modification. In G. Bezhanishvili et al., editor, TbiLLC 2011, volume 7758 of Lecture Notes in Computer Science. Springer Verlag, 2013.
  • [12] S. Kramer, R. Goré, and E. Okamoto. Computer-aided decision-making with trust relations and trust domains (cryptographic applications). Journal of Logic and Computation, 2012.
  • [13] Emiliano Lorini, Laurent Perrussel, and Jean-Marc Thevenin. A modal framework for relating belief and signed information. In Proceedings of the 12th international conference on Computational logic in multi-agent systems, CLIMA’11, pages 58–73, Berlin, Heidelberg, 2011. Springer-Verlag.
  • [14] D. Harrison McKnight and Norman L. Chervany. Trust and distrust definitions: One bite at a time. In Proceedings of the workshop on Deception, Fraud, and Trust in Agent Societies held during the Autonomous Agents Conference: Trust in Cyber-societies, Integrating the Human and Artificial Perspectives, pages 27–54, London, UK, 2001. Springer-Verlag.
  • [15] G. Primiero. Offline and online data: on upgrading functional information to knowledge. Philosophical Studies, 2012.
  • [16] G. Primiero. A taxonomy of errors for information systems. Minds & Machines, forthcoming, 2013.
  • [17] G. Primiero and B. Jespersen. Two Kinds of Procedural Semantics for Privative Modification. Volume 6284 of Lecture Notes in Artificial Intelligence, pages 252–71, Berlin, Germany, 2009. Springer Verlag.
  • [18] G. Primiero and M. Taddeo. A modal type theory for formalizing trusted communications. Journal of Applied Logic, 10:92–114, 2012.
  • [19] Carles Sierra and John Debenham. An information-based model for trust. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, AAMAS ‘05, pages 497–504, New York, NY, USA, 2005. ACM.
  • [20] M. Taddeo. Defining Trust and E-trust: Old Theories and New Problems. International Journal of Technology and Human Interaction (IJTHI), 5(2):23–35, 2009.
  • [21] M. Taddeo. An information-based solution for the puzzle of testimony and trust. Social Epistemology, 24(4):285–299, 2010.
  • [22] M. Taddeo. Modelling Trust in Artificial Agents, a First Step toward the Analysis of e-Trust. Minds & Machines, 20(2):243–257, 2010.

**Mariarosaria Taddeo: Individual rights in the information age **

The information revolution has radically changed the way in which we perceive and interact with other agents, both human and artificial, and with the environment. Two aspects of such a revolution are noteworthy for the analysis developed in this paper: the rise of the digital domain and the progressive blurring of the digital and the physical domains into each other. Such blurring is affecting several aspects of our contemporary life, from the way we establish and maintain social contacts to the way in which we ensure the safety of our societies. Floridi (Floridi 2010) describes this phenomenon well when he calls it the onlife, referring to a life without any significant divide between online and offline activities.

These changes have reshaped the way we work, entertain, travel and interact. They have also redefined the way we perceive ourselves as individuals and construct our identities (Floridi 2011), (Johnson 1997), to the point that online activities play a substantial role in the achievement of a healthy and rewarding life, as they provide “affordances and spaces for self-expression and self-poiesis” (Floridi 2011). When considered from an ethical perspective, these changes raise two questions concerning (i) whether the transformations engendered by the information revolution create the need for individuals to claim new rights for themselves as agents living the onlife; and (ii) what such rights should be. This paper addresses these two questions.

The analysis is developed in two parts. The first one addresses question (i) and argues that, as online experiences are deemed to be as relevant for the well-being of individuals as offline ones, individuals have the right for their online activities not only to be protected but also to be supported, in order to achieve a good life online and, more generally, to foster their well-being. In this respect, it is argued that the right to privacy, while remaining fundamental for the protection of personal information, should not be considered sufficient to ensure the well-being of individuals. The scope of rights that individuals should claim online should be extended.

The scope of these rights is analysed in the second part of the paper, which focuses on question (ii) and describes two categories of rights that should be respected to guarantee the possibility of achieving a good life for individuals living in the information age. The first category concerns the right to be in control of the data and information concerning ourselves. Such a right rests on the understanding of data and information as greasy, i.e. accessible and portable, such that they can be mined by third parties and may reveal sensitive personal information (Moor 1997). It is argued that the informational nature of the online sphere facilitates surveillance and controlling measures to the extent that it becomes reasonable to wonder whether personal data and information could be accessed and manipulated by third parties possessing adequate technologies. In this context, the right to control refers to the right to access, manage and dispose of first-order data and information concerning ourselves. This is to say that, for example, one has the right to know and to decide the way in which data concerning ‘his preferences for pizza’ (Moor 1997) are used by third parties.

The second category of rights refers to the informational nature of the digital domain and takes into account the relevance of accessing and sharing information for the well-being of individuals. Such rights protect the aspects of the onlife that are deemed necessary to experience a good life. According to this analysis, it is maintained that individuals have the right to enjoy:

  • computing resources: these refer to computational cycles that are necessary for individuals to experience the onlife.
  • storage resources: these are the resources necessary to express, recall, and store personal data and information (storing space as a utility).
  • networking: which refers to the right to act and interact with the rest of the online sphere, agents, and environment.

The paper concludes by considering the proposed informational rights in relation to human rights. It is first recalled that the right to information is already regarded as a liberty right in Art. 19 of the Universal Declaration of Human Rights (Mathiesen 2008), (Coliver 1995). It is then stressed that the informational rights described in this article should be considered as part of the second generation of human rights (Vasak 1982), concerning social and cultural rights, since such rights are devoted to defending and fostering human lives in the contemporary information age and are not just about accessing information.

References

  • Coliver, S. (1995). “The Right to Know: Human rights and access to reproductive health information.” Edited by ARTICLE 19 and the International Centre Against Censorship.
  • Floridi, L. (2010). “The Digital Revolution as The Fourth Revolution.” Invited contribution to the BBC online program Digital Revolution.
  • Floridi, L., Ed. (2011). The Construction of Personal Identities Online.
  • Floridi, L. (2011). “The Informational Nature of Personal Identity.” Minds & Machines 21 (4): 549-566.
  • Johnson, D. G. (1997). “Ethics Online: Shaping social behavior online takes more than new laws and modified edicts.” Communications of the ACM 40 (1): 60-65.
  • Mathiesen, K. (2008). Censorship and Access to Information. Handbook of Information and Computer Ethics. K. E. Himma and H. T. Tavani. New York, John Wiley and Sons.
  • Moor, J. H. (1997). “Towards a theory of privacy in the information age.” SIGCAS Computer and Society 27 (3): 27–32.
  • Vasak, K., Ed. (1982). The International Dimensions of Human Rights , 2 Volumes, UNESCO.

**David J. Pym: Towards a Philosophy of Information Security **

1. What is Information Security?

Information security is concerned with the protection of the attributes of items of information that are of value to the owners, users, and stewards of that information, and of the systems that process it. It seeks to ensure that just the right information is available to just the right agents, in just the right place at just the right time. Information is invariably represented as some form of data, be it text on a page, bit patterns in computer memory, configurations of neurones in human memory, or lines in the sand on a beach. The attributes of interest in information security can usefully be categorized as confidentiality, integrity, and availability (often referred to as ‘CIA’). Confidentiality is concerned with the degree of exposure of information to its environment. Integrity is concerned with the degree of accuracy of the information. Availability is concerned with the degree of accessibility of the information. All of these attributes have both spatial and temporal aspects. Note that privacy is concerned with an agent’s preference for confidentiality. In our information-rich, massively interconnected world, the meaning and nature of privacy can only be understood in the context of the intended and achieved protection of confidentiality afforded by information-processing systems. Many authors, including Parker [26] among many, choose to enrich the CIA framework with other concepts, such as ‘authentication’ and ‘non-repudiation’. As we shall see below, those are category errors, deriving from a need to analyze rigorously, on the one hand, the declarative objectives of information security and, on the other, the operational mechanisms that are used in order to achieve them. Information security (often slightly carelessly conflated with ‘cyber-security’) has developed as an engineering discipline [1] within informatics. It also has deep connections with logic [12] and, in recent years, has benefitted greatly from the perspectives provided by sociology, psychology, and economics. The concepts of information security can also usefully be applied to other security situations, when considered from the perspective of information flow. A specific example here is airport security [7, 14], in which the availability (accessibility) of the aircraft to passengers must be considered alongside the maintenance of integrity properties of the airside areas of the airport (including the aircraft). Of course, passengers using an airport must give up their privacy/confidentiality. I suggest that it may now be appropriate to consider the concepts, tools, and uses of information security (a discipline that, as we have seen, provides a way to understand much of the activity of the information age) from a philosophical perspective. The context provided by Floridi [17] offers an appropriate point of departure. As a starting point, it is essential to distinguish clearly the objectives of a security architecture, the operational mechanisms by which those objectives are achieved [7, 14], the methodology for making decisions about preferences and trade-offs in the design and delivery of security architectures [7, 14], and the properties of the underlying systems themselves [13, 10]. But this perspective also suggests new ways to approach questions of the meaning of identity and privacy. If an agent’s identity effectively exists in the cloud, who controls it, who owns it, and what is its value? What is the meaning of privacy in such a world?

2. Declarative and Operational Concepts

Confidentiality, integrity, and availability are declarative concepts: they express the objectives of an information security policy. They are perhaps the highest useful level of such characterization, and it is often appropriate and convenient to work with declarative sub-concepts. Examples include staff productivity and server availability (availability), levels of data-entry error, disk corruption, or bit-rot (integrity), and levels of database breaches or losses of portable back-up storage media (confidentiality).

Corresponding to each of the declarative concepts are operational concepts that can be used to deliver the declarative objectives. For example, authentication (e.g., passwords, etc.) and access control models (e.g., top secret, secret, unclassified, and who can read/write at what level) are used to protect confidentiality; back-ups and check-sums are used to protect integrity; RAIDed file systems, redundant processors, and cloud-based services protect availability. The key challenges for the system designer/manager and/or policy-maker are the following: to identify, quantify, and prioritize their declarative objectives, and to identify, determine the effectiveness of, and cost the operational mechanisms to be used to implement their objectives. This requirement to understand and manage a transition from essentially qualitative objectives to quantified measures of operational effectiveness is a characteristic challenge of information security. The key practical technique in this respect is derived from economics, in particular utility theory as it is deployed in macroeconomics, as a predictive simulation modelling tool [13, 15], in which the values of competing attributes must be weighted and balanced according to the preferences of the policy-makers.

3. Utility and Trade-offs

We have defined information security (in §1) as being concerned with the value of the security attributes of information to its owners, users, and stewards. Utility theory (see, for example, [21, 27]), particularly as developed in the contexts of macroeconomics and financial economics, provides a highly expressive framework for representing the values (preferences) of the managers of a system. This economic framework can be deployed in the context of information security (see, for example, [6, 19, 20, 8]), where concepts such as confidentiality, integrity, and availability, which lie within competing declarative categories, can be seen to trade off against one another as the relevant controls (such as system configurations or investments in people, process, and technology) vary. An organization that deploys information security measures exists in an economic and/or regulatory environment. This environment places constraints upon the systems and security architectures available to the organization’s managers. The managers can formulate a utility function that expresses their policy preferences, which will depend upon the nature of their organization. For one example, state intelligence agencies and online retailers will have quite different priorities among confidentiality, integrity, availability, and cost; see, for example, [19]. For another, consider the trade-offs that must be made between the efficient ‘information’ flow of checked-in passengers to the aircraft and adequate assurance that the aircraft will not contain dangerous objects.
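As a rough illustration of how such preferences can be made explicit, the following sketch (ours; the weight profiles are invented for illustration and not taken from the cited studies) encodes a simple weighted utility over the three declarative attributes and cost, and evaluates one candidate architecture under two different preference profiles:

```python
# Minimal sketch (our illustration) of a utility function over the declarative
# attributes, with weights expressing an organisation's preferences.
def utility(conf, integ, avail, cost, weights):
    w_c, w_i, w_a, w_cost = weights
    return w_c * conf + w_i * integ + w_a * avail - w_cost * cost

# Attribute levels achieved by some candidate security architecture (0..1 scale).
architecture = dict(conf=0.9, integ=0.8, avail=0.6, cost=0.5)

intelligence_agency = (0.6, 0.2, 0.1, 0.1)   # confidentiality-dominated preferences
online_retailer     = (0.1, 0.2, 0.5, 0.2)   # availability-dominated preferences

print(utility(**architecture, weights=intelligence_agency))  # 0.71
print(utility(**architecture, weights=online_retailer))      # 0.45
```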

In such highly complex security architectures, it will typically not be possible to formulate system equations in the way that is usually possible in, for example, macro-economic modelling. Typically, though, the key control variables, such as system interconnectivity or investment in various aspects (people, process, and technology) of security operations, will be identifiable. Instead of system equations, an executable system model [13, 10], using the key control variables, can then be used to simulate the dynamics of the utility function. Thus we are led to identify, and find new ways to address, questions such as the following: what is the value of an identity? What is the value of privacy?
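To convey the idea of an executable model of the dynamics of utility, here is a toy Monte Carlo sketch (ours, not the cited Core Gnosis models): a single control variable, security investment, lowers an assumed incident probability but carries its own cost, and expected utility is estimated by simulation:

```python
import random

# Toy executable model (our sketch): vary one control variable -- security
# investment -- and estimate by simulation how expected utility responds.
def expected_utility(investment: float, runs: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    p_incident = 0.3 / (1.0 + investment)   # assumed, purely illustrative relationship
    incident_loss, cost_rate = 10.0, 1.0
    total = 0.0
    for _ in range(runs):
        loss = incident_loss if rng.random() < p_incident else 0.0
        total += -loss - cost_rate * investment
    return total / runs

for inv in (0.0, 1.0, 2.0, 4.0):
    print(f"investment={inv}: expected utility ~ {expected_utility(inv):.2f}")
```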

4. Towards a Logic of Information Security

The systems that information security seeks to protect can be understood conceptually in terms of the classical theory of distributed systems (see, e.g., [16, 5]). They execute processes, which manipulate resources around the system’s locations, and respond to, and supply, events from their environments. To understand and model such systems mathematically, our starting points are Milner’s synchronous calculus of communicating systems (SCCS) [24], perhaps the most basic of process calculi, the resource semantics of bunched logic, and a basic notion of location [11, 9]. The basic idea is that processes evolve by performing actions, and in so doing modify the current location and the resources available. This co-evolution of locations, resources, and processes is described by a modification function μ, which specifies that the basic action a, when applied to resources R at location L, returns resources R′ at location L′; this is written μ(a, L, R) = (L′, R′). Processes are constructed from basic actions using combinators for action prefixing, concurrent composition, non-deterministic choice, local resources, and recursion. The meaning of the combinators is given by a structural (natural deduction-style) operational semantics. Along with this calculus of processes comes a logic of state. This logic is most naturally formulated as a modal, substructural system that is closely related to the logic of bunched implications [25, 11, 10]. The key judgement of the logic is given by the satisfaction relation, L, R, E ⊨ φ, which specifies that the property φ holds of the system state (or world [22, 23], or situation; the inherent partiality of the model of the system provided by (L, R, E)-worlds suggests that situation theory [4, 3, 5] may give a useful perspective). The logic includes classical/intuitionistic additive connectives, quantifiers, and modalities, and classical/intuitionistic multiplicative connectives, quantifiers, and modalities, that are able to express naturally a wide range of delicate relationships between resource manipulation and process execution [24, 18]. Do these logical tools provide us with new ways to reason about identity, about privacy, and even about responsibility [2]?
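The following toy sketch (ours; the action, location, and resource names are invented) is only meant to convey the shape of the modification function described above, taking an action, a location, and resources and returning the new location and resources:

```python
# Toy sketch (ours) of the co-evolution of locations, resources, and processes:
# mu takes a basic action a, a location L, and resources R, and returns the new
# location L' and resources R'. A process here is just a list of basic actions.
def mu(action, location, resources):
    if action == 'archive_file' and 'file' in resources.get(location, set()):
        new_resources = dict(resources)
        new_resources[location] = resources[location] - {'file'}
        new_resources['archive'] = resources.get('archive', set()) | {'file'}
        return 'archive', new_resources            # (L', R')
    return location, resources                     # unknown action: no change

location, resources = 'office', {'office': {'file'}, 'archive': set()}
process = ['archive_file']                         # a very small process
for action in process:
    location, resources = mu(action, location, resources)
print(location, resources)   # archive {'office': set(), 'archive': {'file'}}
```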

5. Towards a Philosophy of Information Security

If, following Floridi [17], we take the philosophy of information to be concerned with ‘(a) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilization and sciences; and (b) the elaboration and application of information-theoretic and computational methodologies to philosophical problems’, then it seems clear from the preceding discussion that considering explicitly the philosophy of information security raises a number of issues

within the philosophy of information itself and identifies a number of points of contact with concepts from logic, economics, and socio-technical aspects of systems engineering. I suggest that an examination of the concepts of information security, their declarative and operational meanings, and the concept of information flow within the context of the declarative-operational interplay might be both challenging and useful.

References

  • [1] Ross Anderson. Security Engineering (Second Edition). Wiley, 2008.
  • [2] A. Baldwin, D. Pym, M. Sadler, and S. Shiu. Information stewardship in cloud ecosystems: Towards models, economics, and delivery. In Proc. 2011 Third IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2011), pages 784–791. IEEE Digital Library, 2011.
  • [3] J. Barwise. Situations, facts, and true propositions. In The Situation in Logic, CSLI Lecture Notes 17, 1989.
  • [4] J. Barwise and J. Perry. Situations and Attitudes. MIT Press, 1983.
  • [5] J. Barwise and J. Seligman. Information Flow: The Logic of Distributed Systems. Cambridge University Press, 1997.
  • [6] A. Beautement, R. Coles, J. Griffin, C. Ioannidis, B. Monahan, D. Pym (Corresponding Author), A. Sasse, and M. Wonham. Modelling the Human and Technological Costs and Benefits of USB Memory Stick Security. In M. Eric Johnson, editor, Managing Information Risk and the Economics of Security, pages 141–163. Springer, 2008.
  • [7] A. Beautement and D. Pym. Structured systems economics for security management. In T. Moore, editor, Proc. WEIS 2010, Harvard, 2010. http://weis2010.econinfosec.org/papers/session6/weis2010_beautement.pdf.
  • [8] Y. Beres, D. Pym, and S. Shiu. Decision support for systems security investment. In Proc. Business-driven IT Management (BDIM) 2010. IEEE Xplore, 2010.
  • [9] M. Collinson, B. Monahan, and D. Pym. A logical and computational theory of located resource. Journal of Logic and Computation, 19:1207–1244, 2009.
  • [10] M. Collinson, B. Monahan, and D. Pym. A Discipline of Mathematical Systems Modelling. College Publications, 2012.
  • [11] M. Collinson and D. Pym. Algebra and logic for resource-based systems modelling. Mathematical Structures in Computer Science, 19:959–1027, 2009. doi:10.1017/S0960129509990077.
  • [12] M. Collinson and D. Pym. Algebra and logic for access control (erratum). Formal Aspects of Computing, 22(2):83–104; erratum 22(3–4):483–484, 2010. Preprint (incorporating erratum): http://www.hpl.hp.com/techreports/2008/HPL-2008-75R1.html.
  • [13] Matthew Collinson, Brian Monahan, and David Pym. Semantics for structured systems modelling and simulation. In Proc. Simutools 2010. ACM Digital Library.
  • [14] Matthew Collinson, David Pym, and Barry Taylor. A framework for modelling security architectures in services ecosystems. In Proc. ESOCC 2012, volume 7592 of LNCS, pages 64–79. Springer, 2012.
  • [15] Core Gnosis. http://www.hpl.hp.com/research/systems_security/gnosis.html.
  • [16] George Coulouris, Jean Dollimore, and Tim Kindberg. Distributed Systems: Concepts and Design. Addison Wesley, 3rd edition, 2000.
  • [17] Luciano Floridi. The Philosophy of Information. Oxford University Press, 2011.
  • [18] M. Hennessy and R. Milner. Algebraic laws for nondeterminism and concurrency. Journal of the ACM, 32(1):137–161, 1985.
  • [19] C. Ioannidis, D. Pym, and J. Williams. Investments and trade-offs in the economics of information security. In Roger Dingledine and Philippe Golle, editors, Proceedings of Financial Cryptography and Data Security ’09, volume 5628 of LNCS, pages 148–166. Springer, 2009. Preprint available at http://www.cs.bath.ac.uk/~pym/IoannidisPymWilliams-FC09.pdf.
  • [20] Christos Ioannidis, David Pym, and Julian Williams. Information Security Trade-offs and Optimal Patching Policies. European Journal of Operational Research, 2011. doi:10.1016/j.ejor.2011.05.050.
  • [21] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, 1976.
  • [22] S. A. Kripke. Semantical considerations on modal logic. Acta Philosophica Fennica, 16:83–94, 1963.
  • [23] S. A. Kripke. Semantical analysis of intuitionistic logic I. In J. N. Crossley and M. A. E. Dummett, editors, Formal Systems and Recursive Functions, pages 92–130. North-Holland, Amsterdam, 1965.
  • [24] R. Milner. Calculi for synchrony and asynchrony. Theoretical Computer Science, 25(3):267–310, 1983.
  • [25] P. W. O’Hearn and D. J. Pym. The logic of bunched implications. Bulletin of Symbolic Logic, 5(2):215–244, June 1999.
  • [26] Donn Parker. Fighting Computer Crime: A New Framework for Protecting Information. Wiley, 1992.
  • [27] Yoav Shoham and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2008.

**Ignacio Hernández Antón: A scenario where quantitative and qualitative

approaches to information modelling converge**

Introduction.

The Russian cards problem (RCP) is an interesting informational game in which an informer and informees are modelled as card players. The game is closely related to information security, where two goals are pursued at once: to be informative enough to the legitimate agents while, at the same time, not being informative to the intruder. This should result in a differential impact of information. One of the surprising things here is that it is possible to reach those goals without enciphering procedures. Another point is that dynamic epistemic logic (DEL) comes into play as a useful tool to model informational states and information transmission. Basically, DEL can be viewed as a formal qualitative approach to information modelling, since it provides no explicit quantification of information but rather an accurate way of modelling agents’ informational states and their changes.

On the other hand, a primitive quantitative possibilistic approach to information, based on Hartley’s proposal, is considered to play the role of a numerical flipside to the logical treatment. The possibilistic approach differs from Shannon’s stance on information in that it is based not on the concept of probability but on the space of possibilities, connecting it to the modal approach of possible worlds.

This paper presents work in progress on the convergence of these two fields via a card game. The talk will focus first on presenting some modal epistemic and dynamic rudiments, as well as some quantitative measures over the space of possibilities; secondly, on a short presentation of the Russian cards problem and its logical modelling, paralleled by a quantitative study of the informational states of the agents involved and their dynamics. Some problems are presented as future challenges currently being faced in this field. Clearly, this kind of study carries philosophical assumptions about the ultimate nature of information, the modelling of cognitive states and related issues. The author is open to those questions, but in this work he intentionally sets those debates aside and focuses on the concept of information in terms of what it does.

The informational-epistemic scenario. In detail, the Russian cards problem is presented as follows:

From a pack of seven known cards (for instance 0-6), two players (A, B) each draw three cards and a third player (C) gets the remaining card. How can the first two players (those with three cards) openly (publicly) inform each other about their cards, without enciphering the messages and without the third player learning, for any card they hold, who holds it? Communication is done by truthful and public announcements; no private or untruthful message passing is considered so far. We may wonder what constitutes a secure announcement in this scenario. A secure announcement should keep c (the “intruder”) ignorant throughout the whole communication and guarantee the common knowledge of this agent’s ignorance. According to this approach, a good protocol comprises an announcement sequence that in particular satisfies the following requirements:

Informativeness 1: Principals a and b finally know each other’s cards.

Informativeness 2: It is common knowledge, at least to the principals, that they know each other’s cards.

Security 1: The intruder, c, remains ignorant throughout.

Security 2: It is common knowledge to all agents that the intruder remains ignorant.

Knowledge-based: Protocol steps will be modelled as public announcements.

Public announcement: here, a public announcement by an agent a is a set of possible hands that a may hold. If the announcement is successful, it will modify the agents’ informational states. Additionally, if it is also good (safe and informative), it will modify the agents’ informational states in accordance with the requirements above.

We split the informativeness and security requirements into two parts each. Accordingly, we define a knowledge-based protocol: a knowledge-based protocol using public announcements is a finite sequence of instructions determining sequences of announcements. Each agent a chooses an announcement conditional on that agent’s knowledge. The protocol is assumed to be common knowledge among all agents.

An example solution for RCP. One knowledge-based protocol that constitutes a solution for the riddle is as follows. Suppose that the actual deal of cards is that agent a has {0, 1, 2}, b has {3, 4, 5} and c has {6}.

  • a says: My hand is one of {012, 046, 136, 145, 235}.
  • Then, b says: c’s card is 6.

After this, it is common knowledge to the three agents that a knows the hand of b, that b knows the hand of a, and that c is ignorant of the ownership of any card not held by itself.
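To make the security claim concrete, the following minimal sketch (our illustration, not code from the paper) brute-forces c's perspective after a's announcement and confirms that no remaining card can be attributed to a or b:

```python
# A minimal brute-force check (ours) that the announcement {012, 046, 136, 145, 235}
# keeps c ignorant: from c's point of view (c holds card 6), every remaining card
# could still belong to either a or b.
cards = set(range(7))
announcement = [{0, 1, 2}, {0, 4, 6}, {1, 3, 6}, {1, 4, 5}, {2, 3, 5}]
c_card = 6

# Hands of a that c still considers possible after the announcement.
possible_a_hands = [hand for hand in announcement if c_card not in hand]

for card in sorted(cards - {c_card}):
    owners = {'a' if card in hand else 'b' for hand in possible_a_hands}
    print(f"card {card}: possible owners from c's view = {owners}")
    assert owners == {'a', 'b'}   # c cannot attribute the card to a or b
```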

Discussion. Using the conceptual arsenal associated with the modal operator Ki, I would like to present and discuss how to model information and its transmission using dynamic epistemic logic, alongside the following possibilistic measures of information:

H(E) = log2 |E|

IH(E, E′) = log2 (|E| / |E′|)

where E is the space of possibilities (and |E| its cardinality), H(E) is a bit-normalized quantification of information, and IH(E, E′) tries to capture the informational gain after informational actions such as announcements. Basically, I parallel logic and information theory in search of a point that can witness an intersection between the qualitative and quantitative approaches.
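The example above can illustrate the two measures: from c's perspective there are 20 possible hands for a before the announcement and only 3 afterwards, so the announcement yields log2(20/3) ≈ 2.74 bits for c, while still leaving every card's owner undetermined, as checked earlier. A small sketch (ours):

```python
import math
from itertools import combinations

# Hypothetical illustration (names are ours) of the Hartley-style measures applied
# to the example above, from c's perspective (c holds card 6).
cards = set(range(7))
c_card = 6
announcement = [{0, 1, 2}, {0, 4, 6}, {1, 3, 6}, {1, 4, 5}, {2, 3, 5}]

# E: hands of a that c considers possible before the announcement.
E = [set(hand) for hand in combinations(cards - {c_card}, 3)]
# E': hands of a still possible after a's announcement.
E_prime = [hand for hand in E if hand in announcement]

H = math.log2(len(E))                        # H(E) = log2 |E|
gain = math.log2(len(E) / len(E_prime))      # IH(E, E') = log2 (|E| / |E'|)
print(f"|E| = {len(E)}, |E'| = {len(E_prime)}, H(E) = {H:.2f} bits, gain = {gain:.2f} bits")
```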

References

  • [1] Ignacio Hernández-Antón, Fernando Soler-Toscano and Hans van Ditmarsch. Unconditionally Secure Protocols with Genetic Algorithms. In Advances in Intelligent and Soft Computing, 2012.
  • [2] Ignacio Hernández Antón, Fernando Soler Toscano. Algoritmos Genéticos para Generación de Protocolos Incondicionalmente Seguros. In Liber Amicorum Ángel Nepomuceno, pp. 61–68. Fenix Editora, Sevilla, España, 2010.
  • [3] Ralph Hartley. Transmission of Information. Bell System Technical Journal, 1928.
  • [4] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, vol. 27, 1948.
  • [5] B. Walliser. In P. Adriaans and J. van Benthem, editors, Handbook of the Philosophy of Information. Elsevier, 2008.
  • [6] R. Fagin, J. Y. Halpern, Y. Moses and M. Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge MA, 1995.
  • [7] H. van Ditmarsch. The Russian cards problem. Studia Logica, vol. 75, 2003.
  • [8] H. van Ditmarsch, W. van der Hoek and B. Kooi. Dynamic Epistemic Logic. Synthese Library, vol. 337. Springer, 2007.

**Yacin Hamami: When is Deduction Informative in Mathematics?**

Introduction and Motivations.

To provide a philosophical account of the Information obtained by Deduction (IoD) has been one of the central issues in the philosophy of information. Triggered by Hintikka’s so-called Scandal of Deduction [9, 10], several authors [1, 2, 11, 17] have recently proposed different philosophical and logical accounts of how we obtain information by deduction, aiming in particular to overcome the limitations of Hintikka’s original proposal. However, all these approaches have been developed in a restricted context: (i) they have focused on logical deduction and (ii) they have not taken into account the more general epistemic activities in which deduction takes place. A similar observation has been made by van Benthem and Martinez in a short paragraph of [18]: Proof theory is often considered the most upper-class part of logic, dealing with crystalline mathematical structures where all traces of real human reasoning have been removed. But a return to the real world occurs as soon as we ask what good the information does which is created and transformed by inference. [. . . ] Deduction seems informative only for people engaged in certain tasks [. . . ]. This agent-orientation seems the right setting for understanding how an inference can inform us, and sometimes even surprise us, contradicting previous beliefs. (my emphases) [18, p. 275]

This not only constitutes an important observation, but also an interesting proposal for developing an account of IoD which would (i) look at real-world deduction and (ii) take into account the particular epistemic task in which the agent is engaged. The premise of this talk is that mathematics constitutes the archetypal human activity in which such a real-world, activity-sensitive notion of IoD is operating. To investigate it, one can turn to the important philosophical literature on mathematical proofs [5, 6, 12], one of the central topics of study in the philosophy of mathematics. We shall then see in this talk that a rich variety of notions of IoD has emerged in the philosophy of mathematical proofs, offering thereby several interesting notions of IoD to be studied and conceptualized within the framework of the philosophy of information.

Aim and Approach.

The specific aim of this talk is thus to identify different notions of IoD in the specific context of mathematical proofs in mathematical practice. The first and obvious one is of course the situation in which deduction is used for obtaining knowledge of a theorem that has not yet been proved. Even though this case seems trivial at first sight, it is indeed quite subtle. The reason is that different theorems are not equally informative in the course of mathematical inquiry. This point can be traced back to Poincaré’s philosophy of mathematics [13, 14], and we shall see that a reconstruction of Poincaré’s epistemology of proof proposed by Detlefsen [4] offers a first grasp on this issue. We will then look at situations in which deduction is informative in mathematics even though it does not consist in proving previously unknown theorems. More specifically, we will focus in turn on three important phenomena that have been extensively discussed in the philosophy of mathematical proofs—re-proving already proved theorems, failing proof attempts, providing formal proofs of already proved theorems—and we will see that a variety of notions of IoD lies within these different phenomena. We will conclude the talk by wrapping up the different notions of IoD identified, and we will discuss whether a general characterization scheme can be provided. As a first attempt, we will examine the following proposal: a deduction is informative in mathematics if and only if it contributes to a mathematical activity.

§1. Information from Proving Theorems. Not any theorem, and therefore not any deduction, is equally informative in the course of mathematical inquiry. This point was already noted by Poincaré [13, 14], and in order to make it explicit in his reconstruction of Poincaré’s epistemology of proof, Detlefsen introduced in [4] the notion of a mathematical architecture and stated that: “knowledge of an architecture is thus something like knowledge of a strategy for playing a game” [4, p. 364]. In this part of the talk, we will present Poincaré’s view that deduction is informative if and only if it contributes to a mathematical architecture, and we will see that, for Poincaré, whether deduction is informative in mathematical inquiry is a strategic issue.

§2. Information from Re-Proving Theorems. Dawson [3] has provided an extensive study of the many reasons why mathematicians re-prove theorems. This is particularly interesting since, from a naive perspective, one could expect that there is no information to gain from re-proving an already known theorem. Dawson lists eight different reasons, and we shall see that each of them leads to a particular notion of IoD.

§3. Information from Failed Proof Attempts. In his seminal paper [15], Rav developed several examples of attempts to prove theorems that fail to meet the original goal, and yet produce important advances in mathematics. According to Rav, one of the main interests that sometimes emerges in failed proof attempts is the development of new methods. This is an important notion of IoD in mathematics: in this case, the information provided by the deduction does not lie in proving the considered theorem, since it simply fails to do so, but rather in the capacity of the deduction to offer a new method that can then be used to establish other theorems.

§4. Information from Formal Proofs. Formal verification is an emerging field at the crossroads of mathematics and computer science that aims to provide completely formalized proofs of mathematical theorems, which can then be checked mechanically by a computer (see the special issue of the Notices of the American Mathematical Society [7, 8] for an overview of these developments). How could it be that a formal proof—a completely formal deduction—provides additional information over its informal counterpart? We shall see that the notion of IoD at stake here has to do with: (i) managing the complexity of the development of mathematics and (ii) increasing the certainty of mathematical theorems.

§5. Towards a General Characterization? In this last part, we will wrap up the different notions of IoD identified in the talk. The natural question will then be: is there a general characterization scheme under which these different notions of IoD can be subsumed? We will first look at the different accounts of IoD for logical inference proposed by Hintikka [10], Sequoiah-Grayson [17], D’Agostino and Floridi [2] and Jago [11].

We will then argue that none of them can provide such a general characterization scheme, due to the fact that they do not take into account the general activity in which deduction takes place. We will then discuss the following characterization proposal: a deduction is informative in mathematics if and only if it contributes to a mathematical activity. The issue will then be to answer the two following questions: (i) what is a mathematical activity? (ii) in which sense can a deduction contribute to a mathematical activity?

Conclusion. The philosophy of information and the philosophy of mathematics have not yet met. We hope to show in this talk that the philosophy of mathematics, through its investigation of mathematical proofs, offers a rich variety of notions of IoD waiting to be studied and conceptualized within the framework of the philosophy of information. The notions identified in this talk are very likely to be the tip of the iceberg, and many others remain to be identified and explored.

References

  • [1] P. Allo. On Logics and Being Informative. PhD thesis, CLWF, Vrije Universiteit Brussel, 2007.
  • [2] M. D’Agostino and L. Floridi. The enduring scandal of deduction. Synthese, 167(2):271–315, 2009.
  • [3] J.W. Dawson. Why do mathematicians re-prove theorems? Philosophia Mathematica, 14(3):269–286, 2006.
  • [4] M. Detlefsen. Poincaré against the logicians. Synthese, 90(3):349–378, 1992.
  • [5] M. Detlefsen. Proof and Knowledge in Mathematics. Routledge, 1992.
  • [6] B. Gold and R.A. Simons. Proof and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America, 2008.
  • [7] T.C. Hales. Formal proof. Notices of the AMS, 55(11):1370–1380, 2008.
  • [8] J. Harrison. Formal proof–theory and practice. Notices of the AMS, 55(11):1395–1406, 2008.
  • [9] J. Hintikka. Information, deduction, and the a priori. Noûs, 4(2):135–152, 1970.
  • [10] J. Hintikka. Surface information and depth information. In J. Hintikka and P. Suppes, editors, Information and Inference, pages 263–297. Synthese Library, New York, 1970.
  • [11] M. Jago. The content of deduction. Journal of Philosophical Logic, pages 1–18, 2012.
  • [12] P. Mancosu. The Philosophy of Mathematical Practice. Oxford University Press, 2008.
  • [13] H. Poincaré. Les mathématiques et la logique. Revue de Métaphysique et de Morale, pages 294–317, 1906.
  • [14] H. Poincaré. La Valeur de la Science. E. Flammarion, 1908.
  • [15] Y. Rav. Why do we prove theorems? Philosophia Mathematica, 7(1):5–41, 1999.
  • [16] S. Sequoiah-Grayson. The scandal of deduction. Journal of philosophical logic, 37(1):67–94, 2008.
  • [17] S. Sequoiah-Grayson. A positive information logic for inferential information. Synthese, 167(2):409–431, 2009.
  • [18] J. van Benthem and M. Martinez. The stories of logic and information. In J. van Benthem and P. Adriaans, editors, Handbook of the Philosophy of Information, Handbook of the Philosophy of Science. Elsevier, 2007.

**Francesco Berto & Jacopo Tagliabue: Either the World is Digital or Not**

In recent important works, Luciano Floridi (2009, 2011) has raised an ingenious challenge to the tautological status of the proposition expressed by our title. The argument is at the core of Floridi’s two-stage project in informational ontology: a pars destruens highlighting the shortcomings of digital ontology, and a pars construens putting forward a new informational picture of reality’s fundamental layer, called “Informational Structural Realism” (ISR). Digital ontology is based on the claim (DO) that “The ultimate nature of reality is digital” (Floridi 2009, p. 151). For Floridi, DO is not entailed by information-inspired ontology as such: by showing that the former is “not a promising line of research” (Ibid., p. 176), we can properly set the stage for the development of Floridi’s ISR. DO as such may not imply pan-computationalism: the view that the physical universe is a big Turing machine-like device (see Piccinini, 2010, § 3.1). Floridi’s strategy is notable: he does not aim at showing that reality is not a cellular automaton; nor does he claim that we just cannot address the issue effectively, due to insurmountable cognitive limitations. Rather, he directly attacks the (alleged) tautology expressed by the title, according to which the world is digital or the world is not digital (henceforth: D v ~D).

Floridi’s argument has two main steps:

(F1) He provisionally grants D v ~D, but shows that an agent can never know that DO is true.

(F2) He claims that the result in (F1) is not just an epistemic limitation, but is due to the very nature of the concept at hand. Digital and analogue are features of modes of presentation of reality, not of reality, and the initial concession in (F1) must be withdrawn. (see Floridi 2009, p. 160).

Floridi’s full-fledged argument for (F1) and (F2) is an ornate, 11-page-long thought experiment involving four angels. Here we summarize the crucial step in the experiment with the following claim (see Floridi 2009, pp. 169-170):

It is possible to use digital-to-analogue and analogue-to-digital converters to switch the nature of the layer presented to the observer of the Universe, disregarding completely the question whether the Universe itself was digital or analogue in the first place.

We raise two difficulties for Floridi’s metaphysical picture:

1) The objection from counting: the D v ~D-issue may boil down to a question about the number of things, and Floridi rightly presents it as such. In his step (F1) (see Floridi 2009, Section 3.1), he describes Michael’s (his first angel’s) epistemic position, which “enjoys a God’s eye view of reality in itself” (p. 162). Michael shapes the stuff constituting reality by applying a total ordering relation, so the result can be represented as a line. Then Michael uses his maximally sharp sword to Dedekind-cut the line, i.e., to figuratively map reality via Dedekind cuts to the real number line; what Michael is doing, emblematic representation aside, is counting: working towards establishing the number of things. Take the totality of things in the (physical) universe. If such a totality is in one-to-one correspondence with R, then the world is not digital, ~D (Michael’s sword always hits some thing): reality is dense and continuous. If not, so that sooner or later Michael’s sword does not hit any thing, then the world is digital, D: either it is not dense, or it is dense but still enumerable, that is, equinumerous with Q. Now the D v ~D-issue boils down to this: either there are finitely-or-denumerably-many things, or not. How can this be a Satzklang? “D v ~D” can be a category mistake only if it makes no sense to speak of the number of things in the (physical) world. How can it be nonsensical to ask the question of how many things there are? Thus, the assumption D v ~D in (F1) ought not to be retracted. The question of the number of things makes perfect sense, and so does, thus understood, the question of whether the world is digital or not. And so D v ~D is not a wannabe-tautology or a Satzklang, but a real tautology. And so it is true.

2) The objection from mereology: it has long been recognized that mereology (alone, or in conjunction with topology) allows for a neat, rigorous formal treatment of the ontology of the spatiotemporal world and its properties (see for example Casati, Varzi 1999). Once the usual axioms for the parthood relation are introduced (antisymmetry, transitivity, reflexivity), one can add other mereological principles to further specify the intended model for the interpretation of the formal theory; in particular, one can ask whether there are objects in the domain which have no proper parts (the atomicity axiom), allowing us to distinguish precisely between an analogue and a digital universe within the theory. Floridi’s own ontology is thus committed to one of the following claims: either parthood itself has no place at all in Reality (but then it becomes hard to understand the “patterns” ISR talks about), or some axioms of mereology quantify over objects while others quantify over representations of objects (which appears to be very implausible).
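For concreteness, the core axioms just mentioned can be spelled out as follows (a standard formulation; the notation is ours, with ⊑ for parthood and ⊏ for proper parthood, and atomicity given in one common form):

$$\forall x\,(x \sqsubseteq x) \qquad \forall x\forall y\,((x \sqsubseteq y \wedge y \sqsubseteq x) \to x = y) \qquad \forall x\forall y\forall z\,((x \sqsubseteq y \wedge y \sqsubseteq z) \to x \sqsubseteq z)$$

$$\text{(Atomicity)}\quad \forall x\,\exists y\,(y \sqsubseteq x \wedge \neg\exists z\,(z \sqsubset y))$$

An atomistic (digital) universe satisfies Atomicity; an atomless ("gunky", analogue) universe satisfies its negation.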

Finally, as concluding remarks, we sketch a pair of possible ways out for the structural ontologist – both of which include unpalatable changes to the logic of identity – and suggest that the study of digital ontology may indeed be a worthy, multidisciplinary enterprise involving philosophy, the information sciences, and AI.


**Luciano Floridi: Maker’s Knowledge and the Synthetic Uninformative**

In this presentation, I begin by discussing the three standard distinctions used to qualify propositional knowledge: analytic vs. synthetic, a priori vs. a posteriori, and necessary vs. contingent. The ultimate goal is to understand what kind of knowledge the so-called maker’s knowledge is, as when Alice knows or, rather, is informed (holds the information) that Bob’s coffee is sweetened because she has just put two spoons of sugar in it. In the course of the presentation, I shall argue that:

  1. we need to decouple a fourth distinction, namely informative vs. uninformative, from the previous three, in particular from its implicit association with analytic vs. synthetic and/or a priori vs. a posteriori;
  2. such decoupling facilitates, and is facilitated by, moving from a propositional to an agent-oriented approach: the distinctions qualify a proposition, a message, or a set of well-formed, meaningful and truthful data not just in themselves but with respect to an information agent;
  3. the decoupling and the agent-oriented approach enable a re-mapping of currently available positions (Classic, Innatist, Kant’s and Kripke’s) on these four dichotomies; and
  4. within such a re-mapping, a fifth position, capturing the nature of a maker’s information in terms of these four dichotomies, is best described as the synthetic uninformative.

**Nir Fresco, Aditya Ghose, Patrick McGivern: Types of Information

Processed by Cognitive Agents**

Our paper deals with the role that information plays in cognition. More specifically, we ask ‘What types of information are required for cognition?’. Three answers are presented, though not on a par.

  1. Cognition requires factual information and instructional information (Floridi, forthcoming, 2011, 2012).
  2. Cognition requires, at the very least, instructional information and descriptive information, which includes factual information and hypothetical information.
  3. Cognition requires cognitive information, direct regulative information and direct affective information (Burgin, 2010).

According to the first answer, factual information, which is understood as well-formed, meaningful and truthful data, upgrades to knowledge-that iff it is correctly accounted for (Floridi, forthcoming, 2012). Instructional information, which is understood as well-formed, meaningful data that are prescriptive rather than descriptive, is the basis for knowledge-how (Floridi, 2011). Therefore, cognition requires factual information and instructional information.

According to the second answer, which is endorsed in this paper, whilst instructional information and factual information are certainly required for cognition, so are other forms of descriptive information, notably hypothetical information. Cognition requires instances of factual information that depend on external truth conditions (e.g., ‘This smoke means there is/was fire’) for knowledge-that. It also requires instructional information as the basis for knowledge-how. However, cognition also requires hypothetical information, such as assumptions, presuppositions and conjectures, whose truth-value is less pertinent to the behaviour of the cognitive agent. Hypothetical information is characterised in terms of available evidence. Conjectures, particularly in science, are often put forward for trial without conclusive evidence. Similarly, presuppositions are often made in either a line of argument or some course of action without conclusive evidence. The same applies in the case of cognitive agents in the course of belief revision. According to the third answer, which classifies information as a property rather than as semantic data (or a thing), there are three types of information required for cognition (Burgin, 2010).

The first type is cognitive information, which is any information that induces changes in the agent’s system of knowledge. The second type is direct regulative information, which is any information that changes the content of the agent’s system of will and instincts. The third type is direct affective information, which is any information that changes the content of the agent’s system of affective states (e.g., emotions, moods and motivations). Cognition requires these three types of information. Will, for example, controls emotions and reasoning; and positive emotions help in learning and remembering. These three answers are neither exhaustive nor mutually exclusive. The second answer is an extension of the first one, adding another subcategory of descriptive information, namely, hypothetical information. The third answer approaches information differently from the two preceding answers: it associates the information type with the changes the information induces in different subsystems of the cognitive agent. The first two answers, on the other hand, associate information with semantics, since at the core of all the types of information identified is semantic content (i.e., meaningful structured data). Our primary objective is to emphasize that an ontological distinction between types of information alone is insufficient to determine their contribution to cognition. Rather, equal focus should be given to the way information is used by cognitive agents. This is the approach adopted by both the second and third answers. Further, an inquiry that is limited exclusively to either exogenous sources of information (i.e., perceptual information) or endogenous sources of information (e.g., the agent reflecting on its current state of belief) is not rich enough for determining how the information affects the agent’s cognitive behaviour. Here too, the second answer encompasses both exogenous and endogenous sources of information. We note that, strictly, the scope of the analysis underlying the first answer is confined to knowledge (rather than cognition broadly).

To support our view of the types of information needed for cognition, we show how they partake in formal models of belief change in rational agents. We present some aspects of formal models of belief change to underscore the importance of hypothetical information in modelling cognitive processing. Despite many developments in belief change in the last three decades, the AGM model (named after its inventors Carlos Alchourrón, Peter Gärdenfors and David Makinson (1985)) remains the standard model to which all other models are compared. On this model, there are three basic types of changes of belief: expansion, revision and contraction. Expansion is a simple way of modelling the addition of a new belief, based on new data, to the agent’s current belief set where no inconsistency is identified. Revision is a way of modelling belief change when new data contradict at least one belief currently held by the agent: either the new belief A is rejected, or the previously accepted belief ¬A is retracted so that A can be accepted. Lastly, contraction is the case where at least one belief in the agent’s belief set is retracted, without adding a new belief, while minimizing information loss.
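As a toy illustration (ours, far simpler than the full AGM machinery, which operates on logically closed sets and minimises loss via selection functions), the three operations can be sketched over a belief base of propositional literals, with revision obtained from contraction and expansion via the Levi identity:

```python
# Toy illustration (ours, not the authors') of the three operations on a belief
# base of propositional literals, where the negation of 'p' is written '~p'.
def neg(lit: str) -> str:
    return lit[1:] if lit.startswith('~') else '~' + lit

def expand(beliefs: set, new: str) -> set:
    return beliefs | {new}                       # expansion: just add the belief

def contract(beliefs: set, target: str) -> set:
    return beliefs - {target}                    # contraction: give up the belief

def revise(beliefs: set, new: str) -> set:
    return expand(contract(beliefs, neg(new)), new)   # Levi identity

base = {'p', 'q', '~r'}
print(revise(base, 'r'))   # {'p', 'q', 'r'}: ~r is retracted so r can be accepted
```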

The take-home message is that natural cognitive agents do not use only perceptual information as the basis for rational behaviour. The AGM model is certainly not problem-free (Hansson, 2003). For example, belief sets closed under logical consequence can be infinite, and so cannot always be maintained by people, who are assumed to have finite cognitive capacity (Wassermann, 2001, p. 349). The requirement for closure under consequence may be dropped. Importantly, information flows in two ways: we use perceptual information, but also previously stored information, when engaging with our environment. Some of this stored information may be questioned when we conjecture about some aspect of a phenomenon without sufficient evidence, or when we consider the consequences of some presupposition for what we already believe. This type of information, which we call hypothetical information, should not be discounted nor classified as merely pseudo-information.

References

  • Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. The Journal of Symbolic Logic, 50(2), 510–530.
  • Burgin, M. (2010). Theory of information: fundamentality, diversity and unification. Hackensack, NJ: World Scientific.
  • Floridi, L. (forthcoming). Perception and Testimony as Data Providers. Logique et Analyse.
  • Floridi, L. (2011). A Defence of Constructionism: Philosophy as Conceptual Engineering. Metaphilosophy, 42(3), 282–304.
  • Floridi, L. (2012). Semantic information and the network theory of account. Synthese, 184(3), 431–454.
  • Hansson, S. O. (2003). Ten Philosophical Problems in Belief Revision. Journal of Logic and Computation, 13(1), 37–49.
  • Wassermann, R. (2001). On Structured Belief Bases. In M. A. Williams & H. Rott (eds.), Frontiers in Belief Revision. Dordrecht; Boston: Kluwer Academic Publishers.

**George M. Coghill: On Model-based Systems and Qualitative Reasoning with

reference to the Philosophy of Information**

  • Qualitative is nothing but poor quantitative. – Ernest Rutherford
  • Quantitative is nothing but poor qualitative. – Christopher Zeeman
  • The purpose of Computing is insight, not numbers. – Richard Hamming

Introduction

How one views the importance or usefulness of qualitative information depends on one’s perspective and the context within which one is operating. The first two quotations at the beginning of this abstract highlight this. The first was written by a nuclear physicist in a context where precision of models was of paramount importance. The second was stated by a topologist, working in an area where the qualitative features identified in a theory are the most important.

There is a view that within science the term ‘qualitative’ has a pejorative feel. This may be seen, for example, in the move from functional genomics to quantitative systems biology. Even within the Philosophy of Information, the impression is sometimes given that more abstract or qualitative information is somehow deficient in various ways [9, 10]. In this abstract I shall address some of these issues and suggest that, contrary to the perceived view, qualitative information is complementary to quantitative information and can sometimes provide more insight than quantitative information. The final quotation above highlights the unifying factor in all this: regardless of the level of abstraction at which it is presented, the output of a simulation, or any piece of information, should provide some insight. This is preliminary work that I hope can learn from, and possibly contribute to, the Philosophy of Information (as well as the Philosophy of Modelling and Simulation).

Model-Based Systems and Qualitative Reasoning

Model Based Systems and Qualitative Reasoning (MBS&QR) has evolved as a topic within AI, in part to overcome some of the perceived problems with Expert Systems: in particular, the shallow, associational nature of the reasoning. It has been variously called, amongst other things, Naive Physics, Deep Knowledge Based Systems, and Second Generation Expert Systems. A key feature of MBS&QR is abstraction and approximation. This manifests itself both in regard to the domain of the variables, which in the most abstract case may take values from the set of signs (+, 0, or –), and in the relations in the models themselves, in particular the constitutive relations, where, again in the most abstract case, they simply identify that there is a monotonic relation between two variables (represented, for example, as M+(x, y)).

This abstraction gives rise to qualitative models of various kinds [13, 14]. One example of these is Qualitative Differential Equations (QDEs) [14], which are an abstraction of ODEs. In fact, these two kinds of differential equation may be seen as forming the two ends of a spectrum of model abstractions. We have developed a system named Morven [2, 3] that enables the construction of models at any point on this spectrum (qualitative, fuzzy qualitative, interval and numerical).

MBS&QR has also contributed in a variety of ways to the process of modelling in general. One approach, Compositional Modelling [11], has provided a direct basis for a general modelling framework, CML [12], as well as an indirect influence on the development of the award-winning Systems Biology tool, Little B [16].

Sometimes less is more

The imprecision inherent in QR is its strength. The result of certain operations is ambiguous: that is, they yield more than one (in fact, all) qualitatively distinct possibilities. Each behaviour and state generated by a QR engine will contain less information than its more precise cousins. However, the ambiguity permits one to gain a global picture of the way a system may behave, albeit at a qualitative level. This can be extremely useful in a number of circumstances. For example, in a system as simple as a single tank (or wash hand basin if you prefer), there are three possible behaviours (excluding the possibility of overflow): the level of fluid may rise to a steady state, it may already be at steady state, or it may fall to the steady state, depending on whether the initial level of fluid was below, at, or above the equilibrium level. A single pass through the inference engine will provide all three qualitatively distinct possibilities. On the other hand, given precise values for the parameters and initial values, and an exact form for the relations in the model, the result of numerical simulation is a single precise trajectory (that will exhibit the shape of one of the qualitative behaviours).
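A minimal sketch (ours) of the contrast: a single qualitative pass lists all three possible behaviours of the tank at once, whereas a numerical (Euler) simulation of dh/dt = q_in - k*h with specific parameters yields just one precise trajectory:

```python
# Minimal sketch (ours) contrasting qualitative and numerical simulation of the
# single-tank example: dh/dt = q_in - k*h, with equilibrium level h* = q_in / k.
def qualitative_behaviours():
    # One qualitative pass: the sign of (h0 - h*) fixes the behaviour.
    return {'-': 'level rises to the steady state',
            '0': 'level is already at the steady state',
            '+': 'level falls to the steady state'}

def numerical_trajectory(h0, q_in=1.0, k=0.5, dt=0.1, steps=50):
    # Euler integration: one precise trajectory for one choice of parameters.
    h, trajectory = h0, []
    for _ in range(steps):
        h += dt * (q_in - k * h)
        trajectory.append(h)
    return trajectory

print(qualitative_behaviours())           # all three possibilities at once
print(numerical_trajectory(h0=0.0)[-1])   # a single trajectory approaching h* = 2.0
```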

Given these outcomes, it seems reasonable to think of qualitative and numerical simulation as complementary, and able to address different questions about system models, depending on whether a global or a local picture is desired; and that is a contextual issue. So, for example, in a design situation where the possibility of entering a dangerous state is of interest, a global picture may contain more relevant information than a precise one, whereas predicting when a particular state will be reached will require precise information.

Another issue related to this concerns the accuracy, or correctness, of simulations and predictions. When matching a simulation against measured time series, there are inevitably errors, or inaccuracies, between the two, if for no other reason than that there is no such thing as a perfect model. In such situations one may overcome this (to some extent) by abstracting the prediction from numerical to interval, fuzzy or qualitative form. That is, one may increase the correctness by reducing the precision [15]. This result resonates with the sentiments of Zeeman quoted above and articulated so clearly by René Thom in a recent lecture [19]. The abstraction in shifting the domain of the variables from rational numbers to qualitative values seems to map to one aspect of Floridi’s Levels of Abstraction, and MBS&QR has provided a useful framework for reasoning about models in general by means of model dimensions, exploring relational and structural approximations [4, 15]. It is also known that a single model is not suitable for all situations, which leads one to use these dimensions to generate multiple models and to identify model-switching strategies for moving between them [1, 5]. It will be interesting to investigate the relation between Model Switching and the Gradient of Abstraction [10].

Applications

MBS&QR has been applied to an extensive range of domains. The most obvious application is simulation, as has been noted already. The inverse operation to simulation is ab initio model learning; this is the qualitative analogue of system identification, whereby time series data are used to learn a qualitative model that can account for and explain the data [6, 18]. This research has been carried out in the context of systems biology, where, despite concerted efforts to obtain significant amounts of precise time series data, it remains the case that there are many situations in which observations of sub-cellular entities do not provide a particularly high degree of precision; and even where they do, it is often the case that the time series data are sparse, making full system identification rather difficult to achieve. The resulting qualitative models may then be used both as standalone models and as a first step towards identifying quantitative models as and when more data become available.

For large-scale process plant, on the other hand, precise measurements are relatively easy to obtain, though they may be quite expensive and, a fortiori, the construction of quantitative models awkward, to the extent that there is no incentive to develop them. In such cases it is possible that qualitative models of the relevant systems can be built relatively straightforwardly, whether by hand or automatically. These models may then be used, for example, to perform Model-based Diagnosis when things go wrong [7], and even to provide an abstract form of parameter estimation: Qualitative Parameter Estimation [8].

While these applications provide information in the form of variable values, models, or changes in models, they do so in a way that requires human linguistic interpretation for communication and criticism. However, communication normally takes place through the medium of natural language, and the information contained in the predictions from a qualitative model provides the means of giving Natural Language Generation explanations of the data based on the structural relations in the model [17].

Finally, to reiterate, this nascent work suggests that there is something to be gained from continuing to explore the full range of model types, and their relation to Philosophy of Information. Ultimately, of the three statements at the beginning of this abstract only Hamming’s can be counted as generally accurate.

Acknowledgement

GMC is supported by the CRISP project (Combinatorial Responses In Stress Pathways) funded under the BBSRC Systems Approaches to Biological Research (SABR) Initiative (BB/F00513X/1).

References

  • [1] S. Addanki, R. Cremonini and J. S. Penberthy. Graph of Models. Artificial Intelligence, 51, 1991, 145–177.
  • [2] A. Bruce and G. M. Coghill. Parallel Fuzzy Qualitative Reasoning. In B. Rinner, M. Hofbaur & F. Wotowa (eds), Proc. 19th International Workshop on Qualitative Reasoning (Graz, Austria), pp. 110–116, 2005.
  • [3] G. M. Coghill. Mycroft: A Framework for Constraint Based Fuzzy Qualitative Reasoning. Ph.D. Thesis, Department of Computing and Electrical Engineering, Heriot-Watt University, 1996.
  • [4] G. M. Coghill. Model-based Reasoning as Smart Adaption. International Journal of General Systems, 33(5), pp. 485–504, 2004.
  • [5] G. M. Coghill and Q. Shen. Towards the specification of models for diagnosis of dynamic systems. AI Communications, 14(2), 93–104, 2001.
  • [6] G. M. Coghill, A. Srinivasan, and R. King. Qualitative system identification from imperfect data. Journal of Artificial Intelligence Research (JAIR), 32, 825–877, 2008.
  • [7] G. M. Coghill and G. Wu. Incremental Model-based Diagnosis. Advanced Engineering Informatics, 23, 309–322, 2009.
  • [8] G. M. Coghill and H. Al-Ballaa. Diagnosis via Qualitative Parameter Estimation. Proc. 11th UK Workshop on Computational Intelligence (UKCI-11), Manchester, 7-9 Sept. 2011 (CD).
  • [9] L. Floridi. The method of levels of abstraction. Minds and Machines, 18(3), 303–329, 2008.
  • [10] L. Floridi. The Philosophy of Information. OUP, 2011.
  • [11] B. Falkenhainer and K. D. Forbus. Compositional modelling: finding the right model for the job. Artificial Intelligence, 51, 1991, 95–143.
  • [12] B. Falkenhainer, A. Farquhar, D. Bobrow, R. Fikes, K. Forbus, T. Gruber, Y. Iwasaki, and B. Kuipers. CML: a compositional modeling language. Stanford University, Technical Report KSL-94-16, 1994.
  • [13] K. Forbus. Qualitative Process Theory. Artificial Intelligence, 24, 1984, 85–168.
  • [14] B. Kuipers. Qualitative Simulation. Artificial Intelligence, 29, 1986, 280–338.
  • [15] R. Leitch, Q. Shen, G. M. Coghill and M. Chantler. On choosing the right model. IEE Proceedings D: Control Theory and Applications, 146(5), 435–449, 1999.
  • [16] A. Mallavarapu, M. Thomson, B. Ullian and J. Gunawardena. Programming with models: modularity and abstraction provide powerful capabilities for systems biology. J. R. Soc. Interface, 6, 257–270, 2009.
  • [17] D. Matheson, S. Sripada, G. M. Coghill. Integrating Natural Language Generation and Model-based Reasoning for explanation generation. Proc. 12th UK Workshop on Computational Intelligence (UKCI-12), Heriot-Watt University, 2012.
  • [18] W. Pang and G. M. Coghill. An immune-inspired approach to qualitative system identification of biological pathways. Natural Computing, 10(1), 189–20, 2011.
  • [19] R. Thom. http://www.philosophy.hku.hk/courses/sciences/qual/thom13a.htm

**Orlin Vakarelov: Information Qualities – A Structural Perspective**

By information quality I mean any dimension of evaluation of information that is not a measure of quantity, whether Shannon-style or otherwise. Some dimensions of evaluation may be based on discrete or continuous categories, some may be represented by numbers (as magnitudes), and some may be represented by other mathematical objects, such as vectors, tensors, sets or others.

By a structural perspective on information I mean the view that all aspects of information are definable completely in terms of the possible “informational” transformations of the system containing the information. Here information is taken to consist of data that (may) have semantic and pragmatic characterization; that is, data that are meaningful and usable (and truthful). That meaning and pragmatics should be evaluated purely relationally/transformationally is not controversial. The structural perspective is most novel as a theory of data. It assumes that the data themselves are determined by the transformations (possible read and write operations). In most general terms, it is assumed that the information carrier is a system that may be in one of many possible states – the micro-states of the system. The information vehicles of the system, the possible data, are sets of micro-states; these macro-states are determined by the transformations. A system like this is called an information medium. The data of the medium are determined by its relation to other information media, within a larger information media network. The structural perspective is that, ultimately, the structure of the network, its connection to other (non-information) systems and its use by agents determine its informational properties.
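A toy example (ours) of how read and write transformations determine the data a medium can carry: if the only read operation on a two-bit register returns its parity, then the macro-states (the possible data) are the two parity classes of micro-states.

```python
# Toy sketch (ours) of an information medium: the micro-states are the four
# states of a two-bit register, but the only available read operation returns
# parity, so the data (macro-states) the medium can carry are the parity classes.
micro_states = {(0, 0), (0, 1), (1, 0), (1, 1)}

def read(micro):                      # the only read transformation available
    return (micro[0] + micro[1]) % 2

def write_toggle(micro):              # a write transformation: toggle the first bit
    return (1 - micro[0], micro[1])

# Macro-states induced by the read operation: classes of indistinguishable micro-states.
macro_states = {}
for m in micro_states:
    macro_states.setdefault(read(m), set()).add(m)

print(macro_states)                               # {0: {(0,0), (1,1)}, 1: {(0,1), (1,0)}}
print(read((0, 0)), read(write_toggle((0, 0))))   # toggling the bit writes new data: 0 -> 1
```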

Most analyses and classifications of information qualities are rather haphazard. One reason for this is the vague nature of the notion of information quality. Another is the lack of a broader systematic framework to investigate information in its many facets. The structural perspective more generally, and the information media network approach more specifically, can offer more structure to the investigation of different information qualities. This talk will demonstrate how many dimensions of evaluation of information fall out naturally from the structure of relations within the network, the interaction between information media and the physical world (at the level of micro-states), and the integration of information media in the behavior of cognitive agents (usually humans, whose cognitive operation can also be viewed as the operation of an information network). Of special interest will be the phenomenon where the information of a medium is evaluated based on the ability to transform the information into another form (to another medium) and apply another quality (or quantity) measure in the other medium. It will be shown that many interesting information qualities are based on translations of qualities from other media.

The talk will aim to offer primarily a methodology of analysis, rather than a complete classification of the information qualities discussed in the literature. One consequence of the translation analysis of qualities is that an indefinite number of dimensions of evaluation of information may be defined; it is a separate question which of them are useful.


**Marco Benini & Federico Gobbo: Measuring Computational Complexity: the Qualitative and Quantitative Intertwining of Algorithm Comparison**

This paper discusses an aspect of information in Computer Science (CS) that is quantitative and qualitative at the same time: measuring. Measuring is the basic act first of modern science and later of engineering. In the common use of the word, too, measuring is often described as ‘describing a phenomenon by a number’. Nevertheless, in CS this meaning is not always respected, in particular in the field of computational complexity—one of the essential parts of the discipline. The theory of computational complexity is the branch of theoretical CS which studies the efficiency of algorithms in terms of time, i.e., the number of steps to perform a complete computation, and space, i.e., the quantity of memory cells which are needed to obtain a result. Of course, an algorithm A is more efficient than an algorithm B when it uses less of the resource we want to minimise—either time or space. Here, a relevant problem arises: the time (and the space) needed to perform a computation depends on the input. Researchers in CS (see, e.g., Papadimitriou, 1994) have abstracted over the input by considering only its size: therefore, choosing between two algorithms A and B means considering the worst performance of algorithm A for an input of size n, the worst performance of algorithm B for an input of the same size, and extending this comparison to every n. Thus, A is more efficient than B if, for every n, the time A spends to compute the result on the worst input of size n is less than the time B uses to perform the same task on its worst input of size n. The abstraction means that the complexity of an algorithm A becomes a function, assigning to each number n the time needed by A to compute the result on its worst input of size n. When we ask to compare two algorithms, we really want to compare their complexities; the long-term behaviour is then captured by saying that one function definitely dominates the other one, using the jargon of mathematical analysis. In this way, experts really measure the efficiency of algorithms using functions instead of numbers—in other words, the nature of the information produced hereby is not completely quantitative, if we follow the definition stated at the beginning of this paper.
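A small sketch (ours, with invented step-count functions) of why complexities are compared as functions of n rather than as single numbers: a linear algorithm with a large constant loses to a quadratic one on small inputs and wins in the long run, which is exactly what the asymptotic comparison records.

```python
# Small sketch (ours) of comparing complexities as functions of the input size n.
def steps_A(n: int) -> int:
    return 100 * n        # worst case of a hypothetical linear algorithm A

def steps_B(n: int) -> int:
    return n * n          # worst case of a hypothetical quadratic algorithm B

# B wins on small inputs, A wins on large ones; B's complexity eventually
# dominates A's, so A is the asymptotically more efficient algorithm.
for n in (10, 50, 100, 1000):
    faster = 'A' if steps_A(n) < steps_B(n) else ('B' if steps_B(n) < steps_A(n) else 'tie')
    print(f"n={n}: A={steps_A(n)}, B={steps_B(n)}, faster={faster}")
```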

As a result, in most real-world cases algorithms turn out to be incomparable in practice, as each one may be more efficient than the other for some values of n. Researchers got around this problem by observing that what really matters is the long-term behaviour of an algorithm. The fundamental question is: when n is “big enough”, which of the algorithms under consideration performs better?

We consider the act of measuring the complexity of a pair of algorithms as an act of producing information, where ‘information’ is intended in the sense used within the Philosophy of Information (PI). The act is performed by an informational organism (inforg), where the biological agent is the algorithm expert and the engineered artifact is the measure of complexity of the algorithms under scrutiny, which are their primary Level of Abstraction of interest. It is worth noticing that this algorithmic inforg eventually generates an indefinite number of computational inforgs, i.e., inforgs whose artificial counterpart is a kind of computing machine based on the chosen algorithm, typically following Von Neumann’s model (Gobbo and Benini, 2013). However, the software produced over the algorithms is not part of their 1st order ontological commitment (i.e., their knowable observables, in terms of relational structures) but possibly of a 2nd order ontological commitment (Floridi, 2011, 351–352). Furthermore, the pieces of information collectively produced by the algorithmic inforgs give a new Level of Explanation (LoE), a sort of second-order qualitative change in the classification of algorithms themselves. According to their ontological commitment, algorithmic inforgs observe, compare, and finally classify algorithms, i.e., their complexities, an activity which can be carried out only by the human part of the inforg (Gobbo and Benini, forthcoming). It emerged that the complexity associated with some algorithms exhibits a behaviour similar to that of a polynomial; in other cases, the complexity grows much faster, like the exponential function or even faster. This fact introduces the new LoE, which classifies algorithms into two groups: those which are ‘feasible’ (polynomial) and those which are ‘practically uncomputable’ (exponential). Again, this LoE is both qualitative and quantitative. The distinction is codified in the so-called extended Church-Turing thesis, which says that the only practically computable problems are those admitting a polynomial algorithm able to compute them.
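As a toy numerical illustration of the feasible/practically-uncomputable divide sketched above, the following Python fragment compares a polynomial and an exponential worst-case complexity; the concrete functions 100*n^2 and 2^n, and the search horizon, are assumptions chosen only for this sketch, not figures from the paper.

```python
# Toy illustration (functions chosen for this sketch, not from the paper):
# a polynomial worst-case complexity against an exponential one.
# The exponential can be cheaper on small inputs, but beyond some
# threshold n0 the polynomial dominates for every larger n.

def poly(n):   # an illustrative 'feasible' complexity: 100 * n^2
    return 100 * n ** 2

def expo(n):   # an illustrative 'practically uncomputable' one: 2^n
    return 2 ** n

BOUND = 200  # horizon over which we check the long-term behaviour

# first n0 such that poly(n) <= expo(n) for every n from n0 up to BOUND
n0 = next(n for n in range(1, BOUND)
          if all(poly(m) <= expo(m) for m in range(n, BOUND)))

print("exponential is cheaper for n <", n0)    # short-term behaviour
print("polynomial dominates for n >=", n0)     # long-term behaviour
```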

The borderline between feasible and unfeasible algorithms is still unclear. In fact, every polynomial algorithm operating on a non-deterministic Turing machine can easily be simulated on a deterministic machine by trying all the possible non-deterministic paths: the complexity of the simulated algorithm is exponential. However, a subclass of those algorithms, the ‘decision procedures’, is particularly relevant. This class is called NP in the literature, and the problem of whether NP is distinct from P, the class of polynomial algorithms on a deterministic machine, is the most important and best-known open problem in theoretical CS, thus an eminently qualitative problem whose attempted solutions have generated an immense amount of knowledge and ideas over the last 40 years, see, e.g., (Savage, 1998).
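As a back-of-the-envelope illustration of the exponential blow-up mentioned above, suppose (these figures are assumptions for the sketch, not from the paper) a non-deterministic machine with branching factor b running in polynomial time p(n).

```latex
% At most b^{p(n)} computation paths exist, and replaying one path on a
% deterministic machine costs O(p(n)) steps, so the naive simulation takes
T_{\mathrm{det}}(n) \;=\; O\!\big( p(n) \cdot b^{\,p(n)} \big),
% which is exponential in n whenever p is a non-constant polynomial.
```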

In the rest of this paper, we will explore some consequences of the conduct of algorithmic inforgs, which go beyond the scope of CS. In fact, as effectively put by Barry Cooper: ‘algorithms, as a way of traversing our four dimensions, have been with us for literally thousands of years. They provide recipes for the control and understanding of every aspect of everyday life’ (Cooper, 2012, 776).

References

  • Cooper, S. B. (2012), ‘Incomputability after Alan Turing’, Notices of the AMS 59(6), 776–784.
  • Floridi, L. (2011), The Philosophy of Information, Oxford University Press, Oxford.
  • Gobbo, F. and Benini, M. (2013), ‘The Minimal Levels of Abstraction in the History of Modern Computing’, Philosophy & Technology, pp. 1–17. URL: http://dx.doi.org/10.1007/s13347-012-0097-0
  • Gobbo, F. and Benini, M. (forthcoming), ‘Why zombies can’t write significant source code: The Knowledge Game and the Art of Computer Programming’, Journal of Experimental & Theoretical Artificial Intelligence.
  • Papadimitriou, C. H. (1994), Computational Complexity, Addison-Wesley.
  • Savage, J. E. (1998), Models of Computation: Exploring the Power of Computing, Addison-Wesley.

**Andrew Iliadis: What is Information Artifact Ontology?**

I consider the field of Information Artifact Ontology (IAO) a sub-field of Philosophy of Information (PI). IAO is concerned with the informational genesis of technological objects.

The most relevant work to be done in IAO comes from thinking through the relationship of a metaphysics of information to an ontology of the technical object. This is where I situate my work on the philosopher Gilbert Simondon (1924–1989). Relatively unknown during his lifetime outside of his native France, Simondon prepared two theses on this subject that articulate a process in which the consequences of information produce technological change and that, I argue, have much more to contribute to our understanding of technological genesis. These works are L’individuation à la lumière des notions de Forme et d’Information and Du mode d’existence des objets techniques, both defended in 1958. Simondon’s concepts are often misread and grouped into a combative category of thought to which I do not think they entirely belong. Many have tried to situate Simondon as opposed to the mathematical theory of communication, to the extent that his theory bears absolutely no connection to those of Shannon and Wiener. This would be a mistake. While Simondon was very critical of both Shannon and Wiener, I think it would be incorrect to situate him as diametrically opposed to them. Rather, I believe Simondon thought of information as an entity in very much the same way as Shannon and Wiener; however, he described the entity that information is in terms of a different type of process. The difference is not that Simondon saw information as a “thing” differently from Shannon and Wiener, but that he envisioned its interoperability in a different sense. If I continuously close and open one eye and then the other, it will produce each time a new effect, where my affective ocular sensibility changes with each “click” (this back and forth of perspective is famously known as “parallax”). The objects in my visual field clearly do not change when I perform this activity, but something else certainly does, namely, the affect produced by each new percept. But does this mean that these two pairings of affect/percept are two distinct entities? Not at all. All that has changed is a mode of processing information. I understand Simondon’s relationship to the mathematical theory of communication in very much the same way. The difference lies not in the “thing” but in its process, its interoperability, and its functionality.

So how does the interoperability of information lead us to artifacts, to technological objects, and finally to theorizing technological genesis? I understand technology in terms of technique. If opening and closing my eyes is a technique, then it is a type of technology. But in this example there is no long-form genesis. How, then, to explain the long-form genesis of technical objects? Here Simondon proves eminently useful, but I believe it is necessary to retain a word in the original French. Simondon’s concept of concrétisation, I claim, is more useful than the well-known concept of individuation. Concrétisation is not quite like the English term “concretization”. Concrétisation is an indefinite process that does not indicate a “transfer”, as if something had gone from one state (abstract) to the next (physical), as concretization does. Concretization defines a specific result. It is used in the way that I can say, simplistically, that I have “given form to an idea”. Concrétisation, on the other hand, describes a certain type of “pull”; it indicates what Simondon described as the “life” or “being” of the technological object. But it is not a type of emergentism. The reason it is not a type of emergentism is that the “sum” of concrétisation is not greater than its parts; it does not connote something that at one point never existed. To put it simply, it’s concrétisation “all the way down”. Concrétisation is the engine that drives individuation.

So, what are the inherent qualities of concrétisation? There are two. The first is that the technological object tends toward self-sufficiency. All this means is that concrétisation is not an additive process, and that the technological object tends to get smaller as it re-purposes elements within itself. When I say that concrétisation is not additive and that it becomes self-sufficient, this is due to Simondon’s second and more nuanced point: that technological objects re-purpose themselves through an interoperability that is achieved through the transduction of two regimes of information. What does this mean? If I have a technical object “ab”, and I want it to do something else, then I have to add “c” to it. This is not concrétisation but an additive process. Concrétisation operates more along the lines of an algebraic equation, not in the direction of the “plugging in” of numbers that happens when we substitute variables with known quantities, but the reverse, when we reduce the equation down to its simplest, abstract form. In this sense, concrétisation is a rather counter-intuitive process. It does not tend toward the “real” or concrete “thing” so much as toward the essence of the technical object, which Simondon, contrary to claims made by many commentators, described in terms of totality. Simondon provides countless examples of just such a transcendental transductive principle throughout Du mode d’existence des objets techniques, moments in history where parts of the technological object become useful in more ways than one, are re-purposed, or achieve a higher state of interoperability, and as a result help to move the technological object along in its concrétisation toward a more abstract state of being. But it should not be forgotten, and people do not talk about this nearly enough, that information plays a fundamental role in this concrétisation. If concrétisation is the engine that drives individuation, then information is the gas that keeps concrétisation working.

IAO, then, sees all things as real, yet it acknowledges along with Simondon that information is the methodological skeleton key that allows us to inquire into “objects” in the first place.