Fourth Workshop on the Philosophy of Information
Abstracts
PATRICK ALLO
Tentative inference, open worlds and the informational conception of logic
Standard refinements of epistemic and doxastic logics that avoid the problems of logical and deductive omniscience cannot easily be generalised to default reasoning. This is even more so when defeasible reasoning is understood as tentative reasoning, an understanding inspired by the dynamic proofs of adaptive logic. In the present paper we extend the preference models for adaptive consequence with a set of open worlds to account for this type of inferential dynamics. In doing so, we argue that unlike for mere deductive reasoning, tentative inference cannot be modelled without such open worlds. We use this fact to highlight some features of the informational conception of logic.
BERT BAUMGAERTNER
A Defence of Modeling Vagueness with Discrete Degrees
There is a temptation to model the gradient in semantic representations with a continuum of information theoretic degrees (e.g., a continuous probability distribution of events in the interval (0,1)). In the first part of this talk I will argue that following this line of thought does not lead one out of the problem. The proposed semantic gradient will turn out to have hyper-sharp cutoffs in the representation rather than ‘smooth’ transitions. This is because any point in the semantic gradient will be maximally informative, since it is distinct from any other (non-equal) point. This conflicts with the fact that some points in a representation are not always distinguishable to an agent that deploys the representation. In colour classification tasks, for example, subjects can distinguish that two colour palettes differ, but cannot distinguish them semantically (i.e., as far as their conceptual repertoire goes, the two colours are semantically identical). In the second part of this talk, I will argue that semantic constancy (our capacities to represent word uses as the same despite variations in, e.g., context) requires a systematic kind of information ‘loss’. Such information gaps, I suggest, are not only an essential part of meaning, but also explain how sharp cutoffs in a discrete representation go undetected.
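The kind of information ‘loss’ at issue can be illustrated with a minimal sketch (an editorial illustration, not the author’s own example): when a continuous degree in (0,1) is mapped onto a handful of discrete degrees, two values that are distinct as points of the continuum become identical at the level of the representation. The function name and the choice of five degrees are purely illustrative.

```python
# Minimal sketch (not from the talk): mapping a continuous degree in (0, 1)
# onto a small set of discrete degrees. Two values that differ as points of
# the continuum can land in the same bin, so the discrete representation
# systematically "loses" the information that would distinguish them.

def discrete_degree(x: float, n_degrees: int = 5) -> int:
    """Map a continuous degree x in (0, 1) to one of n_degrees discrete degrees."""
    if not 0.0 < x < 1.0:
        raise ValueError("x must lie strictly between 0 and 1")
    return min(int(x * n_degrees), n_degrees - 1)

# Two distinguishable points of the continuum...
a, b = 0.41, 0.43
# ...are identical at the level of the discrete representation.
print(discrete_degree(a), discrete_degree(b))  # prints: 2 2
```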
ANTHONY BEAVERS
Transcendental Philosophy in the Age of Information: Floridi’s Neo-Kantian Epistemology
Transcendental philosophy is typically acknowledged as a product of Kant and exemplified in his Critique of Pure Reason (KrV). While definitions of the words “transcendental” and “transcendent” differ greatly across philosophy generally, Kant employs the terms narrowly. The term “transcendental” names “all knowledge which is occupied not so much with objects as with the mode of our knowledge of objects in so far as this mode of knowledge is to be possible a priori” (KrV A11, B25). In KrV, transcendental arguments take the form of isolating the necessary conditions for a given Y, and then deducing that if X is a necessary condition for Y, and Y is the case, then X is the case. The term “transcendent,” on the other hand, refers to that which lies beyond the realm of the knowable or, at least, beyond experience (A296, B351). Though the two terms have different definitions, in KrV they become intertwined in problematic ways that were recognized even by earlier critics of the work, including Schopenhauer (1818/2010, appendix) and Adickes (1929). For instance, while KrV attempts to isolate the necessary conditions for the possibility of knowledge, any attempt to do so must of necessity make reference to the transcendent while simultaneously asserting that knowledge of it, even knowledge that it “exists,” is impossible. Indeed, to argue that anything follows of necessity is itself impossible to ground, since the concept of necessity itself is a necessary condition for knowledge and is therefore also deduced by transcendental argument. Herein lies the problem of transcendental philosophy in general; it somewhat resembles Wittgenstein’s famous “ladder.” If KrV is correct, it could not have been written. The same concern haunts other approaches to transcendental philosophy, as witnessed in Husserl’s descriptive transcendental phenomenology (see his Cartesian Meditations (1929/1960) and Ideas (1931/1962)) and Heidegger’s (abandoned) existential and methodologically regressive version in Being and Time (1927/1962); it might thus, for the purpose of this essay, be called “the transcendental problem.” My claim at the outset is that, though the transcendental problem appears in many versions of transcendental philosophy, it does not appear in Floridi’s epistemology. Two reasons for this, I believe, are that his adoption of computer science methodologies allows us to use the notion of recursion instead of regression and that his transcendental application of levels of abstraction (LoAs) from computer science is constrained by data in a way that would violate Kant’s methodology.
Initially, it seems that Kant would have liked to try something similar, but his conceptual architecture led him into metaphysical problems, even though he tried to remain in the realm of epistemology. This is evident in his turn from the argument of the Dissertation, where he held that ideas are related to the independently (i.e., not mind-dependently) real, to the transcendental doctrine of KrV. On 21 February 1772, Kant wrote to Hertz, “In the Dissertation I was content to explain the nature of these intellectual representations [concepts of the understanding] in a merely negative manner, viz. as not being modifications of the soul produced by the object. But I silently passed over the further question, how such representations, which refer to an object and yet are not the result of an affection due to that object, can be possible” (Kemp Smith, pp. 219-220). Presumably, KrV was composed to answer this question, but at the cost of introducing the transcendental problem discussed above.
The problem is visible in KrV from the Transcendental Analytic onwards and ultimately results in a set of collisions in the Dialectical Inferences of Pure Reason (The Paralogisms, Antinomies and Ideas) that are “sophistications not of men but of pure reason itself” (KrV A339/B397). That Kant confronts them as “pseudo-rational” problems resulting from unavoidable dialectical inferences structured into pure reason itself may mean to some that he has given an adequate account of them. To me, however, it signals a fundamental transcendental inconsistency, clever though it is, that is paradoxical because of Kant’s methodology. Indeed, Wolff notes that “when [Kant] separates the manifold produced in the interaction of the transcendental self and the thing-in-itself from the empirical manifold of perception arising from physiological causes, he seems to drift into the region of speculative metaphysics” (1973, pp. 171-172). Try as he might to show the limits of reason and rein in metaphysics, ontological commitments seem unavoidable in KrV. Additionally, there are other problems to worry about.
One of these concerns the relationship between the Table of Categories and the Table of Judgments in the Transcendental Analytic, which motivates the concern over necessity I mentioned above, among other things. To step outside of Kantian language for a moment, the relationship that needs to hold between the two tables is intrinsically important to Kantian epistemology. In short, the categories lay out the “rules,” as it were, whereby the world is ontologically mapped in advance of the judgments that as a consequence fit with it. In other words, the world is constituted in such a way that our judgments about it can be true because we have arranged things that way. The attempt is brilliant, especially when viewed as a response to Hume, but it can only succeed if the two tables can be shown to be correlates of each other. To this end, Kant maintains that “the same function which gives unity to the various representations in a judgment also gives unity to the mere synthesis of representations in an intuition” (A79, B104). Wolff notes that if this is true, then “to each function of unity in judgment, there will correspond a function of synthesis, or category” (p. 69). If Kant can make his point, then the link between the understanding and the appearance of an object will be complete. But, according to Wolff, “Kant gives no proof at all for the assertion that analytic and synthetic unity arise from the same operations, and that the first can therefore be used as a key to the second. The argument, as Kant states it, depends on the claim that both kinds of unity are attributable to the same faculty, namely understanding, but Kant himself assigns synthesis to the imagination” (p. 69). Wolff goes on to say that the argument is “arbitrary in the extreme” and that it is the “weakest link in the entire argument of the Analytic” (p. 77). It is difficult to understand what Kant is doing here, and it is equally difficult to find the argument to which Wolff is referring. (I can only find the single assertion presented above.) But there is no doubt that the issue is critical. Without a satisfactory link between the two tables, it is impossible to arrive at the conclusion that we ontologically map the world in advance of knowing it and, further, that we can know the world of experience because we have structured it to be knowable in the first place.
If this sounds like a difficulty for Kant, the problem is magnified if we try to say it without arriving at the transcendental problem above. Indeed, had Kant abided by Wittgenstein’s notion that “Whereof one cannot speak, thereof one must be silent” (1921/1922, 7), he might have avoided the problem. But then, there could have been no KrV. This is aptly pointed out by Körner, who takes Kant to task on the distinction between phenomenon and noumenon. The latter, a synonym for Kant’s famous ding an sich, is posited late in KrV as a focal point, a limiting concept only, valuable to the extent that it curbs the pretensions of the intellect and stops it from wandering unjustified into the domain of morals and religion (A849/B877). But the claim that it is only a limiting concept clearly cannot be correct. As Körner rightly suggests, “If we … conceive, with Kant, a noumenon or thing in itself as being not only a non-phenomenon but something which affects our senses, then our concept is no longer merely negative. Kant’s assertion that in the Critique of Pure Reason he uses the concept of a noumenon only as a negative and limiting concept is thus incompatible with its actual use” (1955, p. 95). Indeed, KrV cannot even get off the ground without positing noumena as positively, if not necessarily, existing. Hence, the transcendental problem presented above.
The question that this lengthy introduction puts before us is whether this problem is restricted to certain types of transcendental philosophy or extends to all of it. As should be apparent from the above, my suggestion here is that it is not a general problem for transcendental philosophy, but that it is limited to particular methodologies, Kant’s attempt to reverse-engineer reason itself being among them. Floridi, I would like to argue here, provides a good example of a transcendental philosophy that does not fall prey to the transcendental problem. In the remainder of this abstract, I will argue the point, beginning, of course, with an argument that Floridi’s Philosophy of Information (2011) is properly transcendental and is methodologically situated in the context of computational philosophy more generally. Because of this, it does not fall prey to the problems discussed above.
Indications of a transcendental underpinning to Floridi’s work are apparent at least as early as his 1999 introduction to computing and philosophy, though naturalized and applied, rather than theoretical. He writes, “The more compatible an agent and its environment become, the more likely the former will be able to perform its task efficiently. The wheel is a good solution only in an environment that includes good roads” (p. 214). This theme, which remains with Floridi still (while taking a turn also into epistemology along the way, as we shall see), suggests that technology is not only added to the world; it also necessarily restructures the world to fit. At first, this makes it seem that Floridi is a long way from Kant, but the next sentence starts to close the gap: “Let us define as ‘ontological enveloping’ the process of adapting the environment to the agent in order to enhance the latter’s capacities of interaction” (p. 214). When we consider that Kant’s categories do the same for our judgments, that is, they ontologically map the landscape so that our judgments may pertain appropriately, it is easy to see that Floridi offers a reduplication of the Kantian enterprise, but on the practical level.
Other ideas that resonate similarly appear earlier in the book and come even closer to Kant. Floridi writes:
The specific construction of a microworld “within” a computerized system represents a combination of ontological commitments that programmers are both implicitly ready to assume when designing the system and willing to allow the system to adopt. This tight coupling with the environment (immanency) is a feature of animal and artificial intelligence, its strength and its dramatic limit. On the contrary, what makes sophisticated forms of human intelligence peculiarly human is the equilibrium they show between creative responsiveness to the environment and reflective detachment from it (transcendency). This is why animals and computers cannot laugh, cry, recount stories, lie or deceive, as Wittgenstein reminds us, whereas human beings also have the ability to behave appropriately in the face of an open-ended range of contingencies and make the relevant adjustments in their interactions. A computer is always immanently trapped within a microworld. (pp. 146-147)
In 2002, I tried to take Floridi to task in a misguided attempt to show phenomenology’s relevance to artificial intelligence by noting that, following the lead of Kant and Husserl, humans are also immanently trapped in a microworld, though a rather large one, because we, too, ontologically envelop the world in advance of knowing and experiencing it. My argument, in short and at the time, was that Floridi did not go quite far enough in his Kantianism. It is now apparent to me that a full-scale re-appropriation of Kant or Husserl for artificial intelligence (or cognitive science) purposes presents a Cartesian egology from which there is no escape and runs straight into the transcendental problem discussed above. (It is also worth noting that research in embodied and embedded cognition also shows the untenability of the approach I was advocating in 2002. See Clark, 1998, for example.) Something else is needed to preserve the insights of transcendental philosophers while avoiding their pitfalls.
In Floridi and Sanders (2004) and Floridi (2008), a viable solution starts to emerge that will culminate in an “Informational Structural Realism” that differs from Kantian and other neo-Kantian forms of Structural Realism. Floridi writes, “In Kant, knowledge of reality is indirect because of the mind’s transcendental schematism. After the downfall of neo-Kantism and Cassirer’s and C. I. Lewis’s revisions of the transcendental, an approach is needed that is less infra-subjective, mental (if not psychologistic), innatist, individualistic and rigid” (2011, p. 347). The solution Floridi presents involves borrowing the method of levels of abstraction (LoAs) from computer science, particularly object-oriented programming (OOP), and applying it to philosophical issues in a maneuver that is reminiscent of, and perhaps justified by, the strategic borrowing from mathematics and geometry on the part of Descartes, Spinoza and Leibniz, among others. (See Beavers, 2011, for further comment.)
While a complete description of OOP or the methodology of LoAs exceeds the scope of this abstract, a few preliminary comments will serve to provide the essentials. LoAs are presented by Floridi as an improvement of the conceptual schemes analyzed and criticized by Davidson (1974). They are “clusters of networks of observables” and “model the world or its experience” (Floridi 2011, p. 72). As such, they function as lenses through which epistemic subjects may apprehend the world. Their employment governs the way in which we relate to the objects, states and relations we encounter. To use Floridi’s example of an automobile battery:
‘the battery is what provides electricity to the car’ is a typical example of information elaborated at a driver’s LoA. An engineer’s LoA may output something like ‘[a] 12-volt lead-acid battery is made up of six cells, each cell producing approximately 2.1 volts’, and an economist’s LoA may suggest that ‘a good quality car battery will cost between $50 and $100 and, if properly maintained, it should last five years or more’. (p. 77)
As should be clear from this case, all three examples are true for a particular purpose from a particular perspective. They are not arbitrary. That is, while there are many ways one could consider a car battery, there are many more that one could not. Consequently, through their use, “data as constraining affordances—answers waiting for the relevant questions—are transformed into factual information by being processed semantically at a given LoA” (p. 77).
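The OOP analogy behind LoAs can be made concrete with a small, purely illustrative Python sketch (an editorial illustration, not Floridi’s formal method): the same underlying system is observed through different LoAs, each exposing only its own cluster of observables. All class and function names below are hypothetical.

```python
# Illustrative sketch only (not Floridi's formal apparatus): the same underlying
# system observed through different levels of abstraction (LoAs), each of which
# exposes only its own cluster of observables.

from dataclasses import dataclass

@dataclass
class CarBattery:
    voltage: float = 12.0
    cells: int = 6
    volts_per_cell: float = 2.1
    price_low_usd: int = 50
    price_high_usd: int = 100
    expected_life_years: int = 5

def driver_loa(b: CarBattery) -> str:
    # The driver's observables: what the component does for the car.
    return "the battery is what provides electricity to the car"

def engineer_loa(b: CarBattery) -> str:
    # The engineer's observables: electrochemical structure.
    return (f"a {b.voltage:.0f}-volt lead-acid battery is made up of {b.cells} cells, "
            f"each cell producing approximately {b.volts_per_cell} volts")

def economist_loa(b: CarBattery) -> str:
    # The economist's observables: cost and expected lifetime.
    return (f"a good quality car battery will cost between ${b.price_low_usd} and "
            f"${b.price_high_usd} and, if properly maintained, should last "
            f"{b.expected_life_years} years or more")

battery = CarBattery()
for loa in (driver_loa, engineer_loa, economist_loa):
    print(loa(battery))
```

Each function answers a different question about the same data, which is the sense in which the three descriptions above can all be true for a particular purpose from a particular perspective.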
What makes this view Kantian is the way in which LoAs function, namely, in a way similar to the categories enumerated in Kant’s Transcendental Analytic. Without them, we can know nothing, and, as such, they are necessary conditions for knowledge. They differ from Kant’s categories, however, in that they provide a minimalist, local ontology adapted to particular questions rather than a global ontology that supports the very possibility of human experience. This fact, as we shall see momentarily, will save Floridi from the transcendental problem that has plagued so many of his precursors.
Though his position is a form of modified Kantianism, Floridi also notes, in Humean fashion, that “too often philosophical debates seem to be caused by a misconception of the LoA at which the questions should be addressed and the purpose for which they should be answered” (p. 79), though his antidote to philosophical confusion is not to find the corresponding impression, but the appropriate LoA. LoAs thus play a role similar to the one played in Kantian epistemology, in that they curb the pretensions of the intellect insofar as they constrain the domain of knowability, though again locally, by adding the pragmatic criterion of relevance to a particular question. In turn, this gesture makes Floridian philosophy of information productive and constructive, while the constraint of having to fit the data allows for “pluralism without relativism” (p. 74). It also allows Floridian epistemology to be characterized as both realist and constructivist. Though the LoA pertinent to a given problem might have to be discovered (constructed, invented?), it must nonetheless fit the data. In this way, it allows us to see something that we might not otherwise be able to see. In this regard, Dennett’s view of real patterns that saves him from the charge of pure ascriptivism regarding the intentional stance is pertinent (1991). We see through the lenses of categories that, when set to a particular use and directed toward the right data, allow aspects of a world, real in the empirical sense of the term, to become clear to us.
A philosophy that is both realist and constructivist, while simultaneously pluralistic without being relativistic, sounds contradictory at first, but no more so than advocating both transcendental idealism and empirical realism, as Kant does. Rather than contradictory, Floridian epistemology, as with that of Kant, is thus conciliatory. In its constructivism, it bears the mark of Continental neo-Kantianism, while in its realism it is more akin to Analytic neo-Kantianism. This allows us to characterize Floridi’s neo-Kantianism as post-Analytic/Continental. Given that Kant’s philosophy is pre-Analytic/Continental, Floridi’s informational structural realism would seem to be more of a fulfillment of Kantian epistemology, rather than just another variant of it, though one stripped of the extensive grand architecture that often characterizes German philosophy.
There is much to be said about Floridi’s approach to epistemology, among which one could mention its consonance with the modeling movement in the philosophy of science that is slowly exchanging the categories of truth and falsity for those of adequacy and inadequacy, as one sees in Spinoza. More to the point of this abstract, however, is the way that adopting a local approach to epistemology rather than a global one spares Floridi from the transcendental problem. One may wonder, for instance, at what LoA Floridi’s philosophy of information is written, and whether the fact that it must be written at a particular LoA undermines it in the same way that Kant’s references to the noumenal must refer to concepts only and, at the same time (logically impossibly), to things-in-themselves with causal efficacy. But the fact that the method of LoAs supports inheritance, encapsulation and recursion does not involve us in an infinite regress or a need to throw away any Wittgensteinian ladder. Though not an academic, Spolsky rightly notes that “pointers and recursion require a certain ability to reason, to think in abstractions, and, most importantly, to view a problem at several levels of abstraction simultaneously” (2005). In other words, there is nothing intrinsically regressive about employing a method in computer science that uses itself. Is this mere trickery, or does the fact that the method works suggest that there is something intrinsically wrong with the linear logical methodologies that we find in Kantian-style transcendental deductions? The answer to the question hangs on the fate of computer science and its treasured notion of recursion. I am willing to bet that when all the cards are in, we will have to agree with the computer scientists rather than the philosophers, or perhaps, better yet, with some hybrid of the two, a properly informed philosophy of information based on the notion of information processing borrowed from computer science. In 2006, Dennett remarked that AI “makes philosophy honest.” Perhaps the same might be said of computer science more generally.
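The point that self-application need not be regressive can be illustrated with a minimal sketch (an editorial example, not Floridi’s): a procedure that applies itself at successively lower levels of abstraction terminates because the descent bottoms out in a base case.

```python
# Minimal illustration (not from the abstract): a method that "uses itself" need
# not regress indefinitely, because each self-application is made at a lower level
# of abstraction and the descent bottoms out in a base case.

def describe(system: dict, depth: int = 0) -> None:
    """Describe a system by recursively describing its sub-systems."""
    indent = "  " * depth
    for name, parts in system.items():
        print(f"{indent}{name}")
        if isinstance(parts, dict):          # a further level of abstraction
            describe(parts, depth + 1)       # the method applied to itself
        # a non-dict value is the base case: the recursion stops here

describe({"car": {"battery": {"cell": "2.1 V"}, "engine": "internal combustion"}})
```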
GUSTAVO CEVOLANI
Truthlikeness, partial truth, and (strongly) semantic information
In this paper, we focus on the current debate in the philosophy of information concerning the veridical nature of semantic information, which has been triggered by Floridi’s definition of strongly semantic information (henceforth, SSI) as well-formed, meaningful and “veridical” (or “truthful”) data (Floridi 2004, 2011). After a brief review of the classical theory of semantic information (Carnap and Bar-Hillel 1952), we present Floridi’s theory of SSI, highlighting some conceptual and formal problems with it. We then define a measure of partial truth, quantifying the amount of “information about the truth” conveyed by a statement with respect to a target domain (Hilpinen 1976; Niiniluoto 1987), and argue that this notion provides a provably better explication of SSI than Floridi’s own proposal. We conclude by discussing some conceptual issues concerning the problem of quantifying semantic information and the so-called “veridicality thesis” (cf. also Frické 1997, D’Alfonso 2011, and Cevolani 2011).
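For orientation, the classical theory referred to above quantifies the semantic information of a statement s via its content and information measures, defined from its logical probability m(s). These are the standard Carnap–Bar-Hillel definitions, recalled here as background; they are not Cevolani’s partial-truth measure, which the abstract does not spell out.

```latex
% Classical (Carnap & Bar-Hillel 1952) measures of semantic information,
% where m(s) is the logical probability of the statement s.
\[
  \mathrm{cont}(s) = 1 - m(s),
  \qquad
  \mathrm{inf}(s) = \log_2 \frac{1}{m(s)} = -\log_2 m(s).
\]
```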
SIMON D’ALFONSO
The Logic of Knowledge and the Flow of Information
In this presentation I cover some work investigating the notions of information and knowledge as exemplified in Fred Dretske’s ‘Knowledge and the Flow of Information’. In particular, I (1) provide an explication of the conception of information and its flow which is central to such accounts and (2) look at the issue of developing an epistemic logic which captures Dretske’s notion of knowledge as a semi-penetrating operator.
LUCIANO FLORIDI
A Plea for Antinaturalism
Contemporary science seems to be caught in a strange predicament. On the one hand, it holds a firm and reasonable commitment to a healthy naturalistic methodology, according to which explanations of natural phenomena should not overstep the limits of the natural itself. This “closure” under explanation applies also to social and human phenomena, from economics and sociology to neuroscience and psychology. On the other hand, contemporary science is also inextricably dependent on technologies, especially Information and Communication Technologies, which it exploits and fosters. Yet such technologies are increasingly “artificializing” or “denaturalising” the world, human experiences and interactions, as well as what qualifies as real. So the search for the ultimate explanation of the natural seems to rely upon and promote the development of the artificial. In this paper, I try to find a way out of this apparently strange predicament. I argue that the naturalisation of our knowledge of the world is either correct but trivially so (naturalism as anti-supernaturalism), or mistaken (naturalism as anti-constructionism). I do so through the following steps. First, I distinguish between different kinds of naturalism. Second, I show that the kinds that are justified are no longer very interesting, whereas the kind of naturalism that is still interesting today is now in need of revision in order to remain acceptable. Third, I argue that such a kind of naturalism may be revised on the basis of a realistic philosophy of information, according to which knowing is a constructive activity through which we do not represent the phenomena we investigate, but build more or less correct informational models (semantic artefacts) of them. Finally, I defend the view that the natural is in itself artefactual (a semantic construction), and that the information revolution is disclosing a tension not between the natural and the non-natural, but between a user’s and a producer’s interpretation of knowledge. The outcome is a philosophical view of knowledge and science in the information age that may be called constructionist.
NIR FRESCO & PHILIP STAINES
A Revised Attack on Digital Ontology
There has been an ongoing conflict regarding whether reality is fundamentally digital or analogue. Both options have some merit. Yet, it is not clear how or whether this conflict may be settled either conceptually or empirically. Recently, Luciano Floridi, in a variation on Immanuel Kant’s modes of presentation, has argued that the dichotomy between a digital ontology and an analogue ontology is misapplied, for any attempt to analyse the noumenal reality independently of the level of abstraction at which the analysis is conducted is mistaken. As an alternative to digital (or analogue) ontology, Floridi proposes an informational ontology. To that end, he proposes a thought experiment using four ideal agents that supposedly demonstrates why a classification of reality as digital is wrong. Whilst we agree that the ultimate nature of reality is not purely digital, we find this thought experiment (at least as it stands) not to be compelling. In this paper, we show what has gone wrong in the thought experiment, but also how one can push a similar argument by focusing narrowly on digital computers (rather than on reality as a whole). As well, we argue that not only should ‘digital’ not be conflated with ‘informational’, but also that the former should not be conflated with ‘effective computation’.
STEPHEN HARTMANN
Updating on Conditionals = Kullback-Leibler + Causal Structure
Modeling how to learn an indicative conditional has been a major challenge for Formal Epistemologists. One proposal for meeting this challenge is to require that the posterior probability distribution minimize the Kullback-Leibler distance to the prior probability distribution, taking the learned information into account as a constraint (expressed as a conditional probability statement). This proposal has been criticized in the literature based on several clever examples. In this talk, we revisit three of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic model reflects the causal structure of the scenario in question. The talk is based on joint work with Soroush Rafiee Rad.
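Schematically, and in notation not taken from the talk itself, the proposal under discussion can be stated as a constrained minimization: the learned conditional “if A then B” is expressed as a conditional probability constraint on the posterior Q, which is then chosen to lie as close as possible to the prior P in the Kullback-Leibler sense.

```latex
% Schematic statement of the updating rule described in the abstract; w ranges
% over the possible worlds (or cells) of the underlying probabilistic model, and
% q is the learned conditional probability (often q = 1).
\[
  Q^{*} \;=\; \operatorname*{arg\,min}_{Q \,:\, Q(B \mid A) \,=\, q}\;
  D_{\mathrm{KL}}(Q \,\|\, P)
  \;=\;
  \operatorname*{arg\,min}_{Q \,:\, Q(B \mid A) \,=\, q}\;
  \sum_{w} Q(w) \log \frac{Q(w)}{P(w)}.
\]
```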
PHYLLIS ILLARI
Information and causal inference: in defence of generality
In this paper I examine generality in our approach to causal inference. I identify a reason for thinking that what PI (the philosophy of information) does is distinctively informational, whether or not it explicitly draws on information-theoretic concepts, although this involves laying PI open to a serious possible objection. I further explore this objection, while examining what PI might bring to the problem(s) of gaining causal knowledge – causal inference. One reason PI is philosophy of information is that information-theoretic measures give us unparalleled generality in theorizing about the world, as they allow us to study information itself, largely independent of the physical (or chemical, or biological…) basis of that information. Based on this, PI tries to understand deep similarities where others see none: PI aims to offer generality. Generality, however, is out of fashion in the philosophy of science. The unity of science hypothesis has been undermined, and pluralism is rampant. I explore this problem in the context of the problem of gaining causal knowledge. I identify two important forms of pluralism about causality: Hall’s and Anscombe’s, and contrast both of them with methodological pluralism. I argue that methodological pluralism is consistent with an informational approach to causality, and that the kind of unity that an informational approach to causality might yield is very useful. The account does, however, imply that what counts as evidence for causality is dependent on background knowledge – knowledge is integrated.
MARK JAGO
Bounded Rationality and Epistemic Blindspots
Real-world agents do not know all consequences of what they know. But we are reluctant to say that a rational agent can fail to know some trivial consequence of what she knows. Since every consequence of what she knows can be reached via chains of trivial consequences of what she knows, we have a paradox. In this paper, I respond to the paradox in three stages. (i) I describe formal models which allow us to draw a distinction, at the level of content, between trivial (uninformative) and non-trivial (informative) inferences. (ii) I argue that agents can fail to know trivial consequences of what they know, but they can never do so determinately. Such cases are epistemic blindspots, and we are never in a position to assert that such-and-such constitutes a blindspot for agent i. (iii) I develop formal epistemic models on which the epistemic accessibility relations are vague. Given these models, we can show that epistemic blindspots always concern indeterminate cases of knowledge.
ERIC KERR
An informational approach to epistemic agency
Epistemologists are beginning to consider the implications of the extended mind thesis for a variety of issues. (Goldberg 2007; Heatherington forthcoming; Marsh & Onof 2008; Pritchard 2010; Roberts forthcoming; Vaesen 2010) They question whether the thesis is compatible with prominent accounts of credit, the ability condition, justification, testimony, reliability, and so on. Out of this body of work we could envisage a new kind of epistemic agent. Call this an extended epistemic agent (EEA) – an entity whose body stretches beyond the ‘skin-bag’ and into an external environment. (Clark 2003) In an extended epistemic agent, for example, the credit for knowing applies to the system as a whole and is not reducible to its component parts. I argue that an EEA approach provides an account of epistemic agency which admits computational knowers but only if one also accepts an informational approach to epistemology. (Dretske 1981, 1986) Such an account of epistemic agency is desirable because computational systems and algorithms are increasingly replacing human agents in broadly epistemic tasks: stock trading in financial markets; pricing items in marketplaces; predicting events in offshore drilling; predicting the box office takings of films currently in production; piloting aeroplanes – these are tasks that require knowledge production and doxastic decision-making that were once performed solely by human agents (albeit often with technological and computational assistance) and are now being performed by autonomous computational systems. This autonomy may constitute agency. In order to make sense of what occurs in these tasks I argue that we need an account that incorporates both biological and non-biological knowers. In my final remarks I look at some of the implications of the advent of these extended epistemic agents for our cognitive ecology and ask what may be done to manage the risks they can pose. I conclude that, if we accept extended epistemic agents as a category, this throws our existing understanding of (non-extended) epistemic agents into jeopardy on several grounds.
GIUSEPPE PRIMIERO
Towards a taxonomy of errors for information systems
In the past few decades, logical approaches to agent-based knowledge have dealt extensively with the issue of defeasible conditions and bounded resources for interaction and distributed knowledge. In view of the current epistemological and technological advancements, this trend is expected to become more and more relevant. A starting assumption of this talk is that human rationality, which today involves massive interaction with mechanical computation, is on a par with artificial systems processing distributed data (see [4]). This parallel is grounded on the conceptual and formal isomorphism existing between proofs and programs, which allows us to identify (at least partially) abstract knowledge processing with practical programming. A second assumption is that both human and mechanical information systems require understanding and control of errors, in order to establish correctness and limits for distributed rationality. The study of interactive computing and rationality under defeasible conditions as processes of error reduction requires: (i) a full characterization of error states for information systems (in the sense explained above), and (ii) a formal model of logical processing with error states. In this talk I will tackle the first of these problems. I shall start by considering some of the theses on the nature of human errors from the literature in psychology (see e.g. [7]). I will move on to consider some relevant approaches in formal and epistemic logic: fallacious reasoning in real-life cognitive agency ([11]); inexact knowledge and the Margin for Error Paradox (see [8], [9] and [10]); limits of knowledge in Centered Semantics for Dynamic Epistemic Logic [2]; presupposition failures from Dynamic Semantics ([1]) and Propositional Dynamic Error Logic. My final term of comparison will be the forms of error that appear in some functional programming languages. The aim is to reach a full taxonomy of errors for information systems. Such a task can be approached by analysing the levels of abstraction for human and mechanical systems dealing with the recovery and processing of distributed data. Part of this task is the study of correctness issues via internal and external levels of failure for type systems, presented in [5]. In the present talk, I shall build on this first analysis to approach a full taxonomy of errors for information systems with distributed data. The second of the two tasks mentioned above, which falls outside the scope of this presentation, will provide an algorithmic theory of processes including error states in a multi-agent system. For this task, we shall focus on extensions of the modal type theory presented in [3], [6], with new constructions for the negation operator and possibility judgements.
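As a purely hypothetical illustration of the kind of object such a taxonomy aims at (this is not the taxonomy defended in the talk), one can distinguish levels at which a simple program fails. The sketch below classifies a toy program’s first failure as syntactic, type-level, or runtime; specification-level (semantic) failure is listed but is not mechanically detectable in this simple way.

```python
# Hypothetical sketch of error levels for a tiny information system; the
# categories and their names are illustrative, not the talk's taxonomy.

from enum import Enum, auto
from typing import Optional

class ErrorLevel(Enum):
    SYNTACTIC = auto()   # the expression is ill-formed
    TYPE = auto()        # well-formed but ill-typed
    RUNTIME = auto()     # well-typed, but evaluation fails (e.g. division by zero)
    SEMANTIC = auto()    # runs, but violates the intended specification (not detected here)

def classify(program: str) -> Optional[ErrorLevel]:
    """Classify the first error raised when compiling/evaluating a toy expression."""
    try:
        code = compile(program, "<program>", "eval")
    except SyntaxError:
        return ErrorLevel.SYNTACTIC
    try:
        eval(code)  # toy example only; never evaluate untrusted input
    except TypeError:
        return ErrorLevel.TYPE
    except Exception:
        return ErrorLevel.RUNTIME
    return None          # no error detected at these levels

print(classify("1 +"))        # ErrorLevel.SYNTACTIC
print(classify("1 + 'a'"))    # ErrorLevel.TYPE
print(classify("1 / 0"))      # ErrorLevel.RUNTIME
```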
SEBASTIAN SEQUOIAH-GRAYSON
Epistemic Operations
For abstract, see additional pdf below.
SONJA SMETS
Playing for Knowledge
In this presentation I introduce the formal foundation of a new game semantics to define the concept of defeasible knowledge. My approach is inspired by Keith Lehrer’s use of “justification games”. These games are an essential ingredient of Lehrer’s account of knowledge as “undefeated justified acceptance”: an agent or proponent plays against an (ultra)critical opponent in order to give an irrefutable justification for accepting a certain proposition. My formal treatment of this type of game should provide a valuable addition to the literature on formal epistemology. I will make the notions of preference, justification, truth, belief and knowledge explicit within the framework of Dynamic Epistemic Logic and its recent extensions to deal with belief revision theory. This talk is based on ongoing joint work with A. Baltag and V. Fiutek.
ORLIN VAKARELOV
From Interface to Correspondence: How to recover classical representations in a pragmatic theory of semantic information
The talk will achieve two goals (with the second taking most of the discussion): (1) it will introduce the interface theory of meaning for information, and (2) it will show how to recover correspondence-like semantics for states of some information media. The interface theory of meaning is based on a pragmatic approach to information. It is assumed that the problem of information semantics emerges only in the context of an information system (which will also be called an agent). An information system is a specially organized goal-directed system that incorporates an information medium in controlling its behavior. An information medium is dynamically correlated with an external system and modulates the behavioral dispositions of the agent. The medium acts as an interface between the external system and the control mechanism. According to the interface theory of meaning, the interface function of the medium defines its semantics. Interface-role semantics is odd, in the sense that, on the surface, it does not resemble any typical semantic theory, especially correspondence semantics. It will be demonstrated how, despite the oddness, the theory can be used to recover a correspondence-like semantics within the agent. The key idea for the recovery is that as the structure of the information medium increases in complexity – as we move from a simple medium to a complex network of information media – it becomes necessary to coordinate semantically different information media A and B. In the process of coordination two things may happen. The agent may take control of the coordination relation between A and B, presumably by some other control mechanism. And the coordination relation may begin to look like the correspondence rules of classical representation semantics. Thus, it is possible to recover correspondence semantics within the agent. There is a big catch! According to this theory, if (e.g.) a state ‘Fa’ has the content of an object a having a property of F-ness, the “object” and the “F-ness” are not in the external world, but in another information medium. In other words, correspondence relations are not between an information medium and an external system, but between two different internal media. Aboutness is always internal. So, where does the rubber meet the road? That is, how can such an internal theory of semantic correspondence connect to the world? This is where the interface theory of meaning kicks in. Correspondence semantics can exist in a functional mechanism for controlling behavior only on top of the lower-level, dynamically grounded semantics of interface roles. Ultimately, all semantics gets analyzed in terms of how goal-directed control is modulated by external systems via the interface of information media. The grounding of correspondence semantics via the interface theory of meaning resembles in some ways the praxical solution to the symbol grounding problem offered by Floridi and Taddeo. The similarities and differences will be discussed in the context of the presentation. While the discussion will stay below the level of symbols, I will claim that the grounding solution I offer is more general.
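The interface picture can be illustrated with a deliberately simple toy (an editorial example, not from the talk): an internal medium whose state is dynamically correlated with an external quantity, and an agent whose goal-directed behaviour is modulated only via that medium. All names are illustrative.

```python
# Toy illustration (not from the talk) of the interface picture: an information
# medium whose state is dynamically correlated with an external system and which
# modulates the agent's goal-directed behaviour.

class Medium:
    """An internal state that tracks (is correlated with) an external quantity."""
    def __init__(self) -> None:
        self.state = 0.0
    def couple(self, external_temperature: float) -> None:
        self.state = external_temperature   # the dynamic correlation

class Agent:
    """A goal-directed system whose behaviour is modulated via its medium."""
    def __init__(self, goal_temperature: float) -> None:
        self.goal = goal_temperature
        self.medium = Medium()
    def act(self) -> str:
        # Behaviour depends on the external world only via the medium's state:
        # this mediating (interface) role is what, on the account above, fixes
        # the medium's semantics.
        return "heat on" if self.medium.state < self.goal else "heat off"

agent = Agent(goal_temperature=20.0)
agent.medium.couple(external_temperature=16.5)
print(agent.act())   # prints: heat on
```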
GREGORY WHEELER
Is there a logic of information?
Information-based epistemology maintains that ‘being informed’ is an independent cognitive state that cannot be reduced to knowledge or to belief, and the modal logic KTB has been proposed as a model. But what distinguishes the KTB analysis of ‘being informed’, the Brouwersche schema (B), is precisely its downfall, for no logic of information should include (B). This talk presents a challenge not only to the KTB logic of information, but to any proposed modal logic of information that includes (B), a range stretching from the non-normal classical regular system EMNC+B to S5.
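For orientation, these are the schemata at issue, stated with Iφ read as ‘the agent is informed that φ’ (standard modal-logic facts, not claims of the talk):

```latex
% The logic KTB extends the basic normal system K with the schemata
\[
  \text{(T)}\quad I\varphi \rightarrow \varphi,
  \qquad
  \text{(B)}\quad \varphi \rightarrow I\neg I\neg\varphi
  \quad (\text{equivalently } \varphi \rightarrow \Box\Diamond\varphi).
\]
```

Semantically, (T) corresponds to reflexive accessibility relations and the Brouwersche schema (B) to symmetric ones; it is (B) that the abstract singles out as the downfall of the proposal.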