Making AI Intelligible





Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America. Inquiries regarding reproduction outside the scope of this license should be sent to the Rights Department, Oxford University Press, at the above address.

Contents (excerpt): Aims of this book: the role of philosophy in AI research; An illustration: Lucie's mortgage application is rejected; Abstraction: relevant properties of systems; What this book will cover; The ubiquity of AI decision-making; The central questions of this book; Concept ownership, functionalism, and ways of life; Implications for the view defended in this book.

INTRODUCTION

No specific point in the network can be said to mean anything in particular: it is the network as a whole that tells the bank something. It's easy enough to spot certain variables in the initial data set: the bank can show her where in the input her annual income is, where her credit card payment history is, and where her Twitter followers are. Here is a partial characterization of what we have in mind when we talk about the 'output of AI systems' in what follows.

Output (e.g. '550' displayed on a certain screen) is produced by things that are not human. The emerging field of explainable artificial intelligence (XAI) aims to create AI systems that are interpretable by us: systems that produce decisions that come with understandable explanations, and that use concepts that we can understand and engage with the way we engage with other rational thinkers. The opacity of ML systems particularly underscores the need for AI to be explainable and interpretable.
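To make the opacity point vivid, here is a schematic Python sketch (our own illustration; the feature names, network size, and weights are hypothetical, not anything from an actual bank system). The input variables are individually identifiable, but no single internal node corresponds to 'income' or 'Twitter followers': only the network as a whole yields a score.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical applicant record: each input variable is locatable.
applicant = {
    "annual_income": 52_000.0,
    "card_payment_history": 0.92,
    "twitter_followers": 310.0,
}
x = np.array(list(applicant.values()))

# Internal weights: no individual entry has a feature-level meaning.
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))

hidden = np.tanh(x @ W1)        # distributed activation pattern
score = (hidden @ W2).item()    # a bare number, like the "550" on the screen
print(score)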

ALFRED (THE DISMISSIVE SCEPTIC)

Alfred: Of course we say things like, 'That output of 550 means that Lucie is at high risk of default.' But this is just loose talk; we shouldn't take it seriously. If this is all wrong and the programs say nothing, don't we need to seriously rethink all this technology? This is because, in the end, the calculator is only a tool for obtaining mathematical results.

This is because language is a tool for communicating information, a tool we use to learn things. Once all that is done, all I care about is that the car won't hit pedestrians.

A PROPOSAL FOR HOW TO ATTRIBUTE CONTENT TO AI

TERMINOLOGY

It is W [the weights connecting nodes] that constitutes what the network knows and determines how it will respond to any arbitrary input from the environment. Our theory of what it is to perform the speech act of saying does not need to be given in computational terms. It is important to note that we do not take ourselves to have refuted that kind of view.

What is particularly interesting is that their complex forms of behavior are not the same as human complex forms of behavior. The sentence 'The Eiffel Tower is in Paris', on the other hand, says something: it has what philosophers call representational properties. The striking fact about (1) is that it is something that can be true or false because of its representational properties. If the Eiffel Tower is in Paris, then (1) is true.

The phenomenon of aboutness is so familiar to us (and so central to our lives) that it is easy to forget or overlook how remarkable it is. In virtue of what is a sentence of English, say (1) above, about the Eiffel Tower? In virtue of what is the thought that the Eiffel Tower is in Paris about Paris?

It is difficult to prove a negative, but the reader can consult the bibliographies of recent review works: Bringsjord and Govindarajulu (2018) on the philosophy of AI, or, perhaps more importantly, Buckner (2019) on the philosophy of deep learning. There is certainly significant work on the extended mind hypothesis (Clark and Chalmers 1998), and while this is a form of externalism, it is not grounded in the Kripke, Burge, and Putnam tradition. There is also work on what is called 'embodied cognition', and while this too can be called a form of externalism, it is quite different from the tradition we are drawing on here.

OUR THEORY

Putting the two claims together, one lesson is that there is room for highly productive interaction between philosophers and AI researchers here. On this picture, the output means that 5009 is prime because of the programming/computational details of how that output is produced. Our first key lesson in this book is that all of the above is the wrong way to think about the problem of AI content.
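To make the phrase 'programming/computational details' concrete, here is a minimal sketch of the kind of procedure such a picture points to (our own illustration, not anything from the book): a trial-division primality check whose internal steps are exactly the computational details in question.

def is_prime(n: int) -> bool:
    # Trial division: test candidate divisors up to the square root of n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(5009))  # True: 5009 has no divisor between 2 and 70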

Some of our linguistic utterances may mean what they mean in part because of the way we are related to our larger speech community and because of what the words we use mean in that speech community. Some of our beliefs have the content they do because of the specific environments in which we acquired them. On this picture, looking at the computational features of the believers' brains will not reveal what their beliefs are about: the two believers in the two environments may have different beliefs, despite having the same internal computational organization.

These are just a few quick snapshots of some externalist approaches; later in this book we will develop some of these externalist ideas in more detail. Value is not the kind of property that emerges at a particular physical level of the system. Similarly with content, we suggest; yet much of the interesting work done in the artificial intelligence literature on the interpretability and explainability of AI has assumed otherwise.

Even when we are perfectly clear about the ways in which AI systems are different from us and similar to us, and perfectly clear about what the right externalist metasemantic framework is for us, how do we decide which abstract analogue of this framework is best for AI systems? Maybe we want metasemantic principles that maximize the knowledge (or power, or whatever) of artificial systems. Instead, we will use variations of Williamson's principle to illustrate how metametasemantics can and should play an important role in the philosophy of AI.

APPLICATION

It is not computational in the sense of being fixed by the internal computational structure of the representing item itself. In particular: if you look inside the speaker's head, there is no such fact 'in there'. During training, this result is then used to update the connection weights of the nodes in the neural network.
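As a concrete picture of that updating step, here is a toy training loop (a minimal sketch under our own assumptions: the three input features, the one-hidden-layer architecture, and the learning rate are all hypothetical, not the book's SmartCredit). The error on each labeled training case is what adjusts the connection weights.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled cases: 3 input features per applicant, 0/1 risk labels.
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.5, -2.0, 0.1]) > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(3, 4))  # weights: input -> hidden
W2 = rng.normal(scale=0.1, size=(4, 1))  # weights: hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = sigmoid(X @ W1)              # hidden activations
    p = sigmoid(h @ W2).ravel()      # predicted risk for each case
    err = (p - y)[:, None]           # "this result": per-case error
    # Backpropagate: the error updates the connection weights W1 and W2.
    dW2 = h.T @ err / len(y)
    dW1 = X.T @ ((err @ W2.T) * h * (1 - h)) / len(y)
    W2 -= lr * dW2
    W1 -= lr * dW1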

Suppose some of the training cases are mislabeled in the manual coding: cases of borrowers who are actually high-risk are labeled as low-risk. We will focus on the good parts of the theory and use them to understand SmartCredit. As long as B is not the dominant source of information in the file, the file as a whole can still do this.
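That mislabeling scenario is easy to simulate (again a hypothetical sketch, in the spirit of the toy loop above): flip a few genuinely high-risk labels to low-risk before training, and the resulting weights inherit the noise.

import numpy as np

rng = np.random.default_rng(2)

y_true = rng.integers(0, 2, size=100).astype(float)  # actual risk labels
y_coded = y_true.copy()

# Manual coding error: five genuinely high-risk cases coded as low-risk.
high_risk = np.flatnonzero(y_true == 1.0)
flipped = rng.choice(high_risk, size=5, replace=False)
y_coded[flipped] = 0.0

# Training on y_coded instead of y_true bakes the mislabeling into W.
print(int((y_true != y_coded).sum()), "mislabeled cases")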

We have to decide whether to trace it to the blame of the organization or the blame of certain people in the organization. We also need to explain why SmartCredit, in combining the terms 'Lucie' and 'is a high risk' in its output, not only refers to the person and to the property, but also predicates the property of the person. For example, thinking that Lucie is high risk involves representing high risk, representing Lucie, and then predicating the former of the latter.

In the remainder of this chapter, we explore whether a de-anthropocentric version of the action-theoretic view can be applied to AI systems. SmartCredit can not only pick out the property of being high risk and pick out Lucie; it can also predicate the former of the latter. Start with the assumption that there is some aspect of the AI output that expresses predication.

One possible answer appeals to teleofunctionalism: expressing predication is the designed function of that aspect of the output. So the simple answer is: the telos of the system is derived (perhaps in a complex way) from that of its designers.

CONCLUSION

FOUR CONCLUDING THOUGHTS

However, this is not realistic: the available data simply do not conclusively and completely reliably settle the outcome of each loan. Things are going very well: the loan decisions made by the advisor work in the bank's favor. Their form of externalism is one in which 'the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right' (1998: 8).

This is a very natural move, and it is independent of any endorsement of the Extended Mind thesis. A simplistic assumption that the mind is 'inside' the skull/skin would equate P1 with P2. They say that what counts as part of the extended mind can be determined.

Our goal will be to get to grips with how hard-to-understand extended parts of us (e.g. an artificially created neural network that is, to some extent and in some contexts, part of our extended mind) determine content. The objection that we will now briefly address is this: Alfred should have concentrated on the notion of evidence or reliability, as the alternative to a content-based approach. And the properties need not be only slightly different: the detected structural properties can be radically different from each other while having only slightly different distributions across the cases we have tested so far.

Exactly how this will work in AI settings is a subject for further work, because at least some of the literature argues that some reasons are (normative) non-representational entities such as facts (Raz 1975; Scanlon 1998). That being said, this is not the place (and we are not the authors) to decide these issues.
