Read PDF Exact Philosophy: Problems, Tools, and Goals



  1. The empiricist tradition
  2. Exact Philosophy: Problems, Tools, and Goals (Synthese Library)
  3. Chapter 1 What is Philosophy - Department of Social Sciences | Emporia State University
  4. Artificial Intelligence

Real-time strategy games are games in which players manage an army given limited resources. Real-time strategy games differ from turn-based strategy games in that players plan and execute their actions simultaneously, in real time, rather than taking turns. Such games pose a number of challenges that are tantalizingly within the grasp of the state of the art. This makes them an attractive venue in which to deploy AI agents.

An overview of AI used in real-time strategy games can be found in Robertson and Watson. Some other ventures in AI, despite significant success, have been chugging along slowly, humbly, and quietly. For instance, AI-related methods have achieved triumphs in solving open problems in mathematics that had resisted solution for decades. Other related areas, such as natural-language translation, still have a long way to go, but are good enough to be used under restricted conditions.

Both methods now have comparable but limited success in the wild.

The empiricist tradition

A deployed translation system at Ford, developed for translating manufacturing process instructions from English into other languages, started out as a rule-based system with Ford- and domain-specific vocabulary and language. It then evolved to combine statistical techniques with rule-based ones as it gained uses beyond translating manuals, for example, lay users within Ford translating their own documents (Rychtyckyj and Plesco). This lack of success in the unrestricted, general case has caused a small set of researchers to break away into what is now called artificial general intelligence (Goertzel and Pennachin). The stated goals of this movement include shifting the focus back to building artifacts that are generally intelligent, not just capable in one narrow domain.

Computer Ethics has been around for a long time. If one were to attempt to engineer a robot with a capacity for sophisticated ethical reasoning and decision-making, one would also be doing Philosophical AI, as that concept is characterized elsewhere in the present entry. There can be many different flavors of approaches toward Moral AI.

Exact Philosophy: Problems, Tools, and Goals (Synthese Library)

Wallach and Allen provide a high-level overview of the different approaches. Moral reasoning is obviously needed in robots that have the capability for lethal action. Arkin provides an introduction to how we can control and regulate machines that have the capacity for lethal behavior. Moral AI goes beyond obviously lethal situations, however, and we can have a spectrum of moral machines. Moor provides one such spectrum of possible moral agents. An example of a non-lethal but ethically-charged machine would be a lying machine.


Clark uses a computational theory of mind, the ability to represent and reason about other agents, to build a lying machine that successfully persuades people into believing falsehoods. The most general framework for building machines that can reason ethically consists in endowing the machines with a moral code. This requires that the formal framework used by the machine for reasoning be expressive enough to receive such codes. The field of Moral AI, for now, is not concerned with the source or provenance of such codes.

The source could be humans, in which case the machine could receive the code directly, via explicit encoding, or indirectly, by reading. Another possibility is that the code is inferred by the machine from a more basic set of laws. We assume that the robot has access to some such code, and we then try to engineer the robot to follow that code under all circumstances, while making sure that the moral code and its representation do not lead to unintended consequences.

Deontic logics are the class of formal logics that has been studied the most for this purpose. Abstractly, such logics are concerned mainly with what follows from a given moral code. Engineering then studies how well a given deontic logic matches a moral code (i.e., whether the logic can faithfully represent the code); see Bringsjord et al. Deontic-logic-based frameworks can also be used in a fashion that is analogous to moral self-reflection. Govindarajulu and Bringsjord present an approach, drawing from formal program verification, in which a deontic-logic-based system can be used to verify that a robot acts in a certain ethically-sanctioned manner under certain conditions.
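As a minimal illustration of "what follows from a given moral code," one can model a code as rules pairing situation conditions with obligations, and compute which obligations a situation triggers. This is a toy sketch under assumed rule names, not any of the deontic-logic frameworks cited above:

```python
# Toy sketch of deriving obligations from a moral code. Each rule pairs a
# set of conditions (facts that must hold) with an obligation. This is an
# illustrative stand-in, far simpler than a real deontic logic.

def obligations(rules, facts):
    """Return every obligation whose conditions all hold in `facts`."""
    return {ob for conds, ob in rules if conds <= facts}

# A hypothetical two-rule code for a household robot.
code = [
    (frozenset({"human_in_danger"}), "render_aid"),
    (frozenset({"asked_for_info", "info_is_private"}), "refuse_disclosure"),
]

print(obligations(code, {"human_in_danger", "asked_for_info"}))
# Only the first rule fires: its single condition holds, while the second
# rule also requires "info_is_private".
```

A verification-style check, in this toy setting, would quantify over all possible fact sets and confirm that no situation triggers two conflicting obligations.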

Since formal-verification approaches can be used to assert statements about an infinite number of situations and conditions, such approaches might be preferred to having the robot roam around in an ethically-charged test environment and make a finite set of decisions that are then judged for their ethical correctness. More recently, Govindarajulu and Bringsjord use a deontic logic to present a computational model of the Doctrine of Double Effect, an ethical principle for moral dilemmas that has been studied empirically and analyzed extensively by philosophers.

While there has been substantial theoretical and philosophical work, the field of machine ethics is still in its infancy. There has been some embryonic work in building ethical machines. One recent example is that of Pereira and Saptawijaya, who use logic programming and base their work in machine ethics on the ethical theory known as contractualism, set out by Scanlon. And what about the future?

Since artificial agents are bound to get smarter and smarter, and to have more and more autonomy and responsibility, robot ethics is almost certainly going to grow in importance. This endeavor might not be a straightforward application of classical ethics.


For example, experimental results suggest that humans hold robots to ethical standards different from those they expect of humans under similar conditions (Malle et al.). For now, the philosophy of AI can be identified with the attempt to answer such questions as whether artificial agents created in AI can ever reach the full heights of human intelligence. For example, one could engage, using the tools and techniques of philosophy, a paradox, work out a proposed solution, and then proceed to a step that is surely optional for philosophers: expressing the solution in terms that can be translated into a computer program that, when executed, allows an artificial agent to surmount concrete instances of the original paradox.

Daniel Dennett has famously claimed not just that there are parts of AI intimately bound up with philosophy, but that AI is philosophy and psychology, at least of the cognitive sort. He has made a parallel claim about Artificial Life (Dennett). In short, Dennett holds that AI is the attempt to explain intelligence, not by studying the brain in the hopes of identifying components to which cognition can be reduced, and not by engineering small information-processing units from which one can build in bottom-up fashion to high-level cognitive processes, but rather by — and this is why he says the approach is top-down — designing and implementing abstract algorithms that capture cognition.

Dennett himself sees the potential flaw. Unfortunately, the claim is acutely problematic, and examination of the problems throws light on the nature of AI. So there is a philosophical claim, for sure. Philosophy of physics certainly entertains the proposition that the physical universe can be perfectly modeled in digital terms, e.g., as a series of cellular automata. Information processing beyond what a Turing machine can achieve is known as hypercomputation, a term coined by the philosopher Jack Copeland, who has himself defined such machines. The first machines capable of hypercomputation were trial-and-error machines, introduced in the same famous issue of the Journal of Symbolic Logic (Gold; Putnam). Thus, this thesis has nothing to say about information processing that is more demanding than what a Turing machine can achieve.
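The defining trick of a trial-and-error machine can be illustrated concretely: the machine emits a stream of revisable guesses, and its answer is the guess it eventually settles on, even though no single guess is guaranteed correct. The sketch below is a toy illustration of that protocol (the stream and function names are invented for the example), not a hypercomputer:

```python
# Toy sketch of a trial-and-error ("limiting") procedure. We limit-decide
# "does the stream ever produce 0?": guess False until a 0 appears, then
# guess True forever. Individual guesses may be wrong, but the sequence of
# guesses converges to the correct answer, which is what counts.

def limiting_guesses(stream, steps):
    """Return the guesses made after observing `steps` items of `stream`."""
    guesses, seen_zero = [], False
    it = iter(stream)
    for _ in range(steps):
        seen_zero = seen_zero or next(it) == 0
        guesses.append(seen_zero)
    return guesses

def countdown_stream():
    # A hypothetical input stream: 3, 2, 1, 0, 0, 0, ...
    n = 3
    while True:
        yield max(n, 0)
        n -= 1

print(limiting_guesses(countdown_stream(), 6))
# [False, False, False, True, True, True] -- the guesses stabilize on True.
```

The asymmetry is the point: an ordinary Turing machine must commit to one final output, whereas a trial-and-error machine is credited with whatever its guesses converge to in the limit.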

Put another way, there is no counter-example to CTT to be automatically found in an information-processing device capable of feats beyond the reach of TMs. For all philosophy and psychology know, intelligence, even if tied to information processing, exceeds what is Turing-computational or Turing-mechanical.

Chapter 1 What is Philosophy - Department of Social Sciences | Emporia State University

Therefore, contra Dennett, to consider AI as psychology or philosophy is to commit a serious error, precisely because so doing would box these fields into only a speck of the entire space of functions from the natural numbers (including tuples therefrom) to the natural numbers. Only a tiny portion of the functions in this space are Turing-computable. AI is without question much, much narrower than this pair of fields. But this new field, by definition, would not be AI. Our exploration of AIMA and other textbooks provides direct empirical confirmation of this.
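The "speck" claim rests on a standard counting argument, which can be made precise as follows:

```latex
% Each Turing machine has a finite description over a finite alphabet, so
% there are only countably many Turing machines, and hence at most countably
% many Turing-computable functions:
\[
\bigl|\{\, f : \mathbb{N} \to \mathbb{N} \mid f \text{ is Turing-computable} \,\}\bigr|
\le \aleph_0 .
\]
% The set of all functions from the naturals to the naturals, by contrast,
% is uncountable:
\[
\bigl|\mathbb{N}^{\mathbb{N}}\bigr| = 2^{\aleph_0} > \aleph_0 .
\]
% So all but a countable portion of the function space lies beyond Turing
% computability: this is the "speck" referred to in the text.
```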

The best way to demonstrate this is simply to present such research and development, or at least a representative example thereof. Given that the work in question has appeared in the pages of Artificial Intelligence, a first-rank journal devoted to that field and not to philosophy, this is undeniable. Many such papers do exist. But we must distinguish between writings designed to present the nature of AI, and its core methods and goals, versus writings designed to present progress on specific technical issues.

Writings in the latter category are more often than not quite narrow, but, as the example of Pollock shows, sometimes these specific issues are inextricably linked to philosophy. For example, for an entire book written within the confines of AI and computer science, but which is epistemic logic in action in many ways, suitable for use in seminars on that topic, see Fagin et al.

What of writings in the former category? Writings in this category, while by definition in AI venues rather than philosophy ones, are nonetheless philosophical. Most textbooks include plenty of material that falls into this category, and hence they include discussion of the philosophical nature of AI. Recall that we earlier discussed proposed definitions of AI, and recall specifically that these proposals were couched in terms of the goals of the field. In the Total Turing Test (TTT), a machine must muster more than linguistic indistinguishability: it must pass for a human in all behaviors — throwing a baseball, eating, teaching a class, etc.

After all, what philosophical reason stands in the way of AI producing artifacts that appear to be animals or even humans? The Chinese Room Argument (CRA) is based on a thought-experiment in which Searle himself stars. The Chinese speakers send cards into the room through a slot; on these cards are written questions in Chinese.

The following schematic picture sums up the situation. The labels should be obvious. Now, what is the argument based on this thought-experiment? Where does CRA stand today? This is of course thoroughly unsurprising. Among these practitioners, the philosopher who has offered the most formidable response out of AI itself is Rapaport, who argues that while AI systems are indeed syntactic, the right syntax can constitute semantics.

Readers may wonder if there are philosophical debates that AI researchers engage in, in the course of working in their field as opposed to when they might attend a philosophy conference.

Surely, AI researchers have philosophical discussions amongst themselves, right? Generally, one finds that AI researchers do discuss among themselves topics in philosophy of AI, and these topics are usually the very same ones that occupy philosophers of AI.


However, the attitude reflected in remarks like Pollock's is by far the dominant one. That is, in general, the attitude of AI researchers is that philosophizing is sometimes fun, but the upward march of AI engineering cannot be stopped, will not fail, and will eventually render such philosophizing otiose.

Artificial Intelligence

We will return to the issue of the future of AI in the final section of this entry. Four decades ago, J. R. Lucas argued that Gödel's first incompleteness theorem entails that no machine can ever reach human-level intelligence. His argument has not proved to be compelling, but Lucas initiated a debate that has produced more formidable arguments. Instead, readers will be given a decent sense of the argument by turning to an online paper in which Penrose, writing in response to critics, presents a version of it. Does this argument succeed? A firm answer to this question is not appropriate to seek in the present entry.
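In rough outline, a standard reconstruction of the Gödelian line (not Penrose's exact wording) runs as follows:

```latex
% Suppose, for contradiction, that human mathematical competence is exactly
% captured by some consistent formal system F. By Goedel's first
% incompleteness theorem, there is a sentence G_F that F cannot prove,
% although G_F is true provided F is consistent:
\[
\text{Mind} = F
\;\Longrightarrow\;
\exists\, G_F \;\text{such that}\;
F \nvdash G_F
\;\text{and}\;
\mathrm{Con}(F) \rightarrow G_F .
\]
% A mathematician who accepts Con(F) can thereby see that G_F is true,
% proving something F cannot, contradicting the supposition that Mind = F.
% Critics reply that the premise that we can know Con(F), our own
% consistency, is exactly what the argument fails to establish.
```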