COGS/PHIL 3750: Philosophy of Artificial Intelligence (Ch. 2)
question
What is intelligence
answer
Intelligence can be defined in various ways: • thinking creatively and abstractly • reasoning, using logic • planning, solving problems • combining ideas in productive ways. Like many such concepts, it may not have a single, brief definition; but maybe we can recognize it when we see it?
question
Thomas Hobbes on intelligence
answer
• thinking or reasoning is a kind of calculation, combining ideas together according to rules. "For 'reason' in this sense is nothing but 'reckoning', that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts" (Thomas Hobbes).
question
Logical reasoning
answer
• this type of reasoning is evident in simple logical or deductive arguments: 1) if it's raining, then my soccer match will be canceled. 2) if my soccer match is canceled, then either I will work on my paper or read my book. 3) but I cannot work on my paper, since I don't have my laptop. 4) it is raining; therefore, I will read my book.
question
logical arguments
answer
• logical arguments are valid when they have the right form (content doesn't matter): if P, then Q. if Q, then either R or S. Not R, since T. P, therefore S.
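That "form, not content" point can be made concrete by brute-force truth-table checking: an argument form is valid exactly when no assignment of truth values makes all premises true and the conclusion false. This is a minimal Python sketch of my own, not from the course; the function name `argument_is_valid` and the lambda encoding are illustrative choices.

```python
from itertools import product

def argument_is_valid(premises, conclusion, variables):
    """Valid iff every truth assignment that makes all premises
    true also makes the conclusion true (no counterexample row)."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# The argument form from the card: if P then Q; if Q then (R or S); not R; P.
premises = [
    lambda e: (not e["P"]) or e["Q"],            # if P, then Q
    lambda e: (not e["Q"]) or e["R"] or e["S"],  # if Q, then either R or S
    lambda e: not e["R"],                        # not R
    lambda e: e["P"],                            # P
]
conclusion = lambda e: e["S"]                    # therefore S

print(argument_is_valid(premises, conclusion, ["P", "Q", "R", "S"]))  # True
```

Note that the checker never looks at what P, Q, R, or S mean, which is exactly the sense in which validity depends only on form.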
question
what about artificial intelligence
answer
• if (human or natural) intelligence is based on using such rules, can this be done artificially? • could we design a machine that can carry out such operations? • what are the bare minimum requirements for such a machine? • what would an abstract description of such a machine look like?
question
Turing machine
answer
• before computers, Turing conceived of a machine that could follow rules • algorithm: a rule that can be followed by a machine • this is not a prototype for a computer; it's more like an ideal model of a rule-following device • it's a rule-following machine stripped down to its bare essentials
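The "bare essentials" really are bare: a tape of symbols, a read/write head, a current state, and a finite table of rules. The following is a toy simulator of my own devising (the dict-based `program` encoding and the bit-flipping example are illustrative, not from the lecture):

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, head_move, new_state).
    head_move is -1 (left), +1 (right), or 0. The machine stops in 'halt'."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# A one-state example program: flip every bit, halt on the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine(flip, "1011"))  # 0100
```

Everything interesting lives in the rule table; the machine itself just looks up one rule per step, which is why it serves as an abstract model of any rule-following device.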
question
Alan Turing
answer
• logician, philosopher, mathematician • one of the founders of computer science • was involved in cracking Nazi codes during World War II • also known for another very influential idea in AI: could computers ever be intelligent? • Turing: could they pass the "imitation game"? • this is widely known as the Turing test
question
the frame problem in context
answer
• the frame problem and the nature/nurture debate • the frame problem and the induction problem • the initial formulation of the frame problem (narrow construction) • the semantic versus the syntactic problem • the epistemological frame problem • the computational problem versus Hamlet's (real) problem
question
arguments using the frame problem
answer
• Fodor • Dreyfus • possible defense strategies
question
the frame problem
answer
• in order to react properly in a situation, what has to be considered, and what can be neglected (think R2D2 and the bomb in the wagon)? • the number of factors to consider in some situations is enormous • how can a person/system decide which information is relevant in a given context?
question
the frame problem: Dennett on midnight snacks
answer
• in order to plan an action properly, what has to be considered, and what can be neglected? • mayonnaise doesn't dissolve knives • opening the fridge doesn't cause an explosion • if I hold the mayonnaise jar in my left hand, I cannot also be spreading the mayonnaise with my left hand
question
the frame problem: Dennett on midnight snacks cont'd
answer
an efficient system of information storage requires efficient: • space management: our brains are not large enough to store all possible knowledge • time management: stored information has to be reliably accessible for use within short real-time spans if the system is to count as intelligent
question
computation: why deductive logic isn't sufficient
answer
• the system's world knowledge is represented in axioms • logic is used to determine the effects of actions • based on axioms: a description of the frame: the sandwich situation
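To see why representing world knowledge in explicit axioms invites the frame problem, here is a deliberately naive sketch (my own toy example, assuming a made-up "sandwich world"; the fact names and the `open_fridge` action are hypothetical). The effect of an action is one rule, but every fact the action does *not* change needs its own explicit "frame axiom":

```python
# Toy sandwich world: facts as a dict of booleans.
state = {"fridge_open": False, "holding_mayo": False, "lights_on": True}

def open_fridge(s):
    new = {}
    # effect axiom: opening the fridge makes it open
    new["fridge_open"] = True
    # frame axioms: the non-effects must each be stated explicitly
    new["holding_mayo"] = s["holding_mayo"]  # opening the fridge doesn't grab the mayo
    new["lights_on"] = s["lights_on"]        # ...and doesn't change the room lights
    return new

after = open_fridge(state)
print(after)  # {'fridge_open': True, 'holding_mayo': False, 'lights_on': True}
```

With n facts and m actions this style needs on the order of n x m frame axioms, which is the "information overflow" the narrow formulation of the frame problem worries about.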
question
why deductive logic is insufficient
answer
• the psychological plausibility problem: introspectively, it doesn't seem that humans think like this; so either we do it unconsciously, or we don't do it at all • the property problem: virtually every aspect of the situation can change under some circumstances; that requires an axiom for every such aspect, since there is a case in which it might change • strategy: stipulating that only explicit information counts creates a follow-up problem: you can't just change one aspect; a change typically comes with other changes (Dennett's plate example).
question
Dennett on midnight snacks: a possible solution
answer
habits and routines: • idea: maybe Dennett's midnight-snack routines, developed over the years, guide his actions. These mechanisms of some complexity contain subroutines for mayo spreading, sandwich making, and getting something out of the fridge. • these quasi-automatic actions (which include subgoal checks) mean he does not need to consider all hypothetical options. • maybe the problem of induction (Hume) is the frame problem? After all, we want systems to have the right expectations and draw the right inferences
question
Different formulations of the frame problem
answer
• the initial formulation: McCarthy and Hayes 1969, narrow construction: in real-time planning systems with strategic planning, how can we represent the available options without creating information overflow? How can we represent the effects of actions without having to represent the non-effects explicitly?
question
Different formulations of the frame problem
answer
• Dennett: we need to distinguish the semantic problem (or knowledge problem) from the syntactic problem (availability problem) • semantic problem, or Newell's problem of the knowledge level: what information must be installed? • syntactic problem: what kind of system, what kind of representational format, which structures, processes, or mechanisms do we use to store this information?
question
The more general frame problem
answer
• the frame problem is not just a technical problem of AI or robotics • it's a general epistemological problem: how do humans (or any intelligent system) know/decide which options to neglect as irrelevant, without first computing/creating them as options?
question
the epistemological frame problem:
answer
• how do humans know which options to neglect as irrelevant, without first computing/creating them as options? When humans consider the consequences of an action, how do they limit the scope of the reasoning that is required? • Dennett: "how can a cognitive creature with many beliefs about the world update those beliefs when it performs an act so that they remain roughly faithful to the world?"
question
computational and the real frame problem
answer
• how can we compute the consequences of an action without computing all the non-effects of that action? • Hamlet's problem: when to stop thinking • even if we solve the computational worry (by using decision heuristics, for example), the real philosophical issue is still unsolved • how can the robot ever be sure it has sufficiently thought through the consequences of its actions and didn't miss anything important?
question
Fodor's Conclusion:
answer
• Fodor 1983 uses the frame problem to argue against central modularity • claim: the mind's central processes can draw on information from any source; they are "informationally unencapsulated."
question
objections based on the frame problem
answer
• Dreyfus 1972: most human knowledge and competence, in particular specialized knowledge, cannot in fact be reduced to algorithmic/computational procedures. Human knowledge is not computable in an AI sense. AI overlooks a principled difference between the kind of cognition one might employ when learning a skill and the kind employed by an expert. Thus, AI is a fundamentally mistaken method for studying the mind.
question
possible defense strategies against Dreyfus
answer
• strategy one: Dreyfus starts with properties of present-day AI systems and draws inferences about all possible rule-based formal systems on that basis. But the failures of particular examples from a discipline that is still very young are insufficient to support that conclusion.
question
Marvin Minsky and Roger Schank
answer
• late 70s and early 80s • in humans, all of life's experiences, for all their variety, can be understood as variations on a manageable number of stereotypic themes and paradigmatic scenarios: "frames," as Minsky calls them; Schank refers to them as "scripts." • Dennett: the scripts/frames approach "attempts to resolve the frame problem" by implementing frames and scripts for the problems a particular system is likely to encounter.
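The scripts idea is essentially a data structure: a stereotyped event sequence with slots that carry default values unless the current situation overrides them. Here is a small hypothetical sketch (the `restaurant_script` contents, slot names, and `instantiate` helper are my illustrative choices, not Schank's actual notation):

```python
# A Schank-style script: roles, default slot values, and a scene sequence.
restaurant_script = {
    "roles": {"customer": None, "server": None},
    "defaults": {"payment": "after eating", "utensils": "provided"},
    "scenes": ["enter", "order", "eat", "pay", "leave"],
}

def instantiate(script, **overrides):
    """Fill a script's slots for a concrete episode,
    keeping defaults for anything left unspecified."""
    slots = dict(script["defaults"])
    slots.update(overrides)
    return {"slots": slots, "scenes": script["scenes"]}

# A fast-food episode overrides one default; the rest are inherited.
episode = instantiate(restaurant_script, payment="before eating")
print(episode["slots"]["payment"])   # before eating
print(episode["slots"]["utensils"])  # provided
```

The appeal for the frame problem is visible in the defaults: the system only reasons about the slots a situation explicitly overrides, and silently assumes everything else is as usual.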
question
Cognitive wheels
answer
• a cognitive wheel is simply any design proposal in cognitive theory (at any level, from the purest semantic level to the most concrete level of wiring diagrams of the neurons) that is profoundly unbiological, however wizardly and elegant it is as a bit of technology. • Clearly this is a vaguely defined concept, useful only as a rhetorical abbreviation, as a gesture in the direction of real difficulties to be spelled out carefully. "Beware of postulating cognitive wheels" masquerades as good advice to the cognitive scientist, while courting vacuity. It occupies the same rhetorical position as the stockbroker's maxim: buy low and sell high. Still, the term is a good theme-fixer for discussion.
question
why cognitive wheels don't work
answer
"if these procedural details lacks psychological reality then there is nothing left in the proposal that might model cycle article processes except the phenomenological level description the terms of jumping to conclusions, the ignoring, and the like - and we already know we do that." (pg 14)
question
strategy two
answer
• the frame problem might be a (maybe general) problem for certain types of systems, namely classical rule-based systems, but the claim is that other types of systems are able to avoid it: 1) connectionism: "bottom-up" strategies 2) dynamic, situated approaches • Rodney Brooks attempts to build simple insect-level intelligence without rule-based procedures • Andy Clark: action-oriented representations
question
overview part two: reverse engineering, research methodologies
answer
• top-down versus bottom-up models and strategies • Marr's levels of explanation • Marr's model of research strategy; example: visual perception • Dennett's hierarchy of levels in the intentional stance • CogSci as reverse engineering: the hierarchy of levels • Dennett: the problem with reverse engineering • AI and AL
question
bottom-up and top-down
answer
• we can study the mind bottom-up, beginning with individual neurons, or even molecules, and then try to build up from there by reverse engineering to higher cognitive functions • or we can start with general theories about thought and about how cognition works and then work downwards to investigate how corresponding mechanisms might be instantiated in the brain • in both cases, we have to consider different levels of explanation, which often correspond to different disciplines
question
speech perception: bottom-up top-down
answer
• speech perception is partially data-driven • it can't be completely data-driven: which languages you can understand/speak, and top-down effects depending on interest • the suitable trade-off between top-down and bottom-up influences is a central parameter: the system should filter out noise without over-interpreting
question
Levels of Explanation
answer
• the computational level: what is the goal of the computation, and why is it appropriate? What is the logic of the strategy by which the goal is carried out? • the algorithmic level: how is the computational theory to be implemented? What is the representation for the input and output? What is the algorithm for the transformation? • the implementation level: how can both the representation and the algorithm be realized physically?
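Marr's separation of levels can be made concrete with a toy computation (my own example, not from the course). The computational level fixes only *what* is computed, a sorted permutation of the input; distinct algorithms can then satisfy that same specification, and each algorithm could in turn be realized on very different physical hardware:

```python
# Computational level: the goal, stated as a specification.
def satisfies_spec(inputs, outputs):
    return sorted(inputs) == outputs

# Algorithmic level: two different algorithms meet the same spec.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:  # walk left to x's slot
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:  # interleave the two sorted halves
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
print(satisfies_spec(data, insertion_sort(data)))  # True
print(satisfies_spec(data, merge_sort(data)))      # True
```

Explaining *why* the output is sorted belongs to the computational level; the two function bodies live at the algorithmic level; and nothing here says anything about the implementation level, which is exactly Marr's point.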
question
functional level:
answer
• the goal of the system is to derive a representation of the three-dimensional shape and spatial arrangement of an object, in a form that allows the object to be recognized. Thus, this representation should be object-centred (not viewpoint-dependent). It should contain information about all parts of the object (including hidden elements).
question
the hierarchy of levels
answer
• the intentional stance: we treat the system as if it were a rational agent trying to solve the task (or set of tasks); we are interested in the constraints imposed by the task and the general strategies for solving it. • the design stance: one level lower; we consider the general principles and constraints of a system that might solve the task. • the physical stance: a level lower still; we consider how a system with a specific design might actually be physically constructed.
question
The physical stance
answer
• the physical stance stems from the perspective of the physical sciences. To predict the behavior of a given entity according to the physical stance, we use information about its physical constitution in conjunction with information about the laws of physics.
question
the physical stance: example
answer
• Holding a book in my hands, I predict that it will fall to the floor when I release it. My prediction relies on a) the fact that the book has mass and weight, and b) the law of gravity. Predictions and explanations based on the physical stance are exceedingly common. Consider: explanations of why water freezes at 32°F, or of how mountain ranges are formed. These explanations proceed by way of the physical stance.
question
the design stance
answer
When we make a prediction from the design stance, we assume that the entity in question has been designed in a certain way, and we predict that the entity will thus behave as designed. Like physical stance predictions, design stance predictions are commonplace.
question
the design stance: example
answer
• When someone steps into an elevator and pushes "7," they predict that the elevator will take them to the seventh floor. Again, they do not need to know any details about the inner workings of the elevator in order to make this prediction; there is no need, for example, for them to take the elevator apart. Likewise: when in the evening a student sets her alarm clock for 8:30 AM, she predicts that it will behave as designed: i.e., that it will buzz at 8:30 the next morning.
question
The intentional stance
answer
• We can improve our predictions yet further by adopting the intentional stance. When making predictions from this stance, we interpret the behaviour of the entity in question by treating it as a rational agent whose behaviour is governed by intentional states. Reminder: intentional states are mental states, such as beliefs and desires, which have the property of "aboutness"; that is, they are about, or directed at, objects or states of affairs in the world.
question
interpretationalism
answer
whether a system has a certain belief or desire depends on our imposing a certain interpretation on the system. A statement ascribing a belief or desire is true when the best overall interpretation of the system's behavior says that the organism has that belief or desire. From the intentional stance, we detect certain patterns that, although partially constituted by our own reactions to them, are objective.
question
Realism and instrumentalism
answer
• Typically, a realist about the mental treats beliefs and desires as internal states of the system that cause the system's behaviour. • In contrast, an instrumentalist treats beliefs and desires as theoretical posits which we ascribe to various systems when doing so is instrumental to understanding that system's behavior. These posits, however useful they might be to us, are nonetheless fictions, and thus our ascriptions of beliefs and desires are, strictly speaking, false according to the instrumentalist.