DouglasRMiles
From Public Domain Knowledge Bank
DavidWhitten (talk | contribs)

<gdoc id="1XCDnWMjNh4y2Um_1B-HE4x5Q34oxHRxBl-Oy9eNUzTI" />
----
<font color="pink">[https://docs.google.com/document/d/1XCDnWMjNh4y2Um_1B-HE4x5Q34oxHRxBl-Oy9eNUzTI/edit#heading=h.uv71c9k7i3ck Yak Shaving]</font>
<pre>

Commonsense Bibliography
Latest revision as of 15:16, 1 December 2016
Commonsense Bibliography
Collected by Push Singh
Contributions from: Leiguang Gong, Stefan Marti and Erik Mueller
Last updated: 29 January 2002

Table of contents

1 Recent overviews of the common sense problem
2 Classics
3 Cyc
3.1 Cyc overview
3.2 Cyc criticisms and evaluations
4 Cognitive architectures
4.1 Emotions
4.2 Heterogeneous architectures
4.3 Blackboard systems
4.4 Human-level AI
4.5 Soar
4.6 Case-based reasoning
4.7 Belief-desire-intention architectures
5 Acquiring common sense
5.1 Distributed human projects
5.2 Acquisition through sketching
5.3 Learning structural representations
5.4 Sensory-grounded learning
6 Common sense reasoning
6.1 Issues in common sense inference
6.2 Default reasoning
6.3 Reflection
6.4 Problem reformulation
6.5 Analogical reasoning
6.6 Embodiment and metaphor
7 Logical formalisms
7.1 Situation calculus
7.2 Event calculus
7.3 Causal theories
7.4 Features and fluents
8 Contexts and organizing commonsense knowledge
9 Representations for commonsense knowledge
9.1 Overviews
9.2 Commonsense ontologies
9.3 Representing causality
9.4 Representing time
9.5 Story representations
9.6 Connectionist representations
10 Applications of common sense knowledge
10.1 Context-aware agents
10.2 The Semantic Web
11 Robots and common sense
11.1 Cognitive robotics
11.2 Natural language interfaces to robots
12 Natural language
12.1 Frame semantics
12.2 Lexical semantics
13 Realms of thinking
13.1 Spatial reasoning
13.2 Physical reasoning
13.3 Social reasoning
13.4 Story understanding
13.5 Visual reasoning
14 Psychology
14.1 Cognitive psychology
14.2 Psychology of memory
14.3 Psychology of story understanding
14.4 Psychology of inference
15 Criticisms
16 Web resources
17 UNPLACED

1 Recent overviews of the common sense problem

Minsky, Marvin. (2000). Commonsense-Based Interfaces. Communications of the ACM, 43(8):67-73. http://commonsense.media.mit.edu/minsky.pdf http://library.navo.navy.mil/Journals/jcs03Aug00.pdf
Minsky, Marvin. (Forthcoming). The Emotion Machine (draft chapter on commonsense).
Singh, Push. (2002). The Open Mind Common Sense Project. http://openmind.media.mit.edu/Kurzweil.htm http://www.kurzweilai.net/meme/frame.html?main=/articles/art0371.html
Davis, E. (1998). The Naive Physics Perplex. AI Magazine, Winter 1998, 19(4):51-79.

2 Classics

McCarthy, John. (1959). Programs with Common Sense. In Mechanisation of Thought Processes, Proceedings of the Symposium of the National Physical Laboratory (pp. 77-84). London, U.K.: Her Majesty's Stationery Office. http://www-formal.stanford.edu/jmc/mcc59/mcc59.html http://www-formal.stanford.edu/jmc/mcc59.pdf
Minsky, Marvin. (1968). Introduction. In Marvin L. Minsky (Ed.), Semantic information processing (pp. 1-32). Cambridge, MA: MIT Press.
Minsky, Marvin. (1974). A framework for representing knowledge (AI Memo 306). Artificial Intelligence Laboratory, Massachusetts Institute of Technology. http://www.media.mit.edu/~minsky/papers/Frames/frames.html
Minsky, Marvin (1986). The society of mind. New York: Simon and Schuster.

3 Cyc

3.1 Cyc overview

Lenat, Douglas, & Guha, Ramanathan. (1990). Building large knowledge-based systems. Reading, MA: Addison-Wesley.
Lenat, Douglas, & Guha, Ramanathan. (1990). Cyc: A Mid Term Report. AI Magazine, 11(3):32-59.
Lenat, Douglas. (1995). CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11).
Guha, Ramanathan, & Lenat, Douglas. (1994). Enabling agents to work together. Communications of the ACM, 37(7):127-142. http://www.acm.org/pubs/citations/journals/cacm/1994-37-7/p126-guha/ http://www.acm.org/pubs/articles/journals/cacm/1994-37-7/p126-guha/p126-guha.pdf

3.2 Cyc criticisms and evaluations

Locke, Christopher. Common Knowledge or Superior Ignorance? http://www.panix.com/~clocke/ieee.html
Pratt, Vaughan. Cyc report. http://www.cs.umbc.edu/~narayan/proj/cyc-critic.html
Mahesh, Kavi, Nirenburg, Sergei, Cowie, Jim, & Farwell, David (1996). An assessment of Cyc for natural language processing (Technical Report MCCS 96-302). Computing Research Laboratory, New Mexico State University, Las Cruces, New Mexico. http://crl.nmsu.edu/Research/Pubs/MCCS/Postscript/mccs-96-302.ps
Stefik, Mark J., & Smoliar, Stephen W. (1993). The Commonsense Reviews. Eight reviews of: Douglas Lenat and Ramanathan V. Guha (1990), Building Large Knowledge-Based Systems: Representations and Inference in the CYC Project, Addison-Wesley; and Ernest Davis (1990), Representations of Commonsense Knowledge, Morgan Kaufmann. Artificial Intelligence, 61:37-179. http://www.cs.brandeis.edu/~brendy/CYC_report.txt http://www1.elsevier.nl/inca/publications/store/5/0/5/6/0/1/
Guha, Ramanathan, & Lenat, Douglas. (1993). Re: CycLing paper reviews. Artificial Intelligence, 61(1):149-174.

4 Cognitive architectures

4.1 Emotions

Sloman, Aaron. (2001). Beyond shallow models of emotion. Cognitive Processing, 1(1).
Chapters from: Minsky, Marvin. (Forthcoming). The Emotion Machine.

4.2 Heterogeneous architectures

Minsky, Marvin. (1991). Logical versus analogical or symbolic versus connectionist or neat versus scruffy. AI Magazine, Summer 1991.
Mueller, Erik T. (1998). Natural language processing with ThoughtTreasure. New York: Signiform. http://www.signiform.com/tt/book/
Mueller, Erik T. (1990). Daydreaming in humans and machines: A computer model of the stream of thought. Norwood, NJ: Ablex/Intellect. ftp://ftp.cs.ucla.edu/tech-report/198_-reports/870017.pdf
Singh, Push. (1999). Big list of mental agents for common sense thinking. http://www.media.mit.edu/people/push/agencies.html

4.3 Blackboard systems

Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26:251-321.
Nii, H. P. (1986). Blackboard Systems: The Blackboard Model of Problem Solving and the Evolution of Blackboard Architectures. AI Magazine, 7(2):38-53.
Engelmore, R., & Morgan, T. (1988). Blackboard systems. Reading, MA: Addison-Wesley.
Carver, N., & Lesser, V. (1994). Evolution of blackboard control architectures. Expert Systems with Applications, 7:1-30.

4.4 Human-level AI

McCarthy, John. The well-designed child. http://www-formal.stanford.edu/jmc/child1.html
McCarthy, John. (1996). From here to human-level AI. http://www-formal.stanford.edu/jmc/human.html

4.5 Soar

Newell, A., & Simon, H. A. (1963). GPS, a program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and Thought (pp. 279-293). New York: McGraw-Hill.
Lehman, J. F., Laird, J. E., & Rosenbloom, P. S. (1996). A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (Eds.), Invitation to Cognitive Science, Volume 4.
Rosenbloom, P. S., Laird, J. E., & Newell, A. (1993). The Soar Papers: Readings on Integrated Intelligence. Cambridge, MA: MIT Press.
Laird, J. E., & Rosenbloom, P. S. (1996). The evolution of the Soar cognitive architecture. In T. Mitchell (Ed.), Mind Matters.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard.

4.6 Case-based reasoning

Carbonell, J. (1986). Derivational analogy: A theory of reconstructive problem solving and expertise acquisition. In R. S. Michalski et al. (Eds.), Machine Intelligence: An AI Approach, 2:371-392.
Hammond, K. J. (1989). Case-Based Planning: Viewing Planning as a Memory Task. San Diego: Academic Press.
Hammond, K. J. (1990). Explaining and Repairing Plans that Fail. Artificial Intelligence, 45(3):173-228.
Kolodner, J. (1992). An introduction to case-based reasoning. Artificial Intelligence Review, 6:3-34.
Kolodner, J. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
Veloso, M. M., & Carbonell, J. G. (1993). Derivational analogy in Prodigy: Automating case acquisition, storage, and utilization. Machine Learning, 10:249-278.

4.7 Belief-desire-intention architectures

Fagin, Halpern, Moses, & Vardi. (1995). Reasoning About Knowledge. Cambridge, MA: MIT Press.
Cohen, Philip R., & Levesque, Hector J. (1990). Intention is choice with commitment. Artificial Intelligence, 42:213-261.
Bratman, M. E., Israel, D. J., & Pollack, M. E. (1988). Plans and resource-bounded practical reasoning. Computational Intelligence, 4(4). http://citeseer.nj.nec.com/bratman88plans.html
Rao, A. S., & Georgeff, M. P. (1991). Modeling rational agents within a BDI-architecture. In J. Allen, R. Fikes, & E. Sandewall (Eds.), Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR'91) (pp. 473-484). Morgan Kaufmann. http://citeseer.nj.nec.com/rao91modeling.html
Halpern, J., & Moses, Y. (1984). Knowledge and common knowledge in a distributed environment. Proc. 3rd ACM Symposium on Principles of Distributed Computing (pp. 50-61). New York: ACM. http://citeseer.nj.nec.com/halpern84knowledge.html
Lakemeyer, G., & Levesque, H. J. (1998). AOL: A logic of acting, sensing, knowing, and only knowing. Proc. of the 6th International Conference on Principles of Knowledge Representation and Reasoning (KR'98). Morgan Kaufmann.

5 Acquiring common sense

5.1 Distributed human projects

Stork, David. (1999). The OpenMind Initiative. IEEE Intelligent Systems & their applications, 14(3):19-20.
Singh, Push, et al. (In submission). Open Mind Common Sense: Knowledge acquisition from the general public.

5.2 Acquisition through sketching

Forbus, K. D., Ferguson, R. W., & Usher, J. M. (2000). Towards a computational model of sketching. Proceedings of the International Conference on Intelligent User Interfaces. Santa Fe, New Mexico.

5.3 Learning structural representations

Pazzani, M., & Kibler, D. (1992). The Utility of Knowledge in Inductive Learning. Machine Learning, 9:57-94.
Quinlan, J. R., & Cameron-Jones, R. M. (1993). FOIL: A midterm report. In Pavel B. Brazdil (Ed.), Machine Learning: ECML-93, Vienna, Austria.
Quinlan, J. R., & Cameron-Jones, R. M. (1995). Induction of logic programs: FOIL and related systems. New Generation Computing, 13:287-312.

5.4 Sensory-grounded learning

Roy, Deb. (In press). Learning Visually Grounded Words and Syntax of Natural Spoken Language. Evolution of Communication.
Finney, Sarah, Gardiol Hernandez, Natalia, Oates, Tim, & Kaelbling, Leslie Pack. (2001). Learning in Worlds with Objects. Working Notes of the AAAI Stanford Spring Symposium on Learning Grounded Representations.
Cohen, Paul R., Atkin, Marc S., Oates, Tim, & Beal, Carole R. (1997). Neo: Learning Conceptual Knowledge by Sensorimotor Interaction with an Environment. In Proceedings of the First International Conference on Autonomous Agents (pp. 170-177).
Schmill, Matthew D., Oates, Tim, & Cohen, Paul R. (2000). Learning Planning Operators in Real-World, Partially Observable Environments. In Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (pp. 246-253).

6 Common sense reasoning

6.1 Issues in common sense inference

Minsky, Marvin. (1981). Jokes and their relation to the cognitive unconscious. In Vaina & Hintikka (Eds.), Cognitive Constraints on Communication. Reidel.
Minsky, Marvin. (1994). Negative expertise. International Journal of Expert Systems, 7(1):13-19.

6.2 Default reasoning

McDermott, D., & Doyle, J. (1980). Non-Monotonic Logic I. Artificial Intelligence, 13:41-72.
de Kleer, J. (1986). An Assumption Based Truth Maintenance System. Artificial Intelligence, 28:127-162.
Doyle, J. (1979). A truth maintenance system. Artificial Intelligence, 12:231-272.
Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13:81-132.

6.3 Reflection

Smith, B. (1982). Reflection and semantics in a procedural language (Technical Report 272). Cambridge, MA: MIT, Laboratory for Computer Science.
Doyle, J. (1980). A model for deliberation, action, and introspection (Technical Report 581). Cambridge, MA: MIT, AI Laboratory.
McCarthy, John. (1995). Making robots conscious of their mental states. In AAAI Spring Symposium on Representing Mental States and Mechanisms.
Stroulia, E., & Goel, A. (1995). Functional Representation and Reasoning in Reflective Systems. Journal of Applied Intelligence, Special Issue on Functional Reasoning, 9(1), January 1995.

6.4 Problem reformulation

Amarel, Saul. (1968). On representations of problems of reasoning about actions. In Michie (Ed.), Machine Intelligence 3 (pp. 131-171). Edinburgh University Press.
McCarthy, John. Elaboration tolerance. http://www-formal.stanford.edu/jmc/elaboration.html

6.5 Analogical reasoning

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1990). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41:1-63. http://citeseer.nj.nec.com/falkenhainer89structuremapping.html
Davis, E. (1991). Lucid representations. NYU Computer Science Dept. Tech Report 565. http://citeseer.nj.nec.com/davis94lucid.html
Gentner, D. (2001). Spatial metaphors in temporal reasoning. In M. Gattis (Ed.), Spatial schemas in abstract thought (pp. 203-222). Cambridge, MA: MIT Press. http://www.psych.nwu.edu/psych/people/faculty/gentner/pdfs%20papers/spatial%20schemas.2001.pdf
Gentner, D., Bowdle, B., Wolff, P., & Boronat, C. (2001). Metaphor is like analogy. In D. Gentner, K. J. Holyoak, & B. N. Kokinov (Eds.), The analogical mind: Perspectives from cognitive science (pp. 199-253). Cambridge, MA: MIT Press. http://www.psych.nwu.edu/psych/people/faculty/gentner/pdfs%20papers/gentner-a2k-01.pdf
Forbus, K. D., Gentner, D., Markman, A. B., & Ferguson, R. W. (1998). Analogy just looks like high-level perception: Why a domain-general approach to analogical mapping is right. Journal of Experimental and Theoretical Artificial Intelligence, 10(2):231-257. http://www.psych.nwu.edu/psych/people/faculty/gentner/pdfs%20papers/forbus-gentner-98.pdf

6.6 Embodiment and metaphor

Lakoff, G., & Johnson, M. (1990). Metaphors we live by. The University of Chicago Press.
Narayanan, S. (1997). Talking the Talk is Like Walking the Walk. (Also in Proceedings of CogSci97, Stanford, August 1997.)
Siskind, Jeffrey M. (1994). Grounding language in perception. Artificial Intelligence Review, 8:371-391.
Siskind, Jeffrey M. (2001). Grounding the Lexical Semantics of Verbs in Visual Perception Using Force Dynamics and Event Logic. Journal of Artificial Intelligence Research, 15:31-90, August 2001.

7 Logical formalisms

7.1 Situation calculus

McCarthy, John, & Hayes, Patrick J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In D. Michie & B. Meltzer (Eds.), Machine intelligence 4. Edinburgh, Scotland: Edinburgh University Press.
McCarthy, John (1990). Formalizing common sense. Norwood, NJ: Ablex.
McCarthy, John. (1980). Circumscription: A form of non-monotonic reasoning. Journal of Artificial Intelligence, 13:27-39.
McCarthy, John. (1968). Programs with common sense. In M. Minsky (Ed.), Semantic Information Processing (pp. 403-418). Cambridge, MA: MIT Press.
Reiter, Raymond (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press.

7.2 Event calculus

Kowalski, R., & Sergot, M. J. (1986). A Logic-Based Calculus of Events. New Generation Computing, 4:67-95. Springer Verlag.
Shanahan, Murray (1997). Solving the frame problem. Cambridge, MA: MIT Press.

7.3 Causal theories

McCain, N., & Turner, H. (1997). Causal theories of action and change. In Proceedings AAAI-97.
McCain, N., & Turner, H. (1995). A causal theory of ramifications and qualifications. In Proceedings IJCAI-95.
Lifschitz, Vladimir. (2000). Missionaries and cannibals in the causal calculator. In Principles of Knowledge Representation and Reasoning: Proceedings of Seventh International Conference. To appear.
Lee, Joohyung, Lifschitz, Vladimir, & Turner, Hudson. (2001).
A representation of the zoo world in the language of the causal calculator. Unpublished draft.

7.4 Features and fluents

Sandewall, Erik. (1994). Features and Fluents: The Representation of Knowledge about Dynamical Systems, Volume I. Oxford University Press.

8 Contexts and organizing commonsense knowledge

Lenat, D. (1998). The dimensions of context-space. Cycorp technical report, www.cyc.com.
McCarthy, John. (1993). Notes on formalizing context. In Proceedings of the thirteenth international joint conference on artificial intelligence. http://citeseer.nj.nec.com/318177.html

9 Representations for commonsense knowledge

9.1 Overviews

Davis, R., Shrobe, H., & Szolovits, P. (1993). What is a Knowledge Representation? AI Magazine, Spring 1993, pp. 17-33.
Selected chapters from Ernest Davis (1990). Representations of Commonsense Knowledge. San Mateo, CA: Morgan Kaufmann Publishers, Inc. (book, 544 pages). "A central goal of artificial intelligence is to give a computer program commonsense understanding of basic domains such as time, space, simple laws of nature, and simple facts about human minds. Many different systems of representation and inference have been developed for expressing such knowledge and reasoning with it. Representations of Commonsense Knowledge is the first thorough study of these techniques." http://www.mkp.com/books_catalog/catalog.asp?ISBN=1-55860-033-7

9.2 Commonsense ontologies

Lenat, Douglas. Cyc Upper Ontology. See http://www.cyc.com/cyc-2-1/index.html
Lenat, Douglas, & Guha, Ramanathan. (1991). The Evolution of CycL, The Cyc Representation Language. SIGART Bulletin, 2(3):84-87, June 1991.
Hayes, P. J. (1985). The Second Naive Physics Manifesto. In J. R. Hobbs & R. C. Moore (Eds.), Formal Theories of the Commonsense World (pp. 1-36). Norwood, NJ: Ablex Publishing Corp. Also reprinted in Brachman & Levesque 1985, 468-485.
Hayes, P. J. (1985). Naive Physics I: Ontology for liquids. In Formal theories of the common sense world.
Allen, J. F., & Hayes, P. J. (1985). A common-sense theory of time. Proceedings of the 9th International Joint Conference on Artificial Intelligence (pp. 528-531).
Hayes, P. J. (1979). Naive physics manifesto. Expert Systems in the Microelectronic Age. Edinburgh: Edinburgh University Press.

9.3 Representing causality

Pearl, J. (2000). Causality: Models, Reasoning and Inference. Cambridge University Press.

9.4 Representing time

Allen, J. F. (1991). Time and time again: The many ways to represent time. International Journal of Intelligent Systems, 6(4):341-356, July 1991.
Allen, J. F. (1991). Planning as temporal reasoning. In Proceedings of 2nd Principles of Knowledge Representation and Reasoning. Morgan Kaufmann.
Allen, J. F. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832-843, November 1983.
Allen, J. F. (1984). Towards a general theory of action and time. Artificial Intelligence, 23:123-154.

9.5 Story representations

Schank, Roger. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3:552-631.
Schank, R. C., & Rieger, C. J. (1974). Inference and the Computer Understanding of Natural Language. Artificial Intelligence, 5:373-412.
Mueller, Erik T. (1999). A database and lexicon of scripts for ThoughtTreasure. http://www.signiform.com/tt/htm/script.htm
Mueller, Erik T. (1999). Prospects for in-depth story understanding by computer (unpublished paper, 23 pages). http://www.media.mit.edu/~mueller/papers/storyund.html
Mueller, Erik T. (2002). Story understanding. In Encyclopedia of Cognitive Science. London: Nature Publishing Group.

9.6 Connectionist representations

Minsky, Marvin, & Papert, Seymour. (1988). Perceptrons (expanded edition). MIT Press.
Dyer …

10 Applications of common sense knowledge

10.1 Context-aware agents

Mueller, Erik T. (2000). A calendar with common sense. Proceedings of the 2000 International Conference on Intelligent User Interfaces (pp. 198-201). New York: Association for Computing Machinery.
Singh, Push. (2002). The public acquisition of commonsense knowledge. In Proceedings of AAAI Spring Symposium: Acquiring (and Using) Linguistic (and World) Knowledge for Information Access. Palo Alto, CA: AAAI.
McCarthy, John. (1990). Some Expert Systems Need Common Sense. In V. Lifschitz (Ed.), Formalizing Common Sense: Papers by John McCarthy (pp. 189-197). Norwood, NJ: Ablex.
Lenat, Douglas, & Guha, Ramanathan. (1994). Ideas for Applying CYC. Cyc technical report, www.cyc.com. Available at http://www.cyc.com/tech-reports/act-cyc-407-91/act-cyc-407-91.html
Lieberman, Henry, & Selker, Ted. (2000). Out of context: Computer systems that adapt to, and learn from, context. IBM Systems Journal, 39(3,4):617-632. http://www.research.ibm.com/journal/sj/393/part1/lieberman.pdf

10.2 The Semantic Web

Berners-Lee, Tim, Hendler, James, & Lassila, Ora. (2001). The Semantic Web. Scientific American, 284(5):34-43, May 2001. http://www.scientificamerican.com/2001/0501issue/0501berners-lee.html
Berners-Lee, Tim. (1998). Semantic Web Road map. http://www.w3.org/DesignIssues/Semantic.html
Berners-Lee, Tim. (1998). What the Semantic Web can represent. http://www.w3.org/DesignIssues/RDFnot.html

11 Robots and common sense

Shapiro, Stuart C., Amir, Eyal, Grosskreutz, Henrik, Randell, David, & Soutchanski, Mikhail. (2001). Commonsense and Embodied Agents: A Panel Discussion. Common Sense 2001: 5th Symposium on Logical Formalizations of Commonsense Reasoning, May 20-22, 2001. http://www.cs.nyu.edu/faculty/davise/commonsense01/final/panel.pdf

11.1 Cognitive robotics

Amir, Eyal, & Maynard-Reid, Pedrito II. (2001). LiSA: A Robot Driven by Logical Subsumption. Common Sense 2001: 5th Symposium on Logical Formalizations of Commonsense Reasoning, May 20-22, 2001. http://www.cs.nyu.edu/faculty/davise/commonsense01/papers.html#Amir http://www.cs.nyu.edu/faculty/davise/commonsense01/final/amir.pdf

11.2 Natural language interfaces to robots

Stopp, Eva, Gapp, Klaus-Peter, Herzog, Gerd, Längle, Thomas, & Lüth, Tim C. (1994). Utilizing Spatial Relations for Natural Language Access to an Autonomous Mobile Robot. Unpublished paper (16 pages). http://wwwipr.ira.uka.de/internal/detailed_publication.php?id=974907079
Längle, Thomas, Lüth, Tim C., Stopp, Eva, Herzog, Gerd, & Kamstrup, Gjertrud (1995). KANTRA: A Natural Language Interface for Intelligent Robots. International Conference on Intelligent Autonomous Systems, Karlsruhe, Germany, March. In Rembold et al. (Eds.), Intelligent Autonomous Systems (pp. 357-364). IOS Press. (paper, 8 pages) http://wwwipr.ira.uka.de/internal/detailed_publication.php?id=975423485 http://wwwipr.ira.uka.de/internal/download.php?id=975423485&filetype=pdf

12 Natural language

12.1 Frame semantics

Fillmore, C. (1968). The case for Case. In E. Bach & R. Harms (Eds.), Universals in Linguistic Theory. New York: Holt, Rinehart and Winston.

12.2 Lexical semantics

Jackendoff, R. (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Pustejovsky, J. (1991). The generative lexicon. Computational Linguistics, 17:409-441.

13 Realms of thinking

13.1 Spatial reasoning

Mukerjee, Amitabha. Neat vs Scruffy: A survey of Computational Models for Spatial Expressions. http://citeseer.nj.nec.com/250615.html (From the book Representation and Processing of Spatial Expressions, Erlbaum.)
Davis, E. Representing and Acquiring Geographic Knowledge. Morgan Kaufmann, California.
Kuipers, B. J. (1978). Modeling spatial knowledge. Cognitive Science, 2:129-153.
Kuipers, B. J. (2000). The spatial semantic hierarchy. Artificial Intelligence, 119:191-233.

13.2 Physical reasoning

Rieger, C., & Grinberg, M. (1977). The Causal Representation and Simulation of Physical Mechanisms. Technical Report TR-495, Dept.
of Computer Science, University of Maryland.

13.3 Social reasoning

Lehnert, W. G. (1981). Plot Units and Narrative Summarization. Cognitive Science, 4:293-331.
Carbonell, J. (1980). Towards a process model of human personality traits. Artificial Intelligence, 15:49-74.
Hendler, J. (1988). Integrating Marker-Passing and Problem-Solving.

13.4 Story understanding

Ram, Ashwin. (1987). AQUA: Asking questions and understanding answers. In Proceedings of the Sixth Annual National Conference on Artificial Intelligence (pp. 312-316). Seattle, WA.

13.5 Visual reasoning

Stark, L., & Bowyer, K. (1995). "Functional context in vision". In Workshop on Context-based Vision. IEEE Press.
Strat, T. M., & Fischler, M. A. (1995). "The role of context in computer vision". In Workshop on Context-based Vision. IEEE Press.
Srihari, R. K. (1995). "Linguistic context in vision". In Workshop on Context-based Vision. IEEE Press.
Buxton, H., & Gong, S. (1995). "Visual surveillance in a dynamic and uncertain world". Artificial Intelligence, 78:371-405.
Smith, Barry (1995). "Formal Ontology, Common Sense, and Cognitive Science". International Journal of Human-Computer Studies, 43.
Gong, L., & Kulikowski, C. (1995). Composition of Image Analysis Processes through Object-Centered Hierarchical Planning. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 17(10):997-1009.
Strat, T. M., & Fischler, M. A. (1991). "Context-based vision: Recognising objects using both 2D and 3D imagery". IEEE Transactions on Pattern Analysis and Machine Intelligence, 13:1050-1065.
Rosenthal, D., & Bajcsy, R. (1984). "Visual and conceptual hierarchy: A paradigm for studies of automated generation of recognition strategies". IEEE Trans. PAMI-6, 3:319-324.
Selfridge, P. (1981). "Reasoning about success and failure in aerial image understanding". PhD Thesis, University of Rochester.
Garvey, D. (1976). "Perceptual strategies for purposive vision". Technical note 117, AI Center, SRI International.
Minsky, M. (1975). "A Framework for representing knowledge". In P. Winston (Ed.), The Psychology of Computer Vision. New York: McGraw-Hill.
Casati, Roberto, & Varzi, Achille C. (1994). Holes and Other Superficialities. Cambridge, MA, and London: MIT Press [Bradford Books].
Crowley, J. L., & Christensen, H. (1993). Vision as Process. Berlin: Springer-Verlag.
Aloimonos, J. (1989). Integration of Visual Modules. San Diego: Academic Press, Inc.
Pinker, Steven (Ed.). (1988). Visual Cognition. The MIT Press.
Waltz, David L., & Boggess, Lois (1979). Visual analog representations for natural language understanding. Proceedings of the 1979 International Joint Conference on Artificial Intelligence.
Marr, D. (1982). Vision. San Francisco: W. H. Freeman.
Arnheim, Rudolf (1969). Visual Thinking. University of California Press, Ltd.
Gong, L. (2001). Image Analysis as Context-Based Reasoning. In Proc. of ISCA 10th International Conference on Intelligent Systems (pp. 130-134). Virginia.
Ibrahim, Ahmed E. An Intelligent Framework for Image Understanding. http://www.engr.uconn.edu/%7Eibrahim/publications/image.html
Schank, Roger C., & Fano, Andrew E. Memory and Expectations in Learning, Language, and Visual Understanding. 261-271.
Collins, R., Lipton, A., Kanade, T., Fujiyoshi, H., Duggins, D., Tsin, Y., Tolliver, D., Enomoto, N., & Hasegawa, O. (2000). Tech. report CMU-RI-TR-00-12, Robotics Institute, Carnegie Mellon University, May 2000.
Oliver, Nuria (2000). "Towards Perceptual Intelligence: Statistical Modeling of Human Individual and Interactive Behaviors". PhD thesis, MIT Media Lab.
Sonka, Milan, Hlavac, Vaclav, & Boyle, Roger (1999). Image Processing, Analysis, and Machine Vision. Pacific Grove, CA: PWS Pub.
Buhmann, Joachim M., Malik, Jitendra, & Perona, Pietro (1999). Image Recognition: Visual Grouping, Recognition and Learning. Proceedings of the National Academy of Sciences, 96(25):14203-14204, Dec. 7, 1999.
Jaynes, Christopher O. (1996). Seeing is Believing: Computer Vision and Artificial Intelligence. ACM Crossroads (the student magazine of the Association for Computing Machinery).
Buxton, H., & Howarth, R. (1996). "Watching behaviour: The role of context and learning". In International Conference on Image Processing, Lausanne, Switzerland.
Socher, G., Sagerer, G., Kummert, F., & Fuhr, T. (1996). "Talking about 3D scenes: Integration of image and speech understanding in a hybrid distributed system". In International Conference on Image Processing, Lausanne, Switzerland.
Bobick, Aaron, & Pinhanez, Claudio (1995). "Using Approximate Models as Source of Contextual Information for Vision Processing". Proceedings of the Workshop on Context-Based Vision, ICCV'95, Cambridge, Massachusetts (pp. 13-21). June 1995.
Bobick, Aaron, & Intille, S. (1995). "Exploiting Contextual Information for Tracking by Using Closed-Worlds". Proceedings of the Workshop on Context-Based Vision, Cambridge, Massachusetts (pp. 87-98). June 1995.
Cassell, Justine (1995). "Speech, Action and Gestures as Context for Ongoing Task-Oriented Talk". Proceedings of AAAI Fall Symposium on Embodied Language and Action (pp. 20-25). November 1995.

14 Psychology

14.1 Cognitive psychology

Clark, Herbert H. (1977). Bridging. In Thinking: Readings in Cognitive Science.
Graesser, Arthur C., Singer, Murray, & Trabasso, Tom (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101(3):371-395.
McKoon, Gail, & Ratcliff, Roger (1992). Inference during reading. Psychological Review, 99(3):440-466.
McKoon, Gail, & Ratcliff, Roger (1986). Inferences about predictable events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12(1):82-91.
Beeman, Mark (1998). Coarse semantic coding and discourse comprehension. In Right hemisphere language comprehension. Mahwah, NJ: Erlbaum.
Heider, Fritz (1958). The psychology of interpersonal relations. Hillsdale, NJ: Erlbaum.
Smedslund, Jan (1997). The structure of psychological common sense. Mahwah, NJ: Erlbaum.

14.2 Psychology of memory

Landauer, Thomas K. (1986). How much do people remember? Some estimates of the quantity of learned information in long-term memory. Cognitive Science, 10:477-493.

14.3 Psychology of story understanding

Goldman, Susan R., Graesser, Arthur C., & van den Broek, Paul (1999). Narrative comprehension, causality, and coherence. Mahwah, NJ: Erlbaum.

14.4 Psychology of inference

St. George, Marie, Mannes, Suzanne, & Hoffman, James E. (1997). Individual differences in inference generation: An ERP analysis. Journal of Cognitive Neuroscience, 9(6):776-787.
Van Petten, Cyma, & Kutas, Marta (1990). Interactions between sentence context and word frequency in event-related brain potentials. Memory & Cognition, 18(4):380-393.
Tanenhaus, Michael K., Spivey-Knowlton, Michael J., Eberhard, Kathleen M., & Sedivy, Julie C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268:1632-1634.
Burgess, Curt, & Simpson, Greg B. (1988). Cerebral hemispheric mechanisms in the retrieval of ambiguous word meanings. Brain and Language, 33:86-103. http://locutus.ucr.edu/abstracts/88-bs-cere.html
Caramazza, Alfonso (1998). The interpretation of semantic category-specific deficits: What do they reveal about the organization of conceptual knowledge in the brain? Neurocase, 4:265-272.

15 Criticisms

McDermott, D. (1987). A critique of pure reason. Computational Intelligence, 3:151-160.

16 Web resources

Commonsense problem page: http://www-formal.stanford.edu/leora/cs/
Open Mind Common Sense: http://www.openmind.org/commonsense
ThoughtTreasure: http://www.signiform.com/tt/htm/tt.htm
OpenCyc: http://www.opencyc.org

17 UNPLACED

Wilensky, R. (1983). Planning and Understanding: A Computational Approach to Human Reasoning. Reading, MA: Addison-Wesley.
Various papers by Doug Lenat (e.g. Common Sense and the Mind of HAL by Doug Lenat)
Minsky, M. More Turing Option chapters (on common sense bugs)
Paper on the IEEE Standard Upper Level?
Selected papers from Common Sense 2001: 5th Symposium on Logical Formalizations of Commonsense Reasoning, May 20-22, 2001. http://www.cs.nyu.edu/faculty/davise/commonsense01/
Jon Barwise and John Perry: Situations and Attitudes
Lucy Suchman: Situated Systems
More on conceptual primitives?
Should anything from http://www.eecs.umich.edu/~rthomaso/bibs/context.bib.txt be incorporated?
Modeling personality: Don't forget Dyer's In-Depth Understanding and Lehnert's book on question answering and work on plot units.

Logical approaches to reasoning about action and change (I think you've tracked most of these down already):
Situation calculus: McCarthy and Hayes, Some Philosophical Problems ...; Reiter, Knowledge in Action
Event calculus: Kowalski and Sergot; Shanahan, Solving the Frame Problem
Features and fluents: Sandewall
Temporal Action Logics: Patrick Doherty (this is an important derivative of Features and Fluents with an implementation as the program VITAL)
Causal theories: McCain and Turner (papers attached)
(See attached file: ActionLanguages.ps) (See attached file: CCalcManual.ps) (See attached file: McCainPhD.ps)

Big List of Mental Agents for Common Sense Thinking
Push Singh | MIT Media Lab
Started 4-1-98 | Last updated 12-7-99
Contents | Vocabulary | Questions | Agents | Representations

1. Contents

The following is a list of many of the elements one needs to build a system capable of commonsense thinking. No AI system contains all of them, though many contain more than a few of them. I am trying to put these elements together into a system in order to control a simulated robot. I have implemented many of them using standard AI building blocks (recognize, compare, retrieve, etc.). I am working on a multiagent cognitive architecture to build them in, based on ideas from Marvin Minsky's Society of Mind theory, which is efficient, cooperative, and robust at responding to failures, but I will leave that discussion for another time.
Nevertheless, I see many of the following elements as easily programmed in any architecture. This list of elements taken together implies a larger, more powerful architecture than the architecture that underlies it, so it is not clear how much it matters what the underlying architecture is. One could probably go through the literature and find several hundred more elements. This list came from working on the problem of building a robot capable of movement and manipulation, learning what techniques could be applied to solving the many problems that came up, and figuring out how to organize them into a single system. Despite the large size of this list, I think building this architecture is only one percent of the big problem of building a system capable of commonsense thinking. There's a lot more you need to know, both in terms of how to think and what to think. And in the end, everyone has their own "thinking style", which no single, simple architecture will be able to encompass.

2. Basic Vocabulary of Objects for Artificial Intelligence

This system is "object oriented" in the sense that there is a standard set of objects which the agents use and manipulate. Here are some of the objects needed:

plan, manager, priority, options, sensor, difference, transition, goal, subgoal, situation, state, script, method, failure, impasse/handler, frame, transframe, k-line, paranome, polyneme, operator, constraint, object, event, property, relation, expectation, deviation, precondition, proposer, censor, advocate, critic, optimist, pessimist, predict

3. Mental Questions that Direct Common Sense Thinking

One way to control the direction of mental activity is by focusing on some set of questions. Of course, these questions do not need to be expressed in English, and may be expressed as queries in various internal representations. These questions initiate and prioritize inference processes that help answer them. Here are some of the questions a robot should always be asking itself:

What is going to happen next?
What would explain this?
What is the best thing for me to do now?
What objects are around me?
What is happening around me?
Have I seen that object before?
What can I learn from this event?
What can I learn from this failure?
How long will it take to perform this action?
What sorts of things might go wrong while performing this action?
What sorts of problems might occur, and how can I prepare for them?
Am I spending too much time on this problem?

4. Mental Agents for Common Sense Thinking

Some of these agents are simple IF-THEN-DO rules. Some are more complex IF-THEN-DECIDE/SEARCH rules. Some are really complex IF-THEN-FIGURE-OUT rules!

Tracking, Recognizing and Describing
given a partial state, try to find a frame that describes it (if you see a fridge and stove, think "kitchen")
given a situation, find a more abstract representation of it
given a description, find something additional to say about it
given an object for which a set of categories match, look for a feature that discriminates between them
binding problem: if an object might be the same as one seen before, impose temporal-behavioral constraints to verify

Goal Scheduling and Monitoring
given a goal that cannot seem to be achieved, give up on the goal
given a goal that cannot seem to be achieved, censor the goal
given a pair of goals, look for a causal chain that results in a conflict between them
given a pair of goals, prioritize them with respect to higher priority supergoals
given a goal, propose a set of subgoals to achieve it
given a goal, find the difference between it and the situation in order to characterize the problem
monitor for mania: believing you can achieve any goal
monitor for scattered feeling: too many goals are active at once
monitor for impatience: wanting too many things right away
monitor for depression: excessively critical of your actions and goals
checkpoint our mental state whenever something important happens

Plan Construction
given a problem, reset your mind state to an earlier state in which a similar problem was solved (k-lines)
given a problem, retrieve from memory an abstract plan to solve it
given a problem, try to chain a few methods to solve it
given a problem, look for different ways to solve that problem
given a plan, adapt it to the present situation
given a plan, adapt it to the present goal context
given a plan, check if you have tried it recently and it failed, and if so suppress it
given a primary goal and secondary goals, retrieve a plan that achieves the primary but does not undo the secondary
given a situation, look for methods you can execute
given a goal, look for methods that have been known to achieve it
given a method, look for ways to adapt it
given a plan, criticize the plan by finding another reason why it will fail
given a plan, justify the plan by finding another reason why it will work
given a script, generate a version of that script that monitors important conditions
given a plan to execute, assign monitors to its preconditions
given a method under consideration, check if the method is undoable
given a plan, estimate how long it will take
given a plan, predict what resources it will take and what goals it will undo
given a pair of plans that achieve the same goal, find some way to combine them into a better plan
given a plan that is strongly criticized, censor it
given a plan being constructed, add constraints one at a time
given a plan being constructed, prefer plans that are undoable
given that a plan cannot be adapted, raise an impasse
given that no plan can be retrieved, raise an impasse
given a set of subplans, combine them into a larger plan in which their mutual interactions are resolved
given a problem, try to find a simpler problem that resembles it and try to solve the simpler problem

Plan Selection
given two plans, compare them to see which is better

Plan Optimization
given a plan, look for a plan that is better than it in some regard, and try to improve it in that regard
given a plan, improve its reliability and speed and minimize its use of resources
given a plan, look for simple useful variations

Plan Parallelizing
given a set of plans, run them at the same time and monitor for interactions

Plan Verification
given a plan, try the plan in simulation
given a plan, consider the cost of failure of the plan
given a plan, check if its preconditions are satisfied

Plan Learning
given an applied method, note any deviations from expected effects
given a successful plan, try that plan in other situations and on other methods, to learn more about it
given a successful plan, save the plan to memory
given a successful plan, index the plan

Plan Execution and Monitoring
given a plan, execute it
given an executing plan, monitor whether the actions do as expected
given an executing plan, check if its actions are taking too long
given an executing plan, suppress it if it is predicted it will achieve a dangerous state

Dealing with Planning Impasses
given a failed plan, try the plan in simulation at finer temporal grain
given a failed plan, try the plan in simulation while simulating more objects
given a failed plan, try the plan in simulation while simulating causes
given a failed plan, figure out which subaction didn't achieve its subgoal
given a failed plan, find or generate an alternate plan
given a failed plan, try another plan
given a failed plan, mutate the plan slightly in a way that worked before, and try again
given a failed plan that has been recently modified, regress to an earlier version of that plan
given a failed method, check if its preconditions are still satisfied
given a failed method, change a parameter of the method, like doing it harder or more smoothly
given a failed method, do some experiments to test the method

Prediction and Envisioning
given a situation, predict the next state by memory (using a transframe)
given a situation, predict the next state by dynamic simulation (like QSIM)
given a simulation, control its level of detail (which and how many objects and properties are simulated)

Expectation Failure
given an expectation failure, raise an impasse
given an object that behaves unexpectedly, try to explain it by seeing it in terms of another frame

Perceiving
given an object and a frame, look for a way to see the object in terms of the frame
given an object, look for unusual features
given a transframe, look for ways to see the current situation in terms of it
given an observed event, try to break it down into achievable subevents
given an object in the world, check if you saw it before
given a situation, recognize the general context (inside, outside, etc.)

Understanding and Explaining the Dynamic Situation
given an event, look for its cause
given a sequence of events, judge whether things are getting better or worse
given an unusual conclusion, verify it by trying to prove it another way
given a pair of situations, before and after, find a chain of events that links them
given a failure, look for another frame in terms of which to describe the situation
given an event, verify that it meets our expectations

Reasoning
given an assertion, assert its opposite and monitor for a contradiction
given a theory (a set of constraints), find an example from memory or in the environment
given a theory, find a counterexample from memory or in the environment
given a pair of theories, find analogies between them
given a pair of theories and a goal, find analogies between them with respect to the goal
given a property P, and P implies Q, then assume Q
given a property Q, and P implies Q, then check if P is true
given a pair of properties P and Q, if there is a constraint between those properties, verify the constraint
if there is a constraint failure between properties, verify the properties involved
given a failure of reasoning, describe the reasoning using your language machinery
given a failure of reasoning, reformulate by modifying your theory

Dealing with Reasoning Impasses
given a failure, look for a failed assumption
given a failure, look for a missing assumption
given an explanation, look for a weakness
given an X, criticize it

Learning and Abstracting
given a successful plan, save the trace of how it was derived
given a set of objects, look for a way to uniframe them
given a set of events, build a theory that explains the behavior
given an object, find a new frame that matches it
given a new object, construct a frame for it
given a new X, construct a frame for it
given a set of descriptions, find a way to unify them as variations of a single description
given an observed method, figure out how to copy it
given a failure, construct a critic that complains when it sees the potential to fail that way again
given a pair of correlated events, look for a causal dependency between them, and if found build a transframe
given a plan of action or reasoning, generalize it by variablizing its elements
given a successful plan, take a snapshot of the mental state in which it was discovered (make a k-line)

Resource Management
given no more of a resource, replenish that resource (if out of fuel, go get more)
given no more of a resource, find another source of that resource (if the fuel tank is empty, find another tank)
given a resource, see how much of that resource is left (check the gauge on the fuel tank)
given a function, find another way to achieve that function (you can also walk on your hands!)

Thinking for the Future (Daydreaming, Worrying)
given an anti-goal, build pessimists by thinking of ways we might get to that anti-goal
given a goal, build optimists by thinking of ways we might get to that goal
given a plan, look for subplans that can be extracted and used in their own right, and index them
given a goal, make a list of the top 10 ways we might fail at that goal

Knowledge Management and Maintenance (Theories, Plans, etc.)
given a method, index it according to its effects
given a set of operators, find a reformulation of those operators that "orthogonalizes" them
given a pair of representations, look for an analogy
given a transframe, look for scripts that match it
given a pair of frames, find the differences between them
given a set of plans that achieve the same goal, index them so we can quickly jump between them
given a method, look for plans that can undo its effects

Multi-Agent Coordination
given a method, check if it interferes with the goals of the other agents
given a method, check if it might help other agents
if an agent notices another agent doing something that might help it, help that other agent do it
given a manager, look for ways in which its worker agents might interfere with one another
given a pair of interacting agents, debug the interactions one at a time
if a manager notices a set of agents working well together, consider them a team

Social Reasoning
given a person, guess at their goals from their actions
given a person model, simulate what they would do in some situation S
given a person model, simulate what they would think in some situation S

Language
given a sentence, generate some possible syntactic parses of it
given a sentence, generate some possible interpretations of it in other representations
given a word, check if you know what it means
given a sentence, record how the words in that sentence are used
given a sentence, determine its type (examples: question, asserting a truth, rhetorical statement)
given a desired mental state of a listener, generate a sentence that achieves that mental state (reduplication)

Vision
given a visual scene, track the objects in it
given one view of an object, retrieve other views of that object
given a view of an object, compute its contour

Comparing and Decision Making
given a set of options, consider the consequences of choosing them

Searching and Constraint Satisfaction
given lack of progress or cycling, declare an impasse
given a set of constraints between variables, look for a set of variable assignments that satisfies them
given a causal dependency, monitor that if the independent variable changes, then the dependent one changes
given a bidirectional constraint between variables, monitor that if one changes so does the other

5. Mental Representations for Common Sense Thinking

These agents operate on data structures that represent aspects of the world and of the mind of the robot itself. Here are some examples of the kinds of concepts robots will need. Each of these should be represented in multiple ways!

time : before, after, duration, sensed-at-time
temporal : repeating-event, sudden-event
methods : expensive, cheap, one-time, undoable
events : periodic, unspecified duration, momentary
shape : round, square, changing, has-holes
bodily : hand position, head position, orientation
spatial : boundaries, paths, holes
physical : solid, flexible, surfaces, connections
perceptual : occlusion, clarity, depth
geometric : above, near, inside, outside
goals : closer-to-goal, interacting-goals
resources : limbs-as-resource, time-as-resource
balance : center-of-mass, leaning-to-the-left
movement : moving-quickly, moving-sporadically
energy : expensive, cheap
difficulty : hard, easy
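The "given X, do Y" heuristics in section 4 are condition-action rules over a shared mental state. As a minimal sketch of how a couple of them might be encoded (this is an assumed illustration, not Push Singh's implementation; the `Agent` class, the dictionary-based state, and the plan names are all hypothetical):

```python
# Hypothetical sketch of simple IF-THEN-DO rule agents watching a shared
# mental state and firing whenever their condition holds.

class Agent:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # state -> bool
        self.action = action        # state -> None (mutates the state)

    def try_fire(self, state):
        if self.condition(state):
            self.action(state)
            return True
        return False

def run_agents(agents, state, max_cycles=10):
    """Repeatedly let every agent whose condition holds act on the state,
    stopping at quiescence (no agent fired this cycle)."""
    for _ in range(max_cycles):
        fired = [a.name for a in agents if a.try_fire(state)]
        if not fired:
            break
    return state

# Two heuristics from the list: "given a failed plan, try another plan" and
# "given that no plan can be retrieved, raise an impasse".
agents = [
    Agent("retry",
          lambda s: s.get("plan_failed") and s.get("alternatives"),
          lambda s: s.update(plan=s["alternatives"].pop(0), plan_failed=False)),
    Agent("impasse",
          lambda s: s.get("plan_failed") and not s.get("alternatives"),
          lambda s: s.update(impasse=True, plan_failed=False)),
]

state = {"plan": "grasp-cup-v1", "plan_failed": True,
         "alternatives": ["grasp-cup-v2"]}
run_agents(agents, state)
print(state["plan"])  # the retry agent swapped in the alternative plan
```

The same loop also covers the impasse case: with no alternatives left, the second agent sets `impasse`, which a higher-level handler (section 4's impasse heuristics) could then pick up.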
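The Searching and Constraint Satisfaction item "given a set of constraints between variables, look for a set of variable assignments that satisfies them" is classic backtracking search. A minimal sketch (an assumed illustration; the function name, data layout, and the shelf example are not from the original):

```python
# Hypothetical sketch: naive backtracking constraint satisfaction.
# variables: list of names; domains: name -> list of candidate values;
# constraints: list of (vars_involved, predicate) pairs.

def solve(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)  # every variable assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check only constraints whose variables are all assigned so far.
        ok = all(pred(*[assignment[v] for v in vs])
                 for vs, pred in constraints
                 if all(v in assignment for v in vs))
        if ok:
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Example: place two blocks on shelves 1-3 so that block a sits below block b.
solution = solve(["a", "b"],
                 {"a": [1, 2, 3], "b": [1, 2, 3]},
                 [(["a", "b"], lambda a, b: a < b)])
print(solution)  # {'a': 1, 'b': 2}
```

Returning `None` on exhaustion corresponds directly to the neighboring heuristic "given lack of progress or cycling, declare an impasse": a real agent would raise an impasse rather than silently fail.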