Intention without Representation
Friday, 28th May 2004
0930 - 1100
Representation is a central issue in classic AI. In the late 1980s and 1990s
there was considerable interest in solving the representation problem by
avoiding it, applying non-representational approaches to classic AI tasks.
Some claimed that neural nets had a distributed representation that in some
way avoided the symbol grounding problem. Others such as Rodney Brooks at
MIT simply hard-wired behaviour into situated agents. This talk is based
on a paper, to appear in Philosophical Psychology, that describes how
planning, in the full means-ends sense, can be achieved without representing
the world in any way. The mechanism is based on BDI, but uses a plan library
containing data structures called Goal Tagged Activities. While the mechanism
does not solve the symbol grounding problem, it does push back the boundary
at which rational behaviour requires symbols. The mechanism described enables
a range of applications that require more than insect-level intelligence,
and the presentation finishes with a discussion of teamwork.
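The paper's Goal Tagged Activities are not specified in this abstract, so the following is only a loose sketch of the general idea: a plan library whose entries are fixed behaviours indexed by the goal they serve, selected by tag lookup rather than by reasoning over a world model. All names here (GoalTaggedActivity, PlanLibrary, the example goal) are illustrative assumptions, not the paper's actual data structures.

```python
# Hypothetical sketch: a BDI-style plan library of "goal-tagged activities".
# Each activity is a fixed behaviour sequence tagged with the goal it serves;
# selection matches the current goal against tags, with no world model.
from dataclasses import dataclass
from typing import List


@dataclass
class GoalTaggedActivity:
    goal: str          # tag naming the end this activity achieves (assumed name)
    steps: List[str]   # hard-wired behaviour sequence, not a symbolic plan


class PlanLibrary:
    def __init__(self) -> None:
        self._activities: List[GoalTaggedActivity] = []

    def add(self, activity: GoalTaggedActivity) -> None:
        self._activities.append(activity)

    def select(self, goal: str) -> GoalTaggedActivity:
        # Means-ends selection reduced to tag lookup: the agent never
        # represents the world, only matches goals against plan tags.
        for act in self._activities:
            if act.goal == goal:
                return act
        raise KeyError(f"no activity tagged with goal {goal!r}")


library = PlanLibrary()
library.add(GoalTaggedActivity("make-tea", ["boil water", "steep leaves", "pour"]))
plan = library.select("make-tea")
```

On this reading, rationality lives in which activity the goal tag retrieves, not in any symbolic model of the world, which is why such a mechanism pushes back, without removing, the point at which symbols become necessary.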
Peter Wallis has a PhD from RMIT on semantics for search engines and has
been active in the Natural Language Processing community in Australia since
1989. From 1995 to 2001 he worked primarily for Defence on information
extraction from text and on conversational agents. While at Defence he also
studied for an MBA, and he currently consults through his own company.