Thursday, December 15, 2005

Constructing Meaning: The Role of Affordances and Grammatical Constructions in Sentence Comprehension

Constructing Meaning: The Role of Affordances and Grammatical Constructions in Sentence Comprehension (PDF) by Michael P. Kaschak and Arthur M. Glenberg
"The Indexical Hypothesis describes how sentences become meaningful through grounding their interpretation in action. We develop support for the hypothesis by examining how people understand innovative denominal verbs, that is, verbs made from nouns and first encountered by participants within the experiment (e.g., to crutch). Experiments 1 and 2 demonstrated that different syntactic constructions provide scenes or goals that influence the meaning created for the innovative verbs. Experiment 3 used reading time to demonstrate that people also consider possible interactions with
the objects underlying the verbs (i.e., the affordances of the objects) when creating meaning. Experiment 4 used a property verification procedure to demonstrate that the affordances derived from the objects depend on the situation-specific actions needed to complete the goal specified by the syntactic construction. Thus the evidence supports a specific type of interaction between syntax and semantics that leads to understanding: The syntax specifies a general scene, and the affordances of objects are used to specify the scene in detail sufficient to take action."

RSS Extensions 

Main Page - RSS Extensions: "RSS is taking the world by storm. And as it marches forward, increasingly many extensions to RSS are proposed to meet special requirements that the general-purpose RSS specification was never meant to address.

The recent high-profile adoptions, with extensions, of RSS by Microsoft and Apple are but two examples of this trend. But how do you know which RSS extensions have been defined already, where they have been defined, and how you can avoid re-inventing what somebody else has already created?

This is the problem this site is meant to solve. If you have or are developing an RSS extension, or know of one that may be of interest to the larger community, please enter it here. That only takes a minute, and it helps everybody avoid fragmenting RSS."
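The mechanics being cataloged are simple: an RSS extension is just extra elements placed in their own XML namespace inside an otherwise ordinary feed. As a minimal sketch (the `ext` namespace URI and the `<ext:rating>` element are invented here for illustration), Python's standard library can read such an extension:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 fragment with a made-up extension element living in
# its own namespace; the "ext" URI and <ext:rating> are invented.
feed = """<?xml version="1.0"?>
<rss version="2.0" xmlns:ext="http://example.org/rss/ext/1.0">
  <channel>
    <title>Example feed</title>
    <item>
      <title>Hello</title>
      <ext:rating>4.5</ext:rating>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
ns = {"ext": "http://example.org/rss/ext/1.0"}
for item in root.iter("item"):
    rating = item.find("ext:rating", ns)
    print(item.findtext("title"), rating.text if rating is not None else "n/a")
```

Aggregators that don't recognize a namespace can simply skip those elements, which is what keeps extensions backward-compatible — and also why a registry of who defined what is useful.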

The Problem With Meaning 

Mixing Memory: The Problem With Meaning: "…the problem I saw back then, and still see today. In order to write a program that can understand the meaning of sentences with which you and I would have no trouble, you basically have to program in most or all of the knowledge of at least a well-developed human child, if not an adult. And I don't see how that's really possible. It certainly doesn't seem to be possible today, since we don't have a firm understanding of how people reason about the mechanics of situations like the one in (2), or how they activate the relevant background knowledge (the paper linked above gives one potential answer, in the form of the 'Indexical Hypothesis'). If a machine doesn't have that level of knowledge, every time it gets a novel verb, it's going to be lost."

Practical natural language processing, circa 2005 

Jon Udell: Practical natural language processing, circa 2005: "I've wondered for a long time how natural language processing will enter the mainstream. My guess is that email will be the vector. Suppose you could mine email for the following patterns: […]

I've written about this kind of thing before and will again. As with voice recognition, natural language processing isn't likely to deliver major breakthroughs. It's a long slog, but over decades you can look back and see the progress that's been made. Categorizing email and other kinds of interpersonal messages according to the speech acts they express is an age-old challenge, and the goal still eludes us, but it's nice to be reminded from time to time that the enabling technology is slowly but surely maturing."
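The kind of mining Udell gestures at can be prototyped crudely with regular expressions. The speech-act categories and patterns below are invented for this sketch — they are not Udell's, and real classifiers would need far more than keyword matching:

```python
import re

# Illustrative speech-act patterns (the categories and regexes are
# assumptions for this sketch, not an established taxonomy).
SPEECH_ACTS = {
    "request": re.compile(r"\b(could you|please|would you mind)\b", re.IGNORECASE),
    "commitment": re.compile(r"\b(i will|i'll|by friday|by end of day)\b", re.IGNORECASE),
    "question": re.compile(r"\?\s*$"),
}

def classify(message: str) -> list[str]:
    """Return every speech-act label whose pattern matches the message."""
    return [act for act, pattern in SPEECH_ACTS.items() if pattern.search(message)]

print(classify("Could you send me the report?"))  # → ['request', 'question']
```

The gap between this and genuine understanding is exactly the point of the preceding posts: pattern matching finds surface cues, not meaning.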

Monday, December 12, 2005

Federated Databases in Science 

The Nassar Blog » Blog Archive » Federated Databases in Science: "The sciences are now heavily engaged in what was until recently a minor component of their work: the design and management of databases. Databases are becoming a significant part of the raw materials of science, and data mining one of its primary activities. The flurry of discussion surrounding XML, metadata, and the hypothetical Semantic Web have created an expectation that information technology will soon be capable of integrating scientific knowledge automatically. This may have led to some confusion in the way we talk about information management. One area of concern is the frequent discussion of whether databases should be organized centrally or in a distributed, federated system. For the sake of simplicity these two models are usually described side by side, with the centralized model having all of the properties common to any very large database, and the federated version having a loose, inconsistent structure similar to that of the Web. Additionally there is often a discussion about the technical challenges of distributed queries.
The Web is successful because it exploits the relationships among a huge number of people making individual judgements that only people can make. Even the Semantic Web, if it ever has a chance of working, would have to depend on a very large base of common metadata standards, and that can only result from the slow process of people coming together and agreeing. There are many things that information technology cannot do on its own. The semantic integration of knowledge still remains a human activity."
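The post's claim that semantic integration stays a human activity can be made concrete with a toy federated query. In this sketch (schemas and data invented; in-memory SQLite stands in for the independent databases), the "integration" is nothing more than a mapping a person chose to encode:

```python
import sqlite3

# Two independent "federated" databases with different schema conventions;
# tables and data are invented for illustration.
db_a = sqlite3.connect(":memory:")
db_a.execute("CREATE TABLE genes (symbol TEXT, organism TEXT)")
db_a.execute("INSERT INTO genes VALUES ('TP53', 'human')")

db_b = sqlite3.connect(":memory:")
db_b.execute("CREATE TABLE gene_records (name TEXT, species TEXT)")
db_b.execute("INSERT INTO gene_records VALUES ('BRCA1', 'human')")

# The integration step is human judgement encoded in client code: someone
# decided that genes.symbol and gene_records.name mean the same thing.
results = [row[0] for row in db_a.execute("SELECT symbol FROM genes")]
results += [row[0] for row in db_b.execute("SELECT name FROM gene_records")]
print(sorted(results))  # → ['BRCA1', 'TP53']
```

The distributed query itself is the easy part; deciding which columns are "the same thing" is the slow, social standards work the post describes.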

Sunday, December 11, 2005

Cheap Eats at the Semantic Web Café 

Burningbird » Cheap Eats at the Semantic Web Café: "Whether LID can be seen as an ‘expedient solution’ or not, if LID had implementations in PHP or Python that would be simple to install and use, and there was more clarity on the license, it would have fired enough grassroots support to make it a contender for the de facto digital identity implementation, thus making it that much more difficult for other, perhaps more ‘robust’ solutions to find entry into the community at a later time.

This also applies to the concept of meta-data. If people become used to receiving value, even if it is only limited value, from folksonomies based on very little effort on their part, they’re going to become reluctant when other more robust solutions are provided if these latter require more effort on their part. Especially if these more robust or effective solutions take time to be accessible ‘to the masses’ because the creators of same are enclosed behind walls built of scholarly interest, with no practical means of entry for the likes of you and me."

Cognitive Authority 

Cognitive Authority (PDF)
"Patrick Wilson (1983) developed the cognitive authority theory from social epistemology in his book, Second-hand Knowledge: An Inquiry into Cognitive Authority. The fundamental concept of Wilson’s cognitive authority is that people construct knowledge in two different ways: based on their first-hand experience or on what they have learned second-hand from others. What people learn first-hand depends on the stock of ideas they bring to the interpretation and understanding of their encounters with the world. People primarily depend on others for ideas as well as for information outside the range of direct experience. Much of what they think of the world is what they have gained second-hand."

This page is powered by Blogger. Isn't yours?