Saturday, November 26, 2005

Ontology of Folksonomy 

Ontology of Folksonomy: "Summary

Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by formal terms used in data that they might generate or consume. Folksonomies are an emergent phenomenon of the social web. They are created as people associate terms with content that they generate or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This piece is an attempt to shed some cool light on the subject, and to preview some new work that applies the two ideas together to enable an Internet ecology for folksonomies."
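The contrast the summary draws can be made concrete with a small sketch. This is my own illustration, not from the piece: all names and URLs are made up. A folksonomy is just an accumulating pile of (user, tag, resource) assertions whose meaning emerges from aggregate usage, while an ontology fixes its terms and relations up front.

```python
from collections import Counter

# A folksonomy: loose (user, tag, resource) triples with no agreed vocabulary.
taggings = [
    ("alice", "semweb",       "http://example.org/post/1"),
    ("bob",   "semantic-web", "http://example.org/post/1"),
    ("carol", "semweb",       "http://example.org/post/1"),
    ("alice", "folksonomy",   "http://example.org/post/2"),
]

def tag_distribution(resource):
    """Emergent 'meaning' of a resource: the frequency of tags applied to it."""
    return Counter(tag for _, tag, res in taggings if res == resource)

# An ontology, by contrast, states terms and their relations explicitly.
ontology = {
    "classes": {"BlogPost": {"subClassOf": "Document"}},
    # Asserted equivalence, rather than one inferred from overlapping usage:
    "equivalences": {"semweb": "semantic-web"},
}

print(tag_distribution("http://example.org/post/1").most_common(1))  # → [('semweb', 2)]
```

The "ecology" the summary hints at would sit between these two: mining the tag distributions for candidate equivalences like the one asserted by hand above.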

FW: Call for Participation: First International Workshop on Mediation in Semantic Web Services (MEDIATE 2005) 

-----Original Message-----
On Behalf Of Martin Hepp (DERI)
Sent: Thursday, November 24, 2005 10:18 AM
Subject: Call for Participation: First International Workshop on
Mediation in Semantic Web Services (MEDIATE 2005)

(our apologies for cross-posting)

********************** Call for Participation ***********************
First International Workshop on Mediation in Semantic Web Services
(MEDIATE 2005)
in conjunction with the 3rd International Conference on Service Oriented
Computing (ICSOC 2005)
Amsterdam, The Netherlands, December 12, 2005, 9:45 a.m. - 5:00 p.m.

Description of the Workshop Topic:
The usage of computer systems is widely characterized by decentralized
design and autonomous evolution, i.e. if we look at system components from a
global perspective, they are often developed and modified without alignment
in the design stage. Also, components follow individual paths of evolution
during their life-cycles. It can be observed that this is a major cause for
interoperability problems, contributing to the brittleness of systems
integration efforts. If we want to increase the degree of automation in
general, it seems important to provide software components that can help
overcome interoperability conflicts as they occur, in an automated
fashion. This functionality is known as mediation, and the respective
components are called mediators. Mediation can take place on a multiplicity
of levels, e.g. on data, ontologies, processes, protocols, or goals. To a
great extent, it will depend on the availability of sophisticated,
industry-strength mediation support whether the promise of Semantic Web
services can become reality.

In this workshop we want to advance the theoretical and practical knowledge
about the design and implementation of mediators in the Semantic Web and
Semantic Web services.

Register now for the workshop at

More information on the venue, registration, hotels, and related events is
available at and on the workshop website at

Agenda: Monday, December 12, 2005

09:45 - 10:00 Welcome

Morning Session: Service Mediation and Discovery

10:00 - 10:30 Liliana Cabral and John Domingue:
Mediation of Semantic Web Services in IRS-III
10:30 - 11:00 Gösta Grahne and Victoria Kiricenko:
Process Mediation in an Extended Roman Model
11:00 - 11:30 Emanuele Della Valle and Dario Cerizza:
The mediators centric approach to Automatic Web Service Discovery of COCOON

11:30 - 11:45 Coffee Break

11:45 - 12:15 Michael Stollberg, Emilia Cimpian, and Dieter Fensel:
Mediating Capabilities with Delta-Relations
12:15 - 12:45 Colombe Hérault, Gaël Thomas, and Philippe Lalanda:
Mediation and Enterprise Service Bus: A position paper

12:45 - 14:00 Lunch Break

Afternoon Session: Data and Ontology Mediation

14:00 - 14:30 Jérôme Euzenat:
Alignment Infrastructure for Ontology Mediation and other Applications
14:30 - 15:00 Adrian Mocan and Emilia Cimpian:
Mappings Creation Using a View Based Approach
15:00 - 15:30 Philipp Kunfermann and Christian Drumm:
Lifting XML Schemas to Ontologies - The Concept Finder Algorithm

15:30 - 16:00 Coffee Break

16:00 - 17:00 Plenary Discussion

Organizing Committee:
Michael Genesereth, Stanford University
Frank van Harmelen, Vrije Universiteit Amsterdam
Martin Hepp, Digital Enterprise Research Institute (DERI), Innsbruck
Axel Polleres, Digital Enterprise Research Institute (DERI), Innsbruck

Program Committee:
Diego Calvanese (Free University of Bozen/Bolzano)
Emilia Cimpian (DERI)
Jos De Bruijn (DERI)
John Domingue (Open University)
Jerome Euzenat (INRIA)
Dieter Fensel (DERI)
Fausto Giunchiglia (University of Trento)
Rick Hull (Bell Labs)
Michael Kifer (University at Stony Brook)
Deborah McGuinness (Stanford University)
Enrico Motta (The Open University)
Marco Pistore (University of Trento)
Pavel Shvaiko (University of Trento)
Jianwen Su (UC Santa Barbara)
York Sure (AIFB)
Paolo Traverso (ITC/IRST)
Michael F. Uschold (Boeing)
Ludger van Elst (DFKI)
Holger Wache (Vrije Universiteit Amsterdam)
Gio Wiederhold (Stanford University)

Administrative Contact:
Martin Hepp
Digital Enterprise Research Institute (DERI) Innsbruck, University of
Innsbruck, Technikerstrasse 21a, A-6020 Innsbruck, Austria
Phone: +43 512 507 6465, Fax: +43 512 507 9872

Friday, November 25, 2005

Learning from THE WEB 

ACM Queue - Learning from THE WEB - The Web has taught us many lessons about distributed computing, but some of the most important ones have yet to fully take hold.
"...Hmm. There are some interesting implications in all of this.

One is that the Semantic Web is in for a lot of heartbreak. It has been trying for five years to convince the world to use it. It actually has a point. XML is supposed to be self-describing so that loosely coupled works. If you require a shared secret on both sides, then I’d argue the system isn’t loosely coupled, even if the only shared secret is a schema. What’s more, XML itself has three serious weaknesses in this regard:..."

"...What does this Mean for Databases?

All of this has profound implications for databases. Today databases violate essentially every lesson we have learned from the Web. ..."

"... Distributed computing has been learning and evolving in response to the lessons of the Web. Formats and protocols are arising to overcome the limitations of XML—even as XML in turn arose to overcome the limitations of CORBA and DCOM. It is time that the database vendors stepped up to the plate and started to support a native RSS 2.0/Atom protocol and wire format; a simple way to ask very general queries; a way to model data that encompasses trees and arbitrary graphs in ways that humans think about them; far more fluid schemas that don’t require complex joins to model variations on a theme about anything from products to people to places; and built-in linear scaling so that the database salespeople can tell their customers, in good conscience, for this class of queries you can scale arbitrarily with regard to throughput and extremely well even with regard to latency, as long as you limit yourself to the following types of queries. Then we will know that the database vendors have joined the 21st century...."
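Bosworth's wish list — a native Atom wire format plus simple, general queries over it — can be sketched in a few lines. This is an illustrative toy, not anything the article or any database vendor ships; the record fields and feed title are invented.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def make_feed(title, records):
    """Serialize arbitrary records as a minimal Atom feed: each record
    becomes an <entry> carrying its payload in <content>."""
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = title
    for rec in records:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}id").text = rec["id"]
        ET.SubElement(entry, f"{{{ATOM}}}title").text = rec["title"]
        ET.SubElement(entry, f"{{{ATOM}}}content").text = rec["content"]
    return ET.tostring(feed, encoding="unicode")

def entry_titles(feed_xml):
    """On the consuming side, the 'query' is plain XML navigation —
    no schema negotiation, no join machinery."""
    root = ET.fromstring(feed_xml)
    return [e.findtext(f"{{{ATOM}}}title") for e in root.findall(f"{{{ATOM}}}entry")]

xml_doc = make_feed("products", [
    {"id": "urn:example:1", "title": "Widget", "content": "A widget."},
    {"id": "urn:example:2", "title": "Gadget", "content": "A gadget."},
])
print(entry_titles(xml_doc))  # → ['Widget', 'Gadget']
```

The point of the sketch is the "sloppily extensible" part: a consumer that only understands titles simply ignores any extra elements a producer adds, which is exactly the loose coupling Bosworth argues databases lack.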

Ang on meaning 

Ang on meaning: "Criticism of transmission models - Ien Ang

As Ang's criticism is so eloquently expressed, it is quoted in full here:

I would suggest ... that it is the failure of communication that we should emphasize if we are to understand contemporary (postmodern) culture. That is to say, what needs to be stressed is the fundamental uncertainty that necessarily goes with the process of constructing a meaningful order, the fact that communicative practices do not have to arrive at common meanings at all. This is to take seriously the radical implications of semiotics as a theoretical starting point: if meaning is never given and natural but always constructed and arbitrary, then it doesn't make sense to prioritize meaningfulness over meaninglessness. Or, to put it in the terminology of communication theory: a radically semiotic [see the section on semiotics] perspective ultimately subverts the concern with (successful) communication by foregrounding the idea of 'no necessary correspondence' between the Sender's and the Receiver's meanings. That is to say, not success, but failure to communicate should be considered 'normal' in a cultural universe where commonality of meaning cannot be taken for granted.

If meaning is not an inherent property of the message, then the Sender is no longer the sole creator of meaning. If the Sender's intended message doesn't 'get across', this is not a 'failure in communications' resulting from unfortunate 'noise' or the Receiver's misinterpretation or misunderstanding, but because the Receiver's active participation in the construction of meaning doesn't take place in the same ritual order as the Sender's. And even when there is some correspondence in meanings constructed on both sides, such correspondence is not natural but is itself constructed, the product of a particular articulation, through the imposition of limits and constraints to the openness of semiosis in the form of 'preferred readings', between the moments of 'encoding' and 'decoding' (see Hall 1980a). That is to say, it is precisely the existence, if any, of correspondence and commonality of meaning, not its absence, that needs to be accounted for. Jean Baudrillard has stated the import of this inversion quite provocatively:

[M]eaning [...] is only an ambiguous and inconsequential accident, an effect due to ideal convergence of a perspective space at any given moment (History, Power etc.) and which, moreover, has only ever really concerned a tiny fraction and superficial layer of our 'societies'.

Baudrillard (1983)"


meaning: "In everyday speech we bandy the term 'meaning' around quite happily without giving it a lot of thought:

'...if you see what I mean.'
'...if you take my meaning.'
'What's that supposed to mean?'
'I always say what I mean.'
''Cochon' means 'pig'.'
'I didn't really mean it.'
'I meant to write.'
'A green light means 'go''
'What is the meaning of life?'
'Health means everything.'
'His look was full of meaning.'
'What's the dictionary meaning of 'meaning'?'

That's fairly typical of the sort of things we might say. You can see from those that we don't even use the word 'meaning' with the same meaning every time. Some of the examples are taken from The Meaning of Meaning by Ogden and Richards (1923), in which they identified 16 different meanings of the word!

The last example, with its reference to 'dictionary meaning' suggests that there is some kind of 'correct' meaning of words. If two people disagree about what a word means, they might well settle their argument by referring to the dictionary.

However, when we stop for a while to consider just what we mean by meaning, things get pretty complicated pretty fast. A number of thought-provoking statements about the nature of meaning were made by the communication theorist David Berlo (1960):

Click on any of them for a discussion of what he 'means'."

Forschungsgruppe Sprachphilosophie Universität Bern - Research project philosophy of language, University of Berne, Switzerland: " is the digital platform of a research group in the philosophy of language at the University of Berne, Switzerland.

The 'Saying & Meaning and the Semantics/ Pragmatics-Distinction' research project aims at a faithful reconstruction of fundamental concepts in Paul Grice's theories of meaning and conversation, with a view to shedding new light on the controversial issue of where to draw the boundary between semantics and pragmatics.

The project covers four central research topics and is accompanied by a number of events.

Our site's extensive link collection lists resources on Grice and homepages of researchers working in semantics and pragmatics, along with a range of bibliographies and other useful materials; as a special feature, it gathers a selection of brand new papers.

Also planned is a bibliographical web-database containing the masses of literature on the semantics/pragmatics-distinction and a complete bibliography of Grice's works – a valuable tool for ongoing research in relevant areas."


Meaning - Wikipedia, the free encyclopedia: "A meaning is a set of thoughts that people take symbols to have. Meanings can do many things, such as provoke a certain idea, or denote a certain real-world entity.

Meanings can be linguistic and non-linguistic. Linguistic meaning is any meaning that words and other items of language have. Non-linguistic meaning is whatever meaning can be conveyed without the use of language.

Meanings can be presented through various different mediums, or vehicles of communication. The kind of medium that is used determines whether or not a meaning is linguistic or non-linguistic. The newspaper, or the vocal cords, are mediums for 'linguistic meaning'. By contrast, body language is an example of a medium for the display of non-linguistic meanings, such as the 'thumbs up' in Western cultures.

Meaning as a whole is studied in philosophy and semiotics, and especially in philosophy of language, philosophy of mind, and logic, and communication theory. Fields like sociolinguistics tend to be more interested in non-linguistic meanings. Linguistics lends itself to the study of linguistic meaning in the fields of semantics (which studies conventional meanings) and pragmatics (studies in how language is used by individuals). Literary theory, critical theory, and some branches of psychoanalysis are also involved in the discussion of meaning. However, this division of labor is not absolute..."

The two-way data web 

The two-way data web | InfoWorld | Column | 2005-11-23 | By Jon Udell
"The idea, in a nutshell, is that the truly scalable databases of the future will be more like the Web than like Oracle, DB2, or SQL Server.
In last month’s ACM Queue, Bosworth elaborated on some of the lessons the Web has taught us about simplicity, human accessibility, “sloppily extensible” formats, the social dimension of software, and loose coupling. But he also introduced a key technical point about RSS and Atom, the feed formats powering the blog revolution. These formats represent sets of items. Typically, the items contain Weblog postings, but they can also contain XML fragments that represent anything under the sun. What’s more, items can link to other items or collections. Bosworth argues that this architecture lends itself to aggressive scale-out, decentralized caching, and grassroots schema evolution, all of which tend to elude conventional databases.

There’s no free lunch, of course. When you query this RSS/Atom data web, you should expect more structural precision than full-text search affords, but you shouldn’t plan on fast execution of complex nested queries.

We’ve yet to colonize the middle ground between these extremes, and I don’t think anyone really knows what the sweet spot will turn out to be. I’ve gotten plenty of mileage out of XPath and XQuery, and my dream is that these XML-oriented query disciplines can be federated at large scale. But first things first: We need to create the data web. And recently, two leading figures have dropped major hints about how that’s going to happen."
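Udell's "data web" of items that link to other items or collections can be mocked up in memory. This is my own toy, with invented feed identifiers and titles: the point is that a query is a traversal of links across feeds rather than a join over tables in one database.

```python
# A hypothetical in-memory "data web": each feed holds items, and items
# may link onward to other feeds, possibly hosted anywhere.
feeds = {
    "feed:people": [
        {"title": "Jon", "links": ["feed:columns"]},
    ],
    "feed:columns": [
        {"title": "The two-way data web", "links": []},
    ],
}

def walk(feed_id, seen=None):
    """Collect item titles reachable from a feed by following links.
    The 'seen' set guards against cycles, which a web of links will have."""
    seen = set() if seen is None else seen
    if feed_id in seen:
        return []
    seen.add(feed_id)
    titles = []
    for item in feeds.get(feed_id, []):
        titles.append(item["title"])
        for target in item["links"]:
            titles.extend(walk(target, seen))
    return titles

print(walk("feed:people"))  # → ['Jon', 'The two-way data web']
```

Notice what the sketch gives up: there is no planner and no index, so a complex nested query means many traversals — which is exactly the "no free lunch" trade-off the column describes.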

Three Forms of Binding and their Neural Substrates: Alternatives to Temporal Synchrony 

Three Forms of Binding and their Neural Substrates: Alternatives to Temporal Synchrony (PDF) by Randall C. O'Reilly, Richard S. Busby, and Rodolfo Soto, June 23, 2001
"This paper presents three different ways of addressing the binding problem in different brain areas: generic neocortex, hippocampus, and prefrontal cortex. None of these approaches involve the popular mechanism of temporal synchrony. The first two involve conjunctive representations that bind by ensuring that different neural units are activated for different combinations of input features. Specifically, we think the cortex constructs low-order conjunctions using coarse-coded distributed representations to avoid the combinatorial explosion usually associated with conjunctive solutions to the binding problem. We present a model that learns these representations in a challenging relational binding task, and furthermore is capable of considerable generalization to novel inputs. Next, we review the idea that the hippocampus performs conjunctive binding in long term memory through the use of higher-order conjunctions that are much more specific to particular events than those in the cortex. Finally, we present a model of a very different form of binding that involves the phonological loop — a mechanism for maintaining arbitrary sequences of phonemes in active memory. This phonological system can be used to bind by continuously repeating the to-be-bound information (e.g., “press left key for green X’s,...”). In total, this work suggests that instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas."
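Both halves of the abstract's argument — why unbound features fail, and why full conjunctions are too expensive — fit in a toy calculation. This is my own illustration, not the paper's network model; the feature values and dimension counts are arbitrary.

```python
# The binding problem: unbound feature sets cannot tell
# "red triangle + blue square" apart from "red square + blue triangle".
scene_a = [("red", "triangle"), ("blue", "square")]
scene_b = [("red", "square"), ("blue", "triangle")]

def features(scene):
    """Bag of individual features, with no record of which object had which."""
    return {f for obj in scene for f in obj}

def conjunctions(scene):
    """One 'unit' per color-shape conjunction actually present."""
    return set(scene)

assert features(scene_a) == features(scene_b)          # features alone confuse the scenes
assert conjunctions(scene_a) != conjunctions(scene_b)  # conjunctive units separate them

# The cost side: a unit for every full conjunction explodes combinatorially,
# while low-order (pairwise) conjunctions grow much more slowly.
n_values, n_dims = 10, 4
full_units = n_values ** n_dims                              # 10 ** 4 = 10,000
pairwise_units = (n_dims * (n_dims - 1) // 2) * n_values**2  # 6 * 100 = 600
print(full_units, pairwise_units)  # → 10000 600
```

The paper's claim, as I read it, is that cortex gets away with something like the cheap pairwise scheme by using coarse-coded distributed representations, reserving the expensive higher-order conjunctions for the hippocampus.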

Monday, November 21, 2005

Ontologies and Semantics for Seamless Connectivity 

Ontologies and Semantics for Seamless Connectivity by Michael Uschold and Michael Gruninger, 2004.
"...2.3 Ontologies vs. Database Schema
There are many interesting relationships between database schema and formal ontologies. We will consider the following issues: language expressivity, systems that implement the languages and usage scenarios. There is much overlap in expressivity, including: objects, properties, aggregation, generalization, set-valued properties, and constraints. For example, entities in an ER model correspond to concepts or classes in ontologies, and attributes and relations in an ER model correspond to relations or properties in most ontology languages. For both, there is a vocabulary of terms with natural language definitions. Such definitions are in separate data dictionaries for DB schema, and are inline comments in ontologies. Arguably, there is little or no obvious essential difference between a language used for building DB schema and one for building ontologies. They are similar beasts. There are many specific differences in expressivity, which vary in importance. Many of the differences are attributable to the historically different ways that DB schema and ontologies have been used...."

Order from Chaos 

Order from Chaos - Natalya Noy on the issues of ontology reuse: "...Both of these trends are reflected in the vision of the Semantic Web, a form of Web content that will be processed by machines with ontologies as its backbone. Tim Berners-Lee, James Hendler, and Ora Lassila described the “grand vision” for the Semantic Web in a Scientific American article in 2001: Ordinary Web users instruct their personal agents to talk to one another, as well as to a number of other integrated online agents—for example, to find doctors that are covered by their insurance; to schedule their doctor appointments to satisfy both constraints from the doctor’s office and their own personal calendars; to request prescription refills, ensuring no harmful drug interactions; and so on. For this scenario to be possible, the agents need to share not only the terms—such as appointment, prescription, time of the day, and insurance—but also the meaning of these terms. For example, they need to understand that the time constraints are all in the same time zone (or to translate between time zones), to know that the term plans accepted in the knowledge base of one doctor’s agent means the same as health insurance for the patient’s agent (and not insurance, which refers to car insurance), and to realize it is related to the term do not accept for another doctor, which contains a list of excluded plans..."
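Noy's insurance example boils down to a term alignment: two agents interoperate only where someone has asserted how their local terms map onto shared concepts. Here is a minimal sketch of that idea, with all agent and concept names invented for illustration.

```python
# Hypothetical alignment table: (agent, local term) -> shared concept.
alignment = {
    ("doctor_agent",  "plans accepted"):   "HealthInsurancePlan",
    ("patient_agent", "health insurance"): "HealthInsurancePlan",
    ("patient_agent", "insurance"):        "CarInsurancePolicy",
}

def same_meaning(agent_a, term_a, agent_b, term_b):
    """Two local terms agree only when both map to the same shared concept;
    an unmapped term can never be shown to agree with anything."""
    concept_a = alignment.get((agent_a, term_a))
    concept_b = alignment.get((agent_b, term_b))
    return concept_a is not None and concept_a == concept_b

# 'plans accepted' and 'health insurance' align; 'insurance' does not.
print(same_meaning("doctor_agent", "plans accepted", "patient_agent", "health insurance"))  # → True
print(same_meaning("doctor_agent", "plans accepted", "patient_agent", "insurance"))         # → False
```

The fragility Noy warns about lives entirely in that table: whoever fills it in is deciding "original intended use", which is exactly the question the commentary below raises.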

"...Reusing ontologies is hard, just as reusing software code is. The Semantic Web makes it likely that people will reuse (portions of) ontologies incorrectly or inconsistently. Semantic interoperability, however, will be facilitated only to the extent that people reference and reuse public ontologies in ways that are consistent with their original intended use..."

And how is "their original intended use" established?

"...To reuse an ontology, one needs to find something to reuse. Users must be able to search through available ontologies to determine which ones, if any, are suitable for their particular tasks..."

And also determine which ones are known and used by the unknown agent(s) you may end up interacting with.

This page is powered by Blogger.