Saturday, November 26, 2005
Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by formal terms used in data that they might generate or consume. Folksonomies are an emergent phenomenon of the social web. They are created as people associate terms with content that they generate or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This piece is an attempt to shed some cool light on the subject, and to preview some new work that applies the two ideas together to enable an Internet ecology for folksonomies.
FW: Call for Participation: First International Workshop on Mediation in Semantic Web Services (MEDIATE 2005)
From: Martin Hepp (DERI)
Sent: Thursday, November 24, 2005 10:18 AM
Subject: Call for Participation: First International Workshop on
Mediation in Semantic Web Services (MEDIATE 2005)
(our apologies for cross-posting)
********************** Call for Participation ***********************
First International Workshop on Mediation in Semantic Web Services
in conjunction with the 3rd International Conference on Service Oriented
Computing (ICSOC 2005)
Amsterdam, The Netherlands, December 12, 2005, 9:45 a.m. - 5:00 p.m.
Description of the Workshop Topic:
The usage of computer systems is widely characterized by decentralized
design and autonomous evolution; that is, if we look at system components from a
global perspective, they are often developed and modified without alignment
in the design stage. Components also follow individual paths of evolution
during their life-cycles. This is a major cause of interoperability problems,
and it contributes to the brittleness of systems integration efforts. If we
want to increase the degree of automation in general, it seems important to
provide software components that can help overcome interoperability conflicts
as they occur, and to do so in an automated fashion. This functionality is
known as mediation, and the respective components are called mediators.
Mediation can take place on a multiplicity of levels, e.g. on data,
ontologies, processes, protocols, or goals. Whether the promise of Semantic
Web services can become reality will depend to a great extent on the
availability of sophisticated, industry-strength mediation support.
In this workshop we want to advance the theoretical and practical knowledge
about the design and implementation of mediators in the Semantic Web and
Semantic Web services.
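Data-level mediation, the simplest of the levels the CFP lists, can be pictured as a component that translates records between two services' formats so that neither side has to change its own code. A minimal sketch in Python; the two schemas, field names, and exchange rate below are invented purely for illustration:

```python
# Minimal data-mediation sketch: translate records from hypothetical
# service A's schema into hypothetical service B's schema.
# All field names and the conversion rate are invented for illustration.

def mediate(record: dict) -> dict:
    """Map a record from service A's schema to service B's schema."""
    return {
        "fullName": f"{record['first_name']} {record['last_name']}",
        "priceEUR": round(record["price_usd"] * 0.85, 2),  # assumed fixed rate
    }

a_record = {"first_name": "Ada", "last_name": "Lovelace", "price_usd": 10.0}
b_record = mediate(a_record)
print(b_record)  # {'fullName': 'Ada Lovelace', 'priceEUR': 8.5}
```

The hard research questions the workshop addresses begin where such mappings cannot be written by hand in advance but must be discovered or negotiated automatically.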
Register now for the workshop at
More information on the venue, registration, hotels, and related events is
available at http://www.icsoc.org/ and on the workshop website at
Agenda: Monday, December 12, 2005
09:45 - 10:00 Welcome
Morning Session: Service Mediation and Discovery
10:00 - 10:30 Liliana Cabral and John Domingue:
Mediation of Semantic Web Services in IRS-III
10:30 - 11:00 Gösta Grahne and Victoria Kiricenko:
Process Mediation in an Extended Roman Model
11:00 - 11:30 Emanuele Della Valle and Dario Cerizza:
The mediators centric approach to Automatic Web Service Discovery of COCOON
11:30 - 11:45 Coffee Break
11:45 - 12:15 Michael Stollberg, Emilia Cimpian, and Dieter Fensel:
Mediating Capabilities with Delta-Relations
12:15 - 12:45 Colombe Hérault, Gaël Thomas, and Philippe Lalanda:
Mediation and Enterprise Service Bus: A position paper
12:45 - 14:00 Lunch Break
Afternoon Session: Data and Ontology Mediation
14:00 - 14:30 Jérôme Euzenat:
Alignment Infrastructure for Ontology Mediation and other Applications
14:30 - 15:00 Adrian Mocan and Emilia Cimpian:
Mappings Creation Using a View Based Approach
15:00 - 15:30 Philipp Kunfermann and Christian Drumm:
Lifting XML Schemas to Ontologies - The Concept Finder Algorithm
15:30 - 16:00 Coffee Break
16:00 - 17:00 Plenary Discussion
Michael Genesereth, Stanford University
Frank van Harmelen, Vrije Universiteit Amsterdam
Martin Hepp, Digital Enterprise Research Institute (DERI), Innsbruck
Axel Polleres, Digital Enterprise Research Institute (DERI), Innsbruck
Program Committee:
Diego Calvanese (Free University of Bozen/Bolzano)
Emilia Cimpian (DERI)
Jos De Bruijn (DERI)
John Domingue (Open University)
Jerome Euzenat (INRIA)
Dieter Fensel (DERI)
Fausto Giunchiglia (University of Trento)
Rick Hull (Bell Labs)
Michael Kifer (University at Stony Brook)
Deborah McGuinness (Stanford University)
Enrico Motta (The Open University)
Marco Pistore (University of Trento)
Pavel Shvaiko (University of Trento)
Jianwen Su (UC Santa Barbara)
York Sure (AIFB)
Paolo Traverso (ITC/IRST)
Michael F. Uschold (Boeing)
Ludger van Elst (DFKI)
Holger Wache (Vrije Universiteit Amsterdam)
Gio Wiederhold (Stanford University)
Digital Enterprise Research Institute (DERI) Innsbruck University of
Innsbruck, Technikerstrasse 21a, A-6020 Innsbruck, Austria
Phone: +43 512 507 6465, Fax: +43 512 507 9872
Friday, November 25, 2005
"...Hmm. There are some interesting implications in all of this.
One is that the Semantic Web is in for a lot of heartbreak. It has been trying for five years to convince the world to use it. It actually has a point. XML is supposed to be self-describing so that loosely coupled works. If you require a shared secret on both sides, then I’d argue the system isn’t loosely coupled, even if the only shared secret is a schema. What’s more, XML itself has three serious weaknesses in this regard:..."
"...What does this Mean for Databases?
All of this has profound implications for databases. Today databases violate essentially every lesson we have learned from the Web. ..."
"... Distributed computing has been learning and evolving in response to the lessons of the Web. Formats and protocols are arising to overcome the limitations of XML—even as XML in turn arose to overcome the limitations of CORBA and DCOM. It is time that the database vendors stepped up to the plate and started to support a native RSS 2.0/Atom protocol and wire format; a simple way to ask very general queries; a way to model data that encompasses trees and arbitrary graphs in ways that humans think about them; far more fluid schemas that don’t require complex joins to model variations on a theme about anything from products to people to places; and built-in linear scaling so that the database salespeople can tell their customers, in good conscience, for this class of queries you can scale arbitrarily with regard to throughput and extremely well even with regard to latency, as long as you limit yourself to the following types of queries. Then we will know that the database vendors have joined the 21st century...."
Because Ang's criticism is so eloquently expressed, it is quoted in full here:
I would suggest ... that it is the failure of communication that we should emphasize if we are to understand contemporary (postmodern) culture. That is to say, what needs to be stressed is the fundamental uncertainty that necessarily goes with the process of constructing a meaningful order, the fact that communicative practices do not have to arrive at common meanings at all. This is to take seriously the radical implications of semiotics as a theoretical starting point: if meaning is never given and natural but always constructed and arbitrary, then it doesn't make sense to prioritize meaningfulness over meaninglessness. Or, to put it in the terminology of communication theory: a radically semiotic [see the section on semiotics] perspective ultimately subverts the concern with (successful) communication by foregrounding the idea of 'no necessary correspondence' between the Sender's and the Receiver's meanings. That is to say, not success, but failure to communicate should be considered 'normal' in a cultural universe where commonality of meaning cannot be taken for granted.
If meaning is not an inherent property of the message, then the Sender is no longer the sole creator of meaning. If the Sender's intended message doesn't 'get across', this is not a 'failure in communications' resulting from unfortunate 'noise' or the Receiver's misinterpretation or misunderstanding, but because the Receiver's active participation in the construction of meaning doesn't take place in the same ritual order as the Sender's. And even when there is some correspondence in meanings constructed on both sides, such correspondence is not natural but is itself constructed, the product of a particular articulation, through the imposition of limits and constraints to the openness of semiosis in the form of 'preferred readings', between the moments of 'encoding' and 'decoding' (see Hall 1980a). That is to say, it is precisely the existence, if any, of correspondence and commonality of meaning, not its absence, that needs to be accounted for. Jean Baudrillard has stated the import of this inversion quite provocatively:
[M]eaning [...] is only an ambiguous and inconsequential accident, an effect due to ideal convergence of a perspective space at any given moment (History, Power etc.) and which, moreover, has only ever really concerned a tiny fraction and superficial layer of our 'societies'.
'...if you see what I mean.'
'...if you take my meaning.'
'What's that supposed to mean?'
'I always say what I mean.'
''Cochon' means 'pig'.'
'I didn't really mean it.'
'I meant to write.'
'A green light means 'go''
'What is the meaning of life?'
'Health means everything.'
'His look was full of meaning.'
'What's the dictionary meaning of 'meaning'?'
That's fairly typical of the sort of things we might say. You can see from those that we don't even use the word 'meaning' with the same meaning every time. Some of the examples are taken from The Meaning of Meaning by Ogden and Richards (1923), in which they identified 16 different meanings of the word!
The last example, with its reference to 'dictionary meaning' suggests that there is some kind of 'correct' meaning of words. If two people disagree about what a word means, they might well settle their argument by referring to the dictionary.
However, when we stop for a while to consider just what we mean by meaning, things get pretty complicated pretty fast. A number of thought-provoking statements about the nature of meaning were made by the communication theorist David Berlo (1960):
- Meanings are in people
- Communication does not consist of the transmission of meanings, but of the transmission of messages
- Meanings are not in the message; they are in the message-users
- Words do not mean at all; only people mean
- People can have similar meanings only to the extent that they have had, or can anticipate having, similar experiences
- Meanings are never fixed; as experience changes, so meanings change
- No two people can have exactly the same meaning for anything
The 'Saying & Meaning and the Semantics/Pragmatics Distinction' research project aims at a faithful reconstruction of fundamental concepts in Paul Grice's theories of meaning and conversation, with a view to shedding new light on the controversial issue of where to draw the boundary between semantics and pragmatics.
The project covers four central research topics and is accompanied by a number of events.
Our site's extensive link collection lists resources on Grice and homepages of researchers working in semantics and pragmatics, along with a range of bibliographies and other useful materials; as a special feature, it gathers a selection of brand new papers.
Also planned is a bibliographical web-database containing the masses of literature on the semantics/pragmatics distinction and a complete bibliography of Grice's works – a valuable tool for ongoing research in relevant areas.
Meanings can be linguistic and non-linguistic. Linguistic meaning is any meaning that words and other items of language have. Non-linguistic meaning is whatever meaning can be conveyed without the use of language.
Meanings can be conveyed through various media, or vehicles of communication. The kind of medium used determines whether a meaning is linguistic or non-linguistic. The newspaper and the vocal cords are media for linguistic meaning; by contrast, body language is a medium for the display of non-linguistic meanings, such as the 'thumbs up' gesture in Western cultures.
Meaning as a whole is studied in philosophy and semiotics, especially in philosophy of language, philosophy of mind, logic, and communication theory. Fields like sociolinguistics tend to be more interested in non-linguistic meanings. Linguistics lends itself to the study of linguistic meaning in the fields of semantics (which studies conventional meanings) and pragmatics (which studies how language is used by individuals). Literary theory, critical theory, and some branches of psychoanalysis are also involved in the discussion of meaning. This division of labor, however, is not absolute.
"The idea, in a nutshell, is that the truly scalable databases of the future will be more like the Web than like Oracle (Profile, Products, Articles), DB2, or SQL Server.
In last month’s ACM Queue, Bosworth elaborated on some of the lessons the Web has taught us about simplicity, human accessibility, “sloppily extensible” formats, the social dimension of software, and loose coupling. But he also introduced a key technical point about RSS and Atom, the feed formats powering the blog revolution. These formats represent sets of items. Typically, the items contain Weblog postings, but they can also contain XML fragments that represent anything under the sun. What’s more, items can link to other items or collections. Bosworth argues that this architecture lends itself to aggressive scale-out, decentralized caching, and grassroots schema evolution, all of which tend to elude conventional databases.
There’s no free lunch, of course. When you query this RSS/Atom data web, you should expect more structural precision than full-text search affords, but you shouldn’t plan on fast execution of complex nested queries.
We’ve yet to colonize the middle ground between these extremes, and I don’t think anyone really knows what the sweet spot will turn out to be. I’ve gotten plenty of mileage out of XPath and XQuery, and my dream is that these XML-oriented query disciplines can be federated at large scale. But first things first: We need to create the data web. And recently, two leading figures have dropped major hints about how that’s going to happen."
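Udell's "middle ground" is not far off for small cases: Python's standard library already supports a useful subset of XPath over feed-like XML. A small sketch, with the feed data invented for illustration:

```python
# Sketch: querying feed-like XML with the XPath subset supported by
# Python's standard library. The data is invented for illustration.
import xml.etree.ElementTree as ET

data = """<feed>
  <entry><title>first</title><category term="db"/></entry>
  <entry><title>second</title><category term="web"/></entry>
  <entry><title>third</title><category term="db"/></entry>
</feed>"""

root = ET.fromstring(data)

# An XPath attribute predicate: all categories with term="db".
db_tags = root.findall(".//category[@term='db']")
print(len(db_tags))  # 2

# Combining the predicate with ordinary Python to recover the titles --
# a structural query that plain full-text search could not express.
db_titles = [e.findtext("title") for e in root.findall("entry")
             if e.find("category[@term='db']") is not None]
print(db_titles)  # ['first', 'third']
```

Federating such queries across many independently hosted feeds, rather than one in-memory document, is the part that remains to be colonized.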
This paper presents three different ways of addressing the binding problem in different brain areas: generic neocortex, hippocampus, and prefrontal cortex. None of these approaches involve the popular mechanism of temporal synchrony. The first two involve conjunctive representations that bind by ensuring that different neural units are activated for different combinations of input features. Specifically, we think the cortex constructs low-order conjunctions using coarse-coded distributed representations to avoid the combinatorial explosion usually associated with conjunctive solutions to the binding problem. We present a model that learns these representations in a challenging relational binding task, and furthermore is capable of considerable generalization to novel inputs. Next, we review the idea that the hippocampus performs conjunctive binding in long-term memory through the use of higher-order conjunctions that are much more specific to particular events than those in the cortex. Finally, we present a model of a very different form of binding that involves the phonological loop, a mechanism for maintaining arbitrary sequences of phonemes in active memory. This phonological system can be used to bind by continuously repeating the to-be-bound information (e.g., "press left key for green X's,..."). In total, this work suggests that instead of one simple and generic solution to the binding problem, the brain has developed a number of specialized mechanisms that build on the strengths of existing neural hardware in different brain areas."
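The abstract's first idea, conjunctive units that fire only for particular combinations of input features, can be made concrete with a toy sketch. Everything below (the features, the unit layout) is a didactic invention, not a model from the paper:

```python
# Toy sketch of conjunctive binding: each "unit" responds to one
# specific (color, shape) combination, so the pattern of active units
# unambiguously encodes which color is bound to which shape.
# All details here are invented for illustration.
from itertools import product

colors = ["red", "green"]
shapes = ["circle", "square"]

# One conjunctive unit per feature combination: 2 x 2 = 4 units.
units = list(product(colors, shapes))

def encode(scene):
    """Activate the units matching the (color, shape) objects in a scene."""
    return [1 if u in scene else 0 for u in units]

# "red circle + green square" vs. "green circle + red square":
# identical features, different bindings, distinct activation patterns.
a = encode({("red", "circle"), ("green", "square")})
b = encode({("green", "circle"), ("red", "square")})
print(a, b)
assert a != b

# The cost: the unit count grows multiplicatively with the feature sets,
# which is the combinatorial explosion coarse coding is meant to avoid.
print(len(units))  # 4
```

The paper's coarse-coded alternative keeps the binding-disambiguation property while using far fewer, less specific units than this exhaustive enumeration.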
Monday, November 21, 2005
"...2.3 Ontologies vs. Database Schema
There are many interesting relationships between database schema and formal ontologies. We will consider the following issues: language expressivity, systems that implement the languages and usage scenarios. There is much overlap in expressivity, including: objects, properties, aggregation, generalization, set-valued properties, and constraints. For example, entities in an ER model correspond to concepts or classes in ontologies, and attributes and relations in an ER model correspond to relations or properties in most ontology languages. For both, there is a vocabulary of terms with natural language definitions. Such definitions are in separate data dictionaries for DB schema, and are inline comments in ontologies. Arguably, there is little or no obvious essential difference between a language used for building DB schema and one for building ontologies. They are similar beasts. There are many specific differences in expressivity, which vary in importance. Many of the differences are attributable to the historically different ways that DB schema and ontologies have been used...."
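The correspondence Uschold describes (ER entities to classes, ER attributes and relations to properties) is mechanical enough to sketch as a translation into RDF-style triples. The toy schema and naming conventions below are invented for illustration:

```python
# Sketch: translating a toy ER-style schema into RDF-ish triples,
# following the entity -> class, attribute/relation -> property
# correspondence. Schema and vocabulary are invented for illustration.

er_schema = {
    "entities": ["Employee", "Department"],
    "attributes": {"Employee": ["name", "salary"], "Department": ["name"]},
    "relations": [("Employee", "worksIn", "Department")],
}

def er_to_triples(schema):
    triples = []
    for entity in schema["entities"]:                    # entity -> class
        triples.append((entity, "rdf:type", "owl:Class"))
    for entity, attrs in schema["attributes"].items():   # attribute -> property
        for attr in attrs:
            triples.append((f"{entity}.{attr}", "rdfs:domain", entity))
    for dom, rel, rng in schema["relations"]:            # relation -> property
        triples.append((rel, "rdfs:domain", dom))
        triples.append((rel, "rdfs:range", rng))
    return triples

for t in er_to_triples(er_schema):
    print(t)
```

What the translation does not carry over is exactly what Uschold attributes to differing usage: the natural-language definitions, the intended interpretation of each term, and the richer constraints an ontology language would add on top of these bare triples.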
"...Reusing ontologies is hard, just as reusing software code is. The Semantic Web makes it likely that people will reuse (portions of) ontologies incorrectly or inconsistently. Semantic interoperability, however, will be facilitated only to the extent that people reference and reuse public ontologies in ways that are consistent with their original intended use..."
And how is "their original intended use" established?
"...To reuse an ontology, one needs to find something to reuse. Users must be able to search through available ontologies to determine which ones, if any, are suitable for their particular tasks..."
And also determine which ones are known and used by the unknown agent(s) you may end up interacting with.