Thursday, November 17, 2005
As Walter Perry points out regularly on xml-dev, the real value of XML is that it reduces the extent to which I force any one processing
model onto others. This enables re-use and innovation in a way that, say, application sharing does not.
The price we pay for this freedom is that designers of XML languages need to find ways to communicate "processing expectations" or
"processing models" separately from the data.
It is still the case today that the true meaning of a chunk of markup depends on what some application actually *does* with it. It is not in the data itself. For example, I can create RTF, XML, or CSV files that are completely valid per the markup requirements but "invalid"
because they fail to meet particular processing models in RTF/XML/CSV-aware applications.
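A minimal sketch makes this concrete (the document and the "amount must be numeric" rule are invented for illustration): the XML below parses without complaint, yet a consumer whose processing model expects a numeric amount rejects it.

```python
# Hypothetical sketch: well-formed XML that one consumer still rejects,
# because its processing model expects <amount> to hold a number --
# an expectation the markup itself never states.
import xml.etree.ElementTree as ET

doc = "<order><amount>ten</amount></order>"  # perfectly well-formed
root = ET.fromstring(doc)                    # the parser raises no error

def app_accepts(order):
    """This application's processing model: <amount> must be numeric."""
    try:
        float(order.findtext("amount"))
        return True
    except (TypeError, ValueError):
        return False

print(app_accepts(root))  # False: valid markup, "invalid" for this consumer
```

A second consumer with a different processing model (say, one that only extracts element names) would happily accept the same document, which is exactly the point: validity relative to a processing model is not in the data.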
This is one reason why HTML as a Web concept (forget about markup for the moment) and XML as a Web concept are so different. With HTML the processing model was a happy family of 1, namely, "let's get this here content rendered nicely onto the screen". With XML the processing
model is an extended family of size...infinity. Who knows what the next guy is going to do with the markup? Who knows what the next processing model will be? Who knows whether or not my segmentation of reality into tags/attributes/content will meet the requirements of the next guy?
Although some may regard mediation as simply the most recent addition to the SOA and Enterprise Service Bus jargon, its emergence is in fact hugely significant: it is now apparent that mediation is the fundamental issue that any viable ESB product or solution must address.
What exactly IS mediation? To make the concept more concrete, examples of mediation steps associated with a simple foreign exchange contract in financial services might be:
* Validating the message against the company standard
* Mapping of the customer name into the correct customer id format (by looking up a database) required by the FX order system
* Transforming the incoming message order format into the format expected by an FX order system – probably filtering out data not required by this system but required by audit or risk systems.
* Ensuring the sender of the order is allowed to order that value of transaction.
* Extracting the value and originator of the order and updating the financial risk engine.
* Enforcing management controls and service level agreements that might restrict particular quantities and currency being ordered."
"Why do we need mediation?
Mediation is required because differences in data models, service definitions and the granularity of software services make communication between applications much more complex than might be imagined at first glance. The oft-proposed ‘solution’ to these issues - the act of creating a ‘standards-based’ service - only goes part of the way towards delivering truly flexible, loosely-coupled integration architectures (which in turn deliver business agility – let’s keep this front-of-mind, it is the whole point after all).
This may seem to be heresy to some proponents of SOA, who believe that the perfect service definition should never need mediation, or alternatively that the solution to any mismatch is to create a new service definition. Based on that belief, some propose that service orchestration (usually based on the BPEL standard) alongside the ‘correct’ service definition on their own will solve any integration requirement.
To be blunt: this is optimistic and naïve nonsense. ...."
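The mediation steps quoted above can be sketched as a pipeline of small functions. Every name, field, lookup table and limit below is hypothetical, standing in for real systems such as a customer database or an SLA engine:

```python
# Hypothetical sketch of a mediation pipeline for an FX order message.
# All names, fields and limits are illustrative, not a real API.

CUSTOMER_IDS = {"Acme Corp": "CUST-0042"}   # stand-in for a DB lookup
ORDER_LIMIT = 1_000_000                     # stand-in for an SLA rule

def validate(msg):
    """Validate the message against the (assumed) company standard."""
    if "customer" not in msg or "amount" not in msg:
        raise ValueError("message fails company standard")
    return msg

def map_customer(msg):
    """Map the customer name to the id format the FX order system expects."""
    return {**msg, "customer_id": CUSTOMER_IDS[msg.pop("customer")]}

def transform(msg):
    """Keep only the fields the FX order system needs (filter out the rest)."""
    return {k: msg[k] for k in ("customer_id", "amount", "currency")}

def enforce_limits(msg):
    """Enforce management controls / service-level agreements."""
    if msg["amount"] > ORDER_LIMIT:
        raise PermissionError("order exceeds permitted value")
    return msg

def mediate(msg):
    for step in (validate, map_customer, transform, enforce_limits):
        msg = step(msg)
    return msg

order = {"customer": "Acme Corp", "amount": 250_000,
         "currency": "EUR", "internal_audit_ref": "A-17"}
print(mediate(order))  # {'customer_id': 'CUST-0042', 'amount': 250000, 'currency': 'EUR'}
```

Each step is independent of the others, which is the design point of mediation: no single application's processing model is forced on the rest of the chain.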
Tuesday, November 15, 2005
From: Pascal Hitzler
Sent: Tuesday, November 15, 2005 9:36 AM
Subject: CfP: WWW06 workshop "Reasoning on the Web" RoW06
RoW06 - Reasoning on the Web
Workshop at WWW2006, Edinburgh, UK, May 2006
First Call for Papers
The advent of the Semantic Web marks a turning point in the development
of the World Wide Web. As Web content is being annotated using
ontologies, a huge amount of knowledge will become accessible for
intelligent systems and agents on and off the Web. At the core of these
developments are reasoning technologies for ontologies, inspired by
research and technology stemming from automated deduction, artificial
intelligence, and mathematical logic. While RDF and OWL have been
established as standard ontology languages by the W3C, and W3C
standardization efforts for a Semantic Web Rules Language are under way,
both theory and practice of reasoning on the web are in rapid
development. The quest for suitable ontology language paradigms,
reasoning systems, and the realization of application scenarios is
ongoing and being pursued with frenzy.
With respect to the realization of practical reasoning support on the
web, research is currently faced with serious challenges which need to be
mastered, including the following.
* Providing scalable reasoning support in the face of the amount of
data on the Web.
* Establishing reasoning technology which can deal with the
heterogeneous nature of real data on the Web.
* Realizing reasoning in a modular and distributed way suitable for the Web.
* Providing reasoning systems which can be utilized by non-experts.
* Developing convincing use cases which show the added value of
Semantic Web technology to a larger audience.
Aim and Scope
This workshop aims to bring together researchers and practitioners
concerned with reasoning technology for the Semantic Web, in order to
stimulate the exchange of ideas and results on Web reasoning.
Topics of interest include, but are not limited to:
* Scalability vs. expressivity of reasoning on the web
* Reasoning with heterogeneous ontologies
* Distributed reasoning on the web
* Rule-based languages for the web
* Ontology languages and their relationships
* Reasoning systems for the Semantic Web
* Web applications of reasoning technology
We invite the submission of original papers that have not been submitted
for review or published elsewhere. Submitted papers must be written in
English and should not exceed 8 pages in the case of research and
experience papers, and 2 pages in the case of position papers. All
submitted papers will be judged based on their quality, relevance,
originality, significance, and soundness. Papers must be submitted
directly by email in PDF format to RoW06@aifb.uni-karlsruhe.de. Authors
must adhere to the formatting instructions given for regular WWW2006 papers.
Selected papers must be presented at the workshop. At least one author
of each paper must register for the main conference before the early
registration deadline. The workshop will include extra time for audience
discussion of the presentations, allowing the group to have a better
understanding of the issues, challenges, and ideas being presented.
Accepted papers will be published in official workshop proceedings,
which will be distributed during the workshop. We intend to invite
authors of the best papers to submit revised and extended versions of
their papers for a special issue of a major Semantic Web journal.
We currently expect that we can move the paper submission deadline for
RoW06 until after the notification deadline of the main conference. If
you would like to submit a paper but the January 10th deadline is
too early for you, please contact us and let us know!
January 10th, 2006: paper submission
February 1st, 2006: notification
February 8th, 2006: camera-ready versions
May 22nd-26th: WWW 2006
May 22nd or 23rd, 2006: workshop
Ian Horrocks, Manchester
Pascal Hitzler, AIFB, Universität Karlsruhe, Germany
Holger Wache, Vrije Universiteit Amsterdam, The Netherlands
Thomas Eiter, Technische Universität Wien, Austria
Jose Alferes, UN Lisboa, Portugal
Jürgen Angele, ontoprise GmbH, Germany
Anupriya Ankolekar, AIFB Karlsruhe, Germany
Sean Bechhofer, Manchester, UK
Alex Borgida, Rutgers, USA
Jos de Bruijn, DERI Innsbruck, Austria
Francois Bry, LMU Munich, Germany
Francois Fages, INRIA Paris, France
Benjamin Grosof, MIT, USA
Pat Hayes, IHMC Pensacola, USA
Anthony Hunter, London, UK
Markus Krötzsch, AIFB Karlsruhe, Germany
Carsten Lutz, TU Dresden, Germany
Deborah McGuinness, Stanford, USA
Ralf Möller, Hamburg, Germany
Boris Motik, FZI Karlsruhe, Germany
Daniel Olmedilla, Hannover, Germany
Jeff Pan, Aberdeen, UK
Bijan Parsia, Mindlab Maryland, USA
Peter Patel-Schneider, Bell Laboratories, USA
Axel Polleres, Innsbruck, Austria
Riccardo Rosati, Rome, Italy
Sebastian Schaffert, Salzburg Research GmbH, Austria
Stefan Schloebach, Amsterdam, The Netherlands
Michael Sintek, DFKI Kaiserslautern, Germany
Giorgos Stamou, Athens, Greece
Umberto Straccia, CNR, Italy
Heiner Stuckenschmidt, Mannheim, Germany
Katia Sycara, CMU Pittsburgh, USA
Daniele Turi, Manchester, UK
Contact the organizers at RoW06@aifb.uni-karlsruhe.de for all
information on the workshop.
Dr. Pascal Hitzler
Institute AIFB, University of Karlsruhe, 76128 Karlsruhe
email: email@example.com fax: +49 721 608 6580
web: http://www.pascal-hitzler.de phone: +49 721 608 4751
Monday, November 14, 2005
Jeff Hawkins describes the concept of Invariant Representation in his book On Intelligence. He uses it in his theory of neocortical function that he titles the Memory Prediction Framework.
Here is a simple notion. Imagine a group of people, say all of your family members. Try to imagine every possible detail of each one. Now try to see all of them in a room or in a huddle. This is like the extension of a set. But now try to eliminate all the details you can while still holding the idea of that group together. What are the minimum conditions that all members satisfy? This is the intension of the set. As you imagine all the details of the group milling about, changing each second, it is apparent that the extension is in motion; in fact, it is constantly changing. The intension is different: once the conception of it is attained, it is still. It is, in fact, invariant. Such is an Invariant Representation. It is a model of something in the world containing only those attributes that stay the same.
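The extension/intension distinction can be made concrete in a few lines (the family data below is invented): the extension is the full set of members with all their shifting details, while the intension keeps only what every member shares.

```python
# Toy sketch of extension vs. intension (illustrative data only).
# The extension: every member, with all the ever-changing details.
family = [
    {"species": "human", "surname": "Smith", "age": 41, "mood": "tired"},
    {"species": "human", "surname": "Smith", "age": 12, "mood": "happy"},
    {"species": "human", "surname": "Smith", "age": 70, "mood": "calm"},
]

def intension(members):
    """Keep only the attribute/value pairs shared by every member."""
    shared = set(members[0].items())
    for m in members[1:]:
        shared &= set(m.items())
    return dict(shared)

# Ages and moods change each second; what survives is the invariant part.
print(intension(family))  # species and surname only
```

However the individual details churn, the intersection stays fixed, which is the sense in which the representation is invariant.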
Sunday, November 13, 2005
* The graph associated with each mapping continuum is
acyclic and has one or more anchors, its semantic
models (ontologies in Semantic Web jargon).
* The semantic models constitute formally represented
encyclopedias on a given subject matter; they are
application-independent, specified in an expressive
modeling language (e.g., OWL.)
* Every model has an associated (semantic) mapping to
one or more other models."
* A model is intended to answer a specific set of
questions about its subject matter (a model has a
* For example, a model airplane can answer questions
about the dimensions and aerodynamics of an aircraft;
but not questions about its engine power, physical
* For every model, we need a translation function
(mapping) that will translate a query about the subject
matter into one about the model (where applicable), and
vice versa for the result of the query."
"Types of Models
* I-models (intensional): Consist of a set of predicates
with associated axioms; database schemas (with
integrity constraints), but also logical theories, fit here.
* E-models (extensional): These have set-theoretic
constructions, and query answering based on set-theoretic
relationships; Tarskian and Kripke models, but
also databases, fit here.
* C-models (computational): These are characterized by
the fact that query answering is produced by running
programs, e.g., a simulation program."
I think HTM (Hierarchical Temporal Memory) models should be added to the above list.
At the beginning of our study we learnt that the Web is at least distributed, decentralized and an open world.
The Web is distributed. One of the driving factors in the proliferation of the Web is the freedom from a centralized authority.
However, since the Web is the product of many individuals, the lack of central control presents many challenges for reasoning with its information.
First, different communities will use different vocabularies, resulting in problems of synonymy (when two different words have the same meaning) and polysemy (when the same word is used with different meanings).
There is no reason to deny this description, at least as a starting point. Remember, the description of the weather system sounds very similar. But all this emphasis on the openness and decentralized distributedness of the Web describes not much more than the very surface structure of the Web. It emphasizes the use of the Web by its users, not the definition and structure, that is, the functioning of the Web. There are no surprises at all if we discover that the structure of the Web is strictly centralized, hierarchic, non-distributed and totally based on the principle of identity of all its basic concepts. The functioning of the Web is defined by its strict dependence on a "centralized authority".
If we ask about the conditions of the functioning of the Web, we quickly come up against its reality in the well-known arsenal of identity, trees, centrality and hierarchy.
Why? Because the definition of the Web is entirely based on its identification numbers. Without our URIs, DNS, etc., nothing at all works. And what else are our URIs than centralized, identified, hierarchically organized numbers administrated by a central authority?
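That hierarchical, identity-based structure is visible in any single URI. A short sketch with Python's standard `urlparse` (the URI itself is made up):

```python
# Illustration only (the URI is invented): even one URI is a bundle of
# centrally administered, hierarchically organized identifiers.
from urllib.parse import urlparse

parts = urlparse("http://www.example.org/ontologies/finance/fx#Order")

assert parts.scheme == "http"                  # scheme from a central registry
assert parts.netloc == "www.example.org"       # resolved via the DNS hierarchy
assert parts.path == "/ontologies/finance/fx"  # a tree of path segments
assert parts.fragment == "Order"               # anchor within the resource
```

Every component resolves through some registry or hierarchy: the scheme, the domain name, the path tree, the fragment within the document.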
Again, all this is governed by the principle of identity.
"We should stress that the resources in RDF must be identified by resource IDs, which are URIs with optional anchor ID." (Daconta, p. 89)
What is emerging behind the big hype is a new and still hidden demand for a more radical centralized control of the Web than its control by URIs: the control of the use, that is, of the content of the Web. Not on the ideological level (that is anyway done by governments), but structurally, as control over the possibilities of the use of all these different taxonomies, ontologies and logics. And all that in the name of diversity and decentralization.
All the fuss about the freedom of the (Semantic) Web boils down to at least two strictly centralized organizational and definitorial conditions: URI and GOL (Global Ontology Language)."
An ad-hoc cousin to a 'web proper name' for a concept. Only 36 results.