Friday, January 07, 2005

Hurdles in the Business Case for the Semantic Web 

Hurdles in the Business Case for the Semantic Web: "Abstract
The nuclear winter that filled the vacuum created by the Internet implosion was characterized by highly conservative investments in new technologies. This was particularly true for Internet- and Web-oriented technologies since, after all, being a believer just wasn’t as popular as it used to be. However, life, business, and science go on, and the Web is no exception.
This thesis will examine hurdles in the business case for the Semantic Web. In one sense, the Semantic Web is an extension or enhancement of the existing World Wide Web (Web). As we know it today, the Web is a rich medium that allows humans to express themselves, learn, interact, and reach an audience that was a pipe dream just a decade ago.
At the same time, the Web is of limited utility to computers (machines). For example, a human being could easily recognize a postal address or the specifications of an order for steel; a machine could not. To a machine, these data would simply be elements to be rendered and displayed on a monitor, with no intrinsic or cumulative meaning. In this sense, one of the goals set for the Semantic Web is to create meaning and utility for machines that allows for interpretation and action with far less human intervention.
Issues related to the challenges, practicalities, theories and opportunities of the Semantic Web will be discussed. In the process, hopefully, this thesis will identify some of the stepping stones in building a business case for this evolution. Notably, today’s comments regarding the Semantic Web sound very similar to what was once said about the practicalities of eBusiness and the likelihood of its adoption."
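
As a concrete aside on the postal-address point above, here is a minimal sketch in Python of the gap the Semantic Web aims to close: the same address as render-only markup and as explicitly typed statements. The vocabulary names and URIs are invented for illustration, not taken from the thesis.

    # To a machine, this fragment is just text to be laid out on a monitor.
    html_fragment = "<p>IBM, Hursley Park, Winchester, SO21 2JN, UK</p>"

    # The same information as subject-predicate-object statements gives a
    # program something it can act on without human intervention.
    triples = [
        ("urn:example:ibm-hursley", "addr:locality",   "Winchester"),
        ("urn:example:ibm-hursley", "addr:postalCode", "SO21 2JN"),
        ("urn:example:ibm-hursley", "addr:country",    "UK"),
    ]

    def postal_code(subject, graph):
        """Select a value by its declared meaning, not by how it renders."""
        return [o for s, p, o in graph if s == subject and p == "addr:postalCode"]

    print(postal_code("urn:example:ibm-hursley", triples))  # ['SO21 2JN']

The point is not the toy data format but the explicit, shared meaning attached to each value; agreeing and exchanging such meaning at Web scale is exactly what the abstract's business case turns on.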

FW: [SE] Ontology Driven Architectures Second Draft etc 

Ontology Driven Architectures

In all well-established engineering disciplines, modelling a common understanding of domains through a variety of formal and semi-formal notations has proven essential to advancing the practice. This has led a large section of the Software Engineering profession to evolve around the concept of constructing models of one form or another as a means to develop, communicate and verify abstract designs in accordance with original requirements, so spawning the fields of Computer-Aided Software Engineering (CASE) and, more recently, Model Driven Architecture (MDA). Here models are not only used for design purposes; associated tools and techniques can further be utilised to generate executable artefacts for use later in the software lifecycle. Nevertheless, a frustrating paradox has always been present in tooling for Software Engineering, arising from the range of modelling techniques available and the breadth of systems requiring design: engineering nontrivial systems demands rigour and unambiguous statement of concept, yet the more formal the modelling approach chosen, the more abstract the tools needed, often making methods difficult to implement, limiting the freedom of expression available to the engineer and proving a barrier to communication amongst less experienced practitioners. For these reasons, less formal approaches have seen mainstream commercial acceptance in recent years, with the Unified Modelling Language (UML) currently the most favoured amongst professionals.

Even so, approaches like the UML are by no means perfect. Although they are capable of capturing highly complex conceptualisations, current versions are far from semantically rich. Furthermore, they can be notoriously ambiguous: a standard isolated schematic from such a language, no matter how well drawn, can still be open to gross misinterpretation by those not overly familiar with its source problem space. It is true that supporting annotation and documentation can help alleviate such problems, but traditionally this has involved a separate, literal, verbose and long-winded activity, often disconnected from the production of the schematic itself.

What is needed instead is a way to incorporate unambiguous, rich semantics into the various semi-formal notations underlying methods like the UML. In so doing, the ontologies inherent to a system’s problem space (real world or not) and its various abstract solution spaces could be encapsulated via the very same representations used to engineer its design. This would not only provide a basis for improved communication, conformance verification and automated generation of run-time artefacts, but would also present additional mechanisms for cross-checking the consistency of deliverables throughout the design and build process.
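
To make the idea concrete, here is a minimal sketch, in Python rather than any modelling notation, of design elements carrying references into a shared ontology, with a trivial conformance check over them. All the element names and ontology URIs are hypothetical.

    # Terms from a (hypothetical) shared domain ontology.
    ontology_terms = {
        "http://example.org/ont/steel#Order",
        "http://example.org/ont/steel#Customer",
    }

    # A design model in which each element declares the ontology term
    # that gives it its meaning.
    design_model = [
        {"element": "OrderEntity",    "ontology_ref": "http://example.org/ont/steel#Order"},
        {"element": "CustomerEntity", "ontology_ref": "http://example.org/ont/steel#Customer"},
        {"element": "LegacyBlob",     "ontology_ref": None},  # semantics never declared
    ]

    def check_conformance(model, terms):
        """Report design elements whose ontology reference is missing or unknown."""
        return [e["element"] for e in model if e["ontology_ref"] not in terms]

    print(check_conformance(design_model, ontology_terms))  # ['LegacyBlob']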

In many respects an ontology can be considered simply a formal model in its own right. Hence, given the semantically rich, unambiguous qualities of information embodiment on the Semantic Web, and the universality of the Semantic Web’s XML heritage, there appears to be a compelling argument for combining the semi-formal, model-driven techniques of Software Engineering with approaches common to Information Engineering on the Semantic Web. This may involve implanting ontologies directly into systems’ design schematics themselves, referencing separate metadata artefacts from such descriptions, or a mixture of both. What is important is that mechanisms are made available to enable cross-referencing between design descriptions and related ontologies in a manner that can be easily engineered and maintained for the betterment of systems’ quality and cost. Moreover, such mechanisms should be capable of supporting both the interlinking of more broadly related ontologies into grander information corpora (thereby implying formal similarities and potential relationships between discrete systems through their design-description metadata) and the transformation of design-time ontology-to-design-artefact relationships into useful runtime bindings (thereby realising metadata use across a broader spectrum of the software lifecycle). This carries two obvious implications for Web-based systems employing such techniques: first, that the Web could be used as a framework for runtime component sharing between discrete and disparate systems; and second, that new forms of hybrid system could be created through the amalgamation of discrete and disparate functionality. This appears especially appealing given current advances in the areas of Web Services and Service-Oriented Architectures. If underlying metadata were also used as a basis for parameterised, dynamic system behaviour, there are further intriguing possibilities in the areas of Web Service choreography and autonomic systems.
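
A small sketch of the design-time-to-runtime transition described above, again with every name invented for illustration: a registry keyed by ontology URI records which artefact realises which concept, and components are then bound by concept rather than by hard-coded class name.

    # Registry mapping ontology concept URIs to implementing artefacts.
    registry = {}

    def realises(concept_uri):
        """Record, at design/deploy time, which artefact realises a concept."""
        def register(cls):
            registry[concept_uri] = cls
            return cls
        return register

    @realises("http://example.org/ont/services#QuoteService")
    class SteelQuoteService:
        def invoke(self, tonnage):
            return {"quote": 450 * tonnage}

    def resolve(concept_uri):
        """Bind, at runtime, to whichever component realises the concept."""
        return registry[concept_uri]()

    service = resolve("http://example.org/ont/services#QuoteService")
    print(service.invoke(tonnage=20))  # {'quote': 9000}

Under a Web Services reading, the registry could as easily resolve to a remote service endpoint as to a local class, which is where the choreography and autonomic possibilities come in.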

Composite Identification schemes on the Semantic Web

Identity is one of the most fundamental ideas in cognition. Without a notion of identity, it would become impossible to reuse information we have previously acquired: our experience would disintegrate into a sea of informational ‘moments’ with no global thread or coherence. In such a world there would be very little we could usefully deduce. Studies of subjects with specific defects in the faculty of identity have shown just how significant a disability this can be. As with human cognition, it is a common occurrence on the Semantic Web to need to equate things based upon partial, observable data. Whilst the architecture of the Web gives us a large, universal space of identifiers, we often need to deal with concepts for which no single globally agreed identifier is available. A notorious example, from the Friend of a Friend (FOAF) project, is that of a person. Most people would recoil in horror at the idea of a single number identifying them worldwide, quite apart from the practical issues involved in running such a worldwide system of identifiers.

Inverse Functional Properties (IFPs) have come to the aid of those trying to model concepts without unique identifiers. An IFP is defined by the OWL Reference as ‘[when] the object of a property statement uniquely determines the subject’; that is, an IFP describes a relation to a piece of information that can uniquely identify the subject. It is important to note that an IFP need not be functional from subject to object: in the FOAF ontology, a person can have many email addresses; the significant property is that each email address corresponds to at most one person. A problem seen recently with IFPs is that there is a relatively small set of binary properties which can uniquely identify a subject. Many useful inverse functional relationships relate a subject to a piece of complex information, and many other useful inverse functional properties only gain their uniqueness in highly specific contexts. The notion of Composite Inverse Functional Properties (CIFPs) aims to address these needs by allowing a composite reference to act as an inverse functional property.
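
As a rough illustration of how IFPs and CIFPs get used in practice, here is a minimal ‘smushing’ sketch in Python: nodes are merged when an inverse functional property (foaf:mbox) gives them the same value, or when a composite of properties does. The data, and the choice of name plus birth date as a CIFP, are invented for the example.

    from collections import defaultdict

    triples = [
        ("_:a", "foaf:mbox", "mailto:pt@example.org"),
        ("_:a", "foaf:name", "Phil T"),
        ("_:b", "foaf:mbox", "mailto:pt@example.org"),   # same mbox as _:a
        ("_:c", "foaf:name", "Joe G"),
        ("_:c", "ex:birthDate", "1980-01-01"),
        ("_:d", "foaf:name", "Joe G"),
        ("_:d", "ex:birthDate", "1980-01-01"),           # same (name, date) as _:c
    ]

    IFPS  = ["foaf:mbox"]
    CIFPS = [("foaf:name", "ex:birthDate")]  # jointly identifying, by assumption

    # Union-find over nodes, so that merges compose transitively.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    props = defaultdict(dict)
    for s, p, o in triples:
        props[s][p] = o

    # Group subjects by each IFP value and by each complete CIFP value tuple.
    keys = defaultdict(list)
    for s, pv in props.items():
        for p in IFPS:
            if p in pv:
                keys[(p, pv[p])].append(s)
        for combo in CIFPS:
            if all(p in pv for p in combo):
                keys[(combo, tuple(pv[p] for p in combo))].append(s)

    for nodes in keys.values():
        for n in nodes[1:]:
            union(nodes[0], n)

    print(find("_:a") == find("_:b"))  # True: merged via the IFP
    print(find("_:c") == find("_:d"))  # True: merged via the composite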

Self-Organising Applications using Semantic Web Technologies

DESCRIPTION STILL IN PROGRESS

Semantic Web Technologies in Highly Adapted/Adaptive (User) Interfaces and
Support Tools

DESCRIPTION STILL IN PROGRESS


Links and Further Reading

o Ontology Driven Software Development in the Context of the Semantic Web: An Example Scenario with Protégé/OWL. Holger Knublauch, Stanford Medical Informatics, Stanford University, CA. http://smi-web.stanford.edu/people/holger/publications/MDSW2004.pdf
o SOA, Glial and the Autonomic Semantic Web Machine – Tools for Handling Complexity? Philip Tetlow, IBM, UK. http://www.alphaworks.ibm.com/g/g.nsf/img/semanticsdocs/$file/soa_semanticweb.pdf
o Object Co-identification on the Semantic Web. R. V. Guha, IBM Research, Almaden. http://tap.stanford.edu/CoIdent.pdf
o Situation and Identity – A Generalisation of Inverse Functional Properties. Tom Croucher, University of Sunderland, and Joe Geldart, University of Durham (currently under conference submission restriction).
o Semantic Management of Web Services. Daniel Oberle, Steffen Lamparter, Andreas Eberhart, Steffen Staab, University of Karlsruhe, Germany. http://www.aifb.uni-karlsruhe.de/WBS/dob/pubs/www2005.pdf
o Semantic Management of Middleware. Daniel Oberle, Steffen Lamparter, Andreas Eberhart, Steffen Staab, University of Karlsruhe, Germany. http://www.aifb.uni-karlsruhe.de/Publikationen/showPublikation_english?publ_id=766
o Developing and Managing Software Components in an Ontology-based Application Server. Daniel Oberle, Andreas Eberhart, Steffen Staab, Raphael Volz. http://www.aifb.uni-karlsruhe.de/Publikationen/showPublikation_english?publ_id=459

Regards

Phil Tetlow
Senior Consultant
IBM Business Consulting Services

Forget That Thumb-Board, Samsung Lets You Talk Your Text Messages Into Your Phone (LinuxWorld) 


Thursday, January 06, 2005

PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning 

PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning: "Project summary
The objective is to build a Europe-wide Distributed Institute which will pioneer principled methods of pattern analysis, statistical modelling and computational learning as core enabling technologies for multimodal interfaces that are capable of natural and seamless interaction with and among individual human users.
At each stage in the process, machine learning has a crucial role to play. It is proving an increasingly important tool in Machine Vision, Speech, Haptics, Brain Computer Interfaces, Information Extraction and Natural Language Processing; it provides a uniform methodology for multimodal integration; it is an invaluable tool in information extraction; while on-line learning provides the techniques needed for adaptively modelling the requirements of individual users. Though machine learning has such potential to improve the quality of multimodal interfaces, significant advances are needed, in both the fundamental techniques and their tailoring to the various aspects of the applications, before this vision can become a reality.
The institute will foster interaction between groups working on fundamental analysis, including statisticians and learning theorists; algorithms groups, including members of the non-linear programming community; and groups in machine vision, speech, haptics, brain-computer interfaces, natural language processing, information retrieval, textual information processing and user modelling for human-computer interaction, groups that will act as bridges to the application domains and end-users."

Recognising Textual Entailment Challenge 

Recognising Textual Entailment Challenge: "Call for Participation
Motivation
Recent years have seen a surge in research of text processing applications that perform semantic-oriented inference about concrete text meanings and their relationships. Even though many applications face similar underlying semantic problems, these problems are usually addressed in an application oriented manner. Consequently it is difficult to compare, under a generic evaluation framework, semantic methods that were developed within different applications. The PASCAL Challenge introduces textual entailment as a common task and evaluation framework for Natural Language Processing, Information Retrieval and Machine Learning researchers, covering a broad range of semantic-oriented inferences needed for practical applications. This task is therefore suitable for evaluating and comparing semantic-oriented models in a generic manner. Eventually, work on textual entailment may promote the development of generic semantic 'engines', which will play an analogous role to that of generic syntactic analyzers across multiple applications."
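
To show the shape of the task, here is a toy entailment guesser in Python based on word overlap between a text and a hypothesis. The example pair echoes the kind used to describe the challenge, and the heuristic and the 0.75 threshold are inventions for illustration, nowhere near the methods the challenge hopes to elicit.

    import re

    def words(s):
        """Lower-case a string and split it into alphabetic tokens."""
        return set(re.findall(r"[a-z]+", s.lower()))

    def entails(text, hypothesis, threshold=0.75):
        """Guess entailment from the fraction of hypothesis words found in the text."""
        h = words(hypothesis)
        return len(h & words(text)) / len(h) >= threshold

    t = ("Eyeing the huge market potential, currently led by Google, "
         "Yahoo took over search company Overture Services Inc last year.")
    h = "Yahoo took over Overture."
    print(entails(t, h))  # True: every hypothesis word appears in the text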

Tuesday, January 04, 2005

Tony Belpaeme 

Tony Belpaeme: "Tony Belpaeme is a postdoctoral fellow of the Flemish Fund for Scientific Research (FWO Vlaanderen). He is affiliated with the Artificial Intelligence Laboratory, directed by Luc Steels, at the Vrije Universiteit Brussel. He holds a guest professorship at the same university, where he teaches introductory artificial intelligence and autonomous systems.
His research interests include cognitive robotics, the evolution of language, concept formation and artificial intelligence in general."

Socialtext -- Enterprise Social Software 

Socialtext -- Enterprise Social Software: "Year of the Enterprise Wiki

InfoWorld columnist Jon Udell calls 2004 The Year of the Enterprise Wiki, saying the Wiki concept -- a Web site that every reader can also write and edit -- has flourished beyond all expectations.
Flexible, direct, lightweight, and requiring only a Web browser to use, Wikis suit a wide range of applications..."

Monday, January 03, 2005

Aduna Metadata Server 


The TriG Syntax 


Google Search: Adaptive "Radial Basis Networks" 


Kitten's Project Blog - Kitten's Spaminator 


Intelligent Enterprise Magazine: Set Disruptors ON FULL 

Intelligent Enterprise Magazine: Set Disruptors ON FULL: "Rationalizing data across an enterprise is a problem so hard that it's only been attacked piecemeal. However, the stars are aligning for real progress. First, Moore's Law continues to deliver unbelievable hardware resources to power solutions at increasingly affordable prices. Second, the development of the Internet has put us on the road to universal connection and access. Third, e-business - alive and well although somewhat less glamorous - has fueled the development of business integration technologies, which reside primarily in the enterprise application integration (EAI) technology bucket.
Fourth, when data warehousing gave rise to ETL tools, it put data quality and metadata issues on the front burner. Today's obsession with governance, disclosure, and regulatory compliance is accentuating the demand to solve these issues - but, the requirements are driving a change toward real-time reporting supported by EII.
Finally, other stars aligning include Web services, service-oriented architecture, and evolving information sharing standards based on XML, the Semantic Web, Resource Description Framework (RDF), and ontology - and what the newer and major enterprise application vendors are doing to take advantage of standards. Such steps include, first, turning their application suites into modular components that come together through integration infrastructure and, second, introducing tools and methodologies for creating agile business processes, especially through employing the standard Business Process Modeling Language (BPML). The conclusion is that many existing approaches to information integration urgently require rethinking."

Sunday, January 02, 2005

xFunctions xPresso Educational Mathematics Applet 


mathematical function - encyclopedia article about mathematical function. 


AboveNet 

AboveNet: "The Premier Metro Network Access Provider for Your Business.

AboveNet, Inc. is the leading provider of network infrastructure services that enable unconstrained information exchange within and between businesses. AboveNet builds and operates an office-to-office, 100% optical network that enables customers to create an efficient, cost-effective network that breaks economic and performance barriers imposed in the last mile by complex legacy telecom infrastructures. With the most extensive optical metro network in the world, data centers throughout the US and Europe, top quality managed services and a high-performance IP network, AboveNet is able to offer the most flexible and complete information exchange solutions in the industry."
