Saturday, January 01, 2005

Riemann zeta function - encyclopedia article about Riemann zeta function. 

Riemann Hypothesis -- from MathWorld 

Riemann Zeta Function -- from MathWorld 

Google Search: python "zeta function" graph 

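A minimal Python sketch of the kind of thing that search was presumably after: approximating ζ(s) by truncating its Dirichlet series. This is an illustration, not code from any of the linked pages; the 100,000-term cutoff is an arbitrary choice.

```python
def zeta(s, terms=100_000):
    """Approximate the Riemann zeta function zeta(s) = sum_{n>=1} n**(-s)
    by a finite partial sum. The series converges only for Re(s) > 1,
    and the truncation error shrinks roughly like terms**(1 - Re(s))."""
    return sum(n ** -s for n in range(1, terms + 1))

# zeta(2) should approach pi**2 / 6 (the Basel problem). Complex s with
# Re(s) > 1 also works, e.g. zeta(2 + 1j); a graph along a vertical line
# in the complex plane is then just many such evaluations fed to a
# plotting library such as matplotlib.
```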
A Tale of Two Layerings 

A Tale of Two Layerings: "Abstract
Semantic layering is a current problem in the Semantic Web. This is strange, as there are many examples of semantic layering in logical formalisms that have been around for decades. The problem is that semantic layering in the Semantic Web is a much stronger notion than what one would think it should be. Some effects of this very strong notion and how to get around them are presented."

Alexa Web Information Service - BETA! 

Alexa Web Information Service - BETA!: "Alexa's vast database of web information is available on the Amazon.com Web Services platform.

The Alexa Web Information Service offers a platform for creating innovative Web solutions and services available via the Amazon.com Web Service platform. Developers, researchers, web site owners, and merchants can now find answers to the most difficult questions on the Web, and incorporate them directly into their own websites or services. "

Thursday, December 30, 2004

Who's Who: Paul Vogt 

Who's Who: Paul Vogt: "I currently work as a Research Fellow at the Language Evolution and Computation (LEC) Unit (University of Edinburgh - UK) and as a Guest Researcher at the Induction of Linguistic Knowledge group of the Section Computational Linguistics at Tilburg University in the Netherlands. Previously, I was at the Institute of Knowledge and Agent Technology (Universiteit Maastricht - Netherlands) and at the AI Lab (Vrije Universiteit Brussel - Belgium)."

The SCL Archives 

SCL: Simple Common Logic 

SCL: Simple Common Logic: "Abstract
SCL is a first-order logical language intended for information exchange and transmission. SCL allows for a variety of different syntactic forms, called dialects, all expressible within a common XML-based syntax and all sharing a single semantics.
Requirements
SCL has been designed with several requirements in mind, all arising from its intended role as a medium for transmitting logical content on the WWWeb.
1. Be a full first-order logic with equality.
1a. SCL syntax and semantics should provide for the full range of first-order syntactic forms, with their usual meanings. Any conventional first-order syntax should be directly translatable into SCL without loss or alteration of meaning.
2. Provide a general-purpose syntax for communicating logical expressions.
2a. There should be a standard XML syntax for communicating SCL content.
2b. The language should be able to express various commonly used 'syntactic sugarings' for logical forms.
2c. The syntax should relate to existing logical standards and conventions; in particular, it should be capable of rendering any content expressible in RDF, RDFS or OWL.
2d. The syntax should provide for extensibility to include future syntactic extensions to the language, such as modalities, extended quantifier forms, nonmonotonic constructions, etc.
2e. There should be at least one compact, human-readable syntax defined which can be used to express the entire language.
2f. The notation should not make gratuitous or arbitrary assumptions about logical relationships between different expressions, particularly if these assumptions can be expressed in SCL directly.
3. Be 'web-savvy'
3a. The XML syntax must be compatible with the specs for XML, URI syntax and XML Schema, Unicode and other standards relevant to transmission of information on the WWWeb.

3b. URI references can be used as logical names in the language

3c. URI references can be used to give names to expressions and sets of expressions, to facilitate Web operations such as retrieval, importation and cross-reference.

3d. SCL must contain a general-purpose datatyping convention which handles the XSD datatype suite so as to be compatible with RDF, RDFS and OWL usage.

4. Be an open-network common logic
4a. Transmission of content between SCL-aware agents should not require negotiation about syntactic roles of symbols, or translations between syntactic roles.

4b. Any piece of SCL text should have the same meaning, and support the same inferences, everywhere on the network. Every name should have the same logical meaning at every node of the network.

4c. No ontology can limit the ability of another ontology to refer to any entity or to make assertions about any entity.

4d. The language should support ways to refer to a local universe of discourse and relate it to other universes.

4e. Users of SCL should be free to invent new names and use them in published ontologies."
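Requirements 1 and 3b can be illustrated with a single sentence. The formula below is my own example in generic first-order notation, not official SCL syntax, and the URI-style names (ex:Person, ex:motherOf) are invented: a full first-order language with equality, in which URI references serve as ordinary logical names.

```latex
\forall x\,\bigl(\text{ex:Person}(x) \rightarrow
  \exists y\,(y = \text{ex:motherOf}(x) \wedge \text{ex:Person}(y))\bigr)
```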

"Acknowledgements.
This document represents the combined efforts of the SCL working group, a self-selected group comprising Murray Altheim, Bill Anderson, Pat Hayes, Chris Menzel, John F. Sowa, and Tanel Tammet. Contributions were also made by Michael Gruninger, Geoff Sutcliffe, Kenneth Murray, Jay Halcomb, Robert E. Kent, Elisa Kendall, David Fraenkel and Mark Stickel. The work was an outgrowth of earlier work by the KIF/CL working group comprising, in addition to the above, Adam Pease, Michael F. Uschold, Christopher A. Welty and David Whitten, with contributions from Mike Genesereth. The ancestor of this entire effort was KIF, authored by Mike Genesereth."

Translating Semantic Web languages into SCL 

Translating Semantic Web languages into SCL: "Pat Hayes, IHMC
The three SWeb languages so far given W3C 'recommendation' status (RDF [RDF], RDFS [RDFS] and OWL[Webont]) can all be translated straightforwardly into SCL. This document gives full translations, showing how to express all of the content of these languages directly in SCL.
Very little of this is new or original. The translations follow a number of standard conventions: classes map into unary relations, properties into binary relations, and expression forms in OWL and RDFS map into particular patterns of quantification applied to boolean combinations of atoms built from these relations. These more elaborate languages (i.e. RDF semantic extensions, cf. [RDF-Semantics]) also require adding axioms which describe their semantic assumptions explicitly; this is along the lines used by Fikes and McGuinness [Fikes&McGuinness] in their axiomatic semantics for DAML (the precursor to OWL) and also suggested in a W3C note [Lbase]. Datatyped literals are handled by using functions representing the datatypes, in a uniform way.
Throughout this document, examples of RDF, RDFS and OWL text are rendered in italics, while SCL text is rendered as code. Three-letter strings such as sss, ppp are used to indicate generic components of expressions, and when giving translations (usually in the form of tables) a change of rendering indicates the application of the translation in question, so that if ppp indicates some expression in RDF or OWL, then ppp indicates the result of translating that expression into SCL syntax using the conventions in the table. The SCL core syntax [SCL] and the N-triples notation for RDF [RDFTestCases] are used throughout."
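The core conventions Hayes describes (classes become unary relations, properties become binary relations) are mechanical enough to sketch. The Python below is my own illustration, not part of his document, and the CLIF-style parenthesized output is an assumed rendering rather than the actual SCL core syntax:

```python
# Illustrative sketch of the class/property translation conventions:
# an rdf:type triple becomes a unary atom applied to the subject, and
# any other triple becomes a binary atom. Names like "ex:Fido" are
# hypothetical URI-style identifiers.
RDF_TYPE = "rdf:type"

def triple_to_scl(subj, pred, obj):
    """Translate one RDF triple into a CLIF-style atomic sentence."""
    if pred == RDF_TYPE:
        return f"({obj} {subj})"        # class membership -> unary relation
    return f"({pred} {subj} {obj})"     # property -> binary relation
```

A usage example: the triple `ex:Fido rdf:type ex:Dog` comes out as `(ex:Dog ex:Fido)`, while `ex:Fido ex:hasOwner ex:Jo` comes out as `(ex:hasOwner ex:Fido ex:Jo)`. The quantification patterns and datatype functions Hayes mentions would layer on top of atoms like these.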

Tag URI 

Tuesday, December 28, 2004

Google Search: "Adobe Reader could not open" 

Dynamic PDF FAQ 

Monday, December 27, 2004

The nature of meaning in the age of Google. Google, Indexing, Web, Meaning 

The nature of meaning in the age of Google. Google, Indexing, Web, Meaning: "The culture of lay indexing has been created by the aggregation strategy employed by Web search engines such as Google. Meaning is constructed in this culture by harvesting semantic content from Web pages and using hyperlinks as a plebiscite for the most important Web pages. The characteristic tension of the culture of lay indexing is between genuine information and spam. Google's success requires maintaining the secrecy of its parsing algorithm despite the efforts of Web authors to gain advantage over the Googlebot. Legacy methods of asserting meaning such as the META keywords tag and Dublin Core are inappropriate in the lawless meaning space of the open Web. A writing guide is urged as a necessary aid for Web authors who must balance enhancing expression versus the use of technologies that limit the aggregation of their work."
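The "hyperlinks as a plebiscite" idea the article describes is, at bottom, PageRank. A toy power-iteration sketch (the graph, damping factor, and iteration count below are illustrative choices of mine, not taken from the article or from Google's actual implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank the nodes of a link graph by power iteration.

    links maps each node to the list of nodes it links to; every
    linked-to node must also appear as a key. Each page's rank is
    split among its outgoing links each round, so a link acts as a
    fractional 'vote' for its target.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # a dangling page spreads its vote evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank
```

On a toy graph like `{"a": ["b"], "b": ["a", "c"], "c": ["a"]}`, the node with the most incoming "votes" ends up ranked highest, which is exactly the plebiscite the article has in mind.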

Semantics and the Web 

Semantics and the Web: "Also, it is interesting to note that metadata efforts have largely failed with web search engines, because any text on the page which is not directly represented to the user is abused to manipulate search engines. There are even numerous companies which specialize in manipulating search engines for profit. (Brin & Page, 1998)"

Wanted: Cheap Metadata 

Wanted: Cheap Metadata: "I've written before in this forum (1, 2) about the value of adding ID values to HTML and XML block-level elements. As I wrote to Elliotte, "Simple automated ways to add genuinely useful metadata are few and far between, so I think it's worth jumping on any we can find." Tim Bray wrote an excellent essay on metadata with the assertion, catchy enough that Simon St. Laurent blogged it on the O'Reilly Network, that "there is no cheap metadata." Tim contradicted himself, however, by listing some metadata that's free: "filename, created/modified dates, who created it, what kind of file (HTML, Excel, PowerPoint), how big it is." This metadata, although free, has definite value. The knowledge that Google's number one hit for my search term is a four meg Word file and the number two hit is a 200K HTML file strongly influences my choice of which link to follow first, and providing criteria for making link traversal decisions is the whole point of link metadata.

As Tim alludes, the best metadata comes from a paid staff making human judgments about the best metadata to add. I'll call this "judgment-call metadata" to distinguish it from metadata generated with algorithms. This is expensive, but makes sense at a business like my employer because lawyers will pay extra for summaries of court decisions and for the ability to search legal cases using keywords from a carefully maintained taxonomy. But what about users without the kind of working budget that lawyers have?

Some useful metadata is still pretty cheap. I managed to convince Elliotte that the trouble of adding IDs to block elements was worth it. Larry Page and Sergey Brin, who didn't start off with giant server farms but by doing academic research, identified and took advantage of a new kind of web metadata that was cheaper than human editors, and it certainly worked out well for them."
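The "free" metadata Tim Bray lists (filename, created/modified dates, file type, size) really is cheap to harvest. A sketch using only the Python standard library; the dictionary keys are my own naming, and the "who created it" item is omitted because file ownership is not portable across platforms:

```python
import mimetypes
import os
import time

def cheap_metadata(path):
    """Collect the 'free' metadata for a file: name, type, size, and
    modification date. File ownership is platform-specific, so it is
    left out of this sketch."""
    st = os.stat(path)
    mime, _ = mimetypes.guess_type(path)
    return {
        "filename": os.path.basename(path),
        "type": mime or "unknown",
        "size_bytes": st.st_size,
        "modified": time.strftime("%Y-%m-%d", time.gmtime(st.st_mtime)),
    }
```

Even this trivial record supports the link-traversal decisions described above: knowing a hit is a multi-megabyte Word file rather than a small HTML page costs nothing to compute.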
