VroniPlag Wiki

Assessing the Impact of XML/EDI with Real Option Valuation

by Shermin Voshmgir


Statistics and the review record for this page can be found at the end of the article.

[1.] Svr/Fragment 017 01 - Diskussion
Last edited: 2020-02-14 11:51:26 WiseWoman
BauernOpfer, Berners-Lee et al 2001, Fragment, Gesichtet, SMWFragment, Schutzlevel sysop, Svr

Type
BauernOpfer
Editor
SleepyHollow02
Reviewed
Yes
Examined work:
Page: 17, Lines: 1-6, 8-19, 101-114
Source: Berners-Lee et al 2001
Page(s): 37, 38, 39, 42, Lines: 37: right col., last paragraph; 38: left col. 1 ff., right col. 25 ff., 36 ff.; 39: left col. 1 ff., center col. 2 ff., 20 ff., right col. 1 ff.; 42: left col., 9 ff.
[Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts, but central] control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.

Three important technologies for developing the Semantic Web are already in place: XML, the Resource Description Framework (RDF)1 and Ontologies2. XML allows users to add arbitrary structure to their documents but says nothing about what the structures mean. [This is what RDF (an XML application itself) is used for, expressing meaning. The next challenge for the realization of the semantic web is that] two databases may use different identifiers for what is in fact the same concept, such as <⁠chair​>. A program that wants to compare or combine information across the two databases has to know whether these two terms are being used to mean the same thing, a chair to sit on or the chairman of a conference. Ideally, the program must have a way to discover such common meanings for whatever databases it encounters. A solution is provided by ontologies, the third basic component of the Semantic Web.

The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available (Berners-Lee et. al. 2001).


1 RDF (W3C 1999b) encodes XML in sets of triples, each triple being rather like the subject, verb and object of an elementary sentence. These triples can be written using XML tags. In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as "is a sister of," "is the author of") with certain values (another person, another Web page). This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject and object are each identified by a Universal Resource Identifier (URI), just as used in a link on a Web page. The verbs are also identified by URIs, which enables anyone to define a new concept, a new verb, just by defining a URI for it somewhere on the Web. The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document but are tied to a unique definition that everyone can find on the Web.

2 Ontologies are a document or file that formally define the relations among terms. They can be compared to a mediator between the information seeker and the set of XML documents. The most typical kind of ontology for the Web has taxonomy and a set of inference rules.


Berners-Lee, T. Hendler, J. and Lassila,O. [sic] 2001, "The Semantic Web. A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities.," [sic] The Scientific American, May 2001.

W3C (eds) 1999b, Resource Description Framework (RDF). Model and Syntax Specification. W3C Recommendation 22 February 1999. [Online]. Available: http://www.w3.org/TR/REC-rdf-syntax/ Accessed: 3 January 2000.

[page 37]

Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts

[page 38]

such as “parent” or “vehicle.” But central control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.

[...]

Two important technologies for developing the Semantic Web are already in place: eXtensible Markup Language (XML) and the Resource Description Framework (RDF). XML lets everyone create their own tags—hidden labels such as <⁠zip code​> or <⁠alma mater​> that annotate Web pages or sections of text on a page. [...] In short, XML allows users to add arbitrary structure to their documents but says nothing about what the structures mean [see “XML and the Second-Generation Web,” by Jon Bosak and Tim Bray; SCIENTIFIC AMERICAN, May 1999].

Meaning is expressed by RDF, which encodes it in sets of triples, each triple being rather like the subject, verb and object of an elementary sentence. These triples

[page 39]

can be written using XML tags. In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as “is a sister of,” “is the author of”) with certain values (another person, another Web page). This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject and object are each identified by a Universal Resource Identifier (URI), just as used in a link on a Web page. (URLs, Uniform Resource Locators, are the most common type of URI.) The verbs are also identified by URIs, which enables anyone to define a new concept, a new verb, just by defining a URI for it somewhere on the Web.

[...]

The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document but are tied to a unique definition that everyone can find on the Web.

[...]

OF COURSE, THIS IS NOT the end of the story, because two databases may use different identifiers for what is in fact the same concept, such as zip code. A program that wants to compare or combine information across the two databases has to know that these two terms are being used to mean the same thing. Ideally, the program must have a way to discover such common meanings for whatever databases it encounters.

A solution to this problem is provided by the third basic component of the Semantic Web, collections of information called ontologies. In philosophy, an ontology is a theory about the nature of existence, of what types of things exist; ontology as a discipline studies such theories. Artificial-intelligence and Web researchers have co-opted the term for their own jargon, and for them an ontology is a document or file that formally defines the relations among terms. The most typical kind of ontology for the Web has a taxonomy and a set of inference rules.

[page 42]

THE REAL POWER of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available.
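To make the triple mechanism that both passages describe more concrete, the following is a minimal sketch in Python using the rdflib library (assumed to be installed); the namespaces, the property ex:isSisterOf and the resource names are illustrative and appear in neither text.

from rdflib import Graph, Namespace

# Illustrative namespaces; any URI the author controls would do.
EX = Namespace("http://example.org/terms#")
PEOPLE = Namespace("http://example.org/people#")

g = Graph()
g.bind("ex", EX)

# One RDF triple: subject, verb (predicate) and object, each identified by a URI.
g.add((PEOPLE.alice, EX.isSisterOf, PEOPLE.bob))

# The triple can be written out using XML tags (RDF/XML serialization).
# Note: rdflib versions before 6 return bytes here and need .decode().
print(g.serialize(format="xml"))

The serialized output is roughly an rdf:Description element for the subject URI containing an ex:isSisterOf child that points to the object's URI, i.e. the subject-verb-object pattern described in footnote 1 of the examined page and on page 39 of the source.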

Notes

The source is given, but it is not made clear that the text has been copied so extensively.

Reviewers
(SleepyHollow02), Schumann, WiseWoman



Last edit of this page: by Benutzer:Schumann, timestamp: 2020-02-10 17:22:03