VroniPlag Wiki

Information about the source

Author    Tim Berners-Lee / James Hendler / Ora Lassila
Title    The Semantic Web. A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities
Journal    Scientific American
Date    17 May 2001
Pages    34-43
URL    https://www.jstor.org/stable/26059207

Bibliography    yes
Footnotes    yes
Fragments    5


Fragments of the source:
[1.] Svr/Fragment 014 22 - Diskussion
Last edited: 2020-04-18 20:20:25
Berners-Lee et al 2001, Fragment, Gesichtet, KomplettPlagiat, SMWFragment, Schutzlevel sysop, Svr

Type
KomplettPlagiat
Editor
Schumann
Reviewed
Yes
Examined work:
Page: 14, Lines: 22-24
Source: Berners-Lee et al 2001
Page(s): 42, Lines: right col., 14 ff.
Today, many automated Web-based services already exist without semantics, but other programs such as agents have no way to locate one that will perform a specific function.

[page 42]

Many automated Web-based services already exist without semantics, but other programs such as agents have no way to locate one that will perform a specific function.
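
The passage documents a claim that is easy to make concrete: without machine-readable semantics, an agent can only match keywords against free-text service descriptions. A minimal Python sketch of that limitation (all service names, URLs and texts are hypothetical, not taken from the source):

    # Hypothetical registry: services advertised only as human-readable text.
    services = [
        {"url": "http://example.org/svc1", "text": "Book your dinner table with us!"},
        {"url": "http://example.org/svc2", "text": "Restaurant reservations made easy"},
    ]

    def find_by_keyword(keyword):
        # Free-text matching is all an agent can do without semantics.
        return [s["url"] for s in services if keyword in s["text"].lower()]

    # Both services perform the same function, yet no single keyword finds both:
    print(find_by_keyword("reservation"))  # ['http://example.org/svc2']
    print(find_by_keyword("dinner"))       # ['http://example.org/svc1']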
Remarks

The source is not given.

Reviewers
(Schumann), WiseWoman


[2.] Svr/Fragment 016 02 - Diskussion
Last edited: 2020-02-14 11:59:21
BauernOpfer, Berners-Lee et al 2001, Fragment, Gesichtet, SMWFragment, Schutzlevel sysop, Svr

Type
BauernOpfer
Editor
SleepyHollow02
Reviewed
Yes
Examined work:
Page: 16, Lines: 2-10, 15-17, 20-23, 25-32
Source: Berners-Lee et al 2001
Page(s): 36, 37, 38, Lines: 36: right col., 15 ff.; 37: left col., 1 ff., center col., 9 ff., right col., 1 ff.; 38: left col., 1 ff., right col., 3 ff.
[The Idea of XML is to support the vision of the Semantic Web,] to bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users.

2. The Semantic Web

It has been discussed that most of the content on the Internet is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can parse Web pages for layout and routine processing (headers, links, meta-tags), but in general computers have no reliable way to process the semantics. [...]

The challenge of the semantic web is to provide a language that expresses both data and rules for reasoning about the data and that allows rules from any existing knowledge-representation system to be exported onto the Web. [In the world of the semantic web a tourist can connect to the Internet via his handheld device and enter certain keywords when looking for a restaurant. A software agent searching the web will find the information of a list of restaurant [sic] containing] keywords such as <dinner>, <lunch>, <restaurant>, <Italian> (as might be encoded today) but also that the <opening-hours> at this restaurant are <weekdays> and then the script takes a <time range> in <yyyy-mm-dd-hour> format and returns <table available>, [thus automatically booking a table when the tourist enters <ok>.]

For the semantic web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. Knowledge representation, as this technology is often called, is clearly a good idea, and some very nice demonstrations exist, but it has not yet changed the world. It contains the seeds of important applications, but to realize its full potential it must be linked into a single global system. Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts, but central [control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.]

[page 36]

Most of the Web's content today is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing—here a header, there a link to another page—but in general, computers have no reliable way to process the semantics: this is the home page of the Hartman and Strauss Physio Clinic, this link goes to Dr. Hartman's curriculum vitae.

The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. Such an agent coming to the clinic's Web page will know not just that the page has keywords such as "treatment, medicine, physical, therapy"

[page 37]

(as might be encoded today) but also that Dr. Hartman works at this clinic on Mondays, Wednesdays and Fridays and that the script takes a date range in yyyy-mm-dd format and returns appointment times.

[...]

Knowledge Representation

FOR THE SEMANTIC WEB to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning. Artificial-intelligence researchers have studied such systems since long before the Web was developed. Knowledge representation, as this technology is often called, is currently in a state comparable to that of hypertext before the advent of the Web: it is clearly a good idea, and some very nice demonstrations exist, but it has not yet changed the world. It contains the seeds of important applications, but to realize its full potential it must be linked into a single global system.

Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts

[page 38]

such as "parent" or "vehicle." But central control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.

[...]

The challenge of the Semantic Web, therefore, is to provide a language that expresses both data and rules for reasoning about the data and that allows rules from any existing knowledge-representation system to be exported onto the Web.
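
To make the quoted idea concrete, here is a minimal sketch, assuming a toy triple representation of my own devising (not the authors' code): facts are (subject, predicate, object) triples, and a single hand-written inference rule derives new facts from the data.

    # Facts as (subject, predicate, object) triples; all names are hypothetical.
    facts = {
        ("DrHartman", "worksAt", "PhysioClinic"),
        ("PhysioClinic", "locatedIn", "Springfield"),
    }

    def infer(facts):
        # Rule: worksAt(x, y) and locatedIn(y, z) imply locatedIn(x, z).
        derived = set(facts)
        for (x, p1, y) in facts:
            for (y2, p2, z) in facts:
                if p1 == "worksAt" and p2 == "locatedIn" and y == y2:
                    derived.add((x, "locatedIn", z))
        return derived

    # Adds ("DrHartman", "locatedIn", "Springfield") to the two given facts.
    print(sorted(infer(facts)))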

Remarks

The source is given on page 17, but it is not made clear that the text is so close to the source.

Reviewers
(SleepyHollow02), Schumann, WiseWoman


[3.] Svr/Fragment 017 01 - Diskussion
Last edited: 2020-02-14 11:51:26
BauernOpfer, Berners-Lee et al 2001, Fragment, Gesichtet, SMWFragment, Schutzlevel sysop, Svr

Type
BauernOpfer
Editor
SleepyHollow02
Reviewed
Yes
Examined work:
Page: 17, Lines: 1-6, 8-19, 101-114
Source: Berners-Lee et al 2001
Page(s): 37, 38, 39, 42, Lines: 37: right col., last paragraph; 38: left col., 1 ff., right col., 25 ff., 36 ff.; 39: left col., 1 ff., center col., 2 ff., 20 ff., right col., 1 ff.; 42: left col., 9 ff.
[Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts, but central] control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.

Three important technologies for developing the Semantic Web are already in place: XML, the Resource Description Framework (RDF)1 and Ontologies2. XML allows users to add arbitrary structure to their documents but says nothing about what the structures mean. [This is what RDF (an XML application itself) is used for, expressing meaning. The next challenge for the realization of the semantic web is that] two databases may use different identifiers for what is in fact the same concept, such as <chair>. A program that wants to compare or combine information across the two databases has to know whether these two terms are being used to mean the same thing, a chair to sit on or the chairman of a conference. Ideally, the program must have a way to discover such common meanings for whatever databases it encounters. A solution is provided by ontologies, the third basic component of the Semantic Web.

The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available (Berners-Lee et. al. 2001).


1 RDF (W3C 1999b) encodes XML in sets of triples, each triple being rather like the subject, verb and object of an elementary sentence. These triples can be written using XML tags. In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as "is a sister of," "is the author of") with certain values (another person, another Web page). This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject and object are each identified by a Universal Resource Identifier (URI), just as used in a link on a Web page. The verbs are also identified by URIs, which enables anyone to define a new concept, a new verb, just by defining a URI for it somewhere on the Web. The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document but are tied to a unique definition that everyone can find on the Web.

2 Ontologies are a document or file that formally define the relations among terms. They can be compared to a mediator between the information seeker and the set of XML documents. The most typical kind of ontology for the Web has taxonomy and a set of inference rules.


Berners-Lee, T. Hendler, J. and Lassila,O. [sic] 2001, "The Semantic Web. A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities.," [sic] The Scientific American, May 2001.

W3C (eds) 1999b, Resource Description Framework (RDF). Model and Syntax Specification. W3C Recommendation 22 February 1999. [Online]. Available: http://www.w3.org/TR/REC-rdf-syntax/
Accessed: 3 January 2000.

[page 37]

Traditional knowledge-representation systems typically have been centralized, requiring everyone to share exactly the same definition of common concepts

[page 38]

such as “parent” or “vehicle.” But central control is stifling, and increasing the size and scope of such a system rapidly becomes unmanageable.

[...]

Two important technologies for developing the Semantic Web are already in place: eXtensible Markup Language (XML) and the Resource Description Framework (RDF). XML lets everyone create their own tags—hidden labels such as <zip code> or <alma mater> that annotate Web pages or sections of text on a page. [...] In short, XML allows users to add arbitrary structure to their documents but says nothing about what the structures mean [see “XML and the Second-Generation Web,” by Jon Bosak and Tim Bray; SCIENTIFIC AMERICAN, May 1999].

Meaning is expressed by RDF, which encodes it in sets of triples, each triple being rather like the subject, verb and object of an elementary sentence. These triples

[page 39]

can be written using XML tags. In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as “is a sister of,” “is the author of”) with certain values (another person, another Web page). This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject and object are each identified by a Universal Resource Identifier (URI), just as used in a link on a Web page. (URLs, Uniform Resource Locators, are the most common type of URI.) The verbs are also identified by URIs, which enables anyone to define a new concept, a new verb, just by defining a URI for it somewhere on the Web.

[...]

The triples of RDF form webs of information about related things. Because RDF uses URIs to encode this information in a document, the URIs ensure that concepts are not just words in a document but are tied to a unique definition that everyone can find on the Web.

[...]

OF COURSE, THIS IS NOT the end of the story, because two databases may use different identifiers for what is in fact the same concept, such as zip code. A program that wants to compare or combine information across the two databases has to know that these two terms are being used to mean the same thing. Ideally, the program must have a way to discover such common meanings for whatever databases it encounters.

A solution to this problem is provided by the third basic component of the Semantic Web, collections of information called ontologies. In philosophy, an ontology is a theory about the nature of existence, of what types of things exist; ontology as a discipline studies such theories. Artificial-intelligence and Web researchers have co-opted the term for their own jargon, and for them an ontology is a document or file that formally defines the relations among terms. The most typical kind of ontology for the Web has a taxonomy and a set of inference rules.

[page 42]

THE REAL POWER of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available.
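
The triple model described above can be tried out directly. The following sketch uses the third-party Python library rdflib (my choice of tool; the source names no implementation) and a hypothetical http://example.org/ vocabulary:

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")  # hypothetical vocabulary of URIs

    g = Graph()
    # Each assertion is a (subject, predicate, object) triple; every term is a
    # URI, so concepts are tied to definitions locatable on the Web.
    g.add((EX.alice, EX.isSisterOf, EX.bob))
    g.add((EX.alice, EX.isAuthorOf, EX.homepage))

    print(g.serialize(format="turtle"))  # renders the triples in Turtle syntax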

Remarks

The source is given, but it is not made clear that the text is so extensively copied.

Reviewers
(SleepyHollow02), Schumann, WiseWoman


[4.] Svr/Fragment 019 07 - Diskussion
Last edited: 2020-02-14 12:01:45
Berners-Lee et al 2001, Fragment, Gesichtet, SMWFragment, Schutzlevel sysop, Svr, Verschleierung

Type
Verschleierung
Editor
SleepyHollow02
Reviewed
Yes
Examined work:
Page: 19, Lines: 7-10
Source: Berners-Lee et al 2001
Page(s): 38, Lines: right col., 29 ff.
XML enables users to create own tags <address> or <name> that annotate Web pages or sections of text on a page. Scripts, or programs, can make use of these tags in sophisticated ways, but the scriptwriter has to know what the page writer uses each tag for.

[page 38]

XML lets everyone create their own tags—hidden labels such as <zip code> or <alma mater> that annotate Web pages or sections of text on a page. Scripts, or programs, can make use of these tags in sophisticated ways, but the script writer has to know what the page writer uses each tag for.
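
Both versions make the same technical point, which the short Python sketch below illustrates with the standard library (tag names hypothetical): the script works only because it hard-codes the tag names the page author happened to choose.

    import xml.etree.ElementTree as ET

    page = "<person><name>Jane Doe</name><address>12 Main St</address></person>"
    root = ET.fromstring(page)

    # The script writer must already know that the page writer used <name> and
    # <address>; XML itself says nothing about what these tags mean.
    print(root.findtext("name"), "/", root.findtext("address"))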
Remarks

The source is not given.

Reviewers
(SleepyHollow02) Schumann


[5.] Svr/Fragment 025 23 - Diskussion
Last edited: 2020-02-10 17:40:24
Berners-Lee et al 2001, Fragment, Gesichtet, SMWFragment, Schutzlevel sysop, Svr, Verschleierung

Type
Verschleierung
Editor
SleepyHollow02
Reviewed
Yes
Examined work:
Page: 25, Lines: 23-27, 30-32
Source: Berners-Lee et al 2001
Page(s): 42, Lines: right col., 18 ff., 40 ff., 47 ff.
Service discovery can happen only when there is a common language to describe a service in a way that lets other agents understand both the function offered and how to take advantage of it. In the current Web, services and agents can advertise their function by, for example, depositing such descriptions in directories analogous to the Yellow Pages. [...]

The Semantic Web, in contrast, is more flexible. The consumer and producer agents can reach a shared understanding by exchanging ontologies, which provide the vocabulary needed for discussion. Semantics also makes it easier to [take advantage of a service that only partially matches a request.]

[page 42]

This process, called service discovery, can happen only when there is a common language to describe a service in a way that lets other agents “understand” both the function offered and how to take advantage of it. Services and agents can advertise their function by, for example, depositing such descriptions in directories analogous to the Yellow Pages.

[...]

The Semantic Web, in contrast, is more flexible. The consumer and producer agents can reach a shared understanding by exchanging ontologies, which provide the vocabulary needed for discussion. [...] Semantics also makes it easier to take advantage of a service that only partially matches a request.
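
A minimal sketch of the discovery step described above, with the ontology reduced to a plain mapping between a producer's vocabulary and a consumer's (all terms and URLs are hypothetical):

    # Shared ontology: maps the producer's terms onto the consumer's vocabulary.
    ontology = {
        "TableReservation": "restaurant-booking",
        "RoomReservation": "hotel-booking",
    }

    # Producer agents advertise their function in their own terms.
    advertised = {"http://example.org/svc": ["TableReservation"]}

    def discover(requested_function):
        # Match a request against advertisements via the shared vocabulary.
        return [url for url, terms in advertised.items()
                if any(ontology.get(t) == requested_function for t in terms)]

    print(discover("restaurant-booking"))  # ['http://example.org/svc']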

Remarks

The source is not given.

Reviewers
(SleepyHollow02) Schumann