Home Page
|
- Is there a Home Page
- Identity logo
- <title>
- Status Bar Message
- Total Links on Page
- Navigation Defined
- What is site about
- Who is site about
- What type of site is this
- Demographic
- Who are they
- What do they want
- How old are they
- How much $ do they have
- Expected type of computer
- Expected Browser
- AOL Compatible
- Is entry prohibited or made
harder in any way
- Annoying "Pop-Up"
messages
- Meta Tags
- Keywords
- Description
- Copyright
- Browser Check Plug-Ins
- Browser Check Version
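The browser/version check items above were typically handled with a small inline script. A minimal sketch, assuming a hypothetical fallback page name (the threshold and file name are illustrative, not from the checklist):

```html
<!-- Hypothetical browser-version check: send 3.x and older
     browsers to a simplified, plain-HTML version of the site. -->
<script type="text/javascript">
  // navigator.appVersion begins with the version number, e.g. "4.0 (..."
  var version = parseFloat(navigator.appVersion);
  if (version < 4.0) {
    window.location = "simple/index.htm";
  }
</script>
```

Plug-in detection worked similarly via the navigator.plugins array in Netscape-style browsers.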
|
Navigation
|
- Styles of links
- Cohesive
- Redundant text links to image
links
- <ALT> for image buttons
- maintains location and appearance
throughout site
- Bread crumb navigation
- Banners
|
Other Pages on Site
|
- Splash Page (text link entry)
- Shopping Cart
- Can people spend money here
easily
- Freebies
- Q and A FAQ
- Coupons
- Promotions
- Sales Information
- How to use products
- Subscription
- Contact
- Site Map
- Company Financial Information
- Is there an INDEX.HTM page
in every folder
|
Images
|
- Not too many to compete with
text
- Adds information
- Sized for good fit on page
- <ALT>, height, width
(( size (x)k ))
- good title for search engines
- Content Useful
|
Images as Links
|
- Do they all work (updated)
- Do buttons look like buttons
- Good labels on buttons
- <ALT>
- good title for search engines
- Parallel text links
- Appearance of a LINK
- IMAGE MAP - Parallel Text
Links
|
Text and Text Links
|
- Do they all work (updated)
- Are the fonts available on
all machines
- Are ALT faces specified (PC
+ MAC)
- Useful or annoying text links
in copy
- Underlined is always a Link
|
Aesthetics
- Color
- Fonts
- BgColor
- BgImage
- Animation
- Design / Look
|
- Cross platform tested
- Anti-Aliasing
- Dithering
- 3 colors in a color family
- 3 or less fonts
- Rainbow text
- <blink>
- <marquee>
- Sound links
- Background sounds
- Respect the CREASE
- Less than 4 screen scrolls
- Is the written information
useful
- Professional look = level
of commerce
- Visual and structural Theme
- Does the design of the site
help me understand the site
- Is there a Swoosh
- Do the graphics look right
(embossing)
- Form and Function - Can I
find what I want
- Is copy in annoying columns
- 50% white space
- Image the right dimension
for the site
- Do graphics have continuity
- Jargon or typos
- New Browser if going off site
- Careful use of color for the
color blind
- Animations Sparse/Distracting
- Back Ground Image/Color Distracting
- Are colors web safe
- Copyright Violation
- Copyright Statements
- Images and Text "Need
to Be There"
|
Server
|
- Cross Platform tested
- Did all parts download
- Load Time
- Forms / CGI
- Look at Source Code
- Dynamic Content
- PGP (Pretty Good Privacy)
security
- Can content be added logically
|
URL
|
- Forwarded or registered
- Ownership
|
Printability
|
- Does the page print well (550
pixels wide)
|
Code
Scripts
|
- JAVA
- Scripts
- Needed – concise – small
- Applets
- Servlets
|
Frames
|
- Working Properly
- Home button on each frame
- <noFrames>
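The <noFrames> item above can be sketched as follows. This is a minimal illustration, with placeholder frame names and file names, of a frameset that still gives frame-less browsers (and search engine spiders) a way in:

```html
<html>
<head><title>Example frameset</title></head>
<frameset cols="25%,75%">
  <frame src="nav.htm" name="nav">
  <frame src="main.htm" name="main">
  <!-- Only browsers that cannot display frames render this -->
  <noframes>
    <body>
      <p>This site uses frames.
      <a href="main.htm">Enter the no-frames version</a>.</p>
    </body>
  </noframes>
</frameset>
</html>
```

A "Home" link on each individual frame page would use target="_top" so it replaces the whole frameset rather than loading inside one frame.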
|
Search engines
|
|
Tables
|
|
Other
|
|
Getting Started
Introduction
How do search engines use META tags?
How do I use them?
Common Tags
META - Description
META - Keywords
META - Robots
HTML - Title
Special Tags
META - Refresh
META - Target
META - Author / Copyright
Wrap Up
|
|
SPLASH PAGES
Would you do this?
You sell an expensive product.
You walk into a potential client's
office for the first time, introduce yourself, and place
an information packet in front of the client.
As you start to make your big
presentation, the client reaches into the packet, extracts
the contract you hope he'll sign and grabs a pen.
As the client starts to sign
the lucrative, long-term contract, you reach over across
the table, grab the client by the throat, and yell "Not
so fast, jerk face, I haven't finished my presentation!!!"
You wouldn't do that, would
you? Then why are you using splash pages? They're
the Web equivalent of the example above. The golden rule
of doing business on the Web is "Don't do anything that
gets in the way of the sale." Splash pages get in the way
because they're an extra layer between your visitor and
your site.
I try to tell clients that
Web design should reflect the real world and you don't see
"splash pages" in the real world. Think about Wal-Mart.
When you go to Wal-Mart are you forced to wait at the front
door and watch a thirty-second movie before you're allowed
to enter the store? No. Then why would you make your visitors
wait to get inside your Web site?
Not everyone agrees with my
position about splash pages. I recently gave a speech to
a group of 50 executives. One of them complained, "But we're
trying to establish a brand and our splash page is
our branding mechanism." I retorted, "OK, what kind of brand
does your 693K Flash splash page establish? It says, 'I
don't care about how long you have to wait; I want to impress
you with how cool a company we are.'" You have to remember
that nobody will write you a check because you have a cool
splash page. Nobody.
What do I tell my boss?
I'm frequently asked, "My boss
is color blind, he likes shiny things and he's a moron --
how can I convince him not to use splash pages?" Simple.
Whenever you see a design element -- and a splash page is
a great example -- that you think should be eliminated, ask
your boss this question: "Would Amazon.com use this design
element? No? Well, they've spent millions of dollars on
their Web site in an effort to make it easy-to-use so if
they don't use a splash page, there must be a pretty good
reason."
If your boss still doesn't
get it, well...
META TAGS
With all the new HTML tags
that are coming out, it’s easy to overlook some of the greatest
tools in our arsenal of HTML tricks. There are still a few
HTML goodies lying around that’ll help you keep your pages
more up to date, make them easier to find, and even stop
them from becoming framed. What’s more, some of these tags
have been with us since the first Web browsers were released.
META tags can be very useful
for Web developers. They can be used to identify the creator
of the page, what HTML specs the page follows, the keywords
and description of the page, and the refresh parameter (which
can be used to cause the page to reload itself, or to load
another page). And these are just a few of the common uses!
First, there are two types
of META tags: HTTP-EQUIV and META tags with a NAME
attribute.
HTTP-EQUIV
META HTTP-EQUIV tags are the equivalent of HTTP headers.
To understand what headers are, you need to know a little
about what actually goes on when you use your Web browser
to request a document from a Web server. When you click
on a link for a page, the Web server receives your browser's
request via HTTP. Once the Web server has made sure that
the page you’ve requested is indeed there, it generates
an HTTP response. The initial data in that response is called
the "HTTP header block." The header tells the Web browser
information which may be useful for displaying this particular
document
Back to META tags. Just like
normal headers, META HTTP-EQUIV tags usually control or
direct the actions of Web browsers, and are used to further
refine the information which is provided by the actual headers.
HTTP-EQUIV tags are designed to affect the Web browser in
the same manner as normal headers. Certain Web servers may
translate META HTTP-EQUIV tags into actual HTTP headers
automatically so that the user’s Web browser would simply
see them as normal headers. Some Web servers, such as Apache
and CERN httpd, use a separate text file which contains
meta-data. A few Web server-generated headers, such as "Date,"
may not be overwritten by META tags, but most will work
just fine with a standard Web server.
NAME
META tags with a NAME attribute are used for META
types which do not correspond to normal HTTP headers. This
is still a matter of disagreement among developers, as some
search engine agents (worms and robots) interpret tags which
contain the keyword attribute whether they are declared
as "name" or "http-equiv," adding fuel to the fires of confusion
Using META Tags
On to more important issues,
like how to actually implement META tags in your Web pages.
If you’ve ever had readers tell you that they’re seeing
an old version of your page when you know that you’ve updated
it, you may want to make sure that their browser isn’t caching
the Web pages. Using META tags, you can tell the browser
not to cache files, and/or when to request a newer version
of the page. In this article, we’ll cover some of the META
tags, their uses, and how to implement them.
Expires
This tells the browser the date and time when the document
will be considered "expired." If a user is using Netscape
Navigator, a request for a document whose time has "expired"
will initiate a new network request for the document. An
illegal Expires date such as "0" is interpreted by the browser
as "immediately." Dates must be in the RFC850 format, (GMT
format):
<META HTTP-EQUIV="expires" CONTENT="Wed, 26 Feb 1997
08:21:57 GMT">
Pragma
This is another way to control browser caching. To use this
tag, the value must be "no-cache". When this is included
in a document, it prevents Netscape Navigator from caching
a page locally.
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
These two tags can be used
together to keep your content current—but beware.
Many users have reported that Microsoft’s Internet Explorer
refuses the META tag instructions, and caches the files
anyway. So far, nobody has been able to supply a fix to
this "bug." As of the release of MSIE 4.01, this problem
still existed.
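Putting the two cache-control tags together in the document head would look like the sketch below (the "0" value is the deliberately illegal Expires date discussed above, which browsers read as "expire immediately"):

```html
<head>
<title>Always-fresh page</title>
<!-- Illegal date "0" is interpreted as "expired immediately" -->
<META HTTP-EQUIV="expires" CONTENT="0">
<!-- Belt and suspenders: also forbid local caching -->
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
</head>
```

As noted, Internet Explorer may cache the page regardless, so don't rely on these tags alone for truly dynamic content.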
Refresh
This tag specifies the time in seconds before the Web browser
reloads the document automatically. Alternatively, it can
specify a different URL for the browser to load.
<META HTTP-EQUIV="Refresh" CONTENT="0;URL=http://www.newurl.com">
Be sure to remember to place
quotation marks around the entire CONTENT attribute’s value,
or the page will not reload at all.
Set-Cookie
This is one method of setting a "cookie" in the user’s Web
browser. If you use an expiration date, the cookie is considered
permanent and will be saved to disk (until it expires),
otherwise it will be considered valid only for the current
session and will be erased upon closing the Web browser.
<META HTTP-EQUIV="Set-Cookie" CONTENT="cookievalue=xxx;expires=Wednesday,
21-Oct-98 16:14:21 GMT; path=/">
Window-target
This one specifies the "named window" of the current page,
and can be used to prevent a page from appearing inside
another framed page. Usually this means that the Web browser
will force the page to go to the top frameset.
<META HTTP-EQUIV="Window-target" CONTENT="_top">
PICS-Label
Although you may not have heard of PICS-Label (PICS
stands for Platform for Internet Content Selection), you
probably will soon. At the same time that the Communications
Decency Act was struck down, the World Wide Web Consortium
(W3C) was working to develop a standard for labeling online
content (see www.w3.org/PICS/). This standard became the Platform for Internet Content
Selection (PICS). The W3C’s standard left the actual creation
of labels to the "labeling services." Anything which has
a URL can be labeled, and labels can be assigned in two
ways. First, a third party labeling service may rate the
site, and the labels are stored at the actual labeling bureau
which resides on the Web server of the labeling service.
The second method involves the developer or Web site host
contacting a rating service, filling out the proper forms,
and using the HTML META tag information that the service
provides on their pages. One such free service is the PICS-Label
generator that Vancouver-Webpages
provides. It is based on the Vancouver Webpages Canadian
PICS ratings, version 1.0, and can be used as a guideline
for creating your own PICS-Label META tag.
Although PICS-Label
was designed as a ratings label, it also has other uses,
including code signing, privacy, and intellectual property
rights management. PICS uses what are called generic and
specific labels. Generic labels apply to each document whose
URL begins with a specific string of characters, while specific
labels apply only to a given file. A typical PICS-Label
for an entire site would look like this:
<META http-equiv="PICS-Label" content='(PICS-1.1 "http://vancouver-webpages.com/VWP1.0/"
l gen true comment "VWP1.0" by "scott@hisdomain.com" on
"1997.10.28T12:34-0800" for "http://www.hisdomain.com/"
r (P 2 S 0 SF -2 V 0 Tol -2 Com 0 Env -2 MC -3 Gam -1 Can
0 Edu -1 ))'>
Keyword and Description
attributes
Chances are that if you manually code your Web pages, you’re
aware of the "keyword" and "description" attributes.
These allow the search engines to easily index your page
using the keywords you specifically tell it, along with
a description of the site that you yourself get to write.
Couldn’t be simpler, right? You use the keywords attribute
to tell the search engines which keywords to use, like this:
<META NAME ="keywords" CONTENT="life, universe, mankind,
plants, relationships, the meaning of life, science">
By the way, don’t think you
can spike the keywords by using the same word repeated over
and over, as most search engines have refined their spiders
to ignore such spam. Using the META description attribute,
you add your own description for your page:
<META NAME="description" CONTENT="This page is about
the meaning of life, the universe, mankind and plants.">
Make sure that you use several
of your keywords in your description. While you are at it,
you may want to include the same description enclosed in
comment tags, just for the spiders that do not look at META
tags. To do that, just use the regular comment tags, like
this:
<!-- This page is about the meaning of life, the
universe, mankind and plants. -->
More about search engines can
be found in our special
report.
ROBOTs in the mist
On the other hand, there are probably some of you who do
not wish your pages to be indexed by the spiders at all.
Worse yet, you may not have access to the robots.txt file.
The robots META attribute was designed with this
problem in mind.
<META NAME="robots" CONTENT="all | none | index |
noindex | follow | nofollow">
The default for the robots attribute
is "all". This would allow all of the files to be indexed.
"None" would tell the spider not to index any files, and
not to follow the hyperlinks on the page to other pages.
"Index" indicates that this page may be indexed by the spider,
while "follow" would mean that the spider is free to follow
the links from this page to other pages. The inverse is
also true, thus this META tag:
<META NAME="robots" CONTENT=" noindex">
would tell the spider not to
index this page, but would allow it to follow subsidiary
links and index those pages. "nofollow" would allow the
page itself to be indexed, but the links could not be followed.
As you can see, the robots attribute can be very useful
for Web developers. For more information about the robot
attribute, visit the W3C’s
robot paper.
Placement of META tags
META tags should always be placed in the head of the HTML
document between the actual <HEAD> tags, before the
BODY tag. This is very important with framed pages, as a
lot of developers tend to forget to include them on individual
framed pages. Remember, if you only use META tags on the
frameset pages, you'll be missing a large number of potential
hits.
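As a sketch, the placement rule means that every individual page of a frameset, not just the frameset file itself, carries its own block like this (the title, description, and keyword values are placeholders):

```html
<html>
<head>
<title>Widget Catalog - Acme Widgets</title>
<!-- META tags belong here: inside HEAD, before BODY -->
<META NAME="description" CONTENT="Catalog of Acme widgets.">
<META NAME="keywords" CONTENT="widgets, acme, catalog">
</head>
<body>
...
</body>
</html>
```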
Obscure META Tags
We’ve covered most of the popular
and useful META tags, but what about the obscure ones that
you hardly see, such as Dublin Core or rating?
If you’re a normal person (I’m
not, and I don’t know any, but I heard they do exist), then
you’re wondering just what, exactly, is Dublin Core?
No, it’s not an Irish porno movie, but rather, it’s a simple
resource description record that has come to be known as
the Dublin Core Metadata element set, or rather, Dublin
Core.
Thanks to a considerate reader,
we now know how it got its name. Dublin Core is the core
set of metadata elements which were identified by a working
group (comprised of experts drawn from the library and Internet
communities) which met in Dublin, Ohio.
Dublin Core was designed with
several issues in mind, namely to:
- enable search engines to
filter by standard fields, e.g. date and author
- enable browsers to display
metadata fields in a separate window
- enhance cross-collection
repurposing and integration of content
- enhance site management,
as old pages may be located more easily, etc.
If you want to see what an actual
Dublin Core META tag looks like, you can use Vancouver
Webpages’ Dublin
Core META tag generator.
Rating is basically
the same thing as PICS-Label, and can be used for
the same purpose, but PICS-Label is recommended over rating,
as it is currently recognized by more software than rating,
although it couldn’t hurt to use both.
Many of the obscure META tags
are produced by HTML authoring software. Microsoft Word
supports a number of META attributes in its HTML export
option, and if you create a document with Internet Assistant,
FrontPage, etc, you’ll notice that they automatically insert
certain META tags, such as Generator, Content-Type,
etc. into the Web page source. Other META tags are organization
or search engine specific. The RDU Metadata search engine
uses many such tags, including: contributor, custodian,
east_bounding_coordinate, north_bounding_coordinate and
others. Other obscurities are government META tags, useful
only if you are within a government intranet or system.
But then
Statistics show that only about 21% of Web pages use keyword
and description META tags. If you use them and your
competitor doesn’t, that’s one in your favor. If your competitor
is using them and you aren’t, you may now consider yourself
armed with the knowledge. META tags are something that visitors
to your Web site are usually not aware of, but ironically,
a lot of times it was those same META tags which enabled
them to find you in the first place. So for goodness’ sake,
don’t tell anyone about this….let’s just keep this our own
little secret (just kidding...make sure to send this URL
to everyone you know!).
The Law
Before we leave the topic of META tags, keep in mind that
there are several legal issues that surround the use of
these tags on your Web site. Danny Sullivan, editor of SearchEngineWatch,
has put together a page
detailing the lawsuits
revolving around META tags. At the present time
there have already been at least five such suits, mainly
focused on sites that utilized someone else's keywords within
their META tags. The largest of these suits brought a settlement
of $3 million. Bottom line: use your own keywords,
and definitely not words that someone else has trademarked.
For additional META information,
be sure to check out the WebDeveloper.com META
Tag Resource Page,
as well as Galactus' META
info page, and Vancouver's
own META
tag page. If you’d
like some assistance creating the META tags, check out Andrew
Daviel’s form-based META
tag generator.
|
There are lots of things you should
and shouldn’t do to your Meta tags. Below are ten ways that could
help your search engine rankings just by optimizing your Meta tags.
1. Don't stuff every word you can think
of in your keyword tag. Search engines can penalize you for this
by giving you poor rankings. Keep to between 10 and 15 keywords/phrases;
too many can dilute the effectiveness of your keywords/phrases.
2. Don't put words in your keyword
tag that have nothing to do with your site. If caught (and most
are) you can be penalized.
3. Don't repeat the same keyword more
than 3 times in your tags (esp. the keywords tag), including words
in all tenses, for example: run, running, ran. Search engine robots
and spiders can pick those kinds of things out. The more you repeat,
the less effective the word becomes when it comes time for the search
engine to rank your site.
4. Don't exceed the maximum number
of characters or words for your title, description and keyword tags.
5. I wouldn't use Meta refresh tags
on pages you plan to submit to search engines/directories. Some
search engines, like Infoseek, may not index the page or may
give it a poor ranking. Infoseek in particular likes to follow
links by itself; it doesn't like to be taken somewhere
automatically.
6. Your title and description tags
are very important also. I would suggest that you put your main
keywords and phrases in these tags.
7. Use keyword phrases between 2 to
3 words long. One word keywords tend to be more competitive and
aren’t usually as effective.
8. Place your most important keywords/phrases
at the beginning of your title and keyword tags.
9. It's a good idea for your title
tag to be the first tag inside the head tags on your pages.
10. Last tip, if you have a page that
you don't want to be indexed, use this tag:
META NAME="ROBOTS" CONTENT="NOINDEX"
However, not all search engines support
this tag. A way around this is to use the robots.txt convention
of blocking indexing; most of your major search engines support
this.
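The robots.txt convention mentioned above is a plain text file placed at the root of the site. A minimal sketch, with illustrative paths (the file and directory names are placeholders):

```
# robots.txt - served from the site root, e.g. /robots.txt
User-agent: *
# Keep everything under /private/ out of the index
Disallow: /private/
# Block a single page
Disallow: /temp.htm
```

Unlike the META tag, robots.txt requires access to the server's document root, which is why the META approach exists for authors who only control their own pages.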
With a little bit of time and elbow
grease you just might be able to turn a sluggish page into a major
traffic hub! Do be aware that each search engine is different, so
what might work on one, might not work on the other. You might have
to change your keywords around a few times in a different order
to get the results you're looking for. Of course, Meta tags aren’t
the only way to help get your site ranked high, but they're a good
start.
Meta Tag Generator http://www.drclue.net/F1.cgi/HTML/META/META.html
<head>
<title>title ...words</title>
<meta name="resource-type" content="document">
<meta name="distribution" content="GLOBAL">
<meta name="description" content="Discribe yr site">
<meta name="copyright" content="year">
<meta name="keywords" content="words">
<meta name="author" content="your name">
<meta http-equiv="Reply-To" content="e-mail here">
<meta http-equiv="content-type:" content="text/html; charset=ISO-8859-1">
<meta http-equiv="content-language" content="en">
</head>
http://searchenginewatch.com/
http://www.forrealfree.com/
http://www.internetday.com/archives/050699.html
Search engines are not all built
the same; the fact that your page is ranked high in one search
engine does not guarantee a high position on the others. I
would like to reveal the secrets of the major search engines.
Link popularity
Search engines like Excite, Infoseek
and Lycos will check how many links there are to your site
from others. Links boost your placement in the rankings. Infoseek
has a more complex link popularity system which places emphasis
on linking site status and relevancy.
Domain Name
The use of keywords in domain
names is favored by Altavista, Hotbot, Infoseek, Lycos and
Webcrawlers. Keywords in subdomain (secondary) names help
too. Do not lump keywords together; separate words by dashes.
Meta Tags
Meta tags are always mentioned
when it comes to search engine optimization. To a certain
extent, it is a misconception that meta tags boost
rankings everywhere. AltaVista, Excite, Lycos, Netfind, NorthernLights
and WebCrawler have low regard for meta tags and the words or
text in them; meta tags will not boost you in these
search engines.
Invisible and Tiny Text
Excite is the most "spamable"
search engine. It will index invisible and tiny text. Webcrawler
and Netfind allow invisible text and Infoseek and NorthernLights
will index tiny words. Invisible text (to web browsers) can
be achieved by specifying foreground and background in the
same color. Tiny sized text is placing text on a page in a
small font size. A page with predominantly heavy tiny text
will be treated as spam by Altavista, HotBot, Lycos, MSN and
Webcrawler, which will refuse to index such pages.
Index Comments and ALT
Some web page designers insert
keywords or phrases in various parts of the page, hoping to
give the keywords a boost. Not all search engines recognise
comments as keywords; only HotBot does. ALT text for images
is another trick commonly used. However, only AltaVista, InfoSeek
and Lycos index ALT text.
Stemming
Infoseek, Lycos and NorthernLights
will also search for variations of a word based on its stem.
For example, searching for the word "optimization" will result
in pages containing "optimize" or "optimizes".
Case Sensitive
AltaVista is the search engine
that is case sensitive. If you search for the phrase "search
engine optimization" you would get a completely different
result than from "search engines optimization".
Consistency
Consistency of keywords throughout
the page is viewed as important by Altavista, Hotbot and Webcrawler.
Hence, keywords have to be spread over the web page, particularly
at the bottom.
That is all for today, folks.
If I discover more, I will share with you again.
|
However, new HTTP headers should not be created
without checking for conflict with existing ones since it
is possible to interfere with server and proxy operation.
Content-Disposition
Source: RFC2183
- Specify application handler (Microsoft), e.g.
Content-Type: text/comma-separated-values
Content-Disposition: inline; filename=openinexcel.csv
Expires
Source: HTTP/1.1
(RFC2068)
The date and time after which the document
should be considered expired. Controls caching in HTTP/1.0.
In Netscape Navigator, a request for a document whose expires
time has passed will generate a new network request (possibly
with If-Modified-Since). An illegal Expires date, e.g. "0",
is interpreted as "now". Setting Expires to 0 may thus be
used to force a modification check at each visit.
Web robots may delete expired documents from
a search engine, or schedule a revisit.
Dates must be given in RFC850
format, in GMT. E.g. (META tag):
<META HTTP-EQUIV="expires" CONTENT="Wed, 26 Feb 1997 08:21:57 GMT">
or (HTTP header):
Expires: Wed, 26 Feb 1997 08:21:57 GMT
In HTTP 1.0, an invalid value (such as "0")
may be used to mean "immediately".
Note: While the Expires HTML META tag
appears to work properly with Netscape Navigator, other browsers
may ignore it, and it is ignored by Web proxies. Use of the
equivalent HTTP header, as supported by e.g. Apache, is more
reliable.
See also CacheNow
for discussion about cache control, page expiry, etc.
Pragma
Controls caching in HTTP/1.0. Value must
be "no-cache". Issued by browsers during a Reload request,
and in a document prevents Netscape Navigator caching a page
locally.
Content-Type
Source: HTTP/1.0
(RFC1945)
The HTTP content type may be extended to give
the character set. As an HTTP/1.0 header, this unfortunately
breaks older browsers. As a META tag, it causes Netscape Navigator
to load the appropriate charset before displaying the page.
E.g.
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=ISO-2022-JP">
Content-Script-Type
E.g.
<META HTTP-EQUIV="Content-Script-Type" CONTENT="text/javascript">
Source: HTML
4.0
Specifies the default scripting language in
a document. See MIMETYPES
for applicable values.
Content-Style-Type
E.g.
<META HTTP-EQUIV="Content-Style-Type" CONTENT="text/css">
Source: HTML
4.0
Specifies the default style sheet language
for a document.
Content-Language
Source: HTTP/1.0,
RFC1766
May be used to declare the natural language
of the document. May be used by robots to categorize by language.
The corresponding Accept-Language header (sent by a
browser) causes a server to select an appropriate natural
language document. E.g.
<META HTTP-EQUIV="Content-Language" CONTENT="en-GB">
or (HTTP header)
Content-language: en-GB
Languages are specified as the pair (language-dialect);
here, British English.
Refresh
Source: Netscape
Specifies a delay in seconds before the browser
automatically reloads the document. Optionally, specifies
an alternative URL to load. E.g.
<META HTTP-EQUIV="Refresh" CONTENT="3;URL=http://www.some.org/some.html">
or (HTTP header)
Refresh: 3;URL=http://www.some.org/some.html
In Netscape Navigator, has the same effect
as clicking "Reload"; i.e. issues an HTTP GET with Pragma:
no-cache (and If-Modified-Since header if a cached copy exists).
Note: If a script is executed which reloads
the current document, the action of the Refresh tag may be
undefined. (e.g. <body onLoad= "document.location='otherdoc.doc'>)
Window-target
Source: Jahn
Rentmeister
Specifies the named window of the current
page; can be used to stop a page appearing in a frame with
many (not all) browsers. E.g.
<META HTTP-EQUIV="Window-target" CONTENT="_top">
or (HTTP header)
Window-target: _top
Ext-cache
Source: Netscape
Defines the name of an alternate cache to
Netscape Navigator. E.g.
<META HTTP-EQUIV="Ext-cache"
CONTENT="name=/some/path/index.db; instructions=User Instructions">
Set-Cookie
Source: Netscape
Navigator
Sets a "cookie" in Netscape Navigator. Values
with an expiry date are considered "permanent" and will be
saved to disk on exit. E.g.
<META HTTP-EQUIV="Set-Cookie"
CONTENT="cookievalue=xxx;expires=Friday, 31-Dec-99 23:59:59 GMT; path=/">
PICS-Label
Source: PICS
Platform for Internet Content Selection.
Typically used to declare a document's rating in terms of
adult content (sex, violence, etc.) although the scheme is
very flexible and may be used for other purposes.
See also the PICS
HOWTO. For PICS for Medical data,
see medpics.org.
Cache-Control
Source: HTTP/1.1
Specifies the action of cache agents. Possible
values:
- Public - may be cached in public shared
caches
- Private - may only be cached in private
cache
- no-cache - may not be cached
- no-store - may be cached but not archived
Note that browser action is undefined using
these headers as META tags.
Vary
Source: HTTP/1.1
Specifies that alternates are available. E.g.
<META HTTP-EQUIV="Vary" CONTENT="Content-language">
or (HTTP header)
Vary: Content-language
implies that if a header Accept-Language
is sent an alternate form may be selected.
Lotus
The Lotus publishing tool generates Bulletin-Date
and Bulletin-Text attributes. Bulletin-Text contains a document
description.
NAME attributes
META tags with a name attribute are
used for other types which do not correspond to HTTP headers.
Sometimes the distinction is blurred; some agents may interpret
tags such as "keywords" declared as either "name" or as "http-equiv".
Robots
Source: Spidering
Controls Web robots on a per-page basis. E.g.
<META NAME="ROBOTS" CONTENT="NOINDEX,FOLLOW">
Robots may traverse this page but not index
it.
Altavista
supports:
- NOINDEX prevents anything on the page from
being indexed.
- NOFOLLOW prevents the crawler from following
the links on the page and indexing the linked pages.
- NOIMAGEINDEX prevents the images on the
page from being indexed but the text on the page can still
be indexed.
- NOIMAGECLICK prevents the use of links
directly to the images, instead there will only be a link
to the page.
Description
Source: Spidering,
AltaVista,
Infoseek.
A short, plain language description of the
document. Used by search engines to describe your document.
Particularly important if your document has very little text,
is a frameset, or has extensive scripts at the top. E.g.
<META NAME="description" CONTENT="Citrus fruit wholesaler.">
Keywords
Source: AltaVista,
Infoseek.
Keywords used by search engines to index your
document in addition to words from the title and document
body. Typically used for synonyms and alternates of title
words. E.g.
<META NAME="keywords" CONTENT="oranges, lemons, limes">
Author
Source: Publishing
tools, e.g. Netscape
Gold
Typically the unqualified author's name.
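Following the "E.g." pattern of the other entries, an Author tag might look like this (the name is a placeholder):

```html
<META NAME="author" CONTENT="Jane Smith">
```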
Generator
Source: Publishing
tools, e.g. Netscape
Gold, FrontPage, etc.
Typically the name and version number of a
publishing tool used to create the page. Could be used by
tool vendors to assess market penetration.
Formatter
Source: Publishing
tools - Microsoft
FrontPage
Classification
Source: Netscape
Gold
Undefined.
Copyright
Source: Publishing
tools
Typically an unqualified copyright statement.
Rating
Source:
mk-metas, Weburbia
(safe for kids)
Simple content rating.
VW96.ObjectType
Source:
mk-metas
Based on an early version of the Dublin
Core report, using a defined schema of document types such as
FAQ and HOWTO. Defined by Queen's University of Belfast; a
restricted set including e.g. "Contact Information" and "Image".
Dublin Core
DC.TITLE, DC.CREATOR, DC.SUBJECT, DC.DESCRIPTION, DC.PUBLISHER,
DC.CONTRIBUTORS, DC.DATE, DC.TYPE, DC.FORMAT, DC.IDENTIFIER,
DC.SOURCE, DC.LANGUAGE, DC.RELATION, DC.COVERAGE, DC.RIGHTS
Dublin Core Elements. See the Reference
Description
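As an illustration of how a few of these elements might appear in a page head, the sketch below emits DC.* META tags in the same style as the examples elsewhere on this page. The helper function and the sample values are hypothetical:

```python
# Illustrative sketch: emit Dublin Core META tags from a dictionary.
# Element names follow the DC.* list above; the helper and the sample
# document details are hypothetical.
def dublin_core_tags(elements):
    return "\n".join(
        f'<META NAME="DC.{name}" CONTENT="{value}">'
        for name, value in elements.items()
    )

print(dublin_core_tags({
    "TITLE": "Citrus Growers Handbook",
    "CREATOR": "A. Grower",
    "LANGUAGE": "en",
}))
```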
HTML 4.0
The HTML
4.0 Specification is now available.
HTdig
htdig-keywords, htdig-noindex
HTdig
tags. See the HTdig
META page.
DC-CHEM
DC-CHEM. See Chemical
Metadata extensions.
HTdig notification
htdig-email, htdig-notification-date, htdig-email-subject
- see HTdig
notification.
searchBC
searchBC
is a regional search engine which uses a number of common
tags such as Keywords. revisit
is used as a hint for scheduling revisits.
Apple META tags
Author-Corporate, Author-Personal, Publisher-Email, Identifier-URL,
Identifier, Coverage, Bookmark
Kodak
EKBU, EKdocType, EKdocOwner, EKdocTech, EKreviewDate,
EKArea - as used by Eastman
Kodak.
IBM
ABSTRACT, CC, ALIAS, OWNER - as used by IBM.
Page-Enter, Page-Exit, Site-Enter, Site-Exit
Source: Microsoft
DHTML (Filters & Transitions)
Defines a special-effects page transition; e.g.
<meta http-equiv="Page-Enter"
content="revealTrans(Duration=3.0,Transition=2)">
See e.g. Transitions
Between Pages (Ruleweb)
SHOE
Instance-Delegate, Instance-Key - see the SHOE
Project at the University of Maryland
(Simple HTML Ontology Extensions)
Microsoft Word
Microsoft Word 97 supports a number of HTML
META attributes in the HTML export option. Content-Type
is used to set the charset, Generator is set and various
other tags may optionally be set.
SIC87
1987 US SIC (Standard Industry Codes), used
in Vancouver Webpages
Classifieds. See US
SIC Codes
RDU
The Metadata
Search Engine lists many tags, including
the following:
- contributor
- custodian
- custodian_contact
- custodian_contact_position
- east_bounding_coordinate
- north_bounding_coordinate
- relation
- reply-to
- south_bounding_coordinate
- west_bounding_coordinate
Other Organisations
Agent Markup Language
See the AML
pages.
- Agent Markup Language Version
GeoCities
See GeoCities
categorize.
GILS
Government Information Locator Service - a
US government initiative.
IMS
See the IMS
Project homepage.
Fireball
The German search engine Fireball.
See the metadata
page and meta-tag
generator. Supports Author, Publisher,
Keywords, Description plus page-topic, page-type.
Geotags
Geographic
Tagging for Resource Discovery.
- Geo.Region - Geographic regions from ISO3166-2
- Geo.Placename - Free Text place name
- Geo.Position - Latitude;Longitude in decimal
degrees using the WGS84 datum.
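Following the convention above, a Geo.Position value is the latitude and longitude in decimal degrees separated by a semicolon. The helper below is a hypothetical sketch, not part of the Geotags proposal itself:

```python
# Sketch: format a Geo.Position META tag as "latitude;longitude" in
# decimal degrees (WGS84 datum), per the convention described above.
# The helper function is hypothetical.
def geo_position_meta(lat, lon):
    return f'<META NAME="Geo.Position" CONTENT="{lat};{lon}">'

# Approximate coordinates for Vancouver, BC, used for illustration only.
print(geo_position_meta(49.25, -123.1))
# <META NAME="Geo.Position" CONTENT="49.25;-123.1">
```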
Miscellaneous
- Version
- Template
- Operator
- Creation
- Host
- Document
- Subject
- Build
- Distribution - global, local, iu
- Resource-type
- document (for ALIWeb)
- Location (geographic location; from Sympatico)
Deprecated:
- Random Text (e.g., META NAME="Tom Jones")
|
Search Engines
For the latest news and reviews about these
services, see the Search
Engine Report archives.
AOL Search
AOL Search allows its members to search across the web and
AOL's own content from one place. The "external" version,
listed above, does not list AOL content. The main listings
for categories and web sites come from the Open Directory
(see below). Inktomi (see below) also provides crawler-based
results, as backup to the directory information. Before the
launch of AOL Search in October 1999, the AOL search service
was Excite-powered AOL NetFind.
AltaVista
AltaVista is consistently one of the largest search engines
on the web, in terms of pages indexed. Its comprehensive coverage
and wide range of power searching commands make it a particular
favorite among researchers. It also offers a number of features
designed to appeal to basic users, such as "Ask AltaVista"
results, which come from Ask Jeeves (see below), and directory
listings from the Open Directory and LookSmart. AltaVista
opened in December 1995. It was owned by Digital, then run
by Compaq (which purchased Digital in 1998), then spun off
into a separate company which is now controlled by CMGI. AltaVista
also operates the Raging Search service, below.
Ask Jeeves
Ask Jeeves is a human-powered search service that aims to
direct you to the exact page that answers your question. If
it fails to find a match within its own database, then it
will provide matching web pages from various search engines.
The service went into beta in mid-April 1997 and opened fully
on June 1, 1997. Some results from Ask Jeeves also appear
within AltaVista.
Direct Hit
Direct Hit measures what people click on in the search results
presented at its own site and at its partner sites, such as
HotBot. Sites that get clicked on more than others rise higher
in Direct Hit's rankings. Thus, the service dubs itself a
"popularity engine." Aside from running its own web site,
Direct Hit provides the main results which appear at HotBot
(see below) and is available as an option to searchers at
MSN Search. Direct Hit is owned by Ask Jeeves (above). See
the Using
Direct Hit Results page to learn more
about Direct Hit.
Excite
Excite is one of the more popular search services on the web.
It offers a fairly large index and integrates non-web material
such as company information and sports scores into its results,
when appropriate. Excite was launched in late 1995. It grew
quickly in prominence and consumed two of its competitors,
Magellan in July 1996, and WebCrawler in November 1996. These
continue to run as separate services.
FAST Search
Formerly called All The Web, FAST Search aims to index the
entire web. It was the first search engine to break the 200
million web page index milestone and consistently has one
of the largest indexes of the web. The Norwegian company behind
FAST Search also powers some of the results that appear at
Lycos (see below). FAST Search launched in May 1999.
Go / Infoseek
Go is a portal site produced by Infoseek and Disney. It offers
portal features such as personalization and free e-mail, plus
the search capabilities of the former Infoseek search service,
which has now been folded into Go. Searchers will find that
Go consistently provides quality results in response to many
general and broad searches, thanks to its ESP search algorithm.
It also has an impressive human-compiled directory of web
sites. Go officially launched in January 1999. It is not related
to GoTo, below. The former Infoseek service launched in early
1995.
GoTo
Unlike the other major search engines, GoTo sells its main
listings. Companies can pay money to be placed higher in the
search results, which GoTo feels improves relevancy. Non-paid
results come from Inktomi. GoTo launched in 1997 and incorporated
the former University of Colorado-based World Wide Web Worm.
In February 1998, it shifted to its current pay-for-placement
model and soon after replaced the WWW Worm with Inktomi for
its non-paid listings. GoTo is not related to Go (Infoseek).
Google
Google is a search engine that makes heavy use of link popularity
as a primary way to rank web sites. This can be especially
helpful in finding good sites in response to general searches
such as "cars" and "travel," because users across the web
have in essence voted for good sites by linking to them. The
system works so well that Google has gained widespread praise
for its high relevancy. Google also has a huge index of the
web and provides some results to Yahoo and Netscape Search.
HotBot
HotBot is a favorite among researchers due to its many power
searching features. In most cases, HotBot's first page of
results comes from the Direct Hit service (see above), and
then secondary results come from the Inktomi search engine,
which is also used by other services. It gets its directory
information from the Open Directory project (see below). HotBot
launched in May 1996 as Wired Digital's entry into the search
engine market. Lycos purchased Wired Digital in October 1998
and continues to run HotBot as a separate search service.
IWon
Backed by US television network CBS, iWon has a directory
of web sites generated automatically by Inktomi, which also
provides its more traditional crawler-based results. iWon
gives away daily, weekly and monthly prizes in a marketing
model unique among the major services. It launched in Fall
1999.
Inktomi
Originally, there was an Inktomi
search engine at UC Berkeley. The creators then formed their
own company with the same name and created a new Inktomi index,
which was first used to power HotBot. Now the Inktomi index
also powers several other services. All of them tap into the
same index, though results may be slightly different. This
is because Inktomi provides ways for its partners to use a
common index yet distinguish themselves. There is no way to
query the Inktomi index directly, as it is only made available
through Inktomi's partners with whatever filters and ranking
tweaks they may apply.
LookSmart
LookSmart is a human-compiled directory of web sites. In addition
to being a stand-alone service, LookSmart provides directory
results to MSN Search, Excite and many other partners. Inktomi
provides LookSmart with search results when a search fails
to find a match from among LookSmart's reviews. LookSmart
launched independently in October 1996, was backed by Reader's
Digest for about a year, and then company executives bought
back control of the service.
Lycos
Lycos started out as a search engine, depending on listings
that came from spidering the web. In April 1999, it shifted
to a directory model similar to Yahoo. Its main listings come
from the Open Directory project, and then secondary results
come from the FAST Search engine. Some Direct Hit results
are also used. In October 1998, Lycos acquired the competing
HotBot search service, which continues to be run separately.
MSN Search
Microsoft's MSN Search service is a LookSmart-powered directory
of web sites, with secondary results that come from Inktomi.
RealNames and Direct Hit data are also made available. MSN
Search also offers a unique way for Internet Explorer 5 users
to save past searches.
NBCi
NBCi is a human-compiled directory of web sites, supplemented
by search results from Inktomi. Like LookSmart, it aims to
challenge Yahoo as the champion of categorizing the web. NBCi
launched in late 1997 and is backed by NBC. It was formerly
known as Snap but had a name change in late 2000.
Netscape Search
Netscape Search's results come primarily from the Open Directory
and Netscape's own "Smart Browsing" database, which does an
excellent job of listing "official" web sites. Secondary results
come from Google. At the Netscape Netcenter portal
site, other search engines are also
featured.
Northern Light
Northern Light is another favorite search engine among researchers.
It features a large index of the web, along with the ability
to cluster documents by topic. Northern Light also has a set
of "special collection" documents that are not readily accessible
to search engine spiders. There are documents from thousands
of sources, including newswires, magazines and databases.
Searching these documents is free, but there is a charge of
up to $4 to view them. There is no charge to view documents
on the public web -- only for those within the special collection.
Northern Light opened to general use in August 1997.
Open Directory
The Open Directory uses volunteer editors to catalog the web.
Formerly known as NewHoo, it was launched in June 1998. It
was acquired by Netscape in November 1998, and the company
pledged that anyone would be able to use information from
the directory through an open license arrangement. Netscape
itself was the first licensee. Lycos and AOL Search also make
heavy use of Open Directory data, while AltaVista and HotBot
prominently feature Open Directory categories within their
results pages.
Raging Search
Operated by AltaVista, Raging Search uses the same core index
as AltaVista and virtually the same ranking algorithms. Why
use it? AltaVista offers it for those who want fast search
results, with no portal features getting in the way.
RealNames
The RealNames system is meant to be an easier-to-use alternative
to the current web site addressing system. Those with RealNames-enabled
browsers can enter a word like "Nike" to reach the Nike web
site. To date, RealNames has had its biggest success through
search engine partnerships. See the Using
RealNames Links page for more information
about RealNames.
WebCrawler
WebCrawler has the smallest index of any major search engine
on the web -- think of it as Excite Lite. The small index
means WebCrawler is not the place to go when seeking obscure
or unusual material. However, some people may feel that by
having indexed fewer pages, WebCrawler provides less overwhelming
results in response to general searches. WebCrawler opened
to the public on April 20, 1994. It was started as a research
project at the University of Washington. America Online purchased
it in March 1995, and it remained the online service's preferred
search engine until November 1996, when Excite, a WebCrawler
competitor, acquired the service. Excite continues to run
WebCrawler as an independent search engine.
Yahoo
Yahoo is the web's most popular search service and has a well-deserved
reputation for helping people find information easily. The
secret to Yahoo's success is human beings. It is the largest
human-compiled guide to the web, employing about 150 editors
in an effort to categorize the web. Yahoo has over 1 million
sites listed. Yahoo also supplements its results with those
from Google (beginning in July 2000, when Google takes over
from Inktomi). If a search fails to find a match within Yahoo's
own listings, then matches from Google are displayed. Google
matches also appear after all Yahoo matches have first been
shown. Yahoo is the oldest major web site directory, having
launched in late 1994.
WebTop
WebTop is a crawler-based search engine that claims an extremely
large index. In addition to listing web pages, WebTop also
provides information from news sources, company information
and WAP-related content in its search results. The company
also offers the WebCheck tool (formerly called k-check), which
is an Alexa-like search and discovery tool. WebTop is backed
by Bright Station, the company that acquired some search technology
and other resources from the former Dialog Corporation. The
Dialog search service itself is now owned by a different company,
the Thomson Corporation.
|
Five Easy Steps to Setting Up Shop Online
Michael J. Miller, Editor-In-Chief
PC Magazine
At first glance, it may seem impossibly
complex to set up an online store. But the actual process isn’t
quite so daunting. In preparation for PC Magazine’s recent
evaluation of seven online commerce software packages, we asked
e-commerce expert Mark Childers to lay it all out for us in a few
easy steps. Here’s what he told us:
Step 1. Web Server: You host
your Web storefront on a Web server. This consists of the software
that will serve your application to site visitors and the hardware
that will host your server and application. Typical hardware
should have 128 MB of RAM and anywhere from 150 MB to 1 GB of
free hard disk space. Server software
options vary depending on which storefront application you've used,
but Microsoft Internet Information Server, Netscape Enterprise Server
and the freeware Apache are all good solutions. You can either host
it yourself, or find a hosting company that will host it for you,
and may offer other services as well.
Step 2. Payment Server: In order
to accept credit cards, you must open an Internet merchant account
with a bank. You can't communicate directly with your bank, so you
need to submit secure credit-card transactions to a transaction-processing
service such as PaymentNet or CyberCash. These services will, in
turn, send transactions to a payment-processing network like FirstData
with your merchant account information for authorization. Once the
product ships, the transaction is submitted for settlement and the
payment-processing network charges the customer's credit card and
submits payment to your bank account.
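The authorize-then-settle sequence described in this step can be sketched in code. All class and method names below are hypothetical; real services such as PaymentNet and CyberCash each define their own APIs:

```python
# Hedged sketch of the authorize-then-settle flow described in Step 2.
# All names here are hypothetical, not the API of any real payment gateway.
class Transaction:
    def __init__(self, order_id, amount):
        self.order_id = order_id
        self.amount = amount
        self.state = "new"

    def authorize(self):
        # At order time: reserve funds on the card; no money moves yet.
        self.state = "authorized"

    def settle(self):
        # At ship time: capture the payment for a prior authorization.
        if self.state != "authorized":
            raise RuntimeError("settle requires prior authorization")
        self.state = "settled"

txn = Transaction("order-42", 19.95)
txn.authorize()   # when the customer places the order
txn.settle()      # when the product ships
print(txn.state)  # settled
```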
Step 3. Order Fulfillment: Depending
on the type of products you’re selling, you'll fulfill orders either
via download for electronic goods, or via physical shipment for
hard goods. One of the benefits of downloadable products is that
you can submit the credit card for authorization and settlement
immediately. For shipped products, credit cards can be authorized
but not submitted for settlement until the product ships.
If the products you’re selling can be
downloaded over the Internet, you need a mechanism such as an FTP
site. Your Web server likely includes
FTP capabilities. For hard goods, you need to fulfill the order
and ship it directly to the customer. You may want to tie your system
into FedEx or UPS
so that customers can get live shipping
estimates when placing their orders and can use your site to check
on the shipping status of their orders.
Step 4. Site Promotion: For
your site to be successful, people need to visit it. Generating
traffic can be a daunting task. You may want to get tools for search-engine
submission and monitoring, such as WebPosition Gold. A banner exchange
service such as Link Exchange is a low-cost way to generate site
traffic and make your site look more professional. In exchange for
displaying other companies' banners on your site via the exchange
service, your banner will be displayed on other participating company
sites.
Step 5. Site Monitoring and Analysis:
Keeping track of who's coming to your site, how they're navigating
it, and how they found it (via a banner ad or a search engine, for
example) is key to determining how your site promotion efforts are
faring. You'll need a log analysis tool, such as WebTrends, that
will give you reports analyzing your traffic so that you can make
any changes to the site or tweak your marketing efforts.
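As a minimal sketch of the raw material such a tool works from, the referrer field of an Apache combined-format access log can be tallied with a few lines of standard-library code; no WebTrends API is implied:

```python
# Minimal sketch: count referrers from Apache combined-format log lines,
# the raw data a log analysis tool turns into traffic reports.
import re
from collections import Counter

# The referrer is the second-to-last quoted field in the combined format.
REFERRER = re.compile(r'"([^"]*)" "[^"]*"$')

def count_referrers(lines):
    counts = Counter()
    for line in lines:
        match = REFERRER.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    '1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.0" 200 2326 '
    '"http://www.example.com/banner" "Mozilla/4.0"',
]
print(count_referrers(sample))
```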
For much more about online stores and
the software products that make them possible, be sure to see PC
Magazine’s evaluation of Web storefront tools.
|