METHODS USED IN SEO STUDIES
Getting indexed
The leading search engines, such as Google, Bing, and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically.
Search engine crawlers may look at a number of different factors when crawling a site, and not every page is indexed by the search engines. The distance of a page from the root directory of a site may be one factor in whether or not it gets crawled.
Today, most people search on Google using a mobile device. In November 2016, Google announced a major change to the way they crawl websites and started to make their index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in their index. In May 2019, Google updated the rendering engine of their crawler to be the latest version of Chromium (74 at the time of the announcement). Google indicated that they would regularly update the Chromium rendering engine to the latest version. In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service. The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings. Google ran evaluations and felt confident the impact would be minor.
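For illustration only (an approximate sketch rather than an exact string Google guarantees), a Googlebot smartphone User-Agent of that era followed roughly this pattern, with the Chrome version token kept in step with the rendering service:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

Server-side code that matched on the full string, rather than on the "Googlebot" token, is the kind of code that could break when the Chrome version token changed.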
Preventing crawling
To avoid
undesirable content in the search indexes, webmasters can instruct spiders not
to crawl certain files or directories through the standard robots.txt file in
the root directory of the domain. Additionally, a page can be explicitly
excluded from a search engine's database by using a meta tag specific to robots
(usually <meta name="robots" content="noindex">).
When a search engine visits a site, the robots.txt located in the root
directory is the first file crawled. The robots.txt file is then parsed and
will instruct the robot as to which pages are not to be crawled. As a search
engine crawler may keep a cached copy of this file, it may on occasion crawl
pages a webmaster does not wish crawled. Pages typically prevented from being
crawled include login-specific pages such as shopping carts and user-specific
content such as search results from internal searches. In March 2007, Google
warned webmasters that they should prevent indexing of internal search results
because those pages are considered search spam.
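As a brief sketch of how this is expressed, a robots.txt blocking the kinds of pages described above might contain the following directives (the /cart/ and /search/ paths are placeholders, not paths any particular site uses):

User-agent: *
Disallow: /cart/
Disallow: /search/

Note that robots.txt only discourages crawling; a URL that is blocked but linked from elsewhere can still end up indexed, so a page that must never appear in results is better handled with the noindex meta tag mentioned above.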
Increasing prominence
A variety
of methods can increase the prominence of a webpage within the search results.
Cross-linking between pages of the same website to provide more links to important pages may improve their visibility. Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL canonicalization of web pages accessible via multiple URLs, using the canonical link element or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
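As a sketch of how these elements are typically expressed (example.com and the wording are placeholders), the title tag, meta description, and canonical link element all live in the page's <head>:

<head>
  <title>Blue Widgets - Example Store</title>
  <meta name="description" content="Hand-made blue widgets, shipped worldwide.">
  <link rel="canonical" href="https://www.example.com/blue-widgets">
</head>

Alternatively, duplicate URLs can be consolidated with a server-side 301 redirect; with Apache, for instance, a directive such as Redirect 301 /blue-widgets-old https://www.example.com/blue-widgets sends both visitors and crawlers to the canonical address.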
White hat versus black hat techniques
SEO
techniques can be classified into two broad categories: techniques that search
engine companies recommend as part of good design ("white hat"), and
those techniques of which search engines do not approve ("black
hat"). White hats tend to produce
results that last a long time, whereas black hats anticipate that their sites
may eventually be banned either temporarily or permanently once the search
engines discover what they are doing.
An SEO
technique is considered white hat if it conforms to the search engines'
guidelines and involves no deception. As the search engine guidelines are not
written as a series of rules or commandments, this is an important distinction
to note. White hat SEO is not just about following guidelines but is about
ensuring that the content a search engine indexes and subsequently ranks is the
same content a user will see. White hat advice is generally summed up as
creating content for users, not for search engines, and then making that
content easily accessible to the online "spider" algorithms, rather
than attempting to trick the algorithm away from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.
Black hat
SEO attempts to improve rankings in ways that are disapproved of by the search
engines, or involve deception. One black hat technique uses hidden text, either
as text colored similarly to the background, placed in an invisible div, or positioned off-screen. Another method serves a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking. Another category sometimes used is grey hat SEO. This sits in between the black hat and white hat approaches: the methods employed avoid the site being penalized, but they do not go as far as producing the best content for users. Grey hat SEO is focused entirely on improving search engine rankings.
Search
engines may penalize sites they discover using black or grey hat methods,
either by reducing their rankings or eliminating their listings from their
databases altogether. Such penalties can be applied either automatically by the
search engines' algorithms, or by a manual site review. One example was the
February 2006 Google removal of both BMW Germany and Ricoh Germany for use of
deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's search engine results page.