If the owner of a web resource wants to announce every new piece of material, they set up an RSS feed. A script on the site watches for new content and, when it appears, takes its title and part (possibly all) of the body and records that information in XML format.
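For illustration, a minimal RSS 2.0 document could look like the sketch below; the site name, URLs and item are made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site News</title>
    <link>https://example.com/</link>
    <description>Announcements of new materials</description>
    <item>
      <title>New article published</title>
      <link>https://example.com/articles/new-article</link>
      <description>A short excerpt of the article content...</description>
      <pubDate>Mon, 01 Jan 2024 10:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
```

Each new piece of material becomes another `<item>` in the channel, so a subscriber only ever needs this one file to learn what has changed on the site.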
What is an RSS aggregator?
An RSS aggregator is essentially a parser: it reads one or more feeds specified in its settings, that is, the XML documents described above, and assembles their entries into a readable digest. That is why such programs are called “readers” (a minimal sketch of one follows the list below). They come in two kinds:
- client-side;
- installed on web resources.
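As a rough sketch of what such a “reader” does, the following Python script fetches a feed and prints the title and link of every item. The feed URL is hypothetical, and a real aggregator would add scheduling, deduplication of already-seen items and a user interface on top of this:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed URL; any RSS 2.0 feed configured in the reader's settings would do.
FEED_URL = "https://example.com/rss.xml"

def read_feed(url):
    """Download an RSS feed and yield (title, link, description) for each item."""
    with urllib.request.urlopen(url) as response:
        xml_data = response.read()
    root = ET.fromstring(xml_data)      # the <rss> root element
    for item in root.iter("item"):      # one <item> per announced material
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        description = item.findtext("description", default="")
        yield title, link, description

if __name__ == "__main__":
    for title, link, _ in read_feed(FEED_URL):
        print(f"{title}\n  {link}")
```

A client-side reader runs this kind of loop on the user's machine; an aggregator installed on a web resource does the same on the server and republishes the results as part of its own pages.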
The average Internet user does not need to build an RSS aggregator. One can be installed as a browser add-on (widget), and Firefox already has an aggregator built in: if you open a web page whose content is published as an RSS feed, the “subscribe to news feeds” button becomes active in the Bookmarks panel.
Aggregators installed on web resources publish an announcement of the material in the form of a link to it, sometimes together with part of the content. Such a “reader” makes the receiving site more interesting, keeping it filled with relevant and frequently updated information. There are also aggregators on Internet portals: a “News” service may have a “My News” subsection where the user subscribes to the delivery of information on topics of interest.
Why is it called a “subscription”?
The term “RSS subscription” is used loosely, by analogy with a postal subscription to newspapers and magazines. News delivered via an RSS channel is free of charge, and no sign-up or commitment is required from the user.
RSS and SEO
An RSS feed actively participates in the indexing of a site. A user who subscribes to a site’s news becomes part of its regular audience and increases its traffic. Links that lead from outside resources to a site with a feed are not treated by search engine robots as “paid” links, so they can raise TIC and PageRank. However, the belief that a channel of fresh news is by itself the key to rapid “promotion” of a site is incorrect.
Search engine robots may decide that news announcements taken from an RSS channel belong to the site where they first encountered them. The conflict over whose material it is gets resolved in favor of the site on which the news was indexed first.
When the robot then follows the link to the source site, which was indexed later, it treats what it finds as plagiarism. The result is a drop in ranking and even a possible ban if most of the material is considered plagiarized.