Rating, ranking and recommending: Three R's for the internet age

This holiday season, when we Google for the most trending gifts, compare different items on Amazon or take a break to watch a holiday movie on Netflix, we are making use of what might be called “the three R’s” of the Internet Age: rating, ranking and recommending.

Much like the traditional “three R’s” of education – “reading, ’riting and ’rithmetic” – no modern education is complete without understanding how websites’ algorithms combine, process and synthesize information before presenting it to us.

As we explore in our new book, “The Power of Networks: Six Principles that Connect Our Lives,” the three tasks of rating, ranking and recommending are interdependent, though that may not be obvious at first. Before we can rank a set of items, we need some measure by which they can be ordered. This is really a rating of each item’s quality according to some criterion.

With ranked lists in hand, we may turn around and make recommendations about specific items to people who may be interested in purchasing them. This interrelationship highlights the importance of how the quality and attractiveness of an item is quantified into a rating in the first place.

Ranking

What consumers and internet users often call “rating,” tech companies may call “scoring.” This is key to, for example, how Google’s search engine returns high-quality links at the top of its search results, with the most relevant information usually contained in the first page of responses. When a person enters a search query, Google assigns two main scores to each page in its database of trillions, and uses these to generate the order for its results.

The first of these scores is a “relevance score,” a combination of dozens of factors that measure how closely related the page and its content are to the query. For example, it takes into account how prominently placed search keywords are on the result page. The second is an “importance score,” which captures how the network of webpages is connected via hyperlinks, quantifying how important each page is.
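The classic idea behind an importance score of this kind is PageRank: a page is important if important pages link to it. The sketch below is a minimal power-iteration version of that idea on a toy three-page web; Google's production system uses far more signals and is proprietary.

```python
# Minimal PageRank sketch: repeatedly let each page pass a share of its
# rank along its outgoing links, with a damping factor modeling a
# "random surfer" who sometimes jumps to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank over all pages
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy link graph: A links to B and C, B links to C, C links back to A.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(web)
```

Here C ends up with the highest importance, since both A and B link to it, while B, with only one inbound link from a page that splits its vote, ranks lowest.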

The combination of these two scores, along with other information, gives a rating for each page, quantifying how useful it might be to the end user. Higher ratings will be placed toward the top of the search results. These are the pages Google is implicitly recommending that the user visit.
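A hypothetical sketch of that combination step: blend each page's relevance score for the current query with its query-independent importance score, then order results by the blended rating. The scores, pages and equal weighting below are all made-up assumptions; Google's actual formula is proprietary.

```python
# Made-up scores for three candidate pages for one query.
relevance = {"a.com": 0.9, "b.com": 0.4, "c.com": 0.7}   # query-dependent
importance = {"a.com": 0.2, "b.com": 0.9, "c.com": 0.5}  # query-independent

def rating(page, weight=0.5):
    # Equal weighting of the two scores is an illustrative assumption.
    return weight * relevance[page] + (1 - weight) * importance[page]

# Pages with the highest blended rating are shown first.
results = sorted(relevance, key=rating, reverse=True)
```

Note that a.com has the best relevance but the lowest importance, so it falls to the bottom of the blended ordering.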

Rating

The three Rs also pervade online retail. Amazon and other e-commerce sites allow customers to enter reviews for products they have purchased. The star ratings contained in these reviews are usually aggregated into a single number representing customers’ overall opinion. The principle behind this is called “the wisdom of crowds,” the assumption that combining many independent opinions will be more reflective of reality than any single individual’s evaluation.

Key to the wisdom of crowds is that the reviews accurately reflect customers’ experiences, and are not biased or influenced by, say, the manufacturer adding a series of positive assessments to its own items. Amazon has mechanisms in place to screen out these sorts of reviews – for example, by requiring a purchase to have been made from a given account before it can submit a review. Amazon then averages the star ratings for the reviews that remain.
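The filter-then-average step can be sketched in a few lines. The review fields below are hypothetical; the point is simply that unverified reviews are screened out before the star ratings are averaged.

```python
# "Wisdom of crowds" sketch: keep only reviews tied to a verified
# purchase, then average the remaining star ratings.

def average_stars(reviews):
    verified = [r["stars"] for r in reviews if r["verified_purchase"]]
    return sum(verified) / len(verified) if verified else None

reviews = [
    {"stars": 5, "verified_purchase": True},
    {"stars": 4, "verified_purchase": True},
    {"stars": 1, "verified_purchase": False},  # screened out before averaging
]
```

With the unverified one-star review removed, the product's aggregate rating is 4.5 rather than about 3.3.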

Averaging ratings is fairly straightforward. But it’s more complicated to figure out how to effectively rank products based on those ratings. For example, is an item that has 4.0 stars based on 200 reviews better than one that has 4.5 stars but only from 20 reviews? Both the average rating and sample size need to be accounted for in the ranking score.
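One common way to fold sample size into the ranking score (a standard technique, not necessarily Amazon's) is a Bayesian average: blend each product's mean rating with a site-wide prior, so that products with few reviews are pulled toward the prior. The prior mean and weight below are illustrative assumptions.

```python
# Bayesian-average sketch: a product needs many reviews before its own
# mean dominates the assumed site-wide prior of 3.0 stars.

def bayesian_average(mean_rating, num_reviews, prior_mean=3.0, prior_weight=25):
    return (prior_weight * prior_mean + num_reviews * mean_rating) / (
        prior_weight + num_reviews
    )

score_many = bayesian_average(4.0, 200)  # 4.0 stars from 200 reviews
score_few = bayesian_average(4.5, 20)    # 4.5 stars from only 20 reviews
```

Under these assumptions, the 4.0-star item backed by 200 reviews outranks the 4.5-star item backed by 20, answering the question posed above: the larger sample is more trustworthy evidence.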

There are even more factors that may be taken into consideration, such as reviewer reputation (ratings based on reviewers with higher reputations may be trusted more) and rating disparity (products with widely varying ratings may be demoted in the ordering). Amazon may also present products to different users in varying orders based on their browsing history and records of previous purchases on the site.
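Both of those refinements can be sketched together: weight each star rating by the reviewer's reputation, then subtract a penalty proportional to how widely the ratings vary. The reputation values and the penalty coefficient are illustrative assumptions, not any retailer's actual formula.

```python
import statistics

def weighted_score(reviews):
    """reviews: list of (stars, reputation) pairs, reputation in (0, 1]."""
    total_weight = sum(rep for _, rep in reviews)
    # Reputation-weighted mean: trusted reviewers count for more.
    mean = sum(stars * rep for stars, rep in reviews) / total_weight
    # Disparity penalty: demote products with widely varying ratings.
    spread = statistics.pstdev([stars for stars, _ in reviews])
    return mean - 0.5 * spread

consistent = weighted_score([(4, 0.9), (4, 0.8), (4, 0.7)])  # all agree
polarized = weighted_score([(5, 0.9), (5, 0.8), (2, 0.7)])   # love-it-or-hate-it
```

Even though the polarized product's weighted mean is slightly higher, the disparity penalty drops it below the consistently rated one.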

Recommending

The prime example of recommendation systems is Netflix’s method for determining which movies a user will enjoy. Algorithms predict how each specific user would rate different movies she has not yet seen by looking at her own rating history and comparing it with those of similar users. The movies with the highest predictions are those that will then make the final cut for a particular user.
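A bare-bones version of that "compare with similar users" idea is neighbor-based collaborative filtering: measure how similar other users' tastes are to the target user's, then predict the unseen rating as a similarity-weighted average of the neighbors' ratings. The users, movies and cosine-similarity choice below are all illustrative.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two users' rating dicts (movie -> stars)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def predict_from_neighbors(target, others, movie):
    """Similarity-weighted average of neighbors' ratings for `movie`."""
    pairs = [(cosine_similarity(target, u), u[movie])
             for u in others if movie in u]
    total = sum(sim for sim, _ in pairs)
    return sum(sim * r for sim, r in pairs) / total if total else None

alice = {"Elf": 5, "Die Hard": 2}
others = [{"Elf": 4, "Die Hard": 1, "Home Alone": 5},   # tastes like Alice's
          {"Elf": 1, "Die Hard": 5, "Home Alone": 2}]   # roughly opposite
guess = predict_from_neighbors(alice, others, "Home Alone")
```

The first neighbor, whose tastes match Alice's, pulls the prediction for "Home Alone" toward 5; the dissimilar second neighbor drags it down less, since their similarity weight is smaller.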

The quality of these recommendations depends heavily on the accuracy of the algorithm, which draws on machine learning and data mining, and on the data itself. The more ratings we start with for each user and each movie, the better we can expect the predictions to be.

A simple rating predictor might assign one parameter to each user that captures how lenient or harsh a critic she tends to be. Another parameter might be assigned to each movie, capturing how well-received the movie is relative to others. More sophisticated models will identify similarities among users and movies – so if people who like the kinds of movies you like have given a high rating to a movie you haven’t seen, the system might suggest you’ll like it too.
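The simple predictor described above can be written directly: a global average, plus one leniency parameter per user (how far her average rating sits above or below the global mean), plus one popularity parameter per movie. The tiny dataset below is made up for illustration.

```python
# Baseline bias model: prediction = global mean + user bias + movie bias.

ratings = {  # (user, movie) -> stars
    ("ann", "Elf"): 5, ("ann", "Die Hard"): 4,
    ("bob", "Elf"): 3, ("bob", "Die Hard"): 2,
}

global_mean = sum(ratings.values()) / len(ratings)  # 3.5 here

def bias(key_index, key):
    """Average deviation from the global mean for one user (0) or movie (1)."""
    mine = [r for k, r in ratings.items() if k[key_index] == key]
    return sum(mine) / len(mine) - global_mean

def baseline_estimate(user, movie):
    return global_mean + bias(0, user) + bias(1, movie)
```

Ann is a lenient critic (bias +1.0) and “Die Hard” is slightly below average (bias −0.5), so the model predicts she would give it 3.5 + 1.0 − 0.5 = 4.0 stars.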

This can involve hidden dimensions that underlie user preferences and movie characteristics. It can also involve measuring how the ratings for any given movie have changed over time. If a previously unknown film becomes a cult classic, it might start appearing more in people’s recommendation lists. A key aspect of dealing with several models is combining and tuning them effectively: The algorithm that won the Netflix Prize competition for predicting movie ratings in 2009, for example, was a blend of hundreds of individual algorithms.
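Blending works by taking a weighted combination of the individual models' predictions. The two stand-in predictors and fixed weights below are purely illustrative; in practice, the weights themselves are tuned against held-out ratings, and the Netflix Prize winner blended hundreds of models rather than two.

```python
# Toy blend of two stand-in predictors with fixed weights.

def baseline_predictor(user, movie):
    return 3.5  # stand-in for a global-average / bias model

def neighbor_predictor(user, movie):
    return 4.2  # stand-in for a collaborative-filtering model

def blended(user, movie, weights=(0.3, 0.7)):
    w1, w2 = weights
    return w1 * baseline_predictor(user, movie) + w2 * neighbor_predictor(user, movie)
```

With weights of 0.3 and 0.7, the blend lands at 0.3 × 3.5 + 0.7 × 4.2 ≈ 3.99 stars, between the two component predictions but closer to the more heavily weighted one.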

This combination of rating, ranking and recommendation algorithms has transformed our daily online activities, far beyond shopping, searching and entertainment. Their interconnection brings us clearer – and sometimes unexpected – insights into what we want and how we get it.

Christopher Brinton works for and owns shares in Zoomi Inc.

Princeton EDGE Lab receives funding from US government research funding agencies such as the National Science Foundation.

This article was originally published on The Conversation. Read the original article.
