Data Science with Apache Spark


Page Rank with Apache Spark Graphx

What is Page Rank?

The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document.

Companies that run search engines can set the price for placing an ad on a web page based on the page rank of that page; placing an ad on a higher-traffic page, conceivably one with a higher page rank, will cost more.

Below is a simple example:

Suppose you have a website that has 4 pages, with links between the pages. For simplicity, assume these links are static (hard-coded). In the real world, links (URLs) to web pages are often rendered dynamically rather than hard-coded, so page ranks are actually dynamic, not static: they need to be recomputed whenever pages and links are rendered.

Looking at page products.html: it has 2 outbound URL links, 1 to index.html and 1 to services.html.

Similarly, index.html has 3 outbound URL links, 1 to products.html, 1 to services.html and 1 to investor.html.

services.html has 1 outbound URL link, to products.html.

investor.html has 2 outbound URL links, 1 to products.html and 1 to index.html.

Since all of the other pages have links to products.html, the PageRank of products.html, denoted PR(products.html), is calculated as below:

PR(products.html)=PR(index.html)/3+PR(services.html)/1+PR(investor.html)/2

Why PR(index.html)/3? Because index.html has 3 outbound links and only 1 of them points to products.html, so only 1/3 of its PR value contributes to PR(products.html).

Likewise for PR(services.html)/1 and PR(investor.html)/2.
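To make the arithmetic concrete, here is a plain-Python sketch (a toy illustration, not part of the Spark code; the page names follow the example above) that starts from an even distribution and repeatedly applies this rule until the ranks settle. Note this simplified version omits the damping factor, which is introduced later.

```python
# The 4-page site from the example: each page maps to its outbound links.
links = {
    "products.html": ["index.html", "services.html"],
    "index.html":    ["products.html", "services.html", "investor.html"],
    "services.html": ["products.html"],
    "investor.html": ["products.html", "index.html"],
}

# Start with an even distribution and repeatedly apply
#   PR(p) = sum of PR(q) / outdegree(q) over all pages q linking to p
pr = {page: 1.0 / len(links) for page in links}
for _ in range(100):
    new_pr = {page: 0.0 for page in links}
    for page, outlinks in links.items():
        share = pr[page] / len(outlinks)  # each outlink gets an equal share
        for target in outlinks:
            new_pr[target] += share
    pr = new_pr

for page, rank in sorted(pr.items(), key=lambda kv: -kv[1]):
    print(page, round(rank, 4))
```

Because every other page links (directly or indirectly) to products.html, it ends up with the highest rank, and the ranks always sum to 1.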

Computing Page Rank:

The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. Since it is a probability, PageRank should be anywhere between 0 and 1, right?

Here is the Scala code that creates a random graph of 10 vertices, computes the PageRank for each vertex, and prints the page ranks in descending order:

import org.apache.spark.graphx._
import org.apache.spark.graphx.impl._
import org.apache.spark.graphx.lib._
import org.apache.spark.graphx.util._
import org.apache.spark.sql._

val graph: Graph[Double, Int] =
  GraphGenerators.logNormalGraph(sc, numVertices = 10).mapVertices( (id, _) => id.toDouble ).partitionBy(PartitionStrategy.EdgePartition2D, 4)

graph.pageRank(0.0001).vertices.sortBy(-_._2).collect.foreach(println)

/*
Output tuple pairs: 1st value is the vertex id, 2nd value is the PageRank
(8,1.3816084012350922)
(2,1.2167791912510777)
(4,1.1607761148828422)
(7,1.0003408285776794)
(6,0.9886969400377069)
(0,0.9724138586272979)
(5,0.9366520865492737)
(9,0.896870650813802)
(3,0.7478478664160348)
(1,0.6980140616091922)
*/

Notice that some of the page ranks are > 1. Also notice that, if you add up all the page ranks, the total equals the number of vertices, as shown by the following code:

graph.pageRank(0.0001).vertices
  .sortBy(-_._2).toDF
  .withColumnRenamed("_1","VertexId")
  .withColumnRenamed("_2","PageRank")
  .createOrReplaceTempView("pagerank")

spark.sql("select sum(PageRank) from pagerank").show()

/*
Output:
+-------------+
|sum(PageRank)|
+-------------+
|         10.0|
+-------------+
*/

Math Behind Page Rank

Here is the description from Wikipedia:

The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85. The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores.

Formula 1:

PR(p) = (1-d)/N + d * ( PR(p1)/L(p1) + PR(p2)/L(p2) + ... + PR(pn)/L(pn) )

where p1, ..., pn are the pages linking to p, L(pi) is the number of outbound links of pi, and N is the total number of pages.

This is probability-based PageRank: because all the page ranks add up to 1, the PageRank of each vertex must be between 0 and 1.

However, according to the original research paper from Google, the formula to calculate page rank is

(Formula 2):

PR(p) = (1-d) + d * ( PR(p1)/L(p1) + PR(p2)/L(p2) + ... + PR(pn)/L(pn) )

Formula 2 is formula 1 times N, meaning the page ranks add up to N (N is the number of web pages, i.e. the number of vertices in the graph). Therefore, the PageRank of some vertices may be greater than 1, as long as the sum of all the page ranks equals N. The page rank given by formula 2 is no longer a probability value; you can, however, derive the probability by dividing the page rank by N.
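The relationship between the two formulas can be checked with a short Python sketch (illustrative only, reusing the 4-page example site with d = 0.85): formula 1 keeps the total at 1, while formula 2 keeps it at N, and each formula-2 rank is exactly N times the formula-1 rank.

```python
# 4-page example site: page -> outbound links.
links = {
    "products.html": ["index.html", "services.html"],
    "index.html":    ["products.html", "services.html", "investor.html"],
    "services.html": ["products.html"],
    "investor.html": ["products.html", "index.html"],
}
N, d = len(links), 0.85

def pagerank(probability_based):
    # probability_based=True  -> formula 1: PR(p) = (1-d)/N + d*sum(PR(q)/L(q))
    # probability_based=False -> formula 2: PR(p) = (1-d)   + d*sum(PR(q)/L(q))
    base = (1 - d) / N if probability_based else (1 - d)
    pr = {p: (1.0 / N if probability_based else 1.0) for p in links}
    for _ in range(100):
        new_pr = {p: base for p in links}
        for page, outlinks in links.items():
            for target in outlinks:
                new_pr[target] += d * pr[page] / len(outlinks)
        pr = new_pr
    return pr

print(sum(pagerank(True).values()))   # formula 1: ranks sum to ~1
print(sum(pagerank(False).values()))  # formula 2: ranks sum to ~N = 4
```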

Spark Graphx Implementation of Page Rank

It is likely that the pageRank method from Spark Graphx is based on formula 2. To verify this, look at the relevant open source code that computes the page rank of each vertex in a Graph:

1. pageRank is a method exposed in the abstract class Graph:

abstract class Graph[VD: ClassTag, ED: ClassTag] {
  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double]
}

2. The method pageRank is implemented in class GraphOps:

class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Serializable {
  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes containing the
   * PageRank and edge attributes containing the normalized edge weight.
   *
   * @see [[org.apache.spark.graphx.lib.PageRank$#runUntilConvergence]]
   */
  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double] = {
    PageRank.runUntilConvergence(graph, tol, resetProb)
  }
}

3. The actual implementation, method runUntilConvergence, is in object PageRank:

/**
 * PageRank algorithm implementation.
 * ....
 * The second implementation uses the `Pregel` interface and runs PageRank until
 * convergence:
 *
 * {{{
 * var PR = Array.fill(n)( 1.0 )
 * val oldPR = Array.fill(n)( 0.0 )
 * while( max(abs(PR - oldPr)) > tol ) {
 *   swap(oldPR, PR)
 *   for( i <- 0 until n if abs(PR[i] - oldPR[i]) > tol ) {
 *     PR[i] = alpha + (1 - alpha) * inNbrs[i].map(j => oldPR[j] / outDeg[j]).sum
 *   }
 * }
 * }}}
 *
 * `alpha` is the random reset probability (typically 0.15), `inNbrs[i]` is the set of
 * neighbors which link to `i` and `outDeg[j]` is the out degree of vertex `j`.
 *
 * @note This is not the "normalized" PageRank and as a consequence pages that have no
 * inlinks will have a PageRank of alpha.
 */
object PageRank extends Logging {

  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes containing the
   * PageRank and edge attributes containing the normalized edge weight.
   *
   * @tparam VD the original vertex attribute (not used)
   * @tparam ED the original edge attribute (not used)
   *
   * @param graph the graph on which to compute PageRank
   * @param tol the tolerance allowed at convergence (smaller => more accurate).
   * @param resetProb the random reset probability (alpha)
   *
   * @return the graph containing with each vertex containing the PageRank and each edge
   * containing the normalized weight.
   */
  def runUntilConvergence[VD: ClassTag, ED: ClassTag](
      graph: Graph[VD, ED], tol: Double, resetProb: Double = 0.15): Graph[Double, Double] =
  {
    runUntilConvergenceWithOptions(graph, tol, resetProb)
  }

  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes containing the
   * PageRank and edge attributes containing the normalized edge weight.
   *
   * @tparam VD the original vertex attribute (not used)
   * @tparam ED the original edge attribute (not used)
   *
   * @param graph the graph on which to compute PageRank
   * @param tol the tolerance allowed at convergence (smaller => more accurate).
   * @param resetProb the random reset probability (alpha)
   * @param srcId the source vertex for a Personalized Page Rank (optional)
   *
   * @return the graph containing with each vertex containing the PageRank and each edge
   * containing the normalized weight.
   */
  def runUntilConvergenceWithOptions[VD: ClassTag, ED: ClassTag](
      graph: Graph[VD, ED], tol: Double, resetProb: Double = 0.15,
      srcId: Option[VertexId] = None): Graph[Double, Double] =
  {
    require(tol >= 0, s"Tolerance must be no less than 0, but got ${tol}")
    require(resetProb >= 0 && resetProb <= 1, s"Random reset probability must belong" +
      s" to [0, 1], but got ${resetProb}")

    val personalized = srcId.isDefined
    val src: VertexId = srcId.getOrElse(-1L)

    // Initialize the pagerankGraph with each edge attribute
    // having weight 1/outDegree and each vertex with attribute 0.
    val pagerankGraph: Graph[(Double, Double), Double] = graph
      // Associate the degree with each vertex
      .outerJoinVertices(graph.outDegrees) {
        (vid, vdata, deg) => deg.getOrElse(0)
      }
      // Set the weight on the edges based on the degree
      .mapTriplets( e => 1.0 / e.srcAttr )
      // Set the vertex attributes to (initialPR, delta = 0)
      .mapVertices { (id, attr) =>
        if (id == src) (0.0, Double.NegativeInfinity) else (0.0, 0.0)
      }
      .cache()

    // Define the three functions needed to implement PageRank in the GraphX
    // version of Pregel
    def vertexProgram(id: VertexId, attr: (Double, Double), msgSum: Double): (Double, Double) = {
      val (oldPR, lastDelta) = attr
      val newPR = oldPR + (1.0 - resetProb) * msgSum
      (newPR, newPR - oldPR)
    }

    def personalizedVertexProgram(id: VertexId, attr: (Double, Double),
        msgSum: Double): (Double, Double) = {
      val (oldPR, lastDelta) = attr
      val newPR = if (lastDelta == Double.NegativeInfinity) {
        1.0
      } else {
        oldPR + (1.0 - resetProb) * msgSum
      }
      (newPR, newPR - oldPR)
    }

    def sendMessage(edge: EdgeTriplet[(Double, Double), Double]) = {
      if (edge.srcAttr._2 > tol) {
        Iterator((edge.dstId, edge.srcAttr._2 * edge.attr))
      } else {
        Iterator.empty
      }
    }

    def messageCombiner(a: Double, b: Double): Double = a + b

    // The initial message received by all vertices in PageRank
    val initialMessage = if (personalized) 0.0 else resetProb / (1.0 - resetProb)

    // Execute a dynamic version of Pregel.
    val vp = if (personalized) {
      (id: VertexId, attr: (Double, Double), msgSum: Double) =>
        personalizedVertexProgram(id, attr, msgSum)
    } else {
      (id: VertexId, attr: (Double, Double), msgSum: Double) =>
        vertexProgram(id, attr, msgSum)
    }

    val rankGraph = Pregel(pagerankGraph, initialMessage, activeDirection = EdgeDirection.Out)(
      vp, sendMessage, messageCombiner)
      .mapVertices((vid, attr) => attr._1)

    // SPARK-18847 If the graph has sinks (vertices with no outgoing edges) correct the sum of ranks
    normalizeRankSum(rankGraph, personalized)
  }
}
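To see how this delta-based Pregel logic produces formula 2, here is a single-machine Python sketch (a hypothetical toy, not the Spark API) of the same update: every vertex starts from an initial message of resetProb / (1 - resetProb), absorbs (1 - resetProb) * msgSum into its rank, and forwards only its change, divided by its out-degree, until all changes drop below tol.

```python
# Toy directed graph as (src, dst) edges; every vertex has at least one out-edge.
edges = [(0, 1), (1, 2), (1, 0), (2, 0), (2, 1)]
vertices = {v for e in edges for v in e}
out_deg = {v: sum(1 for s, _ in edges if s == v) for v in vertices}

reset_prob, tol = 0.15, 1e-6
rank = {v: 0.0 for v in vertices}  # vertex attribute starts at 0.0

# Initial message: resetProb / (1 - resetProb), so the first superstep
# sets every rank to resetProb (= alpha).
msgs = {v: reset_prob / (1 - reset_prob) for v in vertices}

while msgs:
    # vertexProgram: newPR = oldPR + (1 - resetProb) * msgSum; remember the delta
    deltas = {}
    for v, msg_sum in msgs.items():
        change = (1 - reset_prob) * msg_sum
        rank[v] += change
        deltas[v] = change
    # sendMessage: forward delta * (1/outDegree) along each out-edge,
    # but only while the source's delta is still above tol
    msgs = {}
    for src, dst in edges:
        if deltas.get(src, 0.0) > tol:
            msgs[dst] = msgs.get(dst, 0.0) + deltas[src] / out_deg[src]

print(rank)
```

On a graph with no sinks, the accumulated ranks converge to formula 2, so they sum to approximately N, and a vertex with no inlinks would stay at resetProb (alpha), just as the @note in the Spark source says.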

Apache Spark Graphx Open source code reference:

Tracing the code:

pageRank API method in abstract class Graph ->

pageRank in class GraphOps calls PageRank.runUntilConvergence ->

runUntilConvergence in object PageRank invokes runUntilConvergenceWithOptions ->

runUntilConvergenceWithOptions in object PageRank executes essentially the logic below:

PageRank of a vertex = resetProb + (1 - resetProb) * sum(<PageRank>/<number of outbound links>)

Therefore, PageRank implemented by Apache Spark Graphx is:

PageRank of a vertex = resetProb + (1 - resetProb) * sum(<PageRank>/<number of outbound links>)
                     = 1 - d + d * sum(<PageRank>/<number of outbound links>)

This is exactly the formula from Google (formula 2), with d = 1 - resetProb.

Conclusion

That explains why the page rank code above produces page ranks that can be greater than 1, and why the page ranks add up to N (N = number of vertices in the graph). In fact, whether or not a page rank is a probability value is not important; the relative significance of the page rank values is. The higher the page rank of a vertex (web page), the more visiting traffic the page is likely to have, so the price for placing ads on it can be set accordingly. That is what matters.
