Enumerating Trillion Subgraphs On Distributed Systems

Ha-Myung Park, Francesco Silvestri, Rasmus Pagh, Chin-Wan Chung, Sung-Hyon Myaeng, U Kang

Overview

How can we find patterns in an enormous graph with billions of vertices and edges? Subgraph enumeration, the task of finding all subgraphs of a graph that match a given pattern, is an important operation in graph data analysis with many applications, including analyzing social network evolution, measuring the significance of motifs in biological networks, and observing the dynamics of the Internet. In particular, triangle enumeration, the special case where the pattern is a triangle, has many applications such as identifying suspicious users in social networks, detecting web spam, and finding communities. However, today's networks are so large that most previous algorithms fail to process them. Several MapReduce algorithms have recently been proposed to handle such large networks, but they suffer from massive amounts of shuffled data, resulting in very long processing times.
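To make the task concrete, the short sketch below (plain Python, not part of the released code) lists all triangles of a tiny graph with a simple edge-iterator; the distributed algorithms described next solve the same problem at billion-edge scale.

from collections import defaultdict

def enumerate_triangles(edges):
    # Build an undirected adjacency structure.
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    # For every edge (u, v) with u < v, each common neighbor w > v
    # closes a triangle; the ordering lists each triangle exactly once.
    for u in adj:
        for v in adj[u]:
            if u < v:
                for w in adj[u] & adj[v]:
                    if w > v:
                        yield (u, v, w)

print(list(enumerate_triangles([(1, 2), (2, 3), (1, 3), (3, 4)])))  # -> [(1, 2, 3)]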

In this article, we propose scalable methods for enumerating trillions of subgraphs on distributed systems. We first propose PTE (Pre-partitioned Triangle Enumeration), a new distributed algorithm for enumerating triangles in enormous graphs that resolves the structural inefficiency of previous MapReduce algorithms. PTE enumerates trillions of triangles in a billion-scale graph by decreasing three factors: the amount of shuffled data, the total work, and the network read. We also propose PSE (Pre-partitioned Subgraph Enumeration), a generalization of PTE that enumerates subgraphs matching an arbitrary query graph. Experimental results show that PTE is 79 times faster than recent distributed algorithms on real-world graphs and succeeds in enumerating more than 3 trillion triangles on the ClueWeb12 graph with 6.3 billion vertices and 72 billion edges. Furthermore, PSE successfully enumerates 265 trillion clique subgraphs with 4 vertices from a subdomain hyperlink network, running 47 times faster than the state-of-the-art distributed subgraph enumeration algorithm.
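The following single-machine sketch illustrates, in simplified form, the pre-partitioning idea: vertices are hashed into a small number of colors, edges are bucketed by the colors of their endpoints, and each subproblem (a set of at most three colors) is solved independently using only the buckets it needs, so no single task ever has to see the whole graph. The coloring function, the subproblem construction, and the deduplication rule used here are illustrative assumptions for exposition; the released Hadoop and Spark implementations below are the authoritative versions of PTE and PSE.

import itertools
from collections import defaultdict

def color(v, rho):
    # Hypothetical coloring: hash each vertex into one of rho colors.
    return hash(v) % rho

def pre_partitioned_triangles(edges, rho=3):
    # 1. Pre-partition: bucket each undirected edge by the sorted colors of
    #    its endpoints.  On a cluster this partitioning is done once and the
    #    buckets are stored in the distributed file system.
    buckets = defaultdict(set)
    for u, v in edges:
        if u == v:
            continue  # ignore self-loops
        a, b = sorted((color(u, rho), color(v, rho)))
        buckets[(a, b)].add((min(u, v), max(u, v)))

    # 2. Solve each subproblem independently; a subproblem is a set of at
    #    most three colors and reads only the buckets inside that set.
    for size in (1, 2, 3):
        for combo in itertools.combinations(range(rho), size):
            cset = set(combo)
            sub_edges = set()
            for (a, b), es in buckets.items():
                if a in cset and b in cset:
                    sub_edges |= es
            adj = defaultdict(set)
            for u, v in sub_edges:
                adj[u].add(v)
                adj[v].add(u)
            # Enumerate triangles local to the subproblem.  Emitting a
            # triangle only when its color set equals the subproblem's color
            # set guarantees that each triangle is reported exactly once.
            for u, v in sub_edges:
                for w in adj[u] & adj[v]:
                    if w > v and {color(u, rho), color(v, rho), color(w, rho)} == cset:
                        yield (u, v, w)

# Tiny usage example: the four triangles of a complete graph on 4 vertices.
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(sorted(pre_partitioned_triangles(k4, rho=2)))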



Paper

Our proposed method is described in the following paper.
@article{TKDD18PSE,
  author     = {Park, Ha-Myung and Silvestri, Francesco and Pagh, Rasmus and Chung, Chin-Wan and Myaeng, Sung-Hyon and Kang, U},
  title      = {Enumerating Trillion Subgraphs On Distributed Systems},
  journal    = {TKDD},
  issue_date = {October 2018},
  volume     = {12},
  number     = {6},
  month      = oct,
  year       = {2018},
  issn       = {1556-4681},
  pages      = {71:1--71:30},
  articleno  = {71},
  numpages   = {30},
  publisher  = {ACM}
}

Code

PSE for Hadoop: pse-1.0.tar.gz [download]
PSE for Spark: pse-spark-1.0.tar.gz [download]

Datasets

Name         #Nodes  #Edges  Description                              Source
Skitter      1.7M    11M     Internet topology graph                  SNAP - Skitter
Youtube      3.2M    12M     Friendship network in YouTube            Konect
LiveJournal  4.8M    69M     LiveJournal online social network        SNAP - LiveJournal1
Orkut        3.1M    117M    Orkut online social network              SNAP - Orkut
Twitter      42M     1.2B    Following network of Twitter             Kwak10www - Twitter
Friendster   66M     1.8B    Friendster online social network         SNAP - Friendster
SubDomain    0.1B    1.9B    Links among subdomains on the Web        WDC - Subdomain/Host Graph
YahooWeb     1.4B    6.6B    Page-level hyperlink network on the Web  Yahoo-webscope
ClueWeb09    4.8B    7.9B    Page-level hyperlink network on the Web  ClueWeb09 Wiki
ClueWeb12    6.3B    72B     Page-level hyperlink network on the Web  ClueWeb12 Web Graph

People