nodef/gve.cxx

A high-performance parallel graph interface supporting efficient dynamic batch updates.

Research in graph-structured data has grown rapidly due to graphs' ability to represent complex real-world information and capture intricate relationships, particularly as many real-world graphs evolve dynamically through edge/vertex insertions and deletions. This has spurred interest in programming frameworks for managing, maintaining, and processing such dynamic graphs. In our report, we evaluate the performance of PetGraph (Rust), Stanford Network Analysis Platform (SNAP), SuiteSparse:GraphBLAS, cuGraph, Aspen, and our custom implementation in tasks including loading graphs from disk to memory, cloning loaded graphs, applying in-place edge deletions/insertions, and performing a simple iterative graph traversal algorithm. Our implementation demonstrates significant performance improvements: it outperforms PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, and Aspen by factors of 177x, 106x, 76x, 17x, and 3.3x in graph loading; 20x, 235x, 0.24x, 1.3x, and 0x in graph cloning; 141x/45x, 44x/25x, 13x/11x, 28x/34x, and 3.5x/2.2x in edge deletions/insertions; and 67x/63x, 86x/86x, 2.5x/2.6x, 0.25x/0.24x, and 1.3x/1.3x in traversal on updated graphs with deletions/insertions.

Below, we plot the runtime (in seconds, logarithmic scale) for loading a graph from file into memory with PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph for each graph in the dataset.

[Figure: graph loading runtimes]

Next, we plot the runtime (in milliseconds, logarithmic scale) of deleting a batch of 10^-7|E| to 0.1|E| randomly generated edges from a graph, in-place, with batch sizes increasing in multiples of 10. Here, we evaluate PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph on each graph in the dataset. The left subfigure presents overall runtimes using the geometric mean for consistent scaling, while the right subfigure shows runtimes for individual graphs.

[Figure: batch edge deletion runtimes]

Below, we plot the runtime of inserting a batch of edges into a graph, in-place, using PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph.

[Figure: batch edge insertion runtimes]

Finally, we plot the runtime of traversing a graph using a simple iterative algorithm (42-step reverse walks from each vertex) on graphs with edge deletions applied. We evaluate PetGraph, SNAP, SuiteSparse:GraphBLAS, cuGraph, Aspen, and Our DiGraph on each graph in the dataset.

[Figure: traversal runtimes on updated graphs]

Refer to our technical report for more details:
Performance Comparison of Graph Representations Which Support Dynamic Graph Updates.



Installation

Run:

$ npm i gve.cxx

Then include gve.hxx as follows:

// main.cxx
#include "node_modules/gve.cxx/gve.hxx"

int main() { /* ... */ }

Then compile with clang or gcc as usual:

$ clang -std=c++17 -target x86_64-pc-windows-msvc main.cxx  # or, use gcc

Alternatively, if you add the path to node_modules/gve.cxx to your compiler's include paths, you can use a simpler include:

// main.cxx
#include <gve.hxx>

int main() { /* ... */ }

$ clang -I./node_modules/gve.cxx -std=c++17 -target x86_64-pc-windows-msvc main.cxx

Example

#include <iostream>
#include <gve.hxx>

using namespace std;


int main() {
  // Create an empty directed graph
  gve::DiGraph<int> graph;

  // Add edges to the graph
  graph.addEdge(0, 1);
  graph.addEdge(1, 2);
  graph.addEdge(2, 3);
  graph.addEdge(3, 4);
  graph.addEdge(4, 0);

  // Apply the pending edge additions to the graph.
  gve::updateU(graph);

  // Print the number of vertices and edges
  cout << "Number of vertices: " << graph.order() << endl;
  cout << "Number of edges: " << graph.size() << endl;

  return 0;
}

