Tuesday, 27 January 2015

Why does this implementation of Dijkstra's algorithm work in O(n^2)?


Here is the code I use to implement Dijkstra's algorithm. Consider a graph with n vertices and m edges. Shouldn't it run in O(n^2 m)? Someone might say that there are n vertices and each edge gets processed once, so it is O(n m). But the while loop can run at most n times, the first for loop at most n times, and the second for loop at most m times. Doesn't that make this an O(n^2 m) implementation?


Here X is a vector, head[] and shortest[] are arrays, and graph[] and weight[] are parallel adjacency lists (graph[u][k] is the k-th neighbour of u, and weight[u][k] is the cost of that edge).



X.push_back(1);              // start from source vertex 1
head[1] = true;              // head[v] marks whether v is finalized

while (X.size() != MAX) {    // MAX is the total number of vertices
    min = INT_MAX;

    for (j = 0; j < X.size(); j++) {       // scan every finalized vertex
        node = X[j];

        for (k = 0; k < graph[node].size(); k++) {  // scan its edges
            v = graph[node][k];

            if (head[v] == true)           // skip already-finalized endpoints
                continue;

            if (min > weight[node][k] + shortest[node]) {
                pos = v;                   // closest frontier vertex so far
                min = weight[node][k] + shortest[node];
            }
        }
    }

    shortest[pos] = min;     // finalize the closest frontier vertex
    head[pos] = true;
    X.push_back(pos);
}



