Thursday, 12 December 2013

Check if graph is semiconnected

22.5-7
A directed graph G = (V, E) is semiconnected if, for every pair of vertices u, v,
there is a path from u to v or a path from v to u. Give an efficient algorithm to
determine whether or not G is semiconnected. Prove that your algorithm is correct,
and analyze its running time.


Call STRONGLY-CONNECTED-COMPONENTS.

Form the component graph.

Topologically sort the component graph. This is always possible, because the component graph is a DAG (Directed Acyclic Graph).


Verify that the sequence of vertices (v1, v2, ..., vk) given by the topological sort forms a linear chain in the component graph. That is, verify that the edges (v1, v2), (v2, v3), ..., (v(k-1), vk) all exist in the component graph. If the vertices form a linear chain, then the original graph is semiconnected; otherwise it is not.

Because all vertices within an SCC are mutually reachable from each other, G is semiconnected if and only if its component graph contains such a linear chain.
Total running time is O(V+E).
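The steps above can be sketched in Python. This is a minimal sketch, not code from the exercise: the helper names are mine, and I use Kosaraju's algorithm for the SCCs, which conveniently numbers the components in topological order of the component graph, so no separate topological sort is needed.

```python
from collections import defaultdict

def scc_ids(n, edges):
    """Kosaraju's algorithm: returns (comp, count), where comp[u] is the
    component id of u, and ids come out in topological order of the
    component graph."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)                    # record in postorder
    for u in range(n):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    count = 0
    for u in reversed(order):              # decreasing finish time
        if comp[u] == -1:
            dfs2(u, count)
            count += 1
    return comp, count

def is_semiconnected(n, edges):
    comp, count = scc_ids(n, edges)
    # Edges of the component graph (always a DAG).
    cedges = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    # Linear-chain check: consecutive components in topological order
    # must be joined by an edge.
    return all((c, c + 1) in cedges for c in range(count - 1))
```

Everything here is a constant number of linear passes, matching the O(V+E) bound.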

Check if graph is singly connected

A directed graph G = (V,E) is singly connected if there is at most one
directed path from u to v for all vertices u, v  in V . Give an efficient algorithm to determine whether or not a directed graph is singly connected.

Solution. For each vertex u in V, perform a DFS on the given graph G and check whether any forward edges or cross edges (within the same DFS tree) appear in any of the searches. If no such edges exist, the graph is singly connected; otherwise it is not.
Time complexity: O(|V|(|V| + |E|)).
Back edges do not violate single connectedness: a back edge only implies that there is a path from u to v and a path from v to u, which is consistent with each of those paths being unique.
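A sketch of this check in Python (my own naming, not the exercise's code; the graph is an adjacency dict with an entry for every vertex). An edge whose target is already finished (BLACK) is a forward or cross edge:

```python
WHITE, GRAY, BLACK = 0, 1, 2

def has_forward_or_cross_edge(graph, s):
    """DFS from s; report any edge into an already-finished vertex."""
    color = {u: WHITE for u in graph}
    bad = False
    def dfs(u):
        nonlocal bad
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == WHITE:
                dfs(v)
            elif color[v] == BLACK:    # forward or cross edge
                bad = True
            # GRAY target = back edge: allowed
        color[u] = BLACK
    dfs(s)
    return bad

def is_singly_connected(graph):
    # One DFS per start vertex: O(|V| (|V| + |E|)) overall
    return not any(has_forward_or_cross_edge(graph, s) for s in graph)
```

For example, the "diamond" graph 0→1, 0→2, 1→3, 2→3 has two paths from 0 to 3, and the DFS from 0 flags the second edge into 3 as a cross edge.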


DFS using an explicit stack instead of recursion.

s[x] = start (discovery) time of x
f[x] = finish time of x
color[x] = state of the vertex:
WHITE: undiscovered
GRAY: discovered, still being processed
BLACK: finished
time = global counter used to assign start and finish times

DFS(G)
    for each u in V do
        color[u] = WHITE
        p[u] = NIL
    end for
    time = 0
    for each u in V do
        if color[u] = WHITE do
            DFS-VISIT(G, u)
        end if
    end for

DFS-VISIT(G, u)
    stack S = empty
    push(S, u)
    while S is not empty do
        x = pop(S)
        if color[x] = WHITE do
            time = time + 1
            s[x] = time
            color[x] = GRAY
            push(S, x)                // re-push x: it finishes after its children
            for each v in Adj[x] do
                if color[v] = WHITE do
                    p[v] = x
                    push(S, v)
                end if
            end for
        else if color[x] = GRAY do
            time = time + 1
            f[x] = time
            color[x] = BLACK
        end if                        // BLACK pops are stale duplicates: ignore
    end while
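The pseudocode above translates to Python roughly as follows (a sketch with my own naming; graph is an adjacency dict with an entry for every vertex):

```python
def dfs_iterative(graph):
    """Stack-based DFS; returns (start_times, finish_times)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in graph}
    parent = {u: None for u in graph}
    s, f = {}, {}
    time = 0

    def visit(u):
        nonlocal time
        stack = [u]
        while stack:
            x = stack.pop()
            if color[x] == WHITE:
                time += 1
                s[x] = time
                color[x] = GRAY
                stack.append(x)            # re-push: finish after children
                for v in graph[x]:
                    if color[v] == WHITE:
                        parent[v] = x
                        stack.append(v)
            elif color[x] == GRAY:
                time += 1
                f[x] = time
                color[x] = BLACK
            # BLACK pops are stale duplicate entries: ignore them

    for u in graph:
        if color[u] == WHITE:
            visit(u)
    return s, f
```

Note that a vertex can sit on the stack more than once; the color check on each pop makes the extra copies harmless.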

Check if a graph is bipartite / determine whether it is possible to designate some of the wrestlers as good guys and the remainder as bad guys such that each rivalry is between a good guy and a bad guy.

Algorithm:
Perform as many BFS's as needed to cover all vertices. Assign all wrestlers whose distance from their BFS source is even to be good guys and all wrestlers whose distance is odd to be bad guys. Then check each edge to verify that it goes from a good guy to a bad guy.

O(V+E) : BFS using adjacency lists
O(V): designate a wrestler as good or bad
O(E): check edges.



Simpler implementation: uses an adjacency matrix (and a single BFS from src, so it assumes the graph is connected).


// C++ program to find out whether a given graph is Bipartite or not
#include <iostream>
#include <queue>
#define V 4
using namespace std;
// This function returns true if graph G[V][V] is Bipartite, else false
bool isBipartite(int G[][V], int src)
{
    // Create a color array to store colors assigned to all vertices. Vertex
    // number is used as index in this array. The value '-1' of colorArr[i]
    // is used to indicate that no color is assigned to vertex 'i'. The value
    // 1 is used to indicate the first color is assigned and value 0 indicates
    // the second color is assigned.
    int colorArr[V];
    for (int i = 0; i < V; ++i)
        colorArr[i] = -1;
    // Assign first color to source
    colorArr[src] = 1;
    // Create a queue (FIFO) of vertex numbers and enqueue source vertex
    // for BFS traversal
    queue <int> q;
    q.push(src);
    // Run while there are vertices in queue (Similar to BFS)
    while (!q.empty())
    {
        // Dequeue a vertex from the queue
        int u = q.front();
        q.pop();
         // Find all non-colored adjacent vertices
        for (int v = 0; v < V; ++v)
        {
            // An edge from u to v exists and destination v is not colored
            if (G[u][v] && colorArr[v] == -1)
            {
                // Assign alternate color to this adjacent v of u
                colorArr[v] = 1 - colorArr[u];
                q.push(v);
            }
            //  An edge from u to v exists and destination v is colored with
            // same color as u
            else if (G[u][v] && colorArr[v] == colorArr[u])
                return false;
        }
    }
    // If we reach here, then all adjacent vertices can be colored with
    // alternate color
    return true;
}
// Driver program to test above function
int main()
{
    int G[][V] = {{0, 1, 0, 1},
        {1, 0, 1, 0},
        {0, 1, 0, 1},
        {1, 0, 1, 0}
    };
    isBipartite(G, 0) ? cout << "Yes" : cout << "No";
    return 0;
}
Time complexity: O(V^2) (BFS with an adjacency matrix)
Better: O(V+E) (BFS with an adjacency list)
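The O(V+E) adjacency-list version might look like this in Python (a sketch with my own naming, not the author's code; it also handles disconnected graphs by starting a BFS in every uncolored component, and expects an entry in the dict for every vertex):

```python
from collections import deque

def is_bipartite(graph):
    """graph: undirected adjacency list {u: [v, ...]}."""
    color = {}
    for start in graph:
        if start in color:
            continue                          # already colored in an earlier BFS
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # opposite side of u
                    q.append(v)
                elif color[v] == color[u]:    # rivalry within one side
                    return False
    return True
```

Vertices colored 0 sit at even BFS distance from their source ("good guys") and vertices colored 1 at odd distance ("bad guys"); a same-color edge is exactly an edge that fails the good-guy/bad-guy check.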

Tuesday, 10 December 2013

Show that determining whether a directed graph G contains a universal sink – a vertex of in-degree |V| - 1 and out-degree 0 – can be done in O(V) time.

How many sinks could a graph have?
  • 0 or 1
How can we determine whether a given vertex u is a universal sink?
  • The u row must contain 0’s only
  • The u column must contain 1’s only
  • A[u][u]=0
How long would it take to determine whether a given vertex u is a universal sink?
  • O(V) time 
UNIVERSAL-SINK(A)
    let A be the |V| x |V| adjacency matrix
    i = j = 1
    while i <= |V| and j <= |V|
        if a[i][j] = 1
            then i = i + 1
            else j = j + 1
    if i > |V|
        return "no universal sink"
    if ISSINK(A, i) = false
        return "no universal sink"
    else
        return i

ISSINK(A, k)
    let A be the |V| x |V| adjacency matrix
    for j = 1 to |V|
        if a[k][j] = 1 or (a[j][k] = 0 and j != k)
            return false
    return true
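The two procedures above in Python (0-indexed; function names are mine). The elimination loop advances i or j on every step, so it makes at most 2|V| moves before the single surviving candidate is verified in another O(V):

```python
def is_sink(A, k):
    """Row k must be all 0s; column k must be all 1s except A[k][k]."""
    n = len(A)
    for j in range(n):
        if A[k][j] == 1 or (A[j][k] == 0 and j != k):
            return False
    return True

def universal_sink(A):
    """Return the universal sink's index, or None if there is none."""
    n = len(A)
    i = j = 0
    while i < n and j < n:
        if A[i][j] == 1:
            i += 1        # i has an outgoing edge, so i is not a sink
        else:
            j += 1        # no edge i -> j: advance to the next column
    if i >= n or not is_sink(A, i):
        return None
    return i
```

Correctness does not depend on the elimination being clever: whichever candidate survives is verified explicitly by is_sink.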




Tuesday, 22 October 2013

Matrix exponentiation

The Tribonacci Numbers:

F(n) = F(n-1) + F(n-2) + F(n-3),  F(1) = 1; F(2) = 1; F(3) = 2
 

* For N as large as 10^15, a linear-time DP will always be far too slow, regardless of the time limit.

* The basic idea behind matrix exponentiation is to use the recurrence relationship (and its base cases) to assemble a matrix whose powers allow us to compute values fast.

In our case we have:
F(1) = 1
F(2) = 1
F(3) = 2
And we now have a relationship that will go like this:
|f(4)|    = MATRIX * |f(1)|
|f(3)|               |f(2)|
|f(2)|               |f(3)|
Now all that is left is to assemble the matrix, and that is done based both on the rules of matrix multiplication and on the recursive relationship. In our example, to obtain f(4), the 1st line of the matrix needs to be composed only of ones, as f(4) = 1·f(3) + 1·f(2) + 1·f(1).
Now, denoting the unknown elements as *, we have:
|f(4)|   = |1 1 1| * |f(1)|
|f(3)|     |* * *|   |f(2)|
|f(2)|     |* * *|   |f(3)|
For the second line, we want to obtain f(3), and the only possible way of doing it is by having:
  |f(4)|   = |1 1 1| * |f(1)|
  |f(3)|     |0 0 1|   |f(2)|
  |f(2)|     |* * *|   |f(3)|
To get the value of f(2), we can follow the same logic and get the final matrix:
|1 1 1|
|0 0 1|
|0 1 0|
To end it, we now need to generalize: as we have 3 base cases, all we need to do to compute the Nth tribonacci number in O(log N) time is to raise the matrix to the power N - 3 to get:

|f(n)|   =   |1 1 1|^(N-3) * |f(1)|
|f(n-1)|     |0 0 1|         |f(2)|
|f(n-2)|     |0 1 0|         |f(3)|

The power of the matrix can now be computed in O(log N) time using repeated squaring applied to matrices, and the method is complete. Below is the Python code that does this, computing the number modulo 1000000007 (10^9 + 7).


MOD = 1000000007

def matrix_mult(A, B):
    C = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    for i in range(3):
        for j in range(3):
            for k in range(3):
                C[i][k] = (C[i][k] + A[i][j] * B[j][k]) % MOD
    return C

def fast_exponentiation(A, n):
    if n == 1:
        return A
    if n % 2 == 0:
        A1 = fast_exponentiation(A, n // 2)   # integer division (Python 3)
        return matrix_mult(A1, A1)
    return matrix_mult(A, fast_exponentiation(A, n - 1))

def solve(n):
    if n <= 3:                                # base cases F(1) = F(2) = 1, F(3) = 2
        return [1, 1, 2][n - 1]
    A = [[1, 1, 1], [0, 0, 1], [0, 1, 0]]
    A_n = fast_exponentiation(A, n - 3)
    return (A_n[0][0] + A_n[0][1] + A_n[0][2] * 2) % MOD
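As a sanity check, the matrix method should agree with a direct O(n) DP for small n; the first terms are 1, 1, 2, 4, 7, 13, 24, 44, 81, 149, ... A standalone reference implementation (my own helper name) to compare against:

```python
def trib_dp(n, mod=1000000007):
    """n-th tribonacci number (1-indexed) by straightforward DP."""
    f = [1, 1, 2]                        # F(1), F(2), F(3)
    while len(f) < n:
        f.append((f[-1] + f[-2] + f[-3]) % mod)
    return f[n - 1]
```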




OTHER LINKS:
http://discuss.codechef.com/questions/2263/crowd-editorial

Sunday, 1 September 2013

Median of medians

Median of Medians Algorithm

Quickselect is linear-time on average, but it can require quadratic time with poor pivot choices. This is because quickselect is a decrease and conquer algorithm, with each step taking O(n) time in the size of the remaining search set. If the search set decreases exponentially quickly in size (by a fixed proportion), this yields a geometric series times the O(n) factor of a single step, and thus linear overall time. However, if the search set decreases slowly in size, such as linearly (by a fixed number of elements, in the worst case only reducing by one element each time), then a linear sum of linear steps yields quadratic overall time. For example, the worst case occurs when pivoting on the smallest element at each step, such as applying quickselect for the maximum element to already sorted data and taking the first element as pivot each time.

If one instead consistently chooses "good" pivots, this is avoided and one always gets linear performance even in the worst case. A "good" pivot is one for which we can establish that a constant proportion of elements fall both below and above it, as then the search set decreases at least by a constant proportion at each step, hence exponentially quickly, and the overall time remains linear. The median is a good pivot – the best for sorting, and the best overall choice for selection – decreasing the search set by half at each step. Thus if one can compute the median in linear time, this only adds linear time to each step, and thus the overall complexity of the algorithm remains linear.

The median-of-medians algorithm does not actually compute the exact median; it computes an approximate median, namely a point that is guaranteed to lie between the 30th and 70th percentiles. Thus the search set decreases by a fixed proportion at each step, namely by at least 30% (so at most 70% is left). The overhead of computing the pivot consists of recursing on a set 20% of the size of the original search set, plus a linear amount of work, so at linear cost per step the problem is reduced to 90% (20% plus 70%) of the original size. This is a fixed proportion smaller, and thus by induction the overall complexity is linear.


Median-of-medians algorithm:
  • Line up elements in groups of five (this number 5 is not important, it could be e.g. 7 without changing the algorithm much). Call each group S[i], with i ranging from 1 to n/5.
  • Find the median of each group. (Call this x[i]). This takes 6 comparisons per group, so 6n/5 total (it is linear time because we are taking medians of very small subsets).
  • Find the median of the x[i], using a recursive call to the algorithm. If we write a recurrence in which T(n) is the time to run the algorithm on a list of n items, this step takes time T(n/5). Let M be this median of medians.
  • Use M to partition the input and call the algorithm recursively on one of the partitions, just like in quickselect.
The chosen pivot is greater than half of the n/5 group medians and less than the other half – around n/10 medians (½ × n/5) on each side. Each of those medians is itself the median of a group of 5, so it is greater than 2 and less than 2 other elements of its group. Hence the pivot is greater than at least 3(n/10) elements (the smaller medians plus two smaller elements from each of their groups) and, symmetrically, less than at least another 3(n/10) elements. Thus the chosen median splits the elements somewhere between 30%/70% and 70%/30%, which assures worst-case linear behavior of the algorithm. To visualize:
One iteration on the list {0, 1, 2, ..., 99}. [Figure: the hundred numbers arranged into 20 groups of five; the 20 group medians are 17, 23, 24, 28, 29, 30, 31, 36, 42, 47, 50, 55, 58, 60, 63, 65, 66, 67, 81, 83. Red = "(one of the two possible) median of medians", gray = "number < red", white = "number > red".]
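The steps above can be sketched in Python as a full selection routine (a toy version with my own names; it partitions with list comprehensions for clarity rather than in place, unlike the C++ code below):

```python
def select(lst, k):
    """Return the k-th smallest element of lst (k is 0-based)."""
    if len(lst) <= 5:
        return sorted(lst)[k]

    # Medians of the groups of five (the last group may be smaller)
    medians = [sorted(lst[i:i + 5])[len(lst[i:i + 5]) // 2]
               for i in range(0, len(lst), 5)]

    # Median of medians, found by a recursive call on a 20%-sized list
    pivot = select(medians, len(medians) // 2)

    # Partition around the pivot, then recurse into one side
    lo = [x for x in lst if x < pivot]
    eq = [x for x in lst if x == pivot]
    hi = [x for x in lst if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))
```

Because the pivot lands between the 30th and 70th percentiles, each of lo and hi holds at most 70% of the elements, which is what the linear-time analysis below relies on.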

Proof of O(n) running time

The median-calculating recursive call does not exceed worst-case linear behavior, because the list of medians is 20% of the size of the list, while the other recursive call recurses on at most 70% of the list, making the running time

T(n) ≤ T(n · 2/10) + T(n · 7/10) + c · n.

The c · n term is the partitioning work (each element is visited a constant number of times, in order to form the n/5 groups and take each median in O(1) time). From this, using induction – or summing the geometric series with ratio 2/10 + 7/10 = 9/10, which gives the overall scaling factor 1/(1 - 9/10) = 10 – one can easily show that

T(n) ≤ 10 · c · n ∈ O(n).
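A quick numeric check of that bound (my own throwaway code, taking c = 1 and rounding the subproblem sizes down):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(2n/10) + T(7n/10) + n, with T(0) = T(1) = 1
    if n <= 1:
        return 1
    return T(n * 2 // 10) + T(n * 7 // 10) + n
```

Iterating this and checking T(n) <= 10 * n over a range of n confirms the claimed constant.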

Code
// selects the median of medians of a[s..e] (inclusive), returning its index
#include <iostream>
#include <algorithm>
using namespace std;

int select(int *a, int s, int e, int k){
    // if the range holds at most 5 elements,
    // sort it and return the kth element directly (k is 1-based);
    // this is also how the median of each 5-element group is found
    if(e - s + 1 <= 5){
        sort(a + s, a + e + 1);       // sort the inclusive range [s, e]
        return s + k - 1;
    }

    // otherwise split a[s..e] into groups of (at most) 5,
    // recursively find the median of each group,
    // and collect the medians at the front of the range
    int groups = 0;
    for(int left = s; left <= e; left += 5){
        int right = min(left + 4, e);  // the last group may be smaller
        int median = select(a, left, right, (right - left) / 2 + 1);
        swap(a[median], a[s + groups]);
        groups++;
    }

    // now a[s .. s+groups-1] holds one median per group;
    // the median of these medians is the middle one of them
    return select(a, s, s + groups - 1, (groups + 1) / 2);
}

int main(){
    int a[] = {6,7,8,1,2,3,4,5,9,10};
    int n = 10;

    int mom = select(a, 0, n - 1, (n + 1) / 2);
    cout << "Median of Medians: " << a[mom] << endl;
    return 0;
}