Posts

The connected components

Molloy and Reed [1995] showed that for a random graph with (λ_i + o(1))n vertices of degree i, where the λ_i are non-negative values which sum to 1, the giant component emerges when Q = Σ_{i≥1} i(i − 2)λ_i > 0, so long as the maximum degree is less than n^{1/4−ε}. They also show that almost surely there is no giant component when Q = Σ_{i≥1} i(i − 2)λ_i < 0 and the maximum degree is less than n^{1/8−ε}. Here we compute Q for our graphs. Since the number of vertices of degree i is proportional to 1/i^β, we are led to consider the value β₀ = 3.47875..., which is a solution of ζ(β − 2) − 2ζ(β − 1) = 0. We first summarize the results here:

1. When β > β₀, the random graph almost surely has no giant component. When β < β₀, there is almost surely a unique giant component.
2. When 2 < β < β₀, the second largest components are almost surely of size Θ(log n). For any x between 2 and Θ(log n), there is almost surely a component of size x.
3. When β = 2, the second largest components are almost surely of size Θ(log n / log log n). For any x between 2 and Θ(log n / log log n), there is almost surely a component of size x.
4. When 1 < β < 2, the second largest components are almost surely of size Θ(1). The graph is almost surely not connected.
5. When 0 < β < 1, the graph is almost surely connected.
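As a quick numerical sanity check (not part of the original text): plugging λ_i ∝ 1/i^β into the Molloy-Reed sum gives Q proportional to ζ(β − 2) − 2ζ(β − 1), so the threshold β₀ can be recovered by root-finding. The sketch below uses SciPy; the bracket [3.01, 4.0] is my own assumption, chosen because the expression changes sign on that interval.

from scipy.optimize import brentq
from scipy.special import zeta

def molloy_reed_q(beta):
    # For lambda_i ~ 1/i^beta, Q = sum_i i*(i-2)*lambda_i is proportional
    # to zeta(beta-2) - 2*zeta(beta-1); both series converge for beta > 3.
    return zeta(beta - 2) - 2 * zeta(beta - 1)

# zeta(beta-2) blows up as beta -> 3 from above, so Q > 0 there, while at
# beta = 4 we get zeta(2) - 2*zeta(3) < 0; bracket the sign change and solve.
beta_0 = brentq(molloy_reed_q, 3.01, 4.0)
print(beta_0)  # ~3.47875, matching the threshold quoted above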

Difference between Batch Gradient Descent and Stochastic Gradient Descent

In order to train a Linear Regression model, we have to learn some model parameters such as feature weights and a bias term. One approach is Gradient Descent, an iterative optimization algorithm that tweaks the model parameters so as to minimize the cost function over the training data. It is a complete algorithm, i.e., given enough time and a learning rate that is not too high, it is guaranteed to find the global minimum (optimal solution) when the cost function is convex, as the mean squared error for linear regression is. Two important variants of Gradient Descent, widely used in Linear Regression as well as in neural networks, are Batch Gradient Descent and Stochastic Gradient Descent (SGD).

Batch Gradient Descent: Batch Gradient Descent involves calculations over the full training set at each step, which makes it very slow and computationally expensive on very large training data. However, it works well on convex or relatively smooth error surfaces.
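To make the two update rules concrete, here is a minimal NumPy sketch (an illustration, not code from the original post); the synthetic data, learning rates, and epoch counts are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = np.c_[rng.normal(size=(200, 1)), np.ones((200, 1))]    # feature + bias column
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=200)  # true weight 3, bias 2

# Batch Gradient Descent: one parameter update per pass, using the
# exact MSE gradient computed over the FULL training set.
theta = np.zeros(2)
for _ in range(50):
    grad = 2.0 / len(X) * X.T @ (X @ theta - y)
    theta -= 0.1 * grad
print("batch GD:", theta)  # close to [3.0, 2.0]

# Stochastic Gradient Descent: one cheap, noisy update per randomly
# drawn training example.
theta = np.zeros(2)
for _ in range(50):
    for i in rng.permutation(len(X)):
        grad = 2.0 * X[i] * (X[i] @ theta - y[i])
        theta -= 0.02 * grad
print("SGD:", theta)  # noisier trajectory, similar final parameters

The batch version computes an exact gradient but touches every sample per step, which is what makes it expensive on large datasets; SGD trades gradient accuracy for a much lower per-step cost.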