Posted by: anindyade | November 15, 2009

In his honour

I am sure this is not the first piece written in his honour, and yet I believe that any accolade for the Master Blaster falls short of what he deserves. His cricketing exploits can easily fill scores of books, but there is something about him that makes Sachin Tendulkar transcend the boundaries of sport and become what can only be described, in mortal terms, as an artist (though even this expression is surely inadequate).

What makes Sachin more special is that in a country like India, where it is very easy to complain of a "lack of opportunity", "too much nepotism" and the like, this man represents hope. He shows that true brilliance can never be suppressed, and that if you have merit, means can never be an obstacle. And beyond the fact that talent like his comes once in a century, or perhaps even more rarely, his dedication and commitment to the game are something mere mortals like me can learn from.

Even on a personal front, there is a lot to learn from this man. When Raj Thackeray's vicious "Mumbai for Marathis" campaign was (and still is) going on, probably Bombay's most famous son said, without mincing words, that he is an Indian first and a Marathi later, and that Bombay is a part of India. One can only wish that Thackeray and his goons come to their senses and realize that what makes Maharashtra a proud state and India a proud country are achievers like Sachin, not preachers of hatred like him, who at times seem worse than the terrorists from Pakistan.

And last but not least: cricket will soon cease to be a gentleman's game and turn into a three-hour entertainment show of Page 3 celebrities, cheerleaders, and players who basically just know how to slog. We will have men wearing the same Indian colours that fine gentlemen like Sachin, Dravid and Kumble did (unless, of course, we get rid of the national side and keep only the mockery of cricket called the IPL), men who, in the name of being tough, can swear and behave like a pack of wild animals (after all, they want to be like the Aussies!). Whenever there is a game of cricket, cricket's greatest ambassador, Sachin Tendulkar, will be thoroughly missed.


Thank you Sachin for giving us countless memories to cherish over the last 20 years. Thank you.

Posted by: anindyade | June 7, 2009

An elegant construction

A blog post has been long overdue. Given the miseries I went through this semester (better left unrecalled), reflecting on them is not the best use of my time. So let me write about a very elegant construction I read this morning. The result is due to Ruzsa and Szemerédi and gives a construction of bipartite graphs that can be written as a disjoint union of large induced matchings. This turns out to be extremely helpful in the construction of efficient PCPs. Anyway, I will only write about the former (the reason being that I barely understand the latter).

The problem is as follows: give an explicit construction of a dense bipartite graph that can be written as a disjoint union of induced matchings, each of which is very large. In particular, give a construction of a bipartite graph on 2k vertices with \Theta(k) disjoint induced matchings, each of size k^{1-o(1)}. By an induced matching, we mean a matching M for which there is a set of vertices A such that the subgraph induced on A is exactly M. The construction is as follows.

Let the graph G have vertex set V_1 \cup V_2, where both parts are of size 3n. Let A be any subset of the first n positive integers with no three-term arithmetic progression. For every i \leq n, define the matching M_i by joining (i+a) \in V_1 to (2i+a) \in V_2 for every a \in A. Clearly each such set of edges is a matching (distinct values of a give distinct endpoints on both sides). The only thing we need to check is that these are induced matchings. Assume for the sake of contradiction that some M_i is not. Then there exist i \neq j \in [n] and a_1, a_2, a_3 \in A such that (i+a_1, 2i+a_1) and (i+a_2, 2i+a_2) are edges of M_i, while (i+a_1, 2i+a_2) is an edge of the matching M_j. This implies i+a_1 = j+a_3 and 2i+a_2 = 2j+a_3, which together give 2a_1 = a_2 + a_3, i.e., a three-term arithmetic progression in A, and hence a contradiction. (If the progression is degenerate, then a_1 = a_2 = a_3 forces i = j, and the edge already belongs to M_i.) So the task boils down to constructing large AP-free subsets of [n]. One can get such sets of size n^{1-o(1)}; this is Behrend's construction. This completes the construction.
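To make the construction concrete, here is a small Python sketch (my own illustration, not from the paper). It substitutes a greedy AP-free set for Behrend's construction, so the density is worse, but the induced-matching property can be checked directly:

```python
def ap_free_set(n):
    """Greedily pick elements of {1..n} avoiding 3-term arithmetic progressions.
    (A toy stand-in for Behrend's n^{1-o(1)} construction.)"""
    A = []
    for x in range(1, n + 1):
        # x is the largest candidate so far, so a new 3-AP must look like (a, b, x)
        if all(2 * b - a != x for a in A for b in A):
            A.append(x)
    return A

def rs_graph(n):
    """Parts V1 = V2 = {1..3n}; matching M_i joins i+a (in V1)
    to 2i+a (in V2) for every a in A, for i = 1..n."""
    A = ap_free_set(n)
    matchings = {i: {(i + a, 2 * i + a) for a in A} for i in range(1, n + 1)}
    edges = set().union(*matchings.values())
    return A, matchings, edges

def all_induced(matchings, edges):
    """Each M_i is induced: the only graph edges between M_i's left and
    right endpoints are M_i's own edges."""
    for M in matchings.values():
        left = {u for u, _ in M}
        right = {v for _, v in M}
        if {(u, v) for (u, v) in edges if u in left and v in right} != M:
            return False
    return True

A, matchings, edges = rs_graph(30)
print(len(A), len(matchings), all_induced(matchings, edges))
```

Running this for n = 30 confirms that all n matchings are induced, exactly as the proof above predicts.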

Posted by: anindyade | January 18, 2009

An unexpected result !

I have just put my clothes in the dryer, which means I have to stay awake for at least half an hour. So I decided to blog about a nice result I read today. Probably the only tricky part of the result is the following simple arithmetic problem. Suppose x is some number modulo p^2, where p is prime. Can you give a nice expression involving x which is 0 modulo p iff x is 0 modulo p^2? Of course, "nice" is a qualitative word, but when you see it, you know it. Here is the most compact characterization I know of; it is a conjunction of two conditions.

x is 0 modulo p^2 iff a) x is 0 modulo p and b) \binom{x}{p} is 0 modulo p. (By Lucas' theorem, \binom{x}{p} modulo p equals the second base-p digit of x, so the two conditions say exactly that the last two base-p digits of x vanish.) This leads to the following beautiful complexity-theoretic application.
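A quick sanity check of the criterion (my own snippet, not part of the original result), testing it exhaustively against the direct definition for a few small primes:

```python
from math import comb

def divisible_by_p_squared(x, p):
    # x ≡ 0 (mod p^2)  iff  x ≡ 0 (mod p)  and  C(x, p) ≡ 0 (mod p)
    # (by Lucas' theorem, C(x, p) mod p is the second base-p digit of x)
    return x % p == 0 and comb(x, p) % p == 0

# exhaustive check against the definition
for p in (2, 3, 5, 7):
    for x in range(4 * p * p):
        assert divisible_by_p_squared(x, p) == (x % (p * p) == 0)
print("criterion verified")
```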

Define MOD_m to consist of languages L such that there is a non-deterministic Turing machine M_L with x \in L iff the number of accepting paths of M_L on x is non-zero modulo m. With the above observation, we will prove that MOD_{p^2} \subseteq MOD_p. In fact, they are equal; the other direction is easy (and left as an exercise :) ). Now we make two important observations. First, given a Turing machine M, there is a Turing machine M_1 whose number of accepting paths on x is \binom{a_M(x)}{p}, where a_M(x) denotes the number of accepting paths of M on x. Second, given Turing machines M_1 and M_2, there is a Turing machine M_3 whose number of accepting paths on x is a_{M_1}(x) \cdot a_{M_2}(x). These two observations let us express the criterion for a number being zero modulo p^2 in terms of counts modulo p, which proves the claim.

I guess my clothes have dried now, so I must go down.

Signing off,


Posted by: anindyade | January 8, 2009

Useful result of the day !!

In the course of trying to prove a result, I needed a tail inequality for the sum of n random variables that are only k-wise independent. I was sure such a result existed, but I had to spend 20 minutes looking for it. Anyway, here is the result, due to Bellare and Rompel. I hope anyone who needs it will spend < 20 minutes finding it (hopefully because Google will link to my blog).

Let X_1, \ldots, X_n be k-wise independent random variables (bounded, say taking values in [0,1], as in Bellare and Rompel's statement) and let X = \sum_{i=1}^n X_i with E[X] = \mu. Then \Pr[|X-\mu| > \lambda] \leq C\sqrt{k}\left( \frac{\lambda^2}{n k} \right)^{-\frac{k}{2}}, where C is some constant.
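The bound only bites once \lambda is large compared with \sqrt{nk}. A tiny snippet of mine (with the unspecified constant C set to 1 as a placeholder) showing how the right-hand side decays as k grows, for a fixed large deviation:

```python
from math import sqrt

def bellare_rompel_rhs(n, k, lam, C=1.0):
    """Right-hand side of the tail bound: C * sqrt(k) * (lam^2/(n*k)) ** (-k/2).
    C is the unspecified constant from the statement; 1.0 is a placeholder."""
    return C * sqrt(k) * (lam * lam / (n * k)) ** (-k / 2)

n = 10_000
lam = 50 * sqrt(n)  # a deviation well above the sqrt(n*k) scale
for k in (2, 4, 8):
    print(k, bellare_rompel_rhs(n, k, lam))
```

So higher independence buys an exponentially smaller tail, which is exactly why one reaches for this lemma instead of Chebyshev when k-wise independence is available.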

Happy randomizing !!

Posted by: anindyade | December 22, 2008

Frozen aircraft or how I began to hate snow and love Berkeley

A post was long overdue and I could not find a better time to come up with one. Before I start: any apparent similarity between the title of this post and a popular film by Kubrick is purely coincidental. In fact, I am of the opinion that Kubrick read this post and took the inspiration for his title from it. Never mind that his movie was released 44 years ago; I can come up with a nice explanation for the time shift and still argue that there was no copyright infringement on my end. Anyway, enough of this gibberish; I may as well come to the topic of the post.

As some of my friends live at Purdue, I decided to take a few days off from my schedule and spend some time there: do some work, catch up with old friends and, most importantly, have my first encounter with sub-zero temperatures. Before this, twice in Ranchi and once in Kanpur, temperatures came close to zero but never quite hit it. So I was all prepared to be dazzled by snow, except that I had never imagined that sub-zero temperatures can be a bit painful. When the aircraft landed at Chicago, we were initially told that it had not been assigned a gate. Then a technical problem was cited. But finally they came to the real issue: the door of the craft had frozen because of the snow!!! A thought flashed across my mind: if steel doors can freeze, why can't humans of flesh and blood? At that moment I was quite unsure whether I actually wanted to leave the plane; it was quite warm inside. Living in Ranchi/Kanpur/Berkeley, I had always felt I was one of those people insensitive to temperature. However, the real challenge was yet to come. When the door finally got opened, the shuttle I was supposed to take to Purdue had already left and there was none till morning. At this point the temperature in Chicago was -20 C, if not lower (Saumyadip told me some -32, but I am not sure what scale he was referring to). What made things much worse was the wind! Anyway, I finally managed to get a shuttle to Purdue (I paid some $30 more than the written service charge, but I guess it's better to live and be poorer by $30 than to die in that pursuit). The shuttle stopped at a store and I went in to get some food. I would have described a rather interesting incident that happened there, but I guess I am supposed to maintain some civility, so that won't go here. Finally, when I reached Purdue, I saw Saumyadip standing outside his apartment (in shorts!!).
I am sure he had frozen but was acting as if he had not :) (he in fact had), and so, after a rather interesting journey, I ended up at Pawan's and Saumyadip's place. The four of us (the fourth being Vaibhav, whom I know from Ranchi) chatted and ate. Then, after a long time, Saumyadip, Pawan and I watched a movie (Sholay; I guess each of us has seen it C times, for C tending to infinity). What was new this time was finding critical flaws in the movie, something people from IITK are good at :) These flaws are not usually covered by critics and are left to people like us. Just to give a sample: what happened to the horse Dhanno at the end of the movie? :) And other things in a similar spirit. Anyway, looking forward to the next 10 days at Purdue.


Signing off,


Posted by: anindyade | August 17, 2008

Hello from UC Berkeley

So, another phase of life starts. This time it's graduate school, and as with the previous one (i.e., IITK), the first part has been very hectic. I arrived in the United States on 11th August. Soham came to pick me up from the airport, and even though he had warned me that it would not be feasible to go to Berkeley the next day (he stays in SF), I dared to do the impossible! Things, however, ended on a rather modest note, with me sleeping through most of the next day (let alone going to Berkeley). The morning after, I went to Berkeley alone (on BART). As Madhur is interning in India, I decided to stay in his room. The day was hectic: getting my Cal Card (ID card) made, opening a bank account and attending the orientation program. Soham did come in the evening with my bags. The next day I walked almost everywhere (10 km or more) searching for a house. Finally, we got one; the rent is high, but everyone gets a separate bedroom. I also got a phone connection. With all that settled, I made the necessary pilgrimage to Soda Hall (the Computer Science department building), which, as many say, "looks like a bathroom turned inside out". Reason: the exterior is made of green tiles, the same kind used in bathrooms. Anyway, I hope I live up to the reputation of Soda Hall and the illustrious people who have occupied the place before me.

The past week was a proud one for every Indian (and hence for me), as Abhinav Bindra clinched the first-ever individual Olympic gold medal for India. I hope this is the first of many more to come, and let's hope the next ones come with a better frequency. Apart from the outstanding achievement itself, credit goes to Abhinav for surmounting the challenges posed by the politics usually prevalent in sports. A quote by Amitabh Bachchan concisely sums up my feelings: "He made every Indian walk a bit taller." Apart from celebrating this feat, we should take some time to ponder why it took 100 years for a billion people to win their first gold medal.

With an obvious scarcity of friends here, and missing India, family and friends back home,

Posted by: anindyade | August 1, 2008

Some parting thoughts and Approximation algorithms

So, the time has come for me to leave India for a foreign land (not forever, I hope, but nevertheless for a substantial time). This is probably the last post I will write before going to Berkeley. Though I will be at home till next Saturday, next week will be very hectic and I shall have no time for posts. So, what will I miss about India? Well, the first thing is India itself! Then, staying at home for the past two months, I have again got used to good food, which I guess will become a rarity once I am in the US. A new system, a new set of people, new food, new weather: I guess the only thing I am accustomed to that will also be there in the US is myself. But changes are inevitable, and the best thing is not to resist them but to accept them with a smile. After all, four years ago I did not expect to have any regrets on leaving IITK, but when I actually left, I did feel sad. So maybe 4-5 years from now, when I leave Berkeley, I will feel sad about that too.

Enough of the emotional stuff; let's get back to business. I always wanted to know a bit about approximation algorithms but never had the opportunity to learn the subject in much detail. Last week, apart from thinking about some circuit complexity problems, I read about two simple but elegant approximation algorithms: a greedy algorithm for the Set Cover problem and the Goemans-Williamson SDP algorithm for MAX-CUT. The first has a simpler description and a somewhat simpler proof, so let me explain that one. But first, a word on approximation algorithms in general. Since the seminal works of Cook, Levin and subsequently Karp, it is known that there are problems whose solutions are efficiently verifiable but which are not solvable in polynomial time under a widely believed assumption (namely P \neq NP). Today there are literally thousands of problems (of theoretical as well as practical importance) known to be NP-complete, i.e., unless P = NP, they cannot be solved in polynomial time; hence the corresponding optimization problems (the maximization and minimization versions) cannot be solved in polynomial time either. So we should give up hope of exact efficient algorithms for these problems. However, it may still be possible to have an efficient algorithm that returns an answer guaranteed to be close to the optimal value (for example, in a maximization problem, an answer always within a factor of 2 of the optimum). Such algorithms are of tremendous importance, and this is what has given rise to the flourishing field of approximation algorithms and its complexity sibling, hardness of approximation (and, of course, the PCP theorem). I will now describe an O(\log n)-approximation algorithm for Set Cover. The algorithm is very simple, though the analysis is tricky.

First, the problem. Let C = \{S_1, S_2, \ldots, S_k\} be a collection of subsets of a set U; without loss of generality, assume \bigcup_{i=1}^k S_i = U. The goal is to find the minimum number of subsets S_{i_1}, S_{i_2}, \ldots, S_{i_m} \in C such that \bigcup_{j=1}^m S_{i_j} = U. In other words, find the smallest subcollection of C whose union is U itself. For example, let C = \{\{1\},\{2\},\{3\},\{1,3\}\}. Then the minimum set cover is \{\{2\},\{1,3\}\}. The algorithm is a very simple greedy one.

Let R = U and A = \emptyset. Find the set S_i \in C maximizing |S_i \cap R| over all sets in C. Set A = A \cup \{S_i\} and R = R - S_i. If A covers U, stop; else repeat with the new R and A. So it is a very simple greedy algorithm for set cover, and it clearly runs in polynomial time. Now comes the tricky part: the analysis of the approximation ratio.
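The greedy rule above is only a few lines of Python (my own sketch; the example instance is the one from the problem statement):

```python
def greedy_set_cover(universe, collection):
    """Repeatedly pick the set covering the most still-uncovered elements.
    Returns indices into `collection`; assumes the collection covers the universe."""
    R = set(universe)   # R: still-uncovered elements
    chosen = []         # A: indices of the sets picked so far
    while R:
        i = max(range(len(collection)), key=lambda j: len(collection[j] & R))
        chosen.append(i)
        R -= collection[i]
    return chosen

# the example from above: the optimum cover is {{2}, {1,3}}, of size 2
U = {1, 2, 3}
C = [{1}, {2}, {3}, {1, 3}]
print(greedy_set_cover(U, C))  # greedy also finds a cover of size 2 here
```

On this instance greedy happens to find an optimal cover; in general, as the analysis below shows, it is only guaranteed to be within an O(\log n) factor.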

The first trick is that it is rather hard to analyze the algorithm in terms of the optimal solution; it is easier to analyze the optimal solution in terms of the algorithm. Let there be t iterations, and in the i^{th} iteration let the set S be added to A. Just prior to adding S, let |S \cap R| = s. Then, for adding these s elements to the cover, we spend a cost of \frac{1}{s} per element. Let c_a denote the cost spent on element a (as described above). Now consider any arbitrary set S' \in C and write S' = \{s_1, s_2, \ldots, s_h\}, ordered by when the elements are covered by the greedy algorithm, with ties broken arbitrarily. Consider the element s_j. By the way the algorithm works, when s_j is covered, the elements s_j, \ldots, s_h of S' are all still uncovered, so the set chosen by greedy covers at least h-j+1 new elements; hence c_{s_j} \leq \frac{1}{h-j+1}. This immediately implies \sum_{s \in S'} c_s \leq \sum_{j=1}^h \frac{1}{h-j+1}, and the last sum is bounded by O(\log h). Now let O = \{S'_1, S'_2, \ldots, S'_d\} be an optimal set cover, so the optimum has size d. Since O is a set cover, \sum_{i=1}^d \sum_{s \in S'_i} c_s \geq \sum_{s \in U} c_s. We also know that \log n \geq \log |S'_i| \geq \sum_{s \in S'_i} c_s (up to the constant in the harmonic sum), where |U| = n. Finally, by the way c_s is defined, \sum_{s \in U} c_s is exactly the cost of the greedy solution, which we denote by d_1. Combining the inequalities, d \log n \geq d_1. Hence the greedy algorithm is an O(\log n)-approximation algorithm.

This is in fact essentially the best possible: Feige proved that unless NP \subseteq DTIME(n^{O(\log \log n)}) (a conclusion almost as implausible as P = NP), Set Cover is hard to approximate within (1-\epsilon)\ln n for any \epsilon > 0. Anyway, that is for another day. As of now, I am done. Next post from Berkeley. Till then, bye bye.

Posted by: anindyade | July 24, 2008

Fourier analysis of boolean functions

At STOC 2008, Ryan O'Donnell gave a very beautiful tutorial on the analysis of boolean functions. Though I did not follow everything he discussed there, last week I decided to go over the material again, and this time things became much clearer. So I decided I should blog about this very elegant and useful technique in theoretical computer science.

Usually, we consider a boolean function f as a mapping from \{0,1\}^n to \{0,1\}. Since there is nothing sacred about using 0 and 1, we might as well use -1 and 1 instead, and consider f: \{-1,1\}^n \rightarrow \{-1,1\}. We will also consider the domain and range of f as embedded in \mathbb{R}^n and \mathbb{R} respectively, i.e., f: \mathbb{R}^n \rightarrow \mathbb{R}. This proves to be a masterstroke, as we can now apply Fourier-analytic techniques to boolean functions. To see how simple some functions look in this setting, consider the parity function over n variables x_1, x_2, \ldots, x_n: here PARITY(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n x_i, whereas in the 0/1 setting, using just AND, OR and NOT operations, PARITY has no comparably simple expression. Of course, that is not by itself a mathematical reason for the new setting, so let us derive a few more properties of functions in this representation.

The first property is that given a function f: \{-1,1\}^n \rightarrow \mathbb{R}, there is a unique multilinear polynomial (i.e., a polynomial in which no monomial has a quadratic or higher-degree term in any variable) g: \mathbb{R}^n \rightarrow \mathbb{R} such that g(\overline{x}) = f(\overline{x}) for all \overline{x} \in \{-1,1\}^n. Corresponding to each S \subseteq [n] we have the monomial \prod_{i \in S} x_i; its coefficient in g is denoted g_S = f_S and is called a Fourier coefficient. The reason for the name has to do with representation theory.

The next observation is that the set of all functions from \{-1,1\}^n to \mathbb{R} forms a vector space over \mathbb{R} of dimension 2^n. We can also define an inner product on it (apologies for using IP as an abbreviation for inner product, as it is difficult to typeset the standard notation here): IP(f,g) = \mathbb{E}[f(\overline{x}) g(\overline{x})], where \overline{x} is uniform over \{-1,1\}^n. Now define the functions \chi_S(\overline{x}) = \prod_{i \in S} x_i; there are 2^n of them, and if we can show they are linearly independent, they form a basis for the functions from \{-1,1\}^n to \mathbb{R}. Indeed, it is easy to check that if S \neq S', then IP(\chi_S, \chi_{S'}) = 0, while IP(\chi_S, \chi_S) = 1. So the \chi_S for S \subseteq [n] form an orthonormal basis for all functions from \{-1,1\}^n to \mathbb{R}. For a function f: \{-1,1\}^n \rightarrow \mathbb{R}, the coefficient f_S above is exactly the coefficient of \chi_S in this basis (the more observant reader may notice that the \chi_S are nothing but the characters of the group \{-1,1\}^n). That is almost all the theory one needs in order to understand some basic results.
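For small n, one can compute f_S = IP(f, \chi_S) by brute force over the whole cube. A short Python sketch of mine, using the parity function from earlier as a sanity check (all of parity's Fourier weight should sit on S = [n]):

```python
from itertools import product

def fourier_coefficient(f, S, n):
    """f_S = E[f(x) * chi_S(x)] over uniform x in {-1,1}^n."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]        # chi_S(x) = prod_{i in S} x_i
        total += f(x) * chi
    return total / 2 ** n

n = 3
parity = lambda x: x[0] * x[1] * x[2]
# enumerate all subsets of [n] as tuples of indices
subsets = [tuple(i for i in range(n) if mask >> i & 1) for mask in range(2 ** n)]
coeffs = {S: fourier_coefficient(parity, S, n) for S in subsets}
print(coeffs)
```

As expected, the coefficient on the full set is 1 and all others are 0, and the squared coefficients sum to 1 (Parseval, which follows from orthonormality of the \chi_S).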

While plenty of results in theory use Fourier analysis, one particular example in property testing is very elegant. Given a function f: \{-1,1\}^n \rightarrow \{-1,1\}, we need to decide whether the function is linear, i.e., whether \forall x \forall y: f(xy) = f(x) f(y), where xy denotes the coordinatewise product in \{-1,1\}^n. One can decide with high probability, using just 3 queries, whether f is linear or far from linear. The test is to take two random points x, y, query the values f(x), f(y) and f(xy), and check whether f(xy) = f(x) f(y). It is trivial to see that if f is linear, the test accepts with probability 1, while using Fourier analysis one can show that if f is \epsilon-far from linear, the test rejects with probability at least \epsilon. The rough outline of the proof: one first shows that the linear functions are exactly the \chi_S for S \subseteq [n]; next, that f_S = 1 - 2\delta, where \delta is the distance between f and \chi_S; and finally, that the acceptance probability of the test can be upper bounded in terms of the largest Fourier coefficient of f. From these facts the claim follows.
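For small n, the test's exact rejection probability can be computed by brute force over all pairs (x, y). Here is a toy sketch of mine (the choice of majority as the "far from linear" function is just an illustrative example, not from the tutorial):

```python
from itertools import product

def blr_reject_rate(f, n):
    """Exact rejection probability of the 3-query linearity test:
    pick x, y uniformly, reject iff f(xy) != f(x) * f(y)."""
    pts = list(product((-1, 1), repeat=n))
    bad = 0
    for x in pts:
        for y in pts:
            xy = tuple(a * b for a, b in zip(x, y))  # coordinatewise product
            if f(xy) != f(x) * f(y):
                bad += 1
    return bad / len(pts) ** 2

n = 3
chi = lambda x: x[0] * x[2]              # a character chi_S: truly linear
maj = lambda x: 1 if sum(x) > 0 else -1  # majority on 3 bits: not linear
print(blr_reject_rate(chi, n), blr_reject_rate(maj, n))
```

The linear function is never rejected, while majority is rejected a constant fraction of the time, matching the two directions of the analysis above.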

There are many applications of Fourier analysis, especially in the theory of social choice and in additive combinatorics. Maybe some day I will blog about Roth's Fourier-analytic proof of his theorem on three-term arithmetic progressions (the first nontrivial case of Szemerédi's theorem). As usual, comments and criticisms are welcome.

Posted by: anindyade | July 19, 2008

My years at IITK

One of my friends recently wrote about his four years at IITK, so I decided I should also take this opportunity (provided by a period of sheer joblessness) to express my gratitude towards my alma mater and the people who made that stay unforgettable. So, what prompted me to come to IITK? Nothing much, really: my neighbour graduated from IITK (Y98 batch), and when I was in a dilemma over whether to go with CS@KGP or EE@IITK, I asked him and he suggested IITK (though eventually I switched to CSE). Further, for reasons best not elaborated, I somehow have a dislike for IIT KGP. The initial few days were tough for me, as I guess they were for everyone else: a new place, a bit of ragging, bad food; nothing made my life easy. But slowly things started falling into place, and one day I suddenly realized that I no longer considered the place alien.

So, what was great about IITK? Well, nothing! Because you don't consider a place your own when things are great; you consider a place your own when everything looks natural around you. I remember commenting to someone that I did not like Stanford simply because everything looked so picture-perfect, too picture-perfect for my liking. In contrast, everything and everyone at IITK was so natural.

So, what will I remember about IITK? (Disclaimer: if I missed writing someone's name, which will be true for a 1-o(1) fraction of the people at IITK, it does not mean I don't remember or like you.) Well, many things: Cherian telling me "how responsible you become when you go to a gymnasium", Soumyadip's LBW, Lendu and Marda's antics during the CS330 project (believe me guys, in retrospect I miss it), discussions with wingmates in Muski's room (especially Abhishek, Manjesh, Gaurevendra, Muski, Lalbabu, Sudhir, Mishraji), wonderful bull sessions in the CSE ground-floor lab with Gelara, Abhinav, Nishith, Siddharth (Cherian included, of course, as the victim of all such discussions) and, of course, the wonderful friends I had in Sidhant, Pawan, Abhik, Vivek, Mayank and everyone whose name I am missing right now because I am too tired to write.

Things would not be complete if I did not mention (without elaborating on names) that some of the people who taught me at IITK are not just among the smartest, but also among the most down-to-earth and wonderful people I have ever had the chance of meeting.

Life was so good at IITK that it probably cannot be expressed in words. I did not quite realize when the four years got over and it was time to go, but I am sure those memories will linger for a long, long time to come. Thank you, everyone who made that stay such a memorable affair.

Posted by: anindyade | July 11, 2008

Graduate school

Being down with influenza is the worst thing that could have happened. Not only do I have a slight fever, but almost every joint in my body aches, along with a blocked nose and a sore throat. So you can imagine how wonderful I am feeling right now. Though I had been thinking about pseudorandom generators for some time, having a fever means I am just not enthusiastic about thinking at all. Given the quality of movies on TV, I decided to just blog about something, and that something happens to be graduate school.

As grad school applications are pretty fresh in my memory, let me say something about how I chose where to apply and how I made the final decision. Choosing the right graduate school is a very critical decision, so one should be clear about what one wants out of graduate school. I believe the most common answer is "good research", and if that is your aim, my personal opinion is to first select the people with whom you would like to work and then apply to the relevant universities. In my case, I wanted mainly to work in computational complexity, and most of my choices were made accordingly. I have often heard of people choosing graduate schools by a very abstract thing called "ranking". It is easy to understand a layman on the street going by rankings produced by an equally (or more) lay newspaper journalist who understands nothing about how research works in a graduate school; it is harder to make sense of prospective graduate students taking the same view. When I was applying, I often heard statements from fellow applicants like "Among the top 10, I am applying to 6, and then 2 from the next 10". Personally, I applied to places like Harvard, which may not rank very highly in news rankings, but the theory group at Harvard certainly has excellent people like Salil Vadhan, Leslie Valiant and Michael Mitzenmacher, among others. That should be the natural thing to do; why on earth should you give a damn about these silly news rankings? Another thing I have seen is people taking a thousand opinions on their SOP. Taking one, or maybe a couple, of opinions sounds justified, but you should try to be yourself, and not someone else, in your SOP. Believe me, honesty is appreciated far more than a made-up image.

Finally, about choosing the right school once you have all your offers. First, if you are dependent on financial aid, make sure they are giving you enough funds. Once that is done, decide on a preference order of the people with whom you would like to work, and choose the university of the person first in that order. Sometimes you are equally inclined to work with any of several people (it happened in my case with 4-5 universities); then perhaps take the place with the larger group in your area of interest. If even that does not settle it, flip a coin.
At times I have seen students going as far as worrying about job prospects after graduation. In my opinion, that is too early to think about. To summarise: just aim to do good research in grad school and keep the bullshit to a minimum.
