Category Archives: physics

Monsoon Clouds

When water vapor is no longer a constraint on cloud formation, the monsoon provides some spectacular visuals. The clouds form mainly due to convection over the Indian Ocean, which is why the usual signatures of cloud formation, thunder and lightning, are not very common. It simply pours. Thunder and lightning are common when clouds form locally, during the months of April and May.

Space Craft?

This monstrosity is probably many kilometers long. An interesting thing to notice is the base of the clouds, which is nearly flat. The base is probably around 2.5 to 3 kilometers above the earth's surface, marking the top of the atmospheric boundary layer. Clouds this big are always bad news; the ones in the picture seem to be cumulus congestus, which will eventually grow and bear rain. The formless sheets in front seem to be altostratus, which normally don't bear rain. It is probably only in this season that you can see cumuli and stratus clouds together.

Same as above, more dispersed cumuli

Some really high clouds were also visible, indicating strong convection somewhere else. Cirrus clouds are normally found at heights of 6 kilometers and above, and look very peaceful.

Cirrus, fibre-like

And another shot of cumulus clouds, the smaller variety.

Cumuli marching

Compare these with the clouds normally seen a few months before. The sky is way busier than in the months preceding the monsoon. Blame it on the winds.

On Evolution

History and Evolution (from biology) are somewhat similar in that both are quite descriptive, and involve in their description some events that cannot really be predicted from any known first principles. Indeed, it would not be too much of a sin to say that history is itself evolution of some kind.

This being the case, there are at least two ways in which one can approach evolutionary topics, which can complement or (probably more commonly) confront each other. All evolutionary topics that we are interested in suffer from a lack of complete data. As mentioned a couple of posts back, this has motivated people to assume an underlying ‘model’ and use it to string through the facts a story that seems plausible. Another, equally interesting method would use the notion of a ‘constitutive absence’.

‘Constitutive absence’ is simply a way of saying that what is absent from the data collected or the story woven is as important as what is present. This motivates us to ask ‘Why not this?’ rather than ‘Why this?’. It is my suspicion that looking at history and biological evolution in this manner will structure our thinking about these subjects more constructively. For example, instead of asking ‘Why do birds have two wings?’, it may be more constructive to ask ‘Why are there no birds with six wings?’. It may just be that we have missed finding these flying critters, or there may be some other reason. This to me is more in line with the theory of natural selection: natural selection can only select against, not select for. To select for something implies that evolution should ‘know’ what to select for, which is obviously nonsense. In the game of natural selection, there are no winners, only survivors. However, most biology literature seems to try to explain why a particular trait is present in an organism. Schrödinger came to the conclusion that the molecules that carry life (the structure of DNA was unknown then) must be large by asking why there cannot be small molecules of life. The answer is that at smaller scales Brownian motion dominates, and the smaller the molecule, the more susceptible it is to change (mutation) by bombardment from other molecules.

The picture that this provides us with is not an all-encompassing Story of Everything, but one of the constraints placed on organisms that prevent any other possible scenario from being viable. It is somewhat like trying to understand how water flows: if you look at water in a large river, trying to follow one blob of water may be hopeless, but in a stream you may have better chances. The constraints of a narrower channel make this easy for us. Even so, you may not be able to predict precisely how the blob of water flows, but you know that it will remain within a confined boundary.

However, you will need to admit that beyond a certain stage you can’t really be sure about things when you take up this approach, whereas the Big Story approach will try and explain everything.

History is no different. Instead of asking ‘Why did the Europeans have an Industrial Revolution?’ one can ask ‘Why did the Indians and the Chinese not have an Industrial Revolution?’. Instead of asking ‘Why is India mainly vegetarian?’ one can ask ‘Why did Indians not develop a meat-dominated cuisine?’. You can probably see how just framing the question differently leads one to think very differently about the same problem. If you appeal to the physics of complex systems, then you are acknowledging that the trajectory of any complex system is inherently hard to predict, but that constraints on the system make certain trajectories highly unlikely, and everything that happens does so within the phase space that is still viable. Historically we hoped to find laws of Nature and Society that would enable us to see forward and back in time. Unfortunately, we now know that these laws, if they exist, are probably too complicated for us to comprehend, and so a ‘constitutive absence’ is a more sensible way to move forward.

Moral stories in the age of computers

All of us have been brought up listening to or reading some kind of moral story or other – Panchatantra, Aesop’s fables, Bible stories and so on. They are part of our standard training while learning to live in the world. All moral stories are motivated by some ultimate aim of human life, though these aims are never explicit, and are overshadowed by talking animals and trees. Our morals do not develop in a vacuum – they are shaped strongly by our socio-cultural and geographical locations, and moral stories are among the more effective means of our ‘shaping’. Not only that: like everything else in the world, they evolve, though not necessarily in the Darwinian sense of the word. Aristotle and Plato may have condoned slavery, but not Adam Smith and his ilk. Even then, considering that Aesop’s fables and the Bible provide relevant advice even to this day, there seem to be some things that are eternal, like numbers.

From where do we derive our ethical codes? The most abundant source is of course our own history. When viewed through a certain lens (which comes from a certain metaphysical position about man and his relationship with other humans and the rest of the universe), history can give us all the lessons we need. This is why it is said that people who forget history are condemned to repeat it – not that we have progressed linearly from barbarians to civilized people; it is just that we are animals with an enormous memory, most of it outside our heads and in books, and preserving or changing such a legacy necessarily requires engagement with it. Therefore, ethics and epistemology have always gone hand in hand.

Our times are unique in history simply due to the predominance of science in determining what we know – ancient Greeks or Indians would do physics and metaphysics simultaneously without necessarily putting one or the other on a pedestal. The scientific method and mystical revelation were both valid ways of getting at the truth. Nowadays, of course, the second would hardly be considered a valid method for getting at anything at all, let alone the truth. It is hard to say whether this is good or bad – evolution does not seem to have a sense of morality.

The Newtonian and Darwinian revolutions have had important implications for the modes of moral storytelling. First, they remove the notion of an ultimate purpose from our vocabulary. Newton’s ideal particles and the forces acting on them removed any idea of a purpose for the universe, and the correspondence between Newton’s particle<->force and Darwin’s phenotype<->natural selection is straightforward. Thus biology, or life itself, lost any notion of ultimate purpose. Economists extended this to humans, and we get a human<->pain/pleasure kind of model of ourselves (pain/pleasure is now cost/benefit, of course). All in all, there are some kind of ‘particles’ and some ‘forces’ acting on them, and these explain everything from the movement of planets to why we fall in love.

Secondly, history is partially or wholly out of the picture – at any given instant, given a ‘particle’ and a ‘force’ acting on it, we can predict what will happen in the next instant without any appeal to its history (or so the claim goes). Biology and economics use history, but only to the extent of claiming that their subject matter consists of random events in history, which therefore cannot be subsumed into physics.

If life has no ultimate purpose, or to put it in Aristotle’s language, no final cause, and is completely driven by the efficient cause of cost/benefit calculations, then why do we need morals? And how can one justify moral stories any longer?

The person of today no longer sees himself as a person whose position in life is set by historical forces or karma, depending on your inclination, but as an active agent who shapes history. Thus, while the past may be important, the future is much more so. He wants to hear stories about the future, not about the past.

This is exactly where computers come in. If we accept a particle<->force model for ourselves, then we can always construct a future scenario based on certain values for both particles and forces. We can take a peek into the future and include that in our cost-benefit calculations (using discount rates, Net Present Value and so on). Be it the climate, the economy or the environment, what everyone wants are projections, not into the past, but into the future. The computation of fairytales about the future may be difficult, but not impossible, what with all the supercomputers everybody seems to be in a race to build.
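The discounting mentioned above can be sketched in a few lines. This is a minimal illustration of Net Present Value, with made-up cash flows and a made-up 5% discount rate, not a claim about how any particular model does it.

```python
# A minimal sketch of discounting: future costs and benefits are collapsed
# into a single Net Present Value. All numbers here are illustrative.

def net_present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Pay 100 now for a benefit of 30 per year over the next four years.
flows = [-100, 30, 30, 30, 30]
npv = net_present_value(flows, rate=0.05)
print(round(npv, 2))  # -> 6.38: the future, weighted down, still "pays"
```

Note how the discount rate quietly encodes a moral stance: the higher it is, the less the future matters.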

The notion of a final cause is somewhat peculiar – it is the only one explained in terms of its effect. If I have a watch and ask why it is ticking, I can give a straightforward efficient cause: the gear mechanisms. On the other hand, if I ask why the gear mechanisms work the way they do, I can only answer that it is to make the clock tick – the cause is explained by its own effect. Thus, if we see the future a computer simulates and change our behavior, we have our final cause back again – we can say that to increase future benefit, we change our present way of life. The effect determines the cause.

Corporations, Countries, Communities are faced with the inevitable choice of using a computer to dictate their moral stance. However, one can always question the conception of a human being (or other life for that matter) as doing cost benefit calculations as their ultimate goal. If we need a more textured model of a human, writing an algorithm for it remains an impossibility to this day.

For example, one can argue that the ultimate purpose of life is to live in harmony with nature, or that we should ‘manage’ nature sustainably. The former does not need (indeed, does not have at present) a computer model, whereas the latter does. One is within the reach of every person; the other is only accessible to a technological high-priesthood. Which should we choose? At a future time, which one will we be forced to choose?

Therefore, in this post-Darwinian world, can we imagine an ultimate purpose for ourselves that will enable us to act on our own, or will we be guided by supercomputers simulating caricatures of ourselves? Time will tell.

Evolution – Variation and Similarity

Evolutionary thinking (due to Darwin) is no doubt one of those paradigm shifting moments in scientific history, changing how we conceive of the world around us and ourselves. The idea of ‘Descent through Modification’ is now well established and accepted.

While evolution is not a disputable fact, a major source of debate a few decades ago (and even nowadays, to some extent) has been the causes of evolution. Enter an evolutionary biology class and you will see that everyone tries to explain observable traits (a non-jargon way of saying phenotypes) using fitness arguments – how this or that trait was required for survival and reproductive success, and hence is here today. These arguments stem from a view called the ‘Modern Synthesis’: evolution happens primarily through natural selection; natural selection requires a set of variants to select from; and this variation within a population is supplied by random genetic mutation. It is called a ‘Synthesis’ since it combined ideas from evolution and genetics to give a plausible answer to the mechanism of evolution. The whole idea of evolutionary game theory rests on this hypothesis, and so does evolutionary psychology.

However, a physicist or mathematician or anyone else who tries to look for patterns in phenomena will tend to be exasperated by natural selection arguments for everything: in some cases it is obvious that natural selection caused evolution, while in others it is not. Yet a knee-jerk answer to any evolutionary question by a biologist will invoke natural selection. Most of these answers are plausible, but that does not mean anything. For example, a crash in a predator population can easily be put down to a lack of fitness, but anyone who has studied the predator-prey model will tell you that this crash comes about through interactions between predator and prey populations, and has nothing to do with genes or natural selection.
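The predator-prey model referred to here is the classic Lotka-Volterra system, and a few lines of numerical integration show the point: predator numbers crash and recover cyclically with no genes, no mutation and no selection anywhere in the equations. The coefficients below are illustrative, not fitted to any real population.

```python
# Lotka-Volterra predator-prey model, integrated with small Euler steps.
# The crash-and-recovery cycles come purely from the coupled dynamics.

def simulate(prey, pred, steps=20000, dt=0.001,
             a=1.0, b=0.1, c=1.5, d=0.075):
    history = []
    for _ in range(steps):
        prey += dt * (a * prey - b * prey * pred)    # prey grow, get eaten
        pred += dt * (-c * pred + d * prey * pred)   # predators starve or feed
        history.append(pred)
    return history

preds = simulate(prey=10.0, pred=10.0)
# The predator population oscillates well above and well below its
# starting value over the run, with no appeal to fitness.
print(max(preds) > 12, min(preds) < 8)
```

A 'fitness' story would be perfectly plausible here, and perfectly beside the point.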

Creating evolutionary fairy tales frees the biologist from looking at a phenomenon at a deeper level, and sometimes one feels that depth is what is lacking when one reads evolutionary biology. The oft-quoted example is the Fibonacci spiral – it shows up everywhere, from the shapes of galaxies to the arrangement of seeds in flowers. A hardline selectionist would tell you that this is because there were many variants of the universe and ours was the only one that managed to survive (reproduce?), and thus all such successful survivors have Fibonacci spirals because of their ‘fitness improvement’. One cannot disprove this, no doubt, but the question is whether one should accept it.
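The Fibonacci connection is, at bottom, geometric: consecutive Fibonacci ratios converge to the golden ratio φ, and the 'golden angle' 360/φ² of roughly 137.5° is the divergence angle at which successive seeds never line up exactly – a packing constraint, not a selectionist story. A minimal sketch of the arithmetic:

```python
# Consecutive Fibonacci ratios converge to the golden ratio phi,
# and the golden angle 360/phi^2 is what shows up in seed arrangements.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2
print(fib(20) / fib(19))   # -> close to phi, ~1.6180
print(360 / phi ** 2)      # -> golden angle, ~137.5 degrees
```

This is the kind of constraint-based explanation the rest of this post argues for.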

For me at least, the answer is no – while selection of variants has its place in biology and (I say this sceptically) in other fields, it cannot explain the unity underlying phenomena: certain things ‘just happen’ to look/behave/think similarly, and this evolution via selection cannot explain. Are there physical, chemical or informational constraints on a living being that simply do not allow certain variants? Are ‘gaps in the fossil record’ actually gaps – is there a step jump from one form to another? Answering these questions is far harder than coming up with ‘plausible’ selectionist arguments, and has very rarely been attempted in the history of biology. However, if evolutionary theory is to have the depth seen in physics or mathematics, such work has to happen.

Situating the Mind

One of the interesting themes that emerged from a workshop I attended recently is the problem of placing the Mind in a certain place. Up front, one must assume that it is sensible to separate the Mind from the Brain, at least for purposes of analysis if nothing else. Neuroscientists may have problems with this, but that is their problem.

The first approach to this problem was to deny that anything ever happened within the Mind – the brain was a simple input/output machine: put in a stimulus, get out behavior. There were no such things as mental states, and anything ‘unobservable’ had no real existence. This was the approach of the Behaviorists, and it is what gave rise to traditional psychology, with its ideas of conditioning and behavioral modification. This view is quite defunct, especially after Chomsky and others at MIT and Harvard came into the picture.

The second dig at the problem was taken by the cognitive scientists of the Chomsky tribe. This still-dominant view considers the brain to be a computer (note that they do not think of the computer as merely a good model, but rather that the brain is a computer). While the particulars of implementation may be debated (Turing machine vs. neural networks), the idea is that the Mind has certain internal states, which, combined with sensory inputs, give you all the rich everyday experience any person is familiar with. To a first approximation, and as a working hypothesis, this is extremely useful and has led to great insights into the functioning of the Mind, especially with regard to perception and language. Computer vision is the brainchild of this era, and its results are there for anyone to see.

However, biologists were probably dissatisfied with this ‘disembodied’ mind that the computer scientists had come up with. It would imply that the relation between the mind’s functioning and the environment in which it evolved is very weak. No self-respecting biologist could ever accept such a claim, and this led to an ‘embodied’ concept of the mind, where perception (for example) is not the output of an algorithm but a combination of body states (walking, running, eating, etc.) and mental states. One cannot separate the two, since the mind did not evolve on its own but developed as part of a whole.

Thus, we see a trajectory of thinking about the Mind, which moves from a complete denial of it, to a disembodied version to one (now popular) version which places it firmly within an organism. In some sense, the complexity which one attributed to the mind has increased over time.

The next step came from (obviously) the philosophers, some of whom claimed that the Mind does not exist within the person, but is a combination of the organism and its environment. What they say is that the environment does not simply affect cognitive processes, but is a part of them. Thus, with no environment, some processes simply will not work.

Thus, the Mind is no longer a localized entity but one distributed over space and (maybe!) time. One hopes this gradual extrapolation does not lead to Deepak Chopra-like new-age mysticism, and instead leads to claims that can actually be tested for their truth value. But again, one sees that this is a step up the complexity ladder. Slowly, the study of the Mind has gone from simple to very complicated ideas about its location, never mind its function.

This is in contrast to historical developments in, say, physics, where complicated phenomena were ultimately explained using a small set of concepts considered fundamental. As with the study of the Mind, the study of the Earth system has run into difficulties. What one hoped would end at studying the large-scale motions of the atmosphere and ocean is nowadays studying phytoplankton and its effects on global climate!

Intuitively, there seems to be something very different about the phenomena that we are trying to study in the mind sciences or earth system science than the atoms or celestial objects that physics studied. You cannot study vision and learn things that extrapolate to the mind in general, just as you cannot study a liver and tell what the organism is likely to be. The fact that even the question of what to study is not well defined leads to very dubious research which gives the whole field a bad name. Do we, as our predecessors did, study the liver, pancreas and heart and say well, put all this together and you get a living being? Or do we try to answer the question ‘what is life’?

It does not seem very clear as to how the present range of scientific methods can help answer a question like the latter. The study of the Mind, the Earth or even Biology is at a stage similar to maybe where Mechanics was at the time of Kepler. People are looking at various ways to chip away at the same problem, some traditional, some extremely offbeat, in the hope that what one considers valid questions will be answered. Whether they will be answered or shown to be invalid, time will tell.

Tales Clouds Narrate

Been meaning to write this for some time, but did not have enough pretty pictures to put in, so kept delaying it. Now that I think the pictures are pretty enough, here we go.

Clouds have fascinated people of various inclinations – from poets to physicists to farmers to nature lovers. They symbolize freedom, fertility and make sunsets magnificent. Apart from the metaphorical tales that poets imagine clouds to carry, they do tell us a lot of things, some of which I hope to convey through pretty pictures.

Clouds are among the most important factors driving our climate, and definitely the least understood. Their importance derives from the fact that the climate is determined to a large extent simply by the amount of solar radiation trapped on earth, and clouds play a huge role in determining that amount. The atmosphere is almost completely transparent to incoming sunlight; clouds are the main thing that reflects sunlight back to space, cooling the earth. Interestingly, clouds also absorb most of the infrared that the earth’s surface emits, thus heating it. Depending on its type, a cloud can cause either a net heating or a net cooling of the earth.
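The two competing effects can be put into rough numbers. The values below are order-of-magnitude global-average estimates for illustration only (the shortwave and longwave terms are commonly quoted ballpark figures, not model output):

```python
# Back-of-the-envelope cloud radiative effect. Rough global-average
# numbers, order of magnitude only.

incoming_solar = 1360 / 4       # W/m^2 of sunlight, averaged over the sphere
shortwave_cooling = -50         # W/m^2 reflected back to space by clouds
longwave_warming = +30          # W/m^2 of surface infrared trapped by clouds

net_cloud_effect = shortwave_cooling + longwave_warming
print(round(incoming_solar))    # -> 340
print(net_cloud_effect)         # -> -20: a net cooling on average,
                                # though the sign flips by cloud type
```

Low thick clouds push the balance toward cooling; high thin cirrus push it toward warming.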

As a fluid dynamical problem, clouds are in a league all of their own. They display a nice interplay between dynamics and thermodynamics and are insanely difficult to model. If you look at the IPCC reports, the uncertainty due to clouds dominates any other factor. Of course, you can just numerically integrate equations of motion to see cloud development, but qualitative understanding is a recent thing.

Another useful thing about clouds is that they are the only opaque objects in the atmosphere, and so can be used to read the state of the atmosphere. Since air is a fluid like water, it supports waves which carry momentum from one place to another, and the best place to look for wave signatures is in clouds. The main waves in the atmosphere are Rossby waves and internal gravity waves. Rossby waves are huge, with wavelengths of thousands of kilometers, whereas gravity waves are small enough to be seen. They show up as regular patterns of cloud/no-cloud and are easy to spot, especially in the evenings.

Textbook picture!
Textbook Gravity Wave signature

The above is about as good a picture as you will find. Gravity waves occur when air displaced vertically in a stably stratified atmosphere is pulled back by buoyancy, overshoots, and oscillates about its equilibrium level. When I say small wavelength, it is still on the order of hundreds of meters! A couple more, though not as distinct.

Scattered, but still visible!
Still fainter, but nice looking cloud :)
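The buoyancy oscillation behind these wave clouds has a characteristic frequency, the Brunt-Väisälä frequency N = sqrt((g/θ) dθ/dz). A quick sketch with typical lower-troposphere values (the stratification number below is a typical textbook value, not a measurement):

```python
import math

# Brunt-Vaisala frequency of a displaced air parcel in a stably
# stratified atmosphere: N = sqrt((g / theta) * dtheta/dz).

g = 9.8             # m/s^2
theta = 300.0       # background potential temperature, K
dtheta_dz = 3.5e-3  # K/m, a typical stable stratification

N = math.sqrt((g / theta) * dtheta_dz)   # ~0.01 per second
period_min = 2 * math.pi / N / 60
print(round(period_min, 1))              # -> 9.8 minutes per oscillation
```

An oscillation period of around ten minutes is why these cloud bands sit so calmly in the sky while you photograph them.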

Stratus clouds are thin (relatively!) and very large in spatial extent. They are a sign that vapor is unable to rise high due to strong density differences in the atmosphere, i.e., the atmosphere is strongly stratified. Instead of rising high, the vapor simply spreads out into a thin, large layer. This shows that the atmosphere is stable to rising motion – which is a bad thing! A stable atmosphere discourages convection of water vapor, which means less rain.

A more familiar sight in tropical areas like ours are cumulus clouds, which are normally associated with convective activity and rainfall. On nice and windy days, they can ‘march’ in step, as shown below.

Cloud March near home!

These can also develop into the thunderstorm version, which are called cumulonimbus.

Eye Candy!

Ok, that’s not a cumulonimbus tower, but you get the idea. Also, the sunset picture was too good not to put into the post :D Cumuli indicate strong convection or a weakly stratified atmosphere (which are actually cause and effect).

The height at which clouds start forming is another indication of the stability of the atmosphere. Cloud formation requires water vapor to condense, which means the temperature must be low enough. If the clouds are low, temperatures low enough for condensation occur at a low altitude, and the atmosphere is quite stable. A higher base means the atmosphere has been well heated by the surface, like a kettle on a stove.
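There is a classic rule of thumb (Espy's formula) that puts a number on the cloud base: it sits roughly where rising air cools to its dew point, about 125 m of altitude per degree Celsius of temperature-dewpoint spread at the surface. The example values are made up for illustration:

```python
# Espy's rule of thumb for the lifting condensation level: the flat
# cloud base sits ~125 m up per degree C of dewpoint depression.

def cloud_base_m(temp_c, dewpoint_c):
    return 125.0 * (temp_c - dewpoint_c)

# A humid afternoon: 30 C air with a 22 C dew point.
print(cloud_base_m(30, 22))   # -> 1000.0 m: a low base in moist air
```

This is why monsoon cloud bases, fed by moist ocean air, hang so low and flat.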

All in all, literally having your head in the clouds is good time pass!

Bridging Nature and Humanity

I personally find it quite strange to think of humans as apart from nature and vice versa, but after many interactions with people who think otherwise, it seems that I’m in a minority. If evolution is to be believed, we as a species (Dawkins would say individuals!) have evolved mechanisms to improve our survival rate, to the extent that we are now the most dominant species in terms of geographical reach and resource use.

However, our genes seem to have forgotten to encode limiting behavior, at least with respect to resource utilization, which would enable us to live sustainably. Therefore, we have to resort to non-biological notions like stewardship and animal rights to keep ourselves in check. Where such notions arise from, one really does not know. Nevertheless, questions in ethics, epistemology and ontology have interested us as much as questions in physics, math or chemistry.

Ancient scholarship, both Western and Eastern, never viewed either category as separate from the other and, to quote a friend, did both physics and metaphysics. It is only recently that our world view has taken a schizophrenic turn, looking at billiard balls using differential equations (bottom-up) and guiding human behavior using teleology (top-down). It has been notoriously hard to reconcile these world views, and thus each developed practically independently of the other.

No doubt, there have been attempts by each to encroach upon the other’s turf. Dawkins and like-minded compatriots went one way, while the Christian Right in the USA and astrology try going the other. All in all, it seems unlikely that one or the other will achieve total dominance anytime in the near future.

Thus we are stuck with quarks on one hand and The Goal Of Human Life on the other. For example, mainstream economics ignores nature by invoking the Axiom of Infinite Substitutability (one kind of good can always be substituted for another, thanks to human ingenuity), so if the rainforests go, we can always conjure up something to take their place. Marxist thinking takes the view that all human development is the result of economic processes, so trees and animals don’t even merit a mention – they are simply unimportant as far as human society’s development goes. On the other hand, we have climate models which put a large amount of CO2 into the model atmosphere and see how things change, as though humans were just passive CO2 emitters who cannot recognize calamities and adapt their behavior (this seems ominously probable nowadays!). Each approach has value, no doubt, but it is obvious that neither economics nor climate modelling alone can solve the problems we face today.

One solution is for people with different outlooks to sit down and reach a consensus. My last experience with such an experiment was not very encouraging, and the recent spat between Rajendra Pachauri and Jairam Ramesh did nothing to encourage anyone about interactions between politicians and scientists, I’m sure. The other solution, about which one is more optimistic, is for researchers to break the barriers and go back to a world view where one can engage with physics and metaphysics without being called a witch-doctor. The natural and social sciences are ripe for such a synthesis — we have finally reached a state where our metaphysics (explicit or otherwise) is affecting the earth’s chemistry and biology, maybe even the physics: I don’t think we can change the Gravitational Constant anytime soon, but a few thermonuclear warheads here and there could change g = 9.8 m/s² to something substantially smaller!

Little-known but important steps towards such a synthesis are being seen — ecological economics is bound to be mainstream before we kill ourselves, and social ecology is bound to be important in the future too. Scientists seem to be getting more comfortable doing politics outside their institutions, and politicians are learning some thermodynamics, thank heavens. The principle of learning two subjects well, one closer to the quark end and the other closer to the God end of the spectrum of human thought, will serve researchers well in the future. Oh, and present-day economics does not count on either side of the spectrum.

Rural energetics

All the energy we use is sunshine – apart from nuclear, that is. Oil, gas, wind, hydro, coal: all these are direct or indirect products of the roughly 1360 W/m² of solar radiation arriving at the top of the atmosphere. Since energy in some form or other is crucial for life, boredom and joblessness encouraged an analysis of the flow of energy in a rural system.

The study of any kind of matter/energy cycle usually requires three concepts: stock, flow and flux. A stock is a reservoir of matter/energy. A flow is matter/energy in motion that is interconvertible with a stock – a lake and its feeder stream, for example. A flux is matter/energy in motion that cannot be converted to a stock, like insolation or wind.

Most of the easily available sources of energy are fluxes, which can be tapped as they move past us. For example, birds use rising thermals to facilitate flying. (Interestingly, the birds that use thermals most efficiently, like eagles and kites, are those with very small chances of finding food daily. Herbivorous birds beat their wings far more rapidly, since they stay close to the ground where food is abundant.) Similarly, we use windmills and grow crops for the same reason.

The most common energy source in a rural system is biomass (including animals). Remember, energy is not only what we use to switch on our bulbs, but also what we eat in the form of rice, wheat and so on. Biomass comes in various forms: trees and their products, animal meat, oil derivatives from plants, dung and foodgrains. Since plants also tap fluxes of energy from the sun and water, this implies that most rural activity happens by harvesting energy. Unlike other organisms, which use most of their energy in the search for more energy, most human energy needs go towards modifying ecosystems. We build houses and spend energy keeping them habitable. In rural systems, most energy is spent on agriculture. Crop ecosystems are highly unstable and need continuous inputs of energy, in the form of human intervention, to keep them stable. Leave a standing crop unattended for long enough and it gets decimated by ‘weeds’ and ‘pests’ – names given to organisms that are usually better adapted to compete with food crops for scarce nutrients and water in a given ecosystem. Thus, it is hardly any surprise that rural life and culture revolve around agriculture (especially harvests, which are the fruit of intensive energy inputs).

Another noticeable feature of rural energetics is the lack of any significant stocks of energy. Most people keep just enough food grains to feed their families and sell the surplus, if any. Dead biomass is understandably difficult to store, and few landlords keep large numbers of animals. Most rural energy needs are met, even today, by the very primitive method of harvesting the fluxes available everywhere. The few stocks of energy that are used, like firewood, have a low energy density and are therefore extremely inefficient ways of storing energy. This lack of significant stocks to exploit leads to a typically low-energy lifestyle: the plough, the hand chisel, the bullock cart, the lack of 100-storey buildings, low population densities, durable household articles used for generations – all are signs of cultures that understand their energy scarcity and dare not waste too much energy on frivolous purposes. Since grains and grass are the most significant and accessible stocks available, most things are done by human or animal power.

Coming to the flows of energy: flows can occur only between points of different potential, as everyone knows from high-school physics or common sense. Water flows only along a gradient; food cooks only if the fire is hotter than what is in the vessel. For energy flows, potential is usually measured indirectly by temperature: the hotter a substance is, the more energy it is said to contain. Since, as mentioned previously, stocks of energy are very small in rural systems, most energy flows happen across very small temperature differentials, mostly around room temperature. If one thinks about it, this is how Nature operates: one can hardly find natural processes that happen across large temperature differentials.
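To put a number on how little work a small temperature differential can yield, here is a rough sketch using the Carnot limit (the temperatures below are made-up illustrations, not measurements from any real rural system):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two
    reservoirs at the given temperatures (in Kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# A flow across a 10 K differential near room temperature:
print(carnot_efficiency(310.0, 300.0))  # ~0.032, barely 3% usable

# Compare a large differential, e.g. a hot furnace against ambient air:
print(carnot_efficiency(800.0, 300.0))  # 0.625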

The implications of such an energy profile are the following: since the energy fluxes being tapped are inherently unpredictable and outside our control, nothing that depends on a reliable source of energy can happen in a rural system. Understanding the periodic nature of these fluxes and aligning human activity around them is therefore an important facet of life. This being the case, any change in this periodicity hits rural areas badly. The recent rains in India are an example of what can happen. As we all know, climate change is going to affect food production. Rural industries that depend on raw materials harvested from seasonal forest produce are also at risk if global energy and biogeochemical cycles are altered significantly. Rural systems should therefore have a greater interest in maintaining Nature and her status quo than the urban sprawl does. Understanding that overdependence on such energy flows is dangerous, those in the village who can afford it build up stocks of energy. Most cannot afford such stockpiling, however, and will therefore be the worst affected in this era of climate change.

Knowing these facts, it is easy to imagine what urban energetics would look like: the exact opposite of everything said above. What this implies, and what lessons are to be learnt, later.

PS: I have not taken electricity into account, which is an extremely concentrated, non-thermal energy source. One can say that this analysis is mainly relevant to a traditional rural society. The effects of electricity will come in the following post.

Information and Energy, and Entropy!

What follows is quite incomprehensible if you do not have some idea of maths/physics, but read on anyway :) The question I have been asking for quite some time is: “What is information?”, from an abstract point of view. I would hardly have expected a physicist to answer this, but they did. It turns out there is a connection between the energy of a system and the information it contains. More precisely, the information a system contains grows with the number of states the system can take (as the logarithm of that number, to be exact). This goes back to Boltzmann, who is rightly well known for his huge contributions to physics. Take for example two people: A, who sits on a chair the whole day, and B, who keeps running around the whole day. If someone tells you A is sitting on the chair, you already knew it, so it is nothing new to you. News about B’s whereabouts, however, will always be new information to you. Therefore, an energetic system tends to contain more information than a static one.

One could also understand this by looking at a storage cell that can hold n bits. As n increases, the amount of information stored increases, and so does the amount of energy it contains (not to be confused with present-day memories like RAM, where most power is consumed by resistance and (silicon) crystal imperfections). Boltzmann stated that if a system can have M mutually distinguishable states, its entropy is given by \log{M} . Entropy can be called a measure of the randomness in a system: that part of the system’s energy that is unavailable for useful work.
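As a quick sanity check of the \log{M} idea, here is a minimal sketch for the idealized n-bit cell discussed above (ignoring the real-world RAM losses mentioned in the aside):

```python
import math

def cell_entropy_bits(n_bits):
    """Entropy of an idealized n-bit storage cell: it has
    M = 2**n mutually distinguishable states, so the entropy
    in bits is log2(M) = n."""
    M = 2 ** n_bits          # number of distinguishable states
    return math.log2(M)      # Boltzmann-style entropy, base-2 logarithm

for n in (1, 8, 64):
    print(n, cell_entropy_bits(n))
```

The base-2 logarithm turns a multiplicative count of states into an additive count of bits, which is why doubling the number of states adds exactly one bit.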

Claude Shannon, founder of practically everything we know (if we exaggerate a bit), generalized Boltzmann’s hypothesis to the case where we have some idea of the probability of each state of the system occurring, i.e., we have a probability distribution over the states of the system. In Boltzmann’s view the distribution was uniform, hence the \log{M} result. In a probabilistic system, one can only talk of the expected information, and that is what we get: the information entropy is given by I = -\Sigma_x p_x \log{p_x} . Note that this reduces to the previous form if p_x = \frac{1}{M} for every state. Here, since we know something about how the system behaves, the total entropy is reduced from the maximum \log{M} down to I . If the base of the logarithm is 2, the unit of this value is called the bit; if natural logarithms are used, it is called the nat. The nat, interestingly, corresponds to the Boltzmann constant when used in an entropy context. Thus, we see the beginning of a link between entropy and information. We reduce the entropy of a system by gaining more information about the states in which it can be present. If we know exactly what state it is in, the entropy is effectively zero.
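A small sketch of Shannon’s formula, showing that it recovers Boltzmann’s \log{M} when the distribution is uniform (the distributions below are made-up illustrations):

```python
import math

def shannon_entropy(probs, base=2):
    """Information entropy I = -sum(p * log p), skipping
    zero-probability states (they contribute nothing)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Uniform distribution over M = 4 states: reduces to log2(4) = 2 bits,
# recovering Boltzmann's result.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))

# A skewed distribution over the same 4 states carries less expected
# information, so the entropy drops below the log M maximum.
print(shannon_entropy([0.9, 0.05, 0.03, 0.02]))

# A state known with certainty: entropy is zero.
print(shannon_entropy([1.0]))
```

The skewed case illustrates the reduction discussed above: the more predictable the system, the smaller I gets, down to zero when the state is known exactly.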

How do we find the distribution of states? We have to measure. What is being implied, therefore, is that measurement reduces the entropy of a system, or that measurement reduces the uncertainty we have about a system, which seems intuitively correct. If system A measures system B, B’s entropy reduces, whereas the entropy of the combined system of A and B does not (from the viewpoint of a system C which has measured neither A nor B). The information gained via measurement must be stored (and/or erased) somewhere, and this requires energy. This can be seen as the resolution of the famous Maxwell’s Demon paradox, which seemed to violate the second law of thermodynamics.

These views are important in the theory of computation, especially for the lower limits on the energy required to compute. Say I have a system that takes in 2 bits and gives out 1 bit (like an OR gate); then the energy expended must be at least equivalent to the difference in information entropy, which here is 1 bit. Similarly, if the information entropy increases, the system must take in energy. You can read all this and more (especially the qualifications) in this paper.
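The energy floor alluded to here is usually quantified via Landauer’s principle: erasing one bit dissipates at least k_B T ln 2 of heat. A minimal sketch (the one-bit figure follows the post’s simplification for the OR gate; the exact entropy change depends on the input distribution):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def landauer_min_energy(bits_erased, temperature_k=300.0):
    """Lower bound on the heat dissipated when erasing the given
    number of bits at the given temperature (Landauer's principle)."""
    return bits_erased * K_B * temperature_k * math.log(2)

# Erasing the 1 bit lost by the idealized OR gate, at room temperature:
print(landauer_min_energy(1))  # on the order of 3e-21 joules
```

The bound is astonishingly small, about twenty orders of magnitude below what a present-day logic gate actually dissipates, which is why it matters as a limit in principle rather than in practice today.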