Category Archives: modelling

Moral stories in the age of computers

All of us have been brought up listening to or reading some kind of moral story – the Panchatantra, Aesop’s fables, Bible stories and so on. They are part of our standard training in learning to live in the world. All moral stories are motivated by some ultimate aim of human life, though this aim is never explicit, being overshadowed by talking animals and trees. Our morals do not develop in a vacuum – they are shaped strongly by our socio-cultural and geographical locations, and moral stories are among the more effective means of our ‘shaping’. Not only that: like everything else in the world, they evolve, though not necessarily in the Darwinian sense of the word. Aristotle and Plato may have condoned slavery, but not Adam Smith and his ilk. Even then, considering that Aesop’s fables and the Bible provide relevant advice to this day, there seem to be some things that are eternal, like numbers.

From where do we derive our ethical codes? The most abundant source is, of course, our own history. When viewed through a certain lens (which comes from a certain metaphysical position about man and his relationship with other humans and the rest of the universe), history can give us all the lessons we need. This is why it is said that people who forget history are condemned to repeat it – not that we have progressed linearly from barbarians to civilized people; it is just that we are animals with an enormous memory, most of it outside our heads and in books, and preserving or changing such a legacy necessarily requires engagement with it. Therefore, ethics and epistemology have always gone hand in hand.

Our times are unlike any other in history simply due to the predominance of science in determining what we know – the Ancient Greeks or Indians would do physics and metaphysics simultaneously without necessarily putting one or the other on a pedestal. The scientific method and mystical revelation were both valid ways of getting at the truth. Nowadays, of course, the second would hardly be considered a valid method for getting at anything at all, let alone the truth. It is hard to say whether this is good or bad – evolution does not seem to have a sense of morality.

The Newtonian and Darwinian revolutions have had important implications for the modes of moral storytelling. First, they remove the notion of an ultimate purpose from our vocabulary. Newton’s ideal particles and the forces acting on them removed any idea of a purpose of the universe, and the correspondence between Newton’s particle<->force and Darwin’s phenotype<->natural selection is straightforward. Thus, biology or life itself lost any notion of ultimate purpose. Economists extended this to humans, and we get a human<->pain/pleasure kind of model of ourselves (pain/pleasure is now cost/benefit, of course). All in all, there are some kind of ‘particles’ and some ‘forces’ acting on them, and these explain everything from the movement of planets to why we fall in love.

Secondly, history is partially or wholly out of the picture – at any given instant, given a ‘particle’ and a ‘force’ acting on it, we can predict what will happen in the next instant, without any appeal to its history (or so the claim goes). Biology and economics use history, but only to the extent of claiming that their subject matter consists of random events in history, which therefore cannot be subsumed into physics.

If life has no ultimate purpose, or to put it in Aristotle’s language, no final cause, and is completely driven by the efficient cause of cost/benefit calculations, then why do we need morals? And how can one justify moral stories any longer?

The person of today no longer sees himself as one whose position in life is set by historical forces or karma (depending on your inclination), but as an active agent who shapes history. Thus, while the past may be important, the future is much more so. He wants to hear stories about the future, not about the past.

This is exactly where computers come in. If we accept a particle<->force model for ourselves, then we can always construct a future scenario based on certain values for both particles and forces. We can take a peek into the future and include that in our cost-benefit calculations (using discount rates, Net Present Value and so on). Be it the climate, the economy or the environment, what everyone wants are projections, not into the past, but into the future. The computation of fairytales about the future may be difficult, but not impossible, what with all the supercomputers everybody seems to be in a race to build.
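As a concrete illustration of the arithmetic involved, here is a minimal Net Present Value sketch; the cash flows and discount rates below are invented purely for illustration:

```python
# Net Present Value: discount a stream of future costs/benefits back
# to the present. All numbers here are invented for illustration.

def npv(cashflows, rate):
    """Sum of cash flows discounted at a constant annual rate.
    cashflows[t] is the net benefit in year t (year 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-100, 30, 40, 50, 60]   # pay today, benefit later
print(npv(flows, rate=0.05))     # ~57: the future looks worth it
print(npv(flows, rate=0.25))     # ~-0.2: a high discount rate makes
                                 # the future nearly worthless
```

Notice that the whole moral weight of the projection sits in that one rate parameter: a high discount rate is a formal way of saying the future matters little.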

The notion of a final cause is somewhat peculiar – it is the only one which is explained in terms of its effect. If I have a watch and ask why it is ticking, I can give a straightforward efficient cause: the gear mechanisms. On the other hand, if I ask why the gear mechanisms work the way they do, I can only answer by saying ‘to make the clock tick’ – by its own effect. Thus, if we see the future a computer simulates and change our behavior, we have our final cause back again – we can say that to increase future benefit, we change our present way of life. The effect determines the cause.
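Schematically, the loop described above might look like the following; the ‘model’ and all its numbers are entirely made up, the point being only the feedback from projected effect to present cause:

```python
# A simulated future feeding back into present behavior: the projected
# *effect* alters the present *cause*. The model is pure invention.

def simulate_future(consumption):
    """Hypothetical model: projected future benefit falls as
    present consumption rises."""
    return 100 - 0.8 * consumption

consumption = 90.0
for year in range(5):
    projected = simulate_future(consumption)
    if projected < 50:          # we peek at the simulated future...
        consumption *= 0.9      # ...and change our present way of life
print(round(consumption, 1))    # behavior shaped by its own projected effect
```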

Corporations, countries and communities are faced with the inevitable choice of using a computer to dictate their moral stance. However, one can always question the conception of a human being (or other life, for that matter) as an entity whose ultimate goal is cost-benefit calculation. If we need a more textured model of a human, writing an algorithm for one remains an impossibility to this day.

For example, one can argue that the ultimate purpose of life is to live in harmony with nature, or that we should ‘manage’ nature sustainably. The former does not need (indeed, does not at present have) a computer model, whereas the latter does. One is within the reach of every person; the other is accessible only to a technological high-priesthood. Which should we choose? And at a future time, which one will we be forced to choose?

Therefore, in this post-Darwinian world, can we imagine an ultimate purpose for ourselves that will enable us to act on our own, or will we be guided by supercomputers simulating caricatures of ourselves? Time will tell.

Situating the Mind

One of the interesting themes that emerged from a workshop I attended recently is the problem of situating the Mind in a certain place. Up front, one must assume that it is sensible to separate the Mind from the Brain, at least for purposes of analysis if nothing else. Neuroscientists may have problems with this, but that is their problem.

The first approach to this problem was to deny that anything ever happened within the Mind – the brain was a simple input/output machine: put in stimulus, get out behavior. There were no such things as mental states, and anything ‘unobservable’ had no real existence. This was the approach of the Behaviorists, and it is what gave rise to traditional psychology, with its ideas of conditioning and behavioral modification. This view is quite defunct, especially after Chomsky and others at MIT and Harvard came into the picture.

The second dig at the problem was taken by the cognitive scientists of the Chomsky tribe. This still-dominant view considers the brain to be a computer (note that they do not think of a computer as merely a good model, but rather that the brain is a computer). While the particulars of implementation may be debated (Turing Machines vs. Neural Networks), the idea is that the Mind has certain internal states, which, when combined with sensory inputs, give you all the rich everyday experience that any person is familiar with. To a first approximation, and as a working hypothesis, this is extremely useful and has led to great insights about the functioning of the Mind, especially with regard to perception and language. Computer vision is the brainchild of this era, and its results are there for anyone to see.

However, biologists were probably dissatisfied with this ‘disembodied’ mind that the computer scientists had come up with, since it would imply that the relation between the mind’s functioning and the environment in which it evolved is very weak. No self-respecting biologist can ever accept such a claim, and this led to an ’embodied’ concept of the mind, where perception (for example) is not the output of an algorithm but a combination of body states (walking, running, eating, etc.) and mental states, and one cannot separate the two, since the mind did not evolve on its own but rather developed as part of a whole.

Thus, we see a trajectory of thinking about the Mind, which moves from a complete denial of it, to a disembodied version, to one (now popular) view which places it firmly within an organism. In some sense, the complexity attributed to the mind has increased over time.

The next step came from (obviously) the philosophers, some of whom claimed that the Mind does not exist within the person but is a combination of the organism and its environment. What they say is that the environment does not simply affect cognitive processes but is a part of them. Thus, with no environment, some processes simply will not work.

Thus, the Mind is no longer a localized entity but one distributed over space and (maybe!) time. One hopes this gradual extrapolation does not lead to Deepak Chopra-style new-age mysticism, and instead leads to claims that can actually be tested for their truth value. But again, one sees that this is a step up the complexity ladder. Slowly, the study of the Mind has gone from simple to very complicated ideas about its location, let alone its function.

This is in contrast to historical developments in, say, physics, where complicated phenomena were ultimately explained using a small set of concepts considered fundamental. As with the study of the Mind, the study of the Earth system has run into difficulties. What one hoped would end with studying the large-scale motions of the atmosphere and ocean is nowadays studying phytoplankton and its effects on global climate!

Intuitively, there seems to be something very different about the phenomena we are trying to study in the mind sciences or earth system science from the atoms or celestial objects that physics studied. You cannot study vision and learn things that extrapolate to the mind in general, just as you cannot study a liver and tell what the organism is likely to be. The fact that even the question of what to study is not well defined leads to very dubious research, which gives the whole field a bad name. Do we, as our predecessors did, study the liver, pancreas and heart and say, well, put all this together and you get a living being? Or do we try to answer the question ‘what is life?’

It is not very clear how the present range of scientific methods can help answer a question like the latter. The study of the Mind, the Earth or even Biology is at a stage similar, maybe, to where Mechanics was at the time of Kepler. People are looking at various ways to chip away at the same problem, some traditional, some extremely offbeat, in the hope that what one considers valid questions will be answered. Whether they will be answered or shown to be invalid, time will tell.

Epistemic limits of scientific enquiry

Had attended a talk the other day by Dr. Jayant Haritsa from the CSA department, on using textual representations of Carnatic music (music written as Sa Ri Ga Ma, etc.) to determine the ‘Aarohana’ and ‘Avarohana’ (the rough equivalent of a scale in Western music) of a given Raaga, or to identify the raaga of a given piece of music outside the ones used to train the identification system. Among the other aims was to provide a ‘scientific basis’ for the raagas, based on the statistics of the usage of notes in various compositions, and maybe to provide a better Arohana/Avarohana for a raaga than the one received from tradition.

The talk itself was quite interesting, and the system seems to do pretty well. In the Q&A session, a lot of concern was voiced as to whether the ‘better’ Arohana/Avarohana proposed by the system would capture the ‘mood’ of the raaga, which seems to be an essential part of each raaga. Haritsa was of the opinion that, as scientific researchers, we must not take things for granted and must try to question tradition using the tools of science.

The essential issue, which one can generalize to things beyond music and its analysis, is the question of what is knowledge and/or Truth. More specifically, in this context one can ask what type of knowledge we can obtain using the scientific method, and whether this is the only kind which is ‘reliable’, the rest being ‘subjective’ and useless in a more general context – i.e., whether Truth in all its glory is best sought using the scientific method.

Up front, one must understand the fundamental premise of the scientific method, even leaving out its reductionist inclinations — Nature is not random: it follows some logic, some pattern, which is discovered through a large number of observations and/or experiments, and this knowledge (from observation/experimentation) can eventually be called Truth. This is not hard to justify: we can see patterns everywhere in Nature and can build quite accurate models of the same. The reliability of scientific knowledge depends hugely on the concept of measurement – representing natural phenomena as cardinal numbers, numbers we can use to say something about the size of the measured phenomenon. No observation or experiment can be called a success/failure if it does not produce some kind of number. For example, Haritsa’s system produces a number per candidate scale for a raaga — the higher the number, the more likely it is the correct scale.
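The talk did not spell out the scoring in detail, so the following is only a guessed caricature of what ‘a number per candidate scale’ could mean: score a candidate scale by how much of a composition’s note usage it covers. The function and data here are hypothetical, not Haritsa’s actual method:

```python
from collections import Counter

# Hypothetical caricature, NOT the actual method from the talk: score a
# candidate scale by the fraction of note occurrences it accounts for.

def scale_score(composition, candidate_scale):
    counts = Counter(composition)                 # note usage statistics
    covered = sum(n for note, n in counts.items() if note in candidate_scale)
    return covered / sum(counts.values())         # higher = more likely

piece = ["Sa", "Ri", "Ga", "Ma", "Ga", "Ri", "Sa", "Ni", "Sa"]
print(scale_score(piece, {"Sa", "Ri", "Ga", "Ma", "Pa", "Dha", "Ni"}))  # 1.0
print(scale_score(piece, {"Sa", "Ga", "Pa"}))                           # ~0.56
```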

Immediately, one can see phenomena that the scientific method cannot be used to investigate: emotions, ethics, likes, dislikes, and so on. Not only are these immeasurable (neuroscientists may disagree!) quantities, but they are also incommensurable: a statement like 2.5 × Happiness ≥ 0.5 × Sadness makes absolutely no sense. Also, science can give no answers to statements like ‘The world is Maaya’, or ‘What we perceive is not what Is’. These statements belong to the same class of knowledge that the fundamental ‘axiom’ of science belongs to — you cannot prove or disprove them within the logical system that is built upon that axiom.

Now, music is a strange beast. It is highly patterned (scientists like to talk about its ‘mathematical’ structure), but at the same time, its main (probably only) value is in the emotion that it evokes: it is no coincidence that music is an essential part of religious worship, especially of the Bhakti variety. Therefore, no musical education is complete without a good understanding of both the patterns and the emotions (Bhaava) associated with music. Now, scientists are uncomfortable with (or dismissive of) things they cannot measure, and musicians are uncomfortable with (or dismissive of!) statistical analyses of their art. Therefore, it is not surprising that each values one of the two more. Haritsa’s and the audience’s apprehensions merely betray their respective inclinations.

With the advent of huge computing power, a scientist’s optimism about understanding the universe has understandably increased. It is a common notion that the failure of a mathematical model is simply due to the ‘exclusion of some variable’ from the model: with more information/data, one can do arbitrarily well. This attitude conveniently ignores the fact that some quantities are not measurable, and that even where some quantitative representation is possible, the quantities might be incommensurable. This can be seen best in sciences dealing with human tastes and values, like economics, sociology or anthropology. Subjects like econometrics and social psychology seem to be treading the fine line that separates scientific knowledge from gobbledygook. For example, if one surveys 100 students asking them to rate the facilities at the hostel on a scale of 1 to 10, and we conclude that the average score is 8 and so most are satisfied (assume a score greater than 7 implies satisfied), we are making two assumptions: that we can add the satisfaction of 100 people and divide that number by 100, and that one student’s rating of 7 is the same as another student’s rating of 7. Though there have been arguments justifying such an approach, it is up to the individual to decide how seriously to take such surveys.
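A minimal sketch of that survey arithmetic, with the two assumptions marked where they enter; the ratings are invented:

```python
import random

random.seed(0)
ratings = [random.randint(5, 10) for _ in range(100)]  # invented ratings

# Assumption 1: satisfaction is additive across people, so that a
# mean is meaningful. Assumption 2: one student's "7" equals another
# student's "7". Neither is self-evident.
mean_score = sum(ratings) / len(ratings)
print(mean_score, "-> satisfied" if mean_score > 7 else "-> not satisfied")
# The cutoff of 7 is itself a third, entirely arbitrary, choice.
```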

The dominant paradigm of our times is that of scientific optimism, and most appeals to emotion or morals are considered ‘woolly’ and ‘unscientific’. But one must realise that unless there is a healthy engagement with both pattern finding and moralising, the Truth can never emerge.

Systems modelling: how useful is it?

Modelling complex systems – be they social, economic, terrorist-ic, environmental, ecological, whatever – seems to be all the rage nowadays. Everyone (including myself!) seems to be so intrigued by the future that they cannot wait until it arrives.

But what is a model? And how can/should it be used? These are questions which normally go unanswered, and that can lead to disasters like the current financial circus. A systems model is, at its most abstract, a simplification of reality which can help us answer questions about the future in a computationally tractable way (since everything at the end of the day gets reduced to number crunching, except maybe philosophy). It focusses on the internal variables (state), inputs and outputs of a system that are considered important for understanding its behavior. The contribution of a model is two-fold: to understand the underlying (simplified) structure, and to use this to answer questions that interest us.

We have to face the fact that we cannot really understand even moderately complex systems properly, so we make certain idealizing assumptions (like assuming a World War won’t erupt) to keep things simple. We then use an intuitive understanding (sometimes called a pre-analytic vision) of the system structure to decide how to build the model. For example, economic models are built using the vision that man wants to maximize something (which makes it easy to use the calculus of variations or mathematical programming), atmospheric models have to obey the laws of physics, and so on.

Once we identify a structure which can be represented using available mathematical tools, we put it in a computer and start crunching. If the system can be represented using nice equations (called a deterministic model), you would use differential equations or some cousin, or mathematical programming. If it cannot, then you don’t give up: you simply say that it is random but with a structure, and use stochastic models – Monte Carlo methods, time series analysis or some such thing (Read this for a critique of the latter).
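A toy contrast between the two styles, using a made-up one-variable system (logistic growth, with and without random shocks):

```python
import random

# Deterministic model: the next state is a fixed function of the current one.
def deterministic_step(x, r=0.05, K=100.0):
    return x + r * x * (1 - x / K)          # logistic growth toward K

# Stochastic model: same structure plus random shocks; questions are
# answered by averaging many Monte Carlo runs instead of solving once.
def stochastic_step(x, r=0.05, K=100.0, sigma=0.5):
    return deterministic_step(x, r, K) + random.gauss(0, sigma)

def simulate(step, x0=10.0, steps=50):
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

random.seed(1)
print(simulate(deterministic_step))                      # one answer
runs = [simulate(stochastic_step) for _ in range(1000)]  # a distribution,
print(sum(runs) / len(runs))                             # summarized by a mean
```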

Before one gets immersed in the fascinating mathematical literature, one must understand that each model comes with an ‘if’ clause: if such-and-such conditions are satisfied, then things might look like what we get here. This is why I get irritated with both MBAs who talk about ‘market fundamentals being good’ and environmentalists who predict that I’m going to need a boat soon – neither qualifies results which come out of a black box. Even worse, there are those who compare results which come out of different black boxes, which need not be comparable at all, just because they, like the modellers, have no idea what is going on. At least the modellers admit to this, but those who use these models for political purposes dare not admit shades of gray.

Different models can take the same data and give you radically different answers – this is due not to programming errors, but to the pre-analytic vision that the model is based upon. The reason why climate change remained a debate for so long is because of such issues – and vested interests, of course. Therefore, we see that the ‘goodness’ of a model depends critically on its assumed ontology and epistemology, even more than on the sobriety of its programmer (well, maybe not).

Thus, as intelligent consumers of the data that models spew out every day, we should make sure that such ‘studies’ do not override our common sense. But in the era of Kyunki Saas bhi, one cannot hope for too much.

Spherical Cows and Geometry

The first part of the name comes from the famous book on environmental problem solving called Consider a Spherical Cow by John Harte. According to the book, the title comes from a joke within the environmental science community about a group of scientists who are approached by a farmer to find out why his cow is producing less milk. After large amounts of deliberation, the scientists call the farmer and start explaining: “Consider a spherical cow…”

The title reinforces the fact that certain problems cannot be solved exactly, and idealizations need to be made so that the solution is computable. In fact, the book (I have just started reading it!) has quite a few problems which do not expect answers exact to the 100th decimal point; an answer correct to an order of magnitude is sufficient, i.e., 41,041 can be written as 10^4. If one begins to wonder what kind of quackery this fellow preaches, consider the first problem statement that he puts forward: calculate the number of cobblers in the US! With no other info given, one has to make amazing assumptions to even start solving the problem (see the sketch below). This, for me, drives home the fact that solving problems in as large and complex a system as the environment is simply too hard to do as precisely as a watchmaker might like. One has to rely on intuition and a great deal of ‘common sense’ to go ahead without being bogged down by the finer details.

It is uncertainties like this which are really making a debate of global warming possible. People who say that humans are putting too much crap into the atmosphere are ‘reasonably certain’ that humans will destroy the planet. Sceptics say that there is evidence to show that global warming and cooling cycles have always been happening, and this is not too great a cause for concern. The debate, however, seems to be converging in favor of the ‘we are suicidal’ camp, due to more and more data pointing to the adverse effects of anthropogenic intervention in the global climate system.
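Returning to Harte’s cobbler problem, here is what such a back-of-the-envelope estimate might look like in code; every input below is a guess of mine, and only the order of magnitude is meant seriously:

```python
import math

# Fermi estimate of the number of cobblers in the US.
# Every number is an assumed guess; only the order of magnitude matters.

us_population       = 3e8    # ~300 million people (assumption)
repairs_per_person  = 0.1    # ~1 shoe repair per person per decade (guess)
repairs_per_cobbler = 1000   # repairs one cobbler handles per year (guess)

cobblers = us_population * repairs_per_person / repairs_per_cobbler
print(f"~10^{round(math.log10(cobblers))} cobblers")   # ~10^4
```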

Thus, until one gets more data, one guesses, and more data only refines the guess to a state where we can be ‘reasonably confident’ (or ‘quite sure’, especially when giving lectures or talking to mediapersons ;). This is where the role of geometry comes in, hidden sometimes in algebra, but ever-present in any situation which calls for a guess. One of the fundamental beliefs of the natural sciences is that nature follows a pattern, and that this pattern can be discovered and propounded as laws. Therefore, given a set of data or readings regarding any phenomenon, scientists will try to see whether they can fit some known functions (or shapes, to the mathematically challenged), which can help them find intermediate values and also predict future outcomes. If the model (that is what fitting a function would be called) is bad, they chuck it, find another more suitable function, and hopefully get more readings to confirm their hypothesis. Thus, if you hear a scientist talking about an ‘exponentially increasing population’ or a ‘linearly changing system’, you can be sure that the above is probably what they have done.
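A minimal sketch of that fit-and-check loop, on made-up population readings, assuming an exponential model:

```python
import numpy as np

# Made-up yearly population readings (in millions), growing ~10% a year.
years = np.arange(10)
pop = np.array([10.0, 11.1, 12.2, 13.4, 14.9, 16.4, 18.2, 20.0, 22.2, 24.4])

# Fit log(pop) = log(a) + b*year, i.e. pop = a * exp(b*year).
b, log_a = np.polyfit(years, np.log(pop), 1)
a = np.exp(log_a)
print(f"pop(t) ~ {a:.1f} * exp({b:.3f} t)")   # the fitted 'law'
print(a * np.exp(b * 12))                     # extrapolated to year 12
# If the residuals look bad, chuck it and try another function.
```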

Polynomials, exponentials, sines and cosines are among the darlings of the geometric modelling community, and you will be hearing more about them here, hopefully with as little math and as many pretty curves as possible :)