# On Evolution

History and Evolution (from biology) are somewhat similar in that both are quite descriptive, and involve in their description some events that cannot really be predicted from any known first principles. Indeed, it will not be too much of a sin to say history is evolution of some kind by itself.

This being the case, there are at least two ways in which one can approach evolutionary topics, ways which can complement or (probably more commonly) confront each other. All evolutionary topics that we are interested in suffer from a lack of complete data. As mentioned a couple of posts back, this has motivated people to assume an underlying ‘model’ and use it to string a plausible story through the facts. Another, equally interesting method would use the notion of a ‘constitutive absence’.

‘Constitutive absence’ is simply a way of saying that what is absent from the data collected or the story woven is as important as what is present. This motivates us to ask ‘Why not this?’ rather than ‘Why this?’. It is my suspicion that looking at history and biological evolution in this manner will structure our thinking about these subjects more constructively. For example, instead of asking ‘Why do birds have two wings?’, it may be more constructive to ask ‘Why are there no birds with six wings?’. It may just so happen that we have missed finding these flying critters, or there may be some other reason. This to me is more in line with the theory of natural selection: natural selection can only select against, not select for. To select for something implies that evolution should ‘know’ what to select for, which is obviously nonsense. In the game of natural selection, there are no winners, only survivors. However, most biology literature seems to try to explain why a particular trait is present in an organism. Schrödinger came to the conclusion that the molecules that carry life must be large (the structure of DNA was unknown then) by asking why there cannot be small molecules of life. The answer is that at smaller scales Brownian motion dominates: the smaller the molecule, the more susceptible it is to change (mutation) by bombardment from other molecules.

The picture that this provides us with is not an all-encompassing Story of Everything, but one of the constraints placed on organisms, constraints which prevent any other possible scenario from being viable. It is somewhat like trying to understand how water flows: if you look at water in a large river, trying to follow one blob of water may be hopeless, but in a stream you may have better chances. The constraints of a narrower channel make this easier. Even so, you may not be able to predict precisely how the blob of water flows, but you know that it will remain within a confined boundary.

However, you will need to admit that beyond a certain stage you can’t really be sure about things when you take up this approach, whereas the Big Story approach will try and explain everything.

History is no different. Instead of asking ‘Why did the Europeans have an Industrial Revolution?’ one can ask ‘Why did the Indians and the Chinese not have an Industrial Revolution?’. Instead of asking ‘Why is India mainly vegetarian?’ one can ask ‘Why did Indians not develop a meat-dominated cuisine?’. You can probably see how just framing the question differently leads one to think very differently about the same problem. If you appeal to the physics of complex systems, then you are acknowledging that the trajectory of any complex system is inherently hard to predict, but that constraints on the system make certain trajectories highly unlikely, and everything that happens does so within the phase space that is still viable. Historically we hoped to find laws of Nature and Society that would enable us to see forward and back in time. Unfortunately, we now know that these laws, if they exist, are probably too complicated for us to comprehend, and so a ‘constitutive absence’ is a more sensible way to move forward.

# Situating the Mind

One of the interesting themes that emerged from a workshop I attended recently is the problem of where to place the Mind. Up front, one must assume that it is sensible to separate the Mind from the Brain, at least for purposes of analysis if nothing else. Neuroscientists may have problems with this, but that is their problem.

The first approach to this problem was to deny that anything ever happened within the Mind – the brain was a simple input/output machine: put in stimulus, get out behavior. There were no such things as mental states, and anything ‘unobservable’ had no real existence. This was the approach of the Behaviorists, and this is what gave rise to traditional psychology, with its ideas of conditioning and behavioral modification. This view is quite defunct, especially after Chomsky and others at MIT and Harvard came into the picture.

The second dig at the problem was taken by the cognitive scientists of the Chomsky tribe. This still-dominant view considers the brain to be a computer (note that they do not merely think of the computer as a good model, but rather that the brain is a computer). While the particulars of implementation may be debated (Turing Machine vs. Neural Networks), the idea is that the Mind has certain internal states which, when combined with sensory inputs, give you all the rich everyday experience that any person is familiar with. To a first approximation, and as a working hypothesis, this is extremely useful and has led to great insights about the functioning of the Mind, especially with regard to perception and language. Computer vision is the brainchild of this era, and its results are there for anyone to see.

However, biologists were probably dissatisfied with this ‘disembodied’ mind that the computer scientists had come up with. It would imply that the mind’s functioning bears very little relation to the environment in which it evolved. No self-respecting biologist can accept such a claim, and this led to an ‘embodied’ concept of the mind, where perception (for example) is not the output of an algorithm but a combination of body states (walking, running, eating, etc.) and mental states, and one cannot separate the two, since the mind did not evolve on its own but developed as part of a whole.

Thus, we see a trajectory of thinking about the Mind, which moves from a complete denial of it, to a disembodied version, to one (now popular) that places it firmly within an organism. In some sense, the complexity attributed to the mind has increased over time.

The next step came from (obviously) the philosophers, some of whom claimed that the Mind does not exist within the person, but is a combination of the organism and its environment. What they say is that the environment does not simply affect cognitive processes, but is a part of them. Thus, no environment means some processes simply will not work.

Thus, the Mind is no longer a localized entity but one distributed over space and (maybe!) time. One hopes this gradual extrapolation does not lead to Deepak Chopra-like new-age mysticism, and instead leads to claims that can actually be tested for their truth value. But again, one sees that this is a step up the complexity ladder. Slowly, the study of the Mind has gone from simple to very complicated ideas about its location, forget about its function.

This is in contrast to historical developments in, say, physics, where complicated phenomena were ultimately explained using a small set of concepts considered fundamental. As with the study of the Mind, the study of the Earth system has run into difficulties. What one hoped would end at studying the large-scale motions of the atmosphere and ocean is nowadays studying phytoplankton and its effects on global climate!

Intuitively, there seems to be something very different about the phenomena we are trying to study in the mind sciences or earth system science from the atoms or celestial objects that physics studied. You cannot study vision and learn things that extrapolate to the mind in general, just as you cannot study a liver and tell what the organism is likely to be. The fact that even the question of what to study is not well defined leads to very dubious research, which gives the whole field a bad name. Do we, as our predecessors did, study the liver, pancreas and heart and say, well, put all this together and you get a living being? Or do we try to answer the question ‘What is life?’

It is not very clear how the present range of scientific methods can help answer a question like the latter. The study of the Mind, the Earth or even Biology is at a stage similar, perhaps, to where Mechanics was at the time of Kepler. People are looking at various ways to chip away at the same problem, some traditional, some extremely offbeat, in the hope that what one considers valid questions will be answered. Whether they will be answered or shown to be invalid, time will tell.

# The Subject-Object distinction

A basic ontological position taken up in the quest for knowledge is that of Subject and Object. The Subject is the Observer, the Object the Observed, and there has to be a definite distinction between the two. Once this is set up, the observer uses some means of acquiring knowledge about the observed, be it meditation, divine revelation or the new-fangled thing called the scientific method. The knowledge acquired about the Object, through a means that is independent of the Subject, is thus ‘objective knowledge’. This kind of knowledge is supposed to reflect reality as it truly is, without contamination by the biases of the observer.

It is easy to see why the scientific method of repeated observation and experimentation is the preferred mode of knowledge acquisition – God apparently reveals to people of every religion that theirs is the true or superior religion, and obviously not everyone can be right, i.e., there is some ‘contamination’. Of course, the previous statement implicitly assumes that there is actually a single reality, but without that assumption one falls into the critical theory mire, which to me is the worse of the two alternatives. Thus, the scientific method, at least in theory, can be relied upon to produce subject-independent knowledge about some object.

The crucial thing, again, is that we must be able to provide a clear separation between the observer and the observed for this to work. Without this separation, the scientific method is as good as divine revelation. There are quite a few objects that are amenable to this separation – the solar system, atoms, molecules, plants, animals, ice-cream, among other things. However, there are certain objects that do not allow such a distinction (of course, you cannot call it an object anymore, but I’m retaining the nomenclature and discarding the ontological connotation).

For example, the stock market: if someone gives you ‘objective knowledge’ that there is a good chance of the stock market crashing, and you pass on that information, and you and your friends sell all your holdings, triggering a crash, there is absolutely no way of telling whether the crash would have happened had you not known it would. Another example would be the ‘study of the Self’: if you figure out through psychoanalysis or meditation or something else that ‘humans are essentially xyz’, and you begin to see yourself acting (or trying to act) in that manner, it is difficult to gauge whether behavior follows the statement or vice versa. This is not to say that humans are not xyz; the question is whether they are only xyz. If someone subscribes to the Freudian prescription of the mating instinct dominating our actions, or the Christian one that Man is incomplete without God’s grace, and tries to interpret his everyday actions through such a framework, then he is likely to see that everything ‘fits’. But it is evident that there is no way this is objective knowledge.

The previous paragraphs can be considered a very short summary of J. Krishnamurti’s line of thinking – that there exist situations where the subject-object distinction does not hold, and thus statements about objectivity or subjectivity make no sense. The critical theorists, addressing the same issue, come to the conclusion that everything is subjective – made famous by the phrase ‘The Death of the Author’ – but to me the issue cannot be interpreted from the subject-object perspective: the negation of objectivity need not be subjectivity alone; it can also be the absence of both.

Take the example of a drama – one may imagine that there is a clear distinction here between the observer and the observed. But if one takes another look, the drama is written and produced keeping the audience in mind, for otherwise there is no point in it being performed, and thus the audience is also part of the play – the observer is also the observed. The drama, as it unfolds, is a dialogue between the performers and the audience and can thus be interpreted only as a whole. A ‘flop’ is one which fails to bring about this unity, with the dramatist complaining about how backward his audiences are. The drama is simply not situated within the correct context, which alienates the audience from the drama.

Similar questions arise in other places as well – can historical records and religious texts be interpreted by an observer who is not also the observed? In India, the interpretation of history is a huge controversy. But neither the Hindutva glorification of the spiritual nor the Marxist focus on the material can do justice, since neither ‘lives’ the history – it is an exercise in textual interpretation. The only true history can come from someone who actually lives it. Similarly, atheists/rationalists tearing apart religious texts achieves little more than angering others.

Another interesting place to look is music. It is well known that most classical music is also religious music – some of the finest music has been in praise of God (regardless of definition). Is it possible to appreciate Handel or Tyagaraja without sharing the intense experience of divinity (again, regardless of how you define divinity) that led to the actual creation of the music? Bland technical music criticism leads to a ‘fossilization’ of the music, just as textual criticism of religion only shows a religion that is ‘dead’ – both lead to unnatural and normally harmful ideas of ‘purity’ which do not allow any evolution of the object under scrutiny. A true purist will try to maintain continuity rather than stasis – not hinder evolution, but participate in deciding its direction.

This question is more important now than ever, given that natural scientists and engineers are called upon to examine and interpret phenomena that are complex beyond comparison with the objects of study they initially started off with – the objects that shaped their methodology. Unless we evolve new ways to understand reality, all we will be doing is tuning zillions of parameters, looking for an Objective Model of the World.

# The problem with nonlinearity (AKA why I cannot predict the weather)

Being from an engineering background, and having mainly engineers for friends, I normally get asked why I cannot predict tomorrow’s weather, along with jibes about how weather prediction is a pseudo-science and so on. Thus, I decided to just rant about how difficult life is for me.

Engineers of all kinds like to work with computationally friendly methods of analysis. One way to ensure this is to use mathematical maps that are linear in nature, and preferably orthogonal. What I mean is that the map should be representable by a matrix, and each column should have a zero inner product with every column but itself. The classic example is the Discrete Fourier Transform. One of the most important properties (at least to me!) of a linear system is superposition, i.e., if $x$ and $y$ are any two ‘signals’ or vectors, and $F$ is a linear transform, then $F(x+y) = F(x) + F(y)$. This property tremendously simplifies the analysis of the behavior of the transform. It is easy to identify ‘problematic’ vectors and do away with them.
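Both properties are easy to check numerically. A minimal sketch (my own, using NumPy and toy random signals – nothing here is specific to any particular application):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)

# Superposition: the DFT of a sum equals the sum of the DFTs.
lhs = np.fft.fft(x + y)
rhs = np.fft.fft(x) + np.fft.fft(y)
print(np.allclose(lhs, rhs))  # True

# Orthogonality: the Gram matrix of the 8-point DFT matrix is a scaled
# identity, i.e., distinct columns have zero inner product.
F = np.fft.fft(np.eye(8))
G = F.conj().T @ F
print(np.allclose(G, 8 * np.eye(8)))  # True
```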

For example, if I’m building a music system and I have a linear amplifier which I know goes nuts if I input music containing the 2 kHz frequency, I can remove that frequency in advance so that there are no problems in performance. Thus, a signal localised in a certain frequency band will not ‘leak’ to other bands. The case is not so in nonlinear systems. There is a transfer of energy from one part of the spectrum to another (e.g., the Kolmogorov spectrum in turbulence), and thus there is no guarantee that your amplifier will be well behaved for all time.

This also means that the superposition principle no longer holds. Since energy in one frequency invariably finds its way to other places, there is interaction between different frequencies, and thus the resulting behavior of the system is not just the sum of its behaviors with the individual frequencies as inputs, i.e., $F(x+y) \neq F(x) + F(y)$. The resulting behavior is therefore not easy to predict in advance, and pretty much impossible to predict if the number of interacting components is huge, as in an ecosystem or the climate. This is called emergent behavior, since it cannot be predicted by looking at the individual components themselves.
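To make the contrast concrete, here is a toy sketch (my own example, using a simple squaring nonlinearity rather than any real amplifier): superposition fails, and a pure tone spills energy into frequency bins where the input had none:

```python
import numpy as np

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n)   # pure tone at frequency bin 5
y = np.sin(2 * np.pi * 9 * t / n)   # pure tone at frequency bin 9

def square(s):
    return s ** 2                   # a memoryless nonlinearity

# Superposition fails: squaring creates a 2*x*y cross-term.
print(np.allclose(square(x + y), square(x) + square(y)))  # False

# Energy 'leaks': squaring the bin-5 tone puts energy at DC and bin 10,
# since sin^2 = (1 - cos(2*theta)) / 2.
spectrum = np.abs(np.fft.rfft(square(x)))
print(np.flatnonzero(spectrum > 1e-6))  # bins 0 and 10 only
```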

If losing superposition was a problem, chaos is as bad, if not worse. Chaos is a fancy way of saying that nonlinear systems are extremely sensitive to their inputs and to their mathematical formulation. For example, if you had perfect knowledge of every quantity but not a perfect model of the phenomenon being observed, you would make huge errors in prediction. Similarly, if your models were perfect but your measurements were not accurate enough, you would meet the same fate. In real life, both are true. We don’t understand natural phenomena well enough (of course, dam builders will disagree), nor do we have measurements that are accurate enough. Thus, even the fact that we can say with reasonable confidence whether tomorrow will be cloudy is a testament to how well weathermen have learnt to live with nonlinearity.
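The classic toy example of this sensitivity is the logistic map. The sketch below (my own illustration, not a weather model!) iterates two trajectories that start $10^{-10}$ apart and end up completely uncorrelated:

```python
# Iterate the chaotic logistic map x -> 4x(1-x) from two nearby starts.
def orbit(x, steps):
    xs = []
    for _ in range(steps):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.3, 60)
b = orbit(0.3 + 1e-10, 60)

print(abs(a[9] - b[9]))                                 # after 10 steps: still tiny
print(max(abs(p - q) for p, q in zip(a[40:], b[40:])))  # later: order one
```

The initial error grows roughly exponentially, so beyond some horizon the forecast carries no information about the true state – a discrete analogue of why weather forecasts degrade after a few days.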

And if all this was not enough, there is the problem of phenomena occurring at multiple scales. A typical cyclone has a horizontal extent of around 1000 km, while the convection that drives it is of the order of 1 km. There are planetary waves with wavelengths of 10000 km, and they are dissipated by turbulence acting at the micrometer level. Any model that tries to incorporate both the largest and the smallest scales will probably tell us about tomorrow’s weather sometime in the next century!

And coming to the worst problem of all: rain. While one can say with reasonable confidence whether it will rain or not, since that is constrained by the first law of thermodynamics and the behavior of water vapor, it is probably next to impossible to predict when or how much. Quite amazingly, no sufficient condition for rainfall seems to have been found: the necessary conditions are known, and still we don’t know when it will rain.

Interestingly, average behavior is more predictable, since averaging ‘smooths’ out the nonlinearity in the system, and thus we are able to reasonably estimate climate, which is a long time-average of weather. The constraints of thermodynamics, which seem to be the only thing that will never be violated, grow stronger as we go to longer and longer time scales.
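A toy sketch of this point (my own, using the chaotic logistic map as a stand-in for weather, not a real climate model): individual trajectories are unpredictable, but long time-averages from different starting points agree closely:

```python
# Time-average the chaotic logistic map x -> 4x(1-x) over many steps.
def time_average(x, steps):
    total = 0.0
    for _ in range(steps):
        x = 4 * x * (1 - x)
        total += x
    return total / steps

m1 = time_average(0.3, 100_000)
m2 = time_average(0.7123, 100_000)
print(m1, m2)   # both close to 0.5, the mean of the invariant density
```

The ‘weather’ (the value at any given step) depends sensitively on the starting point; the ‘climate’ (the long-run average) does not.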

Handling nonlinear systems is hard, but we are getting there! (In a century or so.)

# Epistemic limits of scientific enquiry

I attended a talk the other day by Dr. Jayant Haritsa from the CSA department, on using textual representations of Carnatic music (music written as Sa Ri Ga Ma, etc.) to determine the ‘Aarohana’ and ‘Avarohana’ (the equivalent of scale in Western music) of a given Raaga, or to identify the raaga itself, given a piece of music outside the ones used to train the identification system. Among the other aims was to provide a ‘scientific basis’ for the raagas, based on the statistics of usage of notes in various compositions, and maybe to provide a better Arohana/Avarohana for a raaga than the one received from tradition.

The talk itself was quite interesting, and the system seems to do pretty well. In the Q&A session, a lot of concern was raised as to whether the ‘better’ Arohana/Avarohana proposed by the system would capture the ‘mood’ of the raaga, which seems to be an essential part of each raaga. Haritsa was of the opinion that, as scientific researchers, we must not take things for granted and must try to question tradition using the tools of science.

The essential issue, which one can generalize beyond music and its analysis, is the question of what is knowledge and/or Truth. More specifically in this context, one can ask what type of knowledge we can obtain using the scientific method, and whether this is the only kind which is ‘reliable’, the rest being ‘subjective’ and useless in a more general context – i.e., whether Truth in all its glory is best sought out using the scientific method.

Up front, one must understand the fundamental premise of the scientific method, even leaving out its reductionist inclinations: Nature is not random. It follows some logic, some pattern, which by a large number of observations and/or experiments is discovered, and this knowledge (from observation/experimentation) can eventually be called Truth. This is not hard to justify: we see patterns everywhere in Nature and can build quite accurate models of them. The reliability of scientific knowledge depends hugely on the concept of measurement – representing natural phenomena as cardinal numbers, numbers we can use to say something about the size of the measured phenomenon. No observation or experiment can be called a success or failure if it does not produce some kind of number. For example, Haritsa’s system produces a number per candidate scale for a raaga: the higher the number, the more likely it is the correct scale.

Immediately, one can see phenomena that the scientific method cannot be used to investigate: emotions, ethics, likes, dislikes, and so on. Not only are these immeasurable (neuroscientists may disagree!) quantities, they are also incommensurable: a statement like $2.5 \times \text{Happiness} \geq 0.5 \times \text{Sadness}$ makes absolutely no sense. Also, science can give no answers to statements like ‘The world is Maaya’, or ‘What we perceive is not what Is’. These statements belong to the same class of knowledge as the fundamental ‘axiom’ of science itself – you cannot prove or disprove them within the logical system that is built upon that axiom.

Now, music is a strange beast. It is highly patterned (scientists like to talk about its ‘mathematical’ structure), but at the same time its main (probably only) value is in the emotion it evokes: it is no coincidence that music is an essential part of religious worship, especially of the Bhakti variety. Therefore, no musical education is complete without a good understanding of both the patterns and the emotions (Bhaava) associated with music. Now, scientists are uncomfortable with (or dismissive of) things they cannot measure, and musicians are uncomfortable with (or dismissive of!) statistical analyses of their art. Therefore, it is not surprising that each values one of the two more. Haritsa’s and the audience’s apprehensions merely betray their respective inclinations.

With the advent of huge computing power, a scientist’s optimism about understanding the universe has understandably increased. It is a common notion that the failure of a mathematical model is simply due to the ‘exclusion of some variable’ from the model: with more information/data, one can do arbitrarily well. This attitude conveniently ignores the fact that some quantities are not measurable, and that even if some quantitative representation is possible, they might be incommensurable. This can be seen best in sciences dealing with human tastes and values, like economics, sociology or anthropology. Subjects like econometrics and social psychology seem to tread a fine line that distinguishes scientific knowledge from gobbledygook. For example, if we survey 100 students asking them to rate the facilities at the hostel on a scale of 1 to 10, and we conclude that the average score is 8 and so most are satisfied (assume a score greater than 7 implies satisfied), we are making two assumptions: that we can add the satisfaction of 100 people and divide that number by 100, and that one student’s rating of 7 means the same as another student’s rating of 7. Though there have been arguments justifying such an approach, it is up to the individual to decide how seriously to take such surveys.
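A toy version of that survey (hypothetical numbers of my own invention) shows how the same average can hide very different situations:

```python
# Two hypothetical hostels, both with mean rating 8.0:
hostel_a = [8] * 100                 # everyone mildly satisfied
hostel_b = [10] * 75 + [2] * 25      # most delighted, a quarter miserable

def mean(ratings):
    return sum(ratings) / len(ratings)

def satisfied(ratings):
    return sum(r > 7 for r in ratings)   # count of scores above 7

print(mean(hostel_a), satisfied(hostel_a))   # 8.0 100
print(mean(hostel_b), satisfied(hostel_b))   # 8.0 75
```

The arithmetic is identical; whether adding and averaging satisfaction scores means anything is precisely the assumption under question.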

The dominant paradigm of our times is that of scientific optimism, and most appeals to emotion or morals are considered ‘woolly’ and ‘unscientific’. But one must realise that unless there is a healthy engagement with both pattern finding and moralising, the Truth can never emerge.

# Systems modelling: how useful is it?

Modelling complex systems, be they social, economic, terrorist-ic, environmental, ecological, whatever seems to be all the rage nowadays. Everyone (including myself!) seems to be so intrigued with the future that they cannot wait until it arrives.

But what is a model? And how can or should it be used? These questions normally go unanswered, and that can lead to disasters like the current financial circus. A systems model is, at its most abstract, a simplification of reality that helps us answer questions about the future in a computationally tractable way (since everything at the end of the day gets reduced to number crunching, except maybe philosophy). It focusses on the internal variables (state), inputs and outputs of a system that are considered important for understanding its behavior. The contribution of a model is two-fold: to understand the underlying (simplified) structure, and to use this to answer questions that interest us.

We have to face the fact that we cannot really understand even moderately complex systems properly, and so we make certain idealizing assumptions (like assuming a World War won’t erupt) to keep things simple. We then use an intuitive understanding of the system’s structure (sometimes called a pre-analytic vision) to decide how to build the model. For example, economics models are built using the vision that man wants to maximize something (which makes it easy to use the calculus of variations or mathematical programming), atmospheric models have to obey the laws of physics, and so on.

Once we identify a structure that can be represented using available mathematical tools, we put it in a computer and start crunching. If it can be represented using nice equations (a deterministic model), you would use differential equations or some cousin, or mathematical programming. If it cannot, you don’t give up: you simply say that it is random but with a structure, and use stochastic models – Monte Carlo methods, time series analysis or some such thing (read this for a critique of the latter).
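To give a flavour of the stochastic route, here is a minimal Monte Carlo sketch (entirely my own toy model, with made-up drift and volatility numbers): no single run is predictable, but many runs give a distribution of outcomes:

```python
import random

random.seed(1)

# Toy stochastic model: a quantity grows by a random amount each year.
def run(years=10, start=100.0):
    level = start
    for _ in range(years):
        level *= 1.0 + random.gauss(0.02, 0.1)   # 2% drift, 10% volatility
    return level

# Monte Carlo: simulate many trajectories and look at the spread.
outcomes = [run() for _ in range(10_000)]
p_loss = sum(o < 100 for o in outcomes) / len(outcomes)
print(f"fraction of runs below the starting level: {p_loss:.2f}")
```

Note the ‘if’ clause built into even this toy: the answer is conditional on the assumed drift, volatility and Gaussian steps, none of which the output advertises.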

Before one gets immersed in the fascinating mathematical literature, one must understand that each model comes with an ‘if’ clause: if such-and-such conditions are satisfied, then things might look like what we get here. Which is why I get irritated with both MBAs who talk about ‘market fundamentals being good’ and environmentalists who predict that I’m going to need a boat soon – neither qualifies results that come out of a black box. Even worse, there are those who compare results from different black boxes, which need not be comparable at all, just because they, like the modellers, have no idea what is going on. At least the modellers admit to this, but those who use these models for political purposes cannot dare to admit shades of gray.

Different models can take the same data and give you radically different answers – this is not due to programming errors, but the pre-analytic vision that the model is based upon. The reason why climate change remained a debate for so long is because of such issues, and vested interests, of course. Therefore, we see that the ‘goodness’ of a model depends critically on the assumed ontology and epistemology, even more than the sobriety of its programmer (well, maybe not).

Thus, as intelligent consumers of data that models spew out everyday, we should make sure that such ‘studies’ do not over-ride our common sense. But in the era of Kyunki Saas bhi, one cannot hope for too much.

# Sociable sociopaths – is it the system?

System Analysis is simply another way of looking at the world, trying to look at the structure and composition of an aggregate of anything from computer code to people to machines.

For those unaware of terminology (which would be anyone who has not taken a systems course), a system is an entity with certain inputs and outputs, and which converts inputs to outputs through a certain mechanism. It can be completely defined by its inputs, outputs, external limits and feedback systems. Limits determine the boundaries within which the system must operate, like the size of our parliament is limited by the number of rich and powerful idiots in the country. Feedback systems determine the response of the system to changes in its output or environment, like the elections are a feedback in a democracy.

Another factor that determines the performance of a system is delay in its feedback systems. Scientists have been telling economists for decades to change developmental objectives to include climate change issues, and yet it has come into focus only very recently. Even today, development does not include many environmental issues like deforestation, toxic dumps and species extinction. This can be described as a large delay between output changes and the attendant change in system performance.
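A minimal sketch of the delay effect (a toy thermostat-style loop of my own, not a model of any real economy): the same corrective gain that settles nicely with instant feedback starts oscillating when it acts on a stale reading:

```python
# Correct a quantity toward a target using feedback that sees the
# reading from `delay` steps ago.
def simulate(delay, gain=0.5, steps=60, target=0.0):
    history = [10.0] * (delay + 1)          # start well off-target
    for _ in range(steps):
        error = history[-1 - delay] - target
        history.append(history[-1] - gain * error)
    return history

no_delay = simulate(delay=0)
delayed = simulate(delay=3)

print(max(abs(v) for v in no_delay[-20:]))  # tiny: settled on target
print(max(abs(v) for v in delayed[-20:]))   # large: growing oscillation
```

With no delay, each step halves the error; with a three-step delay, the correction keeps arriving late, overshooting the target again and again.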

Why is systems thinking important? From a business perspective, it can help analyse the people and objects that determine how a system (company) behaves, and how certain kinds of behavior of these ‘components’ affect overall system output. For example, car manufacturers should change the specifications of their cars according to general consumer tastes. Therefore, there must be some system feature that links car specification with consumer taste. If the person in charge of implementing this feature fails to do her job well, the system output (which is cars) will fail to make the desired impact.

Therefore, most social systems – religion, corporation, state – come up with a set of desired behaviors that the components that make up the system should have, and this is inculcated by various mechanisms – schools, corporate orientation, religious instruction and so on.

One can, if one has a considerable amount of time to burn, apply systems principles to the present situation in India. First, a look at the state. The state is a glorified watchman of sorts, taking money from us taxpayers and giving political, social and economic protection in return. The recent spate of terrorist attacks has underlined the fact that it is unable to deal with the phenomenon of terrorism, which is structurally very different from the normal antisocial elements it is used to dealing with. Highly motivated individuals, working in small groups, from varied backgrounds, with no monetary motivation, causing mayhem: this is something no state can cope with; it was simply not designed for such a task. And there goes the physical safety that we were supposed to have.

Next to go was religious tolerance. Talking to random people on the train shows that the average Hindu looks at his Christian neighbor with suspicion and will be more hesitant than before to attend religious festivals. This is due to the sensationalist feedback system that has been set in place, called the media, no doubt supported by a political party that wants to expel Pakistani and Bangladeshi nationals (only those with expired visas, of course, preferably Muslims) since they might be terrorists. Never mind the fact that terrorists will go to great lengths to see that their papers are in order, and are not stupid enough to be in a place where checks are taking place. A system is only as good as the people that make it up, and this is shown well in Karnataka now and Gujarat before.

Before these went, of course, financial security. A global economic system needs global regulatory agencies, a role which the IMF and World Bank ostensibly play. The present crisis shows that a system designed around the rational ordering and behavior of individuals completely fails when greed, fear and panic are the inputs. The subprime crisis surfaced around this time last year and its effects are showing now – a huge delay between input and output. This kind of behavior can only mean worse things in the coming year. The IMF and the World Bank should probably stick to bullying third world nations.

All these developments are having interesting effects – terrorism has made grassroots-level spying a noble duty in service of the state (an Orwellian nightmare!), people belonging to different religious groups are eyeing each other with suspicion, and people with money to lose are running around like headless chickens. If the people are taken as a system, and insecurity is an input, the system moves towards whatever promises stability. Therefore, unfortunately, the State and religious organizations are going to be more powerful than before when the dust settles. The last bastion of reliable information feedback, the internet, is now becoming more prone to State intervention. One wonders what the status of the people will be after this – are we going to be sociable components of sociopathic systems?