What is a fractal? And why does it matter? A fractal shape is geometry with scale – a middle ground or internal axis of scale symmetry. So it is no surprise that fractals should turn out to model an organic world which is based on a logic of growth and development.
Ordinary geometry – like ordinary logic or ordinary science – does not take account of scale. Differences in size are not seen as something fundamental. A triangle is still just a triangle no matter what its size (or duration – temporal scale is also an issue here). So we might say that the ordinary view is Platonic. A triangle exists for all time and can assume any size. Or rather, for the purposes of description, duration and extent are irrelevant to the definition of triangle-ness.
We could say the same for a propeller, a whorl of turbulence in a stream, or many other geometrical forms. The shape has an ideal or Platonic existence, and real life dimensionality or scaling can be added as local properties – extra facts needed only to define particular propellers or whirlpools, specific triangles or hexagons.
Fractals are also Platonic in being represented as static, ready-made, mathematical objects. But at least they include scaling as basic to their description. And they lead on to other mathematical models such as scale-free networks where largeness and smallness are finally granted proper causal status. Size becomes a mathematically meaningful attribute of a shape.
But let’s start first by considering some classic fractal shapes such as the Koch curve, Cantor set, Sierpinski gasket and Menger sponge. Each is created by some iterative step – you perform the same simple act over and over to infinity – to fracture some shape over all its scales of existence.
So with the Koch curve, we begin with a flat line segment – the initiating state. Next we break its middle third to make a triangular bump. This is the generating act. Then we take each of the four smaller flat sections so created and break them in exactly the same way again on a new smaller scale. And we keep on going ad infinitum, or until our pen cannot make any smaller distinct marks.
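The bookkeeping of this generating act is easy to sketch in a few lines of Python (a toy illustration of our own – the function name is not from any library): every act multiplies the segment count by four while stretching the total length by 4/3.

```python
from fractions import Fraction

def koch_counts(iterations):
    """Count the segments and total length of a Koch curve after
    a number of generating acts. Each act replaces every straight
    segment with four segments, each a third of the length."""
    segments, length = 1, Fraction(1)    # the initiating flat line
    for _ in range(iterations):
        segments *= 4                    # four smaller flat sections
        length *= Fraction(4, 3)         # the bump stretches the curve
    return segments, length

# After 3 acts: 64 segments, total length (4/3)^3 = 64/27 of the original.
print(koch_counts(3))
```

The length keeps growing without bound, which is one way of seeing that the finished curve is something more than a 1D line.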
Note several things here. We started with a line of a particular length. And it also existed in a space, a void, with higher dimensionality. The line was a 1D object in a 2D realm. Then the symmetry of this flat line was broken repeatedly so that it became buckled into the surrounding 2D realm over all possible scales.
Originally the 1D line just sat there in the 2D void. This was a very asymmetrical situation – a highly localised line floating within a generalised global emptiness. But with the fractal development of the line, it began to intrude further and further into its larger 2D context.
This is where the term fractal comes from – the creation of a fractional dimensionality. As the 1D line is buckled to make a Koch curve, it takes on some fractional dimension between 1D and 2D. In fact the Koch curve has a fractal dimension of about 1.2619. Or exactly log 4/log 3 – in other words the four unit buckled “global” shape divided by the three unit original “local” shape.
So the fractal dimension is derived here quite logically, and not at all mysteriously. Note also how this fractal dimension (in maths parlance, the Hausdorff value) of 1.2619 is an asymptotic limit. It is a figure that the Koch curve approaches as it becomes ever more finely broken over all possible scales.
In theory a Koch curve can keep approaching this limit forever. We can imagine how we might keep magnifying sections of the curve and finding yet smaller bumps. Like Dr Seuss’s Cat in the Hat, under every hat will be a tinier cat still, all the way down to infinity. So while log 4/log 3 gives some exact value, in reality we have to keep dividing, keep magnifying, to add yet further decimal places to our approximate Hausdorff dimension of 1.2619.
We have only just started to talk about the metaphysics of a single fractal yet we can see just how many interesting properties are inherent within this mathematical construct. So let’s consider a few more fractals.
A Cantor set is created by chopping the middle third out of a line repeatedly. We end up with something in-between a 0D point and a 1D line – a “Cantor dust” with a Hausdorff dimension of about 0.6309 – or log 2/log 3, the “global” shape of a two unit outcome divided by the original “local” or initiating shape of a three unit line. Where the Koch curve is formed by a buckling into higher dimensionality, penetrating a 2D plane over all scales, the Cantor dust is formed by the disintegration of a 1D line into almost point-like segments.
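The Cantor construction is just as easy to sketch as the Koch one. A minimal Python illustration (again our own toy code): at each depth there are 2^n surviving pieces, each 3^-n long, which is where the log 2/log 3 dimension comes from.

```python
import math

def cantor_intervals(depth):
    """Return the closed intervals surviving `depth` rounds of
    middle-third removal from the unit line."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        next_level = []
        for a, b in intervals:
            third = (b - a) / 3.0
            next_level.append((a, a + third))    # keep the left third
            next_level.append((b - third, b))    # keep the right third
        intervals = next_level
    return intervals

print(len(cantor_intervals(5)))      # 32 pieces, i.e. 2^5
print(math.log(2) / math.log(3))     # the dust's dimension, about 0.6309
```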
The Sierpinski gasket and Menger sponge are like the Cantor set in being iterative divisions. The initial shape is broken the same way over and over, the division being repeated on ever smaller scales until it asymptotically approaches some fractional dimensional limit. A triangle has triangular voids cut out of it, a cube is riddled with cubic voids over all scales.
So what do we have here? Each time we start with a sharply, crisply, divided world. We have an atomistic line floating within a void-like higher dimensional space. Or we can go at it the other way round and start with a solid shape out of which we begin to bite void-like chunks.
Either way, we start with two unlike extremes – asymmetric in the strong sense of not just being different in spatiotemporal scale but apparently disconnected as ontological types. Then the free mixing of the two extremes over all scales produces an emergent equilibrium balance. Line and plane, atom and void, local and global, are muddled together over every level of their joint existence.
In fact, fractal geometry reveals that there was this hidden extra direction – an axis of scale – present all the time in our geometric conceptions. Take any apparently scaleless object like a line, a triangle, a cube, and we can see that it was rather exceptional that the shape was imagined as if a single scale of representation exhausted its inherent possibilities. The standard Platonic conception of form as, at the very least, ambivalent about scale should now seem radically incomplete.
Indeed as organicists we would want to turn the whole story on its head. The standard view would be that fractals are a complexification of simpler geometrical notions. Take a triangle or a line and then find ways to develop it by breaking it over all scales. But now we are saying that the fractal object is the simpler, more fundamental, case. If we have an asymmetry of dimensionality – such as a triangle floating in a void – then those two opposing extremes should also be mixed together over all scales. In organic terms, it would take some extra step to prevent such a “thermalisation” or disordering. So the disordered state must be considered the simplest, the lowest, level of geometric existence for the shape.
Note something else here. The standard examples of fractals have all been created with a telling three-ness. An initial symmetry of a line, triangle, or cube gets broken by something happening to its middle third.
The reason is that this is the simplest way of representing the asymmetry of figure~ground, or local features within a global context.
The most extreme way of breaking a line would be to break it into the threeness of line-point-line. But breaking it instead into three equal line segments, or rather two lines and a line-sized gap, preserves a balance of the local and global features. It gives equal weight to atom and void, event and context, and so in fact “wires in” the desired equilibrium balance as the line is broken over all its available scales.
Yes, fractal geometry does not create the equilibrium outcome it so handily portrays. Instead the way it constructs a fractal shape already presumes such a balance. However this is no problem if we understand the metaphysical position being taken. Models make our lives simpler by leaving out unnecessary details. We just need to be aware that it is a simplification and not (yet) the truth of reality.
It is also worth noting how this 2-to-1 fracturing ratio is the minimal needed to create scale within an information theoretic approach to modelling. Elsewhere we have pointed out that information theory is about the modern atomising of even global form. A binary code of 1/0 gives us a representation of a smallest scale presence and then its absence – the located existence of a something and then a nothing, an atom and then a void.
But to represent a world with scale, at the very least we need to employ one extra bit and so create a triplet or trinary code. We need to book-end any event with a context. So we must have 0-1-0 to represent an atom floating within some greater void. Or 1-0-1 to represent a void-like absence within some block of substance.
Just as (and for exactly the same logical reasons) the duality of a dichotomy becomes the triality of a hierarchy with the breaking of scale, so the minimal model of information must become triadic once the fact of scale is admitted to the mathematical story.
So fractals are a little artificial in that they presume what they seem to create. They are fabricated by the repetition of some geometric act over a nested succession of hierarchical scale. And the result is self-similarity, or the emergence of an axis of scale symmetry.
Fractals are called self-similar because no matter what the scale of observation, whether we zoom in close or step right back, we appear to be seeing the same world. Snip off a small fragment of a Koch curve, put it under a magnifying glass, and we will find the exact same ragged line repeating.
Imagine some other kind of less symmetrical object such as a ball of cotton wool. Zoom in very close to the wad and we see a tangle of threads. If we were a bug, we could wander a near infinite maze of paths.
But step back in scale and the tangle becomes a fuzzy ball. Now we see a different world - something with a surface that can be wandered around. Step back even further, say a hundred yards, and now the cotton wool ball has shrunk to a featureless grey point. It has a location, yet no discernible structure that can be travelled.
So a fractal is different in that changes in viewing scale make no visible change to what is seen. This is a symmetry because moving up and down creates no distinctions or memories – nothing to mark the fact a change has actually happened. It is the same as if we were to move a step left then right, or reflect a reflection.
Such symmetries are immensely important in physics as they virtually define a dimension, a degree of freedom. They underpin deep principles like Noether’s Theorem, which says that for every continuous physical symmetry there must be a conservation law and thus an inertia. Or Goldstone’s Theorem, which states that for every spontaneously broken continuous symmetry there must be a harmonic reverberation and thus a type of fundamental particle. And here we are talking about another axis of symmetry – the middle ground axis of scale symmetry. What can we say about its physical consequences?
Well, that is a topic for the matter pages. For the moment we want to develop a mental image of the rightness, the logicality, of scale-free worlds emerging from the sharp dichotomisation of scale - how a polarity must also create a connecting middle ground spectrum.
A fractal starts with some stark separation of a figure from its ground – say a “substantial” Platonic line segment afloat in the “form” of an empty and unbounded Euclidean plane. Then fractal development breaks the tension of this strong asymmetry. A triangular buckling of the line ignites a succession of bucklings over all possible scales until the line is infinitely buckled, or at least buckled as far as the eye can spy.
And by this act, a symmetry is restored to the story – a scale symmetry in which no viewing scale is privileged and the middle ground, the “space” between local feature and global context, has become flat and even in its development. A fractal may look messy. And it is indeed a complete “disordering” of scale distinctions. Yet the result is, I would argue, a more complete notion of geometry.
To find a line of a certain size and duration just floating in some unlimited space begs the question why? But if line and space are completely mixed over every available scale, then somehow this seems more logical. If it can happen, it will happen. And in fact with real world systems, it does happen.
rivers, coastlines and other fractals
Fractal shapes like a Koch curve or Menger sponge exhibit some metaphysical features. For ordinary mechanical logic – which treats scale as an extra to any description of a world – the properties of fractal mathematics are merely a local curiosity. Nothing to get excited about.
But for organic logic, fractals have precisely the properties we want to explore further. Organicism takes scale as fundamental. There are always the parts and the whole, the composing substances and the organising form. Fractals then directly model a “thermalised” middle ground where scale, having been created, then gets completely disordered.
As said, Cantor dusts and Koch curves seem rather artificial or mechanically constructed fractals. Their scale-free nature has been artfully wired in from the start. So let’s look at some more natural examples of fractal structures such as clouds, coastlines, lightning strikes, river tributaries and sandpiles.
Clouds and coastlines have a self-similar geometry. Take a puff of cloud or ragged stretch of coastline and it will look just as puffy, or just as ragged, on any scale.
A cloud is a fine mist of water droplets. Its shape is the result of a balance of factors. Hot air rises, gravity holds air masses down, then the turbulence of surrounding air currents whips up around a trapped clump of water vapour, sculpting it into various cloud forms.
Likewise a rocky landmass is created by a complex story of forces, but its ragged outline where it meets the sea is a simpler tale of erosion by wave action.
So in both cases we have a general stuff – a mass of water vapour or rock – being eroded or otherwise acted upon by a generalised outside force, either winds or ocean waves. This eroding force acts over all scales, and so at no particular scale. The resulting structure of a cloud or coastline reflects this fact. It is shaped evenly across all scales of observation (and we can say – pansemiotically – that the lapping of waves, the gusting of winds is a kind of “observation” in this context).
Lightning strikes and river tributaries are a similar story, but subtly reversed. Both lightning and rivers are a source of energy that has to find its path through a resisting volume. Lightning has to find a path through a block of air to dissipate itself by completing a circuit with the ground. A river has to carve a path through a landscape to dissipate its load of water.
Lightning forks in random fashion, breaking up into feathery branchlets. The patterns are fractal. Zoom in on a side-frond and it will dissolve into ever tinier branchlets. This kind of branching dissipative pattern is modelled by fractal structures such as Cayley trees.
Rivers also break up into a maze of tributaries when they hit a flat plain. A weight of water coming down from the hills will slow as the land levels and the same more gently moving water has to be shifted in other ways.
A first response is for the river to form snaky bends. By making a longer riverbed, the plain can absorb more of the weight. But then this smooth response breaks down into a chaotic fracturing of the flow. Like the branching of lightning, there will be a few broad streams and a fractal network of tributaries – the classic river delta.
Where once there was a sharp dichotomy between land and riverbed – a 1D line cutting across a 2D surface – now land and river become all jumbled up. There are tributaries and islets distributed evenly across all scales. The actual fractal dimension for the Nile delta is said to be 1.4; that of the Amazon basin 1.85.
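Fractal dimensions of real shapes like deltas and coastlines are typically estimated by box counting: cover the shape with grids of ever smaller boxes, count the occupied boxes, and read the dimension off the log-log slope. A minimal sketch of the method (our own illustration, checked on a plain straight line, which should come out close to dimension 1):

```python
import math

def box_count(points, box_size):
    """Number of grid boxes of the given size holding at least one point."""
    return len({(math.floor(x / box_size), math.floor(y / box_size))
                for x, y in points})

def box_dimension(points, sizes):
    """Least-squares slope of log N(eps) against log (1/eps)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A straight line sampled finely: its box-counting dimension is close to 1.
line = [(i / 1000.0, 0.0) for i in range(1000)]
print(round(box_dimension(line, [0.1, 0.05, 0.025, 0.0125]), 2))
```

Run on digitised coastline or delta outlines, the same slope comes out fractional – which is how figures like the Nile’s 1.4 are obtained.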
Fractal organisation is to be found in all kinds of dissipative structure – structures that “chaotically” branch in order to dump a load of energy as fast as possible. A system which is dissipating energy at a gentle rate, well within itself, will show rather orderly seeming motions.
A stream in a deep enough channel will have a smooth laminar flow – the viscosity of water molecules overcoming their thermal jostle. But constricted or otherwise disrupted, the stream will break into burbling turbulence. A system pushed to its through-put limits – one that is made to “expand” as fast as it possibly can – will have to break up and do the job fractally over all available spatiotemporal scales.
This same maximisation of through-put principle rules other fractally branching systems such as lungs, blood vessels and nervous systems. A fractal network can both absorb and excrete at a maximal rate, no matter whether it is water and electricity, or oxygen, carbon dioxide, bloodstream nutrients, or even neural control information.
We are making connections with a lot of different-seeming ideas here – dissipative structures, energy flows, chaos, organ systems. But fractal geometry models something deeply logical about the world. It is about how the middle grounds of dynamic or self-organising systems must develop.
We can see that simple dissipative structures, like a thread of river, are not really occupying the full landscape available to them. They are excessively constrained as a geometry. They do not really fill the rich space surrounding them. So it becomes more fundamental once they do penetrate their worlds fully, gaining fractal existence over all scales.
The mechanical view of a system and its thermodynamics is boxed – rigidly confined and so constrained to a single smallest scale. The organic view of a system is unboxed and thus freely expanding. Every scale is expressed and the statistical picture is very different. Therefore we need a natural yardstick for measuring such systems, and that yardstick is the power-law distribution or log-log plot.

Ordinary graphs are normal-normal. Each axis is divided into equal-sized steps. A log-log plot uses logarithmic scales in which each equal-sized step is actually a whole step up in scale. So for example, 1 to 10 becomes as big as 10 to 100, or 100 to 1000, or 1000 to 10,000.
What this does is flatten scale so that an observer is in effect flying along an axis of scale symmetry. It models quite directly the idea that even though every step represents an order of magnitude change, it is as easy to move in one direction as another. On a log-log plot, standing at 100, you are just as close to 1000 as you are to 10.
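This flattening is what makes a power law a straight line on a log-log plot. A small numerical sketch (our own illustration, assuming a tail of the form P(X > x) = x^-alpha): every order-of-magnitude step along x drops log P by exactly alpha, so the points march down a line of constant slope.

```python
import math

# A power-law tail with exponent 1.5, sampled at successive decades.
alpha = 1.5
xs = [10, 100, 1000, 10000]
log_pts = [(math.log10(x), math.log10(x ** -alpha)) for x in xs]

# Each decade step in x lowers log P by exactly alpha: a straight line.
steps = [log_pts[i + 1][1] - log_pts[i][1] for i in range(len(log_pts) - 1)]
print(steps)    # every step close to -1.5
```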
There is also the log-normal plot that represents exponential growth as a straight line. Exponential or ever-accelerating growth is found in many natural situations with “memory”. Bacterial colonies that breed by splitting, or other kinds of doubling where what has been created then freely recreates itself, generate a self-fueling or geometric growth.
In log-normal graphs, the x axis is counted off in normal units because the wider world is taken to be moving forward in time at a steady or linear rate. The hours and days tick off with unchanging regularity – inertially! But the y axis is marked out with an accelerating log scale because some system like a bacterial colony or the money earning compound interest in a bank is building on itself, every increase in its scale enabling even greater increases, so causing an accelerating growth.
By plotting this disconnection on a mixed log-normal scale, a rising exponential curve can be tamed and shown as a straight line. Every inertial or constant-rate step made by the world, the context to the growing system, is matched against an accelerative step being made within the self-expanding system.
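The taming of an exponential curve can be shown in a couple of lines (a toy sketch with made-up numbers): take a colony doubling every hour, log the population, and the runaway curve becomes a constant slope of one doubling per hour.

```python
import math

# A colony of 100 bacteria doubling every hour: explosive in normal
# units, a straight line once the y axis is made logarithmic.
hours = range(8)
population = [100 * 2 ** t for t in hours]

log_pop = [math.log2(p) for p in population]
slopes = [log_pop[t + 1] - log_pop[t] for t in range(len(log_pop) - 1)]
print(slopes)    # a constant slope: one doubling per hour
```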
So a normal-normal graph portrays a world where inertial change rules both figure and ground, both some local system and its more global context. A log-normal plot shows accelerating expansion in a system against a constant background – and it is the most “violent” kind of change because foreground and background are radically out of kilter. Then a log-log plot represents a total world undergoing cohesive or generalised acceleration. Both the local and the global, the figure and ground, are developing together – expanding at an accelerative rate rather than merely dawdling along at an unchanging or inertial rate.
A log-normal world is wild. If growth of some local system like a bacterial colony or world population goes unchecked, its doubling can become explosive and “fill up” the larger world within which it must exist. Exponential increase is unbalanced or asymmetric. But a log-log world is smoothly expanding across all its scales and it thus gives us a scale symmetric or scale-free outcome. Both its local and global aspects develop together in harmony.
This is why a log-log plot, or what is also known as a power-law distribution, is the natural yardstick for the middle ground of an organic hierarchy, a world developing by dichotomisation and then the free mixing of the dichotomised.
welcome to Extremistan
Remember chaos theory in the 1980s? Although not widely understood at the time, power-law causality was the reason why “deterministic chaos” in all its many guises was so exciting and surprising. Organic logic now gives us a paradigm for understanding what chaos theory was really about.
And more recently – mainly because the internet is itself a prime example – we have had the new kid on the block, scale-free networks. That is networks with power-law or fractal middle ground connectivity. Although as usual there are many names for essentially the same phenomenon. Scale-free networks are also popularly known as small world networks, “Kevin Bacon’s” six degrees of separation, fat-tail distributions, and the long tail effect.
Because of chaos theory and scale-free networks, many people have a good mathematical grasp of power-law behaviour in systems. But the switch we are making here is that power-laws are not the exception, something extra to a world that is essentially normal, linear and Gaussian. Instead a general acceleration, a general openness, is taken as being fundamental. It is the unboxed story that comes first, the boxed or mechanical story that is a secondary development.
A great book dramatising the difference between the two kinds of realms is Fooled By Randomness by Nassim Nicholas Taleb. He calls the world ruled by normal Gaussian statistics, Mediocristan, and the one ruled by power-laws, Extremistan.
In Mediocristan, the bell curve dominates and all outcomes are averaged to a single bounded scale. There can be exceptions, but they are tightly constrained. Randomness is strait-jacketed.
Take for example variations in people’s height, a typical example of Gaussian statistics. The average height for humans is 1.67 metres (about 5 feet 6 inches). About 1 in 6 people will be 10 cms taller than this. But only 1 in 740 will be 30 cms taller. Variation is damped at the extremes and dies away with exponential speed. So only about 1 in a billion will be 60 cms taller than the average, or 7ft 5. And only 1 in 8,900,000,000,000,000,000 will be 90 cms taller, or 8ft 5. (Of course, this analysis does not take account of syndromes such as pituitary gland tumours that may distort the odds – we are talking “normal” height variations here).
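These odds follow from treating height as Gaussian with a standard deviation of about 10 cm – an assumption of ours, but one implied by the 1-in-6 and 1-in-740 figures. The complementary error function then recovers the whole cascade:

```python
import math

def one_in(cm_above_mean, sd_cm=10.0):
    """Odds, as '1 in N', of exceeding the mean height by a given margin,
    assuming a Gaussian with a 10 cm standard deviation."""
    z = cm_above_mean / sd_cm
    tail = 0.5 * math.erfc(z / math.sqrt(2))    # P(X > mean + z sd)
    return 1.0 / tail

print(round(one_in(10)))    # about 1 in 6
print(round(one_in(30)))    # about 1 in 741
print(f"{one_in(90):.1e}")  # about 1 in 8.9e18 - vanishingly rare
```

Note how six standard deviations (60 cm) already pushes the odds past one in a billion – the exponential damping at work.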
As Taleb says, in Mediocristan, every step away from the mean sees a huge drop in probability. And it does not take many steps for the unlikely to become the virtually impossible.
But Extremistan, which obeys power-law statistics, is quite different. Nothing really changes with distance. Larger scale events still become relatively rarer, more sparse, but the drop-off rate is smoothly constant rather than precipitously sudden.
Taleb uses income as an example of a scale-free realm. If personal wealth followed a bell curve, in some model Mediocristan economy you might have 1 in 63 people earning more than $1 million a year. Then there would be a rapid fall-off as you moved further away from the average. Only 1 in 127,000 would be earning more than $2 million, 1 in 14 billion more than $3 million, and to earn more than $8 million, the odds would shoot out to 1 in 10³³.
In an Extremistan economy, we again start with 1 in 63 earning over $1 million. But 1 in 125 would earn over $2 million. So instead of a $1 million step up in income creating a 2000-fold drop in probability, it only halves the number. And every further doubling in annual income continues only to halve the number of earners. So 1 in 500 are capable of earning $8 million (compared to 1 in 10³³) and 1 in 1,000 of earning $16 million (compared to some just about un-nameable number for Gaussian-ruled Mediocristan).
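This doubling-halves-the-pool behaviour is a power law with exponent 1: the odds against scale linearly with the threshold. A toy sketch of the model economy (the 62.5 base is simply the unrounded version of the text’s 1 in 63; the function name is ours):

```python
def odds_against(income_millions, base_odds=62.5):
    """'1 in N' odds of earning over a threshold, under a power-law
    tail with exponent 1: doubling the income halves the earners."""
    return base_odds * income_millions    # odds grow linearly, not explosively

for income in (1, 2, 4, 8, 16):
    print(f"over ${income} million: 1 in {odds_against(income):g}")
```

Running this reproduces the smooth halving – 1 in 125 at $2 million, 1 in 500 at $8 million, 1 in 1,000 at $16 million – where the Gaussian model collapses into absurdity after a step or two.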
In the real world, incomes indeed are more like this fractal or power-law distribution. In the Forbes 2006 list of richest Americans, Microsoft’s Bill Gates is top on $53 billion. Sheldon Adelson, a casino owner, at number 3 has less than half that, or $20 billion. Then 30th placed Nike founder Philip Knight has $8 billion and 300th placed winemaker Ernest Gallo has $1.3 billion.
The US economy – which has been freely expanding for many years – seems to support personal wealth of any scale. What Gallo makes, or Knight makes, seems to take nothing much away from the pool of what Gates can make. There is no gravity acting to pull fortunes down to earth. The scale of wealth is random in this scale-free sense. There is no constraint on scale so fortunes of every scale can appear.
Of course in the real world, there are eventually constraints. Even the US economy is only so big. It can only freely consume the planet’s resources for so long. Yet for as long as the world economy is in fact expanding, the wealth of individuals can be expected to have the power-law randomness of Extremistan rather than the crimped Gaussian randomness of Mediocristan.
Another way of dramatising the stark difference between the two realms is the old 80-20 rule. A tell-tale sign that you are in power-law Extremistan is when the few seem to have the most of anything.
Say you were told that two randomly chosen Americans jointly earned $1 million between them. What is the most likely split of their income? In Mediocristan, it would be half a million each. But in Extremistan, it would be more likely that the divide was $50,000 and $950,000.
Compare this to the story for heights. If you knew two people added up to 14 foot, then it would be far more likely that they are a pair of seven footers than a six foot and an eight foot person. Eight foot people are just too vanishingly rare.
Or another example, if you imagine 1000 people in a hall, and the heaviest man on the planet happens to be among them, how much of the total does he represent? Just 0.3%. Now imagine the same hall but with the wealthiest man on the planet, Bill Gates. How much of the assembled wealth is he likely to represent personally? Around 99.999%.
The 80-20 rule was coined by Vilfredo Pareto who calculated that 80% of the land in Italy was owned by 20% of the people. But in many cases of power-law distributions, an 80-20 split can prove to be a severe underestimate. It can end up that 1% account for 99% of the total.
The flip-side to there being far more “largeness” than you might rightfully expect in a system with a power-law distribution is that there is also that much more “smallness”.
So for instance, in a scale-free economy the poor are far more numerous than any average or median income figure would suggest. This is what has become known as the long tail phenomenon. A freely expanding system is growing and so can support wealth and privilege of apparently any scale. There is no brake on richness. Yet an expanding system is also leaving an equivalent weight of smallness behind. Poverty or other measures of small scale existence follow a flat and constant log-log growth line as well.
This means that every standard deviation step below an economy’s average wage finds an increasing rather than a diminishing number of people stuck at that level. There will be many more at the minimum wage than the average wage.
Consider the wider picture for the US economy. Over the past decade or so, it has been expanding due to the globalisation of world markets. All that cheap labour in China and India has been coming on-stream. All that cheap oil and other natural resources from Russia, Brazil, Australia, and the Middle East. So the US economy actually represents a top slice of a new wider-base market. The wealth of Gates, Knight and Gallo has been paid for by a matching population growth and resource exploitation in the emerging world – the creation of an equal weight of poor-ness everywhere else on the planet.
The world economy is dichotomising, growing evenly in both its available directions. The super-rich are becoming the uber-rich. And like an iceberg, nine-tenths always sinking below the line, the other 5 billion with not a lot inhabiting the planet will become the other 10 billion by 2050.
the biggest wave
Human examples are useful because they highlight the way that such statistics are surprising. We are so used to a Gaussian “boxed average” view of events that the idea there might have to be both Bill Gates and impoverished billions to make an expanding economy takes us unawares. Even more disturbing is the idea that this scale-free or long tail story is an example of a system’s randomness. It is the pattern of disorder that has to be created when a vagueness is dichotomised and the separated extremes then freely mix over all intervening scales.
A huge number of everyday phenomena do seem ruled by power-law rather than Gaussian statistics once you start looking. Some are well-known examples like earthquakes and species extinctions. Others have come as more recent discoveries.
For example, it used to be thought that ocean waves must have a Gaussian distribution and so ships would only need to be built to withstand waves of a certain maximum height. Oil rigs and ocean liners have been designed to withstand waves of 15 metres (49 feet) because it seemed safe to assume waves any larger would be vanishingly rare.
Yet a recent satellite survey of the world’s seas over a three week period found ten rogue waves reaching 25 metres or more – eight stories high! These waves seemed to well up for no particular reason. So while gravity must ultimately limit the size of monster waves, relying on the image of a bell curve to predict maximum likely wave heights could be dangerous. Largeness does not seem so tightly constrained as Gaussian imagery would suggest.
Coming back to suitable mathematical models of power-law or fractal systems, scale-free networks are now the latest focus of interest. As we have said, fractal geometry, and even other mathematical representations of “chaos” such as strange attractors, suffer from being overly Platonic. Their dynamism - their dependence on history, process, memory, context, development - is rather hidden from view. But scale-free networks are more transparently about growth and meaningful communication or semiosis. So even as mathematical imagery, they start out already more alive.
The story of scale-free networks goes back to an experiment by the Harvard social psychologist Stanley Milgram in the 1960s. Milgram sent letters to individuals in Nebraska, who had been randomly picked out of the phone book, asking them to pass on a message to a stockbroker in Boston. The catch was they were only allowed to forward the letter to someone they knew personally and whom they thought might be socially "closer" to the stockbroker.
Milgram found that most of the letters that made it to their destination did so in about six steps, leading to the now famous notion of small world networks or the "six degrees of separation" - the idea that we are all linked to one another through short chains of acquaintances. The notion became popularised by the game in which you had to link any chosen actor back to Kevin Bacon through a list of co-stars in various movies. Mathematicians have a similar game using co-author links to work out their collaboration distance to the prolific (network theorist, among other things) Paul Erdős.
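Counting someone’s degrees of separation is just a shortest-path search over the acquaintance network – breadth-first search finds the shortest chain. A toy sketch (the network and names are entirely made up for illustration):

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Length of the shortest chain of acquaintances between two
    people, found by breadth-first search; None if no chain exists."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, steps = queue.popleft()
        if person == target:
            return steps
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, steps + 1))
    return None

# An invented five-person acquaintance network.
network = {
    "Ann": ["Bob", "Cas"],
    "Bob": ["Ann", "Dee"],
    "Cas": ["Ann", "Dee"],
    "Dee": ["Bob", "Cas", "Eve"],
    "Eve": ["Dee"],
}
print(degrees_of_separation(network, "Ann", "Eve"))    # 3
```

The Kevin Bacon and Erdős games are exactly this search run over co-star and co-author graphs.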
However it was not until the late 1990s that this kind of connectivity phenomenon became understood as a consequence of power-law or fractal system organisation. Various researchers such as Duncan Watts and Steven Strogatz worked on bits of the puzzle. But the central figure has been physicist Albert-László Barabási.
A network (also known as a “graph”) is a model of a communicating world. There are links (or “edges”) connecting nodes. So network theory, being about patterns of relating, is good for modelling knowledge structures, social structures and ecological structures. Complex worlds. But if the universe itself is considered as a pattern of interactions, of connecting events, then there is no reason why network theory cannot also be a powerful model of apparent simplicity, of physical reality itself.
Certainly networks are congenial to our organic approach to causality. A network models the local and the global in interaction. There are the particular or individual connections and then the holistic causal organisation of the network, its behaviour as a whole.
Of course, many models of networks are mechanical. High level behaviour only “emerges” as a consequence of bottom-up or “feed-forward” construction. However once cybernetic feedback is allowed into the story - or better yet, long term memories and anticipation, as with the neural networks of Stephen Grossberg and others – we do indeed get a properly organic causality. There is a dichotomisation of the causality and thus a local~global interaction.
Anyway, network theory has been around a long time and as a “geometry of connectivity” is obviously of great ontological importance. So it is surprising that it took quite so long for scale-free networks to be fully described.
Scale-free networks are created by random connections – the randomness of an ever-expanding Extremistan, that is. In the 1950s, Erdös and Rényi worked out the maths of networks with Gaussian randomness. They set up a circle of nodes and then assigned an average number of connections to each of them. Some nodes had more, some less, but overall the variation was constrained to a bell curve distribution.
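The Erdős–Rényi recipe can be sketched in a few lines; the point to notice is how the resulting degrees bunch tightly around their mean, just as a bell curve demands (node count and wiring probability are illustrative choices):

```python
import random

def erdos_renyi(n, p, seed=1):
    """Wire up n nodes all at once: each possible pair is linked
    independently with probability p - the Erdos-Renyi recipe."""
    rng = random.Random(seed)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                degree[i] += 1
                degree[j] += 1
    return degree

# Expected degree is about (n - 1) * p = 10 links per node.
degrees = erdos_renyi(n=2000, p=0.005)
mean = sum(degrees) / len(degrees)
# No node strays far from the average - a Mediocristan of links.
print(round(mean, 1), min(degrees), max(degrees))
```

Even the best-connected node here only has a couple of dozen links - there are no hubs, because the static, all-at-once wiring boxes the variation into a bell curve.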
However as Barabási notes, this is a topological or static approach to randomising networks. It starts with a fixed number of nodes and wires them up in one go. He instead decided to explore a more active, semiotic and growth-based approach.
His inspiration came from analysing the connectivity patterns of the internet - the way that pages link to pages to form like-minded communities. As any surfer knows, there are many pockets of common interest on the web. One study by Jon Kleinberg of 100 million web pages claimed that they could be parsed by subject matter into about 50,000 identifiable clusters, although some of the groupings were fairly esoteric, such as a cluster of pages devoted to tracking oil spills off the coast of Japan.
The groupings were of course loosely bordered. A web-site builder worried about ecological disasters might also have interests in rock-climbing and Roman history. So "randomly" there would be some surprising connections made across the breadth of the web. And within any community, some sites were always more prominent than others. These acted like hubs, with many other pages hanging off them, while other sites were isolated and rarely visited.
The pattern looked much like that for the world’s air transport network. A few airports, such as Heathrow, Chicago and Frankfurt, are really big. Nearly everyone passes through them at some time even if they are heading on to other places. But many more are remote airfields which only get visited by people who really want to get there.
This kind of network is multi-level - a nested hierarchy. There are top-level airports that connect far and wide, then the regional airports that connect in a more locally selective fashion.
It seems logical that an air transport or information network might be organised in this way. But what Barabási and others managed to show was how such networks would spontaneously self-assemble through “randomness”.
That is, in a network with scale – in which the global and local levels are in free interaction and thus able to arrive at their natural equilibrium balance – then a scale-symmetric or fractal pattern of connectivity must emerge. It does not have to be designed in but self-organises through its own devices.
assuming free growth and preference
The modelling of scale-free networks has to make a few assumptions. First, there must be free additive growth. The number of available nodes must always be increasing. That is, in our terms, the local bound - the universe of distinct locales - is always expanding.
This is what we see with the internet or global air travel. New pages and new flight destinations are being added all the time (though the internet has grown up over a decade and air travel over the course of nearer a century). So a scale-free network is a thermodynamically open rather than closed system. If the free addition of further nodes halted, the system would silt up. It would in effect become boxed and so would eventually be expected to turn Gaussian.
The second key assumption is semiotic. New nodes will show a preference in where they attach. Even “just by chance”, they will be more likely to attach to large hubs than small isolated nodes. This has been dubbed the "Matthew Effect" (Matt. 13:12), because the rich get richer. Or as they also say, them that has, gets.
It makes intuitive sense. When a new node strays into the network looking for a place to attach, it is just more likely to strike something large and obvious than something small and isolated. And so the rich – the hubs of a network like a Bill Gates or Heathrow Airport – just keep getting richer with no upward boundary on their scale. The larger a hub becomes, the more likely it will be that it takes the lion’s share of the rain of new nodes.
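The rich-get-richer rule is simple enough to simulate directly. The sketch below grows a network one node at a time, attaching each newcomer to an existing node chosen with probability proportional to its current degree (node count and seed are arbitrary):

```python
import random

def preferential_attachment(n_nodes, seed=2):
    """Grow a network node by node; each newcomer attaches to an
    existing node picked with probability proportional to its degree
    (the rich-get-richer, or Matthew Effect, rule)."""
    rng = random.Random(seed)
    # Start from a single link between nodes 0 and 1.
    # Each node appears in `targets` once per link it holds, so a
    # uniform pick from the list is automatically degree-weighted.
    targets = [0, 1]
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        old = rng.choice(targets)
        targets.extend([new, old])
        degree[new] = 1
        degree[old] += 1
    return degree

deg = preferential_attachment(5000)
hub = max(deg.values())
avg = sum(deg.values()) / len(deg)
print(hub, round(avg, 2))  # the biggest hub dwarfs the typical node
```

The average node ends up with about two links, while the largest hub accumulates dozens - no bell curve constrains its growth.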
This logical principle also explains what is known as the prime mover advantage. Being among the first nodes in a network makes it more likely you will gather the early weight of connections and so evolve to become a future hub, whereas latecomers to the party will struggle to make an impact (though accidents do happen and it is possible for some new node to luck out and quickly gather enough links to grow away and become a hub).
So a scale-free network is based on just a few minimal assumptions about an open flow of new nodes and their natural habits of attachment. Yet the model is useful because it delivers some mathematically precise characterisations of ideal network behaviour.
For a start, the pattern of connectivity is indeed power-law or fractal. In a scale-free network, there are hubs of every scale, of every size. Of course, there are only a few really large ones. But in power-law fashion, the fall-off in largeness is constant not exponential. So there is no actual limit on largeness (except obviously no hub can be larger than the system that contains it).
The balance between bottom-up and top-down connectivity can also be precisely measured. In a scale-free network, the probability of a node being connected to k other nodes is proportional to 1/kⁿ. In real-life examples like the internet, n is usually between 2 and 3. With n equal to 2, this means that any particular node chosen at random is four times as likely to have half a given number of incoming links as to have that number itself.
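That factor of four drops straight out of the power law: halving k multiplies the probability by 2ⁿ, which is 4 when n is 2 (the values of k below are arbitrary):

```python
# For P(k) proportional to 1/k**n, compare a node with k links
# against one with only k/2 links. The ratio P(k/2) / P(k) = 2**n,
# independent of k - the same factor at every scale.
def ratio(k, n):
    return (1 / (k / 2) ** n) / (1 / k ** n)

print(ratio(100, 2))  # 4.0 - four times as likely with half the links
print(ratio(100, 3))  # 8.0 - steeper exponent, steeper hierarchy
```

Note that the ratio does not depend on k at all - that scale-independence is exactly what makes the network fractal.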
That is, following the flip side of power-law statistics, it is far more likely you will find yourself a small player than a large hub. Like the chances of being born one of the world’s impoverished billions or one of the Forbes 400 list of billionaires, in a scale-free realm you are more likely to find yourself looking up from the bottom of the pile than down from the top.
The chaos-savvy will recognise that this kind of connectivity rule is like that found in “edge of chaos” models such as Kauffman’s Boolean nets. Scale-free networks also have to be wired with the right balance of input to output to prevent them becoming either over-connected or under-connected.
If connectivity falls too low in a scale-free network, the whole system would simply break up into isolated pockets of activity. You would get the equivalent of steam - the vapour phase of water - where everything has become too hot and fallen apart into localised bits. Or indeed the equivalent of hyperbolic geometry, where every path leads away from every other, causing a fragmenting divergence.
And if connectivity is tuned too high, then like water freezing to ice or the closed paths of hyperspheric geometry, the network will close in on itself and lock up solid. Instead of the rich getting richer, you would now get the “winner takes all” story of a star network topology, with every node attached to the one central hub.
Like Goldilocks, scale-free networks self-organise to find the flat middle ground where their connectivity is neither too hot, nor too cold, but just right. Where their opposing local and global tendencies – the hyperbolic tendency to diverge and hyperspheric tendency to converge – are properly mixed and so cancel to create a flat equilibrium balance over all scales.
slithering sand piles
Power-law statistics and scale-free behaviour explain the organisation of many complex natural systems, from the internet and airports to the spread of disease epidemics, the branching of rivers, the frequencies of commonly used words and the growth of towns.
As should by now be obvious, these are all examples of unboxed or freely expanding systems. They are freely expanding in scale. And because their scale is always being “thermalised” – allowed to mix and reach equilibrium – they will naturally become scale symmetric. Their middle ground is being disordered – made random. But it is the randomness of organic Extremistan not the more familiar mechanical Mediocristan.
The rightful maths of Extremistan now has many faces. As well as fractals, power-laws and scale-free networks, there are dissipative structures, strange attractors, phase transitions, criticality, autocatalytic nets, cellular automata, renormalisation theory, universality, percolation, fluid dynamics, the edge of chaos, synchronicity – dozens of names for essentially the same thing. But often quite the wrong causal messages get read into these chaotic or non-linear models because they are interpreted from a mechanical perspective rather than an organic one.
So let’s briefly consider another couple of standard examples of deterministic chaos - middle ground disorder - to give a feel for the deeper logic at play.
A favourite of chaos studies is Per Bak’s sandpile. Never mind that real sandpiles do not act in quite the scale-free way modelled - grains of sand have too much friction. To make a sandpile work the way it should, it has to be constructed using rice grains trapped between upright glass plates. Yet still the sandpile is a memorable image of fractal behaviour.
The idea is that there is a heap of sand being steadily fed by a trickle of new grains from above (so it is set up as an open or constantly expanding system). As the pile builds up, its sides become too steep to be stable and there are sand avalanches that bring the slope back to an equilibrium balance – a local~global balance between the global force of gravity and the local forces of particle friction.
Anyway the model says these avalanches can be of any scale and so will follow a power-law distribution. Mostly each freshly falling grain will only trigger small micro avalanches. But randomly, a really large landslip can occur.
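A minimal version of Bak’s model (the Bak–Tang–Wiesenfeld sandpile on a small grid; grid size and grain count here are arbitrary) shows the pattern directly - most drops trigger little or nothing, yet now and then an avalanche rips across the whole pile:

```python
import random

def sandpile(size=20, grains=20000, seed=3):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time on a
    grid; any cell holding 4+ grains topples, passing one grain to
    each neighbour. Grains toppling off the edge leave the (open)
    system. Returns the avalanche size (topplings) per dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topples = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
quiet = sum(1 for s in sizes if s == 0)
# Most drops cause no avalanche at all, but the largest slip is huge.
print(quiet, max(sizes))
```

Tally the avalanche sizes and the familiar power-law fall-off appears: swarms of tiny slips, a thinning but never-quite-vanishing tail of big ones.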
The “marvel” of the system is that small causes can have unpredictable, or at least non-linear, results. This was also what made people goggle about the famous butterfly wing effect – where due to chaos, the single flick of a butterfly’s wing in Brazil could some time later be considered the cause of a tornado in Texas.
However this kind of “from small acorns, mighty oak trees can grow” take-home message was quite misleading. It was the view from mechanical causality – from RAMML - where it is axiomatic that effects are in direct or linear proportion to their causes. For the mechanist, it seems quite “random”, indeed alarmingly chaotic, that small triggering events can sometimes have small effects, yet at other times produce cataclysmic convulsions.
Organic logic turns these expectations around because we now take a holistic view of a system’s causality. The middle ground behaviour of any system is axiomatically the result of local and global forces in interaction. The lower bound constructs and the upper bound constrains. So no longer can the whole burden of causal explanation come to rest on just one grain of sand, or one random butterfly flap. The causality is shared between the particular events and their general contexts.
There is still the “wonder” that some very small and localised event can appear to be the trigger for a monstrously out-sized result. But the total causal picture now includes all the other sand grains, all the other flapping butterflies, that did not seem to tip the balance. Every scale-free system will have its hubs. But theory also predicts an equal weight of largely isolated activity.
bubbling Bénard cells
Another icon of chaos theory that typically has quite the wrong take-home message is the Bénard cell.
Take a shallow pan of oil and heat it. Soon it will develop rolling convection currents. Squeezed together by the sides of the pan, these ring-like vortices will then get squished to make a vaguely hexagonal pattern of cells.
Mechanical logic paints this as an example of “order for free”. Out of the disorderly jostlings of heated oil molecules emerges magically a high-level pattern of orderly flow. This hexagonal order pays for itself because it dissipates the heat faster.
What is happening is that when the pan is only gently warmed, it is each heated molecule for itself. The molecules chaotically barge about and only eventually arrive at the surface where they can shed their excess heat. But at a certain critical temperature there is a phase transition. A symmetry breaking. The jostling molecules are entrained into a coherent flow, the rolling convection current which moves them smoothly from the heated bottom to the cooler surface.
This mechanism is a more efficient way of disposing of the system’s heat and so obeys the second law of thermodynamics. The law says disorder must generally prevail, but you can still have local examples of order, such as the hexagonal Bénard cells, so long as overall they contribute to the wider disordering of the world, the dissipation of energy as entropy or waste heat.
This is again a causal surprise apparently explained. It is a puzzle that order in the shape of life and mind can appear in the midst of a universe ruled generally by a disintegration into thermal disorder. A Bénard cell shows how order can arise spontaneously within wider or more global disorder as a localised dissipative structure.
This is all certainly true. But heat a Bénard cell just a little more and its orderly hexagonal cells soon break up and you get instead turbulent – or chaotic – disorder again. The order turns out to exist only for a very finely controlled balance of heat dissipation.
Organic logic would prompt us to look at a Bénard cell in different light. What is going on now? Well, we start with the gentle heating and localised thermal motions of oil molecules. We reach the critical point and suddenly the oil molecules first start to feel “the shape of the container”. They are moving about enough for the walls of the pan to begin to exert a constraining or boxing effect. The result is the lining up of the molecules into rolling convection currents. We have the first inklings of a local~global system in place.
But it is still only the start of the story as we have simply jumped from a locally-dominated system – one where the only pattern is the local jostlings – to a globally-dominated one, where the molecules are now entrained to some single global scale of orderly behaviour.
We have a dichotomy being created in stages in other words. First the local boundary, then the global boundary. Next must come the free mixing of the dichotomised. We must see the eruption of convection currents on every scale – a power-law distribution of turbulent flows - to complete the natural causal trajectory to full blown "systems-hood".
So when the hexagonal cells break down into chaotic boiling motion with a little extra heating, this third stage is actually the take-home message. Out of dichotomisation erupts a flat, scale-symmetric, middle ground. The system has completed its transition when some new feature – in this case cooling convection patterns – has made its power-law appearance over all possible scales.
And this result obeys the second law of thermodynamics. The hexagonal cells – the simple global order – is more efficient at dissipating heat than the simple local motions. But a complexly developed or fractal pattern of turbulence dissipates the maximum possible degree of heat. It “expands” the system the fastest (throughput being equivalent to expansion in open systems ontology).
and back to organic hierarchies
Well, we have spent a lot of time here on the possible mathematical representations of the middle ground of organic hierarchies. It is good news that there are so many ready-made models lying about, because a logic needs to be fully fleshed out.
We can think of organic logic as itself a hierarchy. At the most general (and indeed vague) level we start with some broad notions about organicism, holism or systems science. We have a few slogans and mental images to do with a complex causality, such as the conviction that the whole is more than the sum of its parts, that feedback, mutuality and process are important, that purpose and meaning must be modelled as well as information and substance.
Then the purpose of this site is to distil this general organic understanding down to a single crisp causal mechanism – a particular logic that describes how any event, any development, must happen.
But in-between the general idea and the particular mechanism there must be a variety of concrete models. We must have a middle stratum of useful mathematics that gives various glimpses into real developed systems behaviour. For instance, some models like scale-free networks will be good for accounting for steadily expanding worlds. Others like Ising spin glasses are better at modelling sudden shifts in a system, such as a phase transition.
So now we can get back to the logic of hierarchies as a whole – talking about not just their middle ground thermodynamics but also their constructing and constraining upper and lower bounds. Thus to resume the organic hierarchy thread........[under construction]