
=Power Law=

toc Read here about power laws, a critical concept for working with scales: exponential or multiplicative growth accounts for many of the differences between scales.

=Overview=

A power law is a quantitative description of a relationship in which one quantity varies as a power of another; together with exponential growth, it describes the multiplicative relationships between phenomena that concern us here.
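
One way to make the definition concrete: on log-log axes a power law y = c·x^a plots as a straight line whose slope is the exponent a. A minimal numeric sketch (c and a are arbitrary illustrative values, not from any dataset):

```python
import math

# A power law y = c * x**a is a straight line on log-log axes:
# log(y) = log(c) + a * log(x).
c, a = 3.0, 2.5
xs = [1.0, 10.0, 100.0, 1000.0]
ys = [c * x**a for x in xs]

# The slope between successive points in log-log space equals the exponent a.
slopes = [
    (math.log(ys[i + 1]) - math.log(ys[i])) / (math.log(xs[i + 1]) - math.log(xs[i]))
    for i in range(len(xs) - 1)
]
print(slopes)  # each slope ≈ 2.5
```

An exponential relationship, by contrast, gives a straight line on semi-log (linear-log) axes; this distinction matters in the criticism section below.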

Power laws are commonly invoked to describe behavior mathematically in physics, biology, statistics and more, but of more relevance to us is the fact that a human life is exponential by its very nature. Everyone without exception was born as a fertilized egg and embarked on a journey to adulthood as a free blastocyst, attached blastocyst, embryo and then fetus, doubling in size over and over again (see human body). We hear logarithmically: a voice that sounds twice as loud or a note that sounds twice as high-pitched is each greater by a multiplicative factor, and Dehaene reports that the human (and animal) ability to estimate numbers is also logarithmic in nature.
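
The logarithmic character of hearing can be made concrete with a small sketch. Each perceived octave is a doubling of frequency, so the nominal human hearing range (the conventional 20 Hz to 20 kHz figures) spans about ten doublings, not a linear span:

```python
import math

# Hearing is logarithmic: each perceived octave doubles the frequency.
# The nominal human hearing range of 20 Hz to 20 kHz therefore spans
# about log2(20000 / 20) ≈ 10 octaves (doublings).
low_hz, high_hz = 20.0, 20000.0
octaves = math.log2(high_hz / low_hz)
print(round(octaves, 2))  # ≈ 9.97
```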

Lastly, even the static size that a healthy person attains in adulthood can be considered a limiting case of a type of power law, where a resource bounds the growth: when the resource is consumed, the growth ceases. If r represents the growth rate and K represents the steady state attained by the system, then the resulting r/K interaction traces a logistic, or sigmoid, curve.
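
A minimal sketch of this r/K behavior, using the closed-form logistic curve (the values of r, K and the initial size are arbitrary illustrative choices):

```python
import math

# Resource-limited (logistic) growth: r is the growth rate and K the
# steady state (carrying capacity) that the system approaches.
def logistic(t, r=1.0, K=100.0, n0=1.0):
    """Closed-form logistic curve n(t) for initial size n0."""
    return K / (1 + (K / n0 - 1) * math.exp(-r * t))

# Early on, growth is near-exponential; later it saturates at K.
print(round(logistic(0), 2))   # 1.0 (initial size)
print(round(logistic(20), 2))  # ≈ 100.0 (approaches K)
```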

=List Of Power Laws=

The following is a list of evidence-based, measured relationships in the natural world that are described by an exponential or power function:
 * Born exponent describing electron-electron repulsion
 * Chemical chain reactions generally involve exponentially increasing reaction rates, such as those that produce chemical explosions.
 * Constructal law
 * Differential energy spectrum of cosmic-ray nuclei
 * Fish weight vs. length allometric models
 * Force and potential in simple harmonic motion
 * Fractals such as the Julia set
 * Gamma correction relating light intensity to voltage
 * Initial mass function
 * Kepler's third law
 * Kleiber's law relating animal metabolism to size
 * Levy flight
 * Lighthill's formula relating acoustical noise to jet energy
 * Newtonian Electrostatic potential and Gravitational potential inverse-square laws
 * Pareto principle also called the "80-20 rule"
 * Prime number occurrence probability
 * Ramberg-Osgood stress-strain relationship
 * Richardson relationship of war severity to frequency
 * Second-order phase transitions involving critical exponents
 * Stevens psychological response
 * Surface area to volume relationship (2nd/3rd power relation to length)
 * Pareto distribution (continuous)
 * Zeta distribution (discrete)
 * Yule–Simon distribution (discrete)
 * Student's t-distribution (continuous), of which the Cauchy distribution is a special case
 * Zipf's law of distributions (for example, a city's population is inversely proportional to its rank in population) and its generalization, the Zipf–Mandelbrot law (discrete)
 * Lotka's law, related to Zipf's law of distributions
 * The scale-free network model
 * Bibliograms
 * Degree distribution of the webgraph, describing the hyperlink structure of the WWW.
 * Gutenberg–Richter law of earthquake magnitudes
 * Horton's laws describing river systems
 * Richardson's Law for the severity of violent conflicts (wars and terrorism)
 * Population of cities
 * Numbers of religious adherents
 * Net worth of individuals
 * Frequency of words in a text
 * Pink noise
 * 90–9–1 principle on wikis
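
Several of the entries above (city populations, word frequencies, net worth) follow Zipf-type rank-size behavior. A minimal sketch, with a purely hypothetical top city size: under an exponent-1 Zipf distribution, the size of the r-th ranked item is inversely proportional to its rank, so rank times size is roughly constant.

```python
# Zipf's law sketch. The top-ranked size is an illustrative assumption.
largest = 8_000_000  # hypothetical population of the top-ranked city
populations = [largest / rank for rank in range(1, 6)]
products = [rank * p for rank, p in enumerate(populations, start=1)]
print(products)  # every rank * population product equals 8,000,000
```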

=Table of power laws=


 * Exponent || Type || Description || Source ||
 * — || Prime number occurrence || For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately x/ln(x), in the sense that the ratio of π(x) to that fraction approaches 1 as x tends to infinity. In other words, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. ||  ||
 * 0.25 || Plant body lengths || Plant body lengths scale as M to the 1/4, for species whose mass spans 20 orders of magnitude. || Smil p.85 ||
 * 0.75 || Plant growth || Plant growth rates scale as mass to the 3/4 power, for species whose mass spans 20 orders of magnitude. || Smil p.85 ||
 * 2 || Wave intensity || Wave intensity is proportional to amplitude squared: a wave twice as high carries four times the intensity (familiar from surfing). Joule's first law, also known as the Joule effect, has the same form: the heat generated by a constant current I flowing through a conductor of electrical resistance R for a time t is proportional to the square of the current. In the energy balance of groundwater flow (see also Darcy's law) a hydraulic equivalent of Joule's law is used. || learning center course on quantum mechanics ||
 * 2 || Gravitation || Inverse square law ||
 * n || Stress || Creep is plastic deformation that increases over time under sustained loading, generally at elevated temperatures. Stress rupture is the continuation of creep to the point where failure takes place. Quantitatively, at a given temperature, the creep rate varies as a power of the stress (a linear relationship on log-log axes). || Rules of Thumb for Mechanical Engineers p. 320 (pdf p.331) ||
 * 2 || Gravitational orbit || Kepler's third law: the square of the orbital period is proportional to the cube of the semi-major axis of the ellipse. || Mechanics, Encyc. Britannica ||
 * 2 || Surface || Surface area scales as the square of the radius. ||
 * 3 || Mechanical power of swimming || Goes up with the cube of velocity || Smil p.103 ||
 * 3 || Mill power || Goes up with the cube of speed || Smil p.184 ||
 * 3 || Stream capacity || Stream capacity (ability to move total bedload) goes up with the cube of the velocity (doubled speed means eight times more load) || Smil p.44 ||
 * 3 || Volume || Volume scales as the cube of the radius. ||
 * 4 || Sky color || Sky color is due to molecular scattering that diffuses the incoming radiation with efficiency inversely proportional to the fourth power of the wavelength: the blue light at 400 nm is scattered more than nine times as much as the red one at 700 nm. || Smil p.30 ||
 * 4 || Stefan–Boltzmann law || A star's radiant flux (F, the total energy radiated per unit of area) is proportional to the fourth power of temperature. Applies to biological bodies, too: radiation loss or gain is proportional to the product of body surface area and the difference of fourth powers of ambient temperature and body surface temperature. || Smil p.26, 131 ||
 * 4 || Step work || In people and mammals (?) the work needed to transition from one stance limb to the next increases with the fourth power of step length. || Smil p.135, cites Donelan, Kram and Kuo 2002 "Mechanical work for step-to-step transitions is a major determinant of the metabolic cost of human walking" ||
 * 5 || Cylinder flow || The flow around a buoyant cylinder within a rapidly rotating core has a component proportional to the inverse fifth power. This steady flow, in a cylinder of finite length rotating rapidly at right angles to the earth's gravity and containing a rigid cylindrical float, is found under the assumption of small viscosity. || On the flow around a buoyant cylinder within a rapidly rotating horizontal cylindrical container by Gans ||
 * 5 || Lightning from cumulonimbus clouds || Lightning power goes up with the fifth power of the cloud size: the fifth power of the cold-cloud depth is proportional to the stored static electric energy and to the charging rate in the convective cloud. || Smil p.46 ||
 * 5 || Non-ferroelectric smectic-A response time || The response time of a ferroelectric is typically proportional to the inverse square of the applied voltage, or even worse, to the inverse first power of the voltage; whereas a non-ferroelectric smectic-A device, which in certain other respects is comparable and exhibits long-term storage capability, has a response time typically proportional to the inverse fifth power of the voltage. || US Patent 4655550 - Ferro-electric liquid crystal display with steady state voltage on front electrode ||
 * 5 || Zero point density fluctuations || We discuss the density fluctuations of a fluid due to zero point motion, assuming a linear dispersion relation. The differential cross section for light scattering by the zero point density fluctuations is proportional to the fifth power of the light frequency. || Quantum Density Fluctuations in Classical Liquids in Phys. Rev. Lett. 102, 030602 (2009) by L. H. Ford, N. F. Svaiter ||
 * 6 || Electric dipole-dipole energy transfer || Nonradiative electric dipole-dipole energy transfer between ions occurs when the ions are close together, that is, when they are doped into the same host. The interaction is mainly an electric dipole-dipole effect, and its magnitude is proportional to the inverse sixth power of the distance between the ions. ||
 * 6 || NOE || The Nuclear Overhauser Effect (NOE) is the propagation of a disequilibrium perturbation among nuclei. At thermal equilibrium in a strong magnetic field there is a slight excess of nuclei in the lower-energy state (aligned with the magnetic field) and a slight depletion of nuclei in the higher-energy state (opposed to the magnetic field). If this equilibrium is perturbed for one group of nuclei (corresponding to a peak in the 1H spectrum), the perturbation is propagated to nearby nuclei in the molecule. The magnitude of the NOE is proportional to the inverse sixth power of the distance between two nuclei, but only for very short times between the perturbation and the measurement of the effect on other nuclei. || 1D Transient NOE on the Bruker DRX-500 and DRX-600, Reference: Stott, K., Stonehouse, J., Keeler, T.L. and Shaka, A.J., J. Amer. Chem. Soc. 1995, 117 (14), pp. 4199-4200. ||
 * 6 || Stream competence || Stream competence, the maximum movable weight of individual bedload pieces, varies with the sixth power of water velocity (a flow of 4 m/s can carry stones 64 times more massive than one of 2 m/s). "According to Brahm's law (sometimes called Airy's law), the mass of objects that may be flown away by a river is proportional to the sixth power of the river flow speed." || Smil p.44 ||
 * 7 || Van der Waals forces || Van der Waals forces between molecules in rarified gases are proportional to the inverse seventh power of separation, so they act only over short distances. The primary type of attractive force in an argon gas, they include Debye forces and London dispersion forces, and result from the interaction between induced dipoles. ||
 * 7.5 || Corrosion in river flow || Corrosion of iron and steels by acid river water is proportional to seven-eighth power of the flow velocity. || Corrosion of Metals by Acid River Water by Saburo, National Institute of Informatics ||
 * 5--12 || Born exponent || The Born exponent models electron-electron repulsion. Typically a number between 5 and 12 (also said to vary between 6 and 12), it depends on the principal quantum numbers of the electrons involved in the repulsion, and is determined experimentally by measuring the compressibility of the solid, or derived theoretically. ||
 * 8 || Hydrogen electric dipole transition speed || The speed of the electric dipole transition (2p→1s) of the hydrogen atom and hydrogen-like atoms is inversely proportional to the 8th power of time in the leading order. The results are of academic interest only, since the effects are too small to permit any experimental test at the moment. || The New Decay Mode in the Electric Dipole Transitions : Inversely Proportional to the Eighth Power of Time, by Gi-Chol, Hikoya, Yoshio, National Institute of Informatics ||
 * 8 || Jet noise || Lighthill's eighth power law states that the acoustic power radiated by a jet is proportional to the eighth power of the jet speed; the noise of any type of jet engine is accordingly strongly related to the velocity of the exhaust gases. || Wikipedia, turbofan, lighthill ||
 * 9 || Carbon flow from ocean || Flow of carbon from the upper ocean to the atmosphere is proportional to the ninth power of the mass of carbon in the upper ocean. || Roger Revelle ||
 * 13 || Stellar carbon cycle reaction rate || The energy-generation reactions for fusion in stellar interiors are extremely sensitive to temperature. This is especially true for the carbon cycle in main sequence stars (in which hydrogen is converted into carbon). Depending on the exact value of the core temperature (T), the carbon cycle reaction rate is proportional to anything from T^13 in hot massive O stars to T^20 in stars like the Sun. These strong temperature dependences provide the thermostats that maintain the equilibrium of these stars. || Power power, New Scientist magazine, 10 February 1996 by Mike Dworetsky, Juan I Collar, Hugh Barton, R. Hoskin, D. M. Mcrae and Nigel Cook ||
 * 20 || Stellar carbon cycle reaction rate || Carbon cycle reaction rate in stars like the Sun, proportional to T^20 (see the T^13 entry above). || Power power, New Scientist magazine, 10 February 1996 ||
 * 40 || Stellar triple-alpha reaction rate || In evolved stars such as red giants and supergiants, the triple-alpha reaction rate is proportional to T^40. || Power power, New Scientist magazine, 10 February 1996 ||
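
Two of the rows in the table above can be verified with one-line arithmetic:

```python
# Stream competence: movable mass ∝ v**6, so doubling velocity from
# 2 m/s to 4 m/s multiplies the movable mass by (4/2)**6 = 64.
ratio_mass = (4.0 / 2.0) ** 6
print(ratio_mass)  # 64.0

# Molecular (Rayleigh) scattering ∝ wavelength**-4: blue light at 400 nm
# is scattered (700/400)**4 ≈ 9.4 times as much as red light at 700 nm,
# matching the "more than nine times" figure in the sky-color row.
ratio_scatter = (700.0 / 400.0) ** 4
print(round(ratio_scatter, 1))  # 9.4
```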

=Criticism of Power Laws=

Cosma Rohilla Shalizi posted a critique of power law distributions on his blog at http://www.cscs.umich.edu/~crshalizi/weblog/491.html. Excerpts below:

Unsound Claims for Power Law Distributions
I have [|long had a thing] about just how unsound many of the claims for the presence of [|power law distributions] in real data... [|Aaron Clauset], CRS and [|M. E. J. Newman], "Power-law distributions in empirical data", [|arxiv:0706.1062], with [|code] available in Matlab and R; forthcoming (2009) in SIAM Review.

//Abstract//: Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the tail of the distribution. In particular, standard methods such as least-squares fitting are known to produce systematically biased estimates of parameters for power-law distributions and should not be used in most circumstances. Here we describe statistical techniques for making accurate parameter estimates for power-law data, based on maximum likelihood methods and the Kolmogorov-Smirnov statistic. We also show how to tell whether the data follow a power-law distribution at all, defining quantitative measures that indicate when the power law is a reasonable fit to the data and when it is not. We demonstrate these methods by applying them to twenty-four real-world data sets from a range of different disciplines. Each of the data sets has been conjectured previously to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.

Main points:

> (Yes, one could imagine more elaborate semi-parametric approaches to this problem. Feel free to go ahead and implement them.)

> If the chance of getting data which fits the estimated distribution as badly as your data fits your power law is, oh, one in a thousand or less, you had better have some other, //very compelling// reason to think that you're looking at a power law.

> If you use sensible, heavy-tailed alternative distributions, like the log-normal or the Weibull (stretched exponential), you will find that it is often very, very hard to rule them out. In the two dozen data sets we looked at, all chosen because people had claimed they followed power laws, the log-normal's fit was almost always competitive with the power law, usually insignificantly better and sometimes substantially better. (To [|repeat] a joke: Gauss is not mocked.)

> For about half the data sets, the fit is substantially improved by adding an exponential cut-off to the power law. (I'm too lazy to produce the necessary equations; read the paper.) This means that there is a characteristic scale after all, and that super-mega-enormous, as opposed to merely enormous, events are, indeed, exponentially rare. Strictly speaking, a cut-off power law should //always// fit the data better than a pure one (just let the cut-off scale go to infinity, if need be), so you need to be a little careful in seeing whether the improvement is real or just noise; but often it's real.

> Sometimes, though, you do care. Maybe you want to make a claim which depends heavily on just how common hugely large observations are. Or maybe you have a particular model in mind for the data-generating process, and that model predicts some particular distribution for the tail. Then knowing whether it really is a power law, or closer to a power law than (say) a stretched exponential, actually matters to you. In that case, you owe it to yourself to do the data analysis right. You also owe it to yourself to think carefully about whether there are //other// ways of checking your model. If the //only// testable prediction it makes is about the shape of the tail, it doesn't sound like a very good model, and it will be intrinsically hard to check it.
 * 1) **Lots of distributions give you straight-ish lines on a log-log plot.** True, a Gaussian or a Poisson won't, but lots of other things will. Don't even begin to talk to me about log-log plots which you claim are "piecewise linear".
 * 2) **Abusing linear regression makes the baby [|Gauss] cry.** Fitting a line to your log-log plot by least squares is a bad idea. It generally doesn't even give you a probability distribution, and even if your data do follow a power-law distribution, it gives you a bad estimate of the parameters. You cannot use the error estimates your regression software gives you, because those formulas incorporate assumptions which directly contradict the idea that you are seeing samples from a power law. And no, you cannot claim that because the line "explains" (really, describes) a lot of the variance that you must have a power law, because you can get a very high R^2 from other distributions (that test has no "power"). And this is without getting into the additional errors caused by trying to fit a line to binned histograms. It's true that fitting lines on log-log graphs is what Pareto did back in the day when he started this whole power-law business, but "the day" was the //1890s//. There's a time and a place for being old school; this isn't it.
 * 3) **Use maximum likelihood to estimate the scaling exponent.** It's fast! The formula is easy! Best of all, //it works//! The method of maximum likelihood was invented in 1922 [parts [|1] and [|2]], by [|someone] who studied statistical mechanics, no less. The maximum likelihood estimators for the discrete (Zipf/zeta) and continuous (Pareto) power laws were worked out in [|1952] and 1957 (respectively). They converge on the correct value of the scaling exponent with probability 1, and they do so efficiently. You can even work out their sampling distribution (it's an [|inverse gamma]) and so get exact confidence intervals. Use the MLEs!
 * 4) **Use goodness of fit to estimate where the scaling region begins.** Few people pretend that the //whole// of their data-set follows a power law distribution; usually the claim is about the right or upper tail, the large values over some given threshold. This ought to raise the question of where the tail begins. Usually people find it by squinting at their log-log plot. Mark Handcock and James Jones, in [|one of the few worthwhile efforts] here, suggested using [|Schwarz's information criterion]. This isn't bad, but has trouble with continuous data. Aaron devised an ingenious method which finds the empirically-best scaling region, by optimizing the Kolmogorov-Smirnov goodness-of-fit statistic; it performs slightly better than the information criterion.
 * 5) **Use a goodness-of-fit test to check goodness of fit.** In particular, if you're looking at the goodness of fit of a //distribution//, use a statistic meant for //distributions//, not one for regression curves. This means forgetting about R^2, the fraction of variance accounted for by the curve, and using the Kolmogorov-Smirnov statistic, the maximum discrepancy between the empirical distribution and the theoretical one. If you've got the right theoretical distribution, the KS statistic will converge to zero as you get more data (that's the [|Glivenko-Cantelli theorem]). The one hitch in this case is that you can't use the usual tables/formulas for significance levels, because you're estimating the parameters of the power law from the data. (If you really want to see where the problem comes from, see [|Pollard], starting on p. 99.) This is why God, in Her wisdom and mercy, gave us [|the bootstrap].
 * 6) **Use Vuong's test to check alternatives, and be prepared for disappointment.** Even if you've estimated the parameters of your power law properly, and the fit is decent, you're not done yet. You also need to see whether //other//, non-power-law distributions could have produced the data. This is a [|model selection] problem, with the complication that possibly neither the power law nor the alternative you're looking at is exactly right; in that case you'd at least like to know which one is closer to the truth. There is a brilliantly simple solution to this problem (at least for cases like this) which was first devised by [|Quang Vuong] in a [|1989 Econometrica paper]: use the log-likelihood ratio, normalized by an estimate of the magnitude of the fluctuations in that ratio. Vuong showed that this test statistic asymptotically has a standard Gaussian distribution when the competing models are equally good; otherwise it will almost surely converge on picking out the better model. This is extremely clever and deserves to be much better known. And, unlike things like the fit to a log-log regression line, it actually has the power to discriminate among the alternatives.
 * 7) **Ask yourself whether you really care.** Maybe you don't. A lot of the time, we think, all that's genuinely important is that //the tail is heavy//, and it doesn't really matter whether it decays linearly in the log of the variable (power law) or quadratically (log-normal) or something else. If that's all that matters, then you should really consider doing some kind of non-parametric density estimation (e.g. [|Markovitch and Krieger's] [|preprint]).
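
The maximum-likelihood recipe recommended in point 3 can be sketched as follows for the continuous (Pareto) case, using the estimator from the Clauset–Shalizi–Newman paper, alpha_hat = 1 + n / sum(ln(x_i / x_min)). The synthetic data, seed, and exponent are illustrative assumptions:

```python
import math
import random

# MLE for the continuous power-law (Pareto) exponent, as recommended
# above instead of least-squares on a log-log plot.
def mle_alpha(xs, xmin):
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Sanity check on synthetic Pareto data with known alpha = 2.5,
# drawn by inverse-transform sampling: x = xmin * u**(-1/(alpha-1)).
random.seed(0)
alpha, xmin = 2.5, 1.0
xs = [xmin * random.random() ** (-1.0 / (alpha - 1.0)) for _ in range(100_000)]
print(round(mle_alpha(xs, xmin), 1))  # ≈ 2.5
```

The estimator's standard error shrinks as (alpha - 1)/sqrt(n), so with 100,000 samples the recovered exponent is well within a few thousandths of the true value.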

=List Of Wikipedia Articles=

Wikipedia maintains an index of exponential and logarithmic articles. These were accessed on 20 June 2011 and are reproduced below.

Index to Exponential Articles
This is a **list of exponential topics**, by Wikipedia page. See also list of logarithm topics.
 * Accelerating change
 * Artin–Hasse exponential
 * Bacterial growth
 * Baker–Campbell–Hausdorff formula
 * Cell growth
 * Barometric formula
 * Basic infection number
 * Beer–Lambert law
 * Characterizations of the exponential function
 * Catenary
 * Compound interest
 * De Moivre's formula
 * Doléans-Dade exponential
 * Elimination half-life
 * Error exponent
 * Exponential factorial
 * Euler's formula
 * Euler's identity
 * e (mathematical constant)
 * Exponent
 * Exponent bias
 * Exponential (disambiguation), backoff, decay, dichotomy, discounting, diophantine equation, dispersion model, distribution, error, family, field, formula, function, generating function
 * Exponential-Golomb coding, growth, hierarchy, integral, map, notation, object (category theory), polynomials or Touchard polynomials, sheaf sequence, smoothing, stability, sum, tree
 * Exponential time, Sub-exponential time, Exponential timeline
 * Exponentially equivalent measures
 * Exponentiating by squaring
 * Gaussian function
 * Gudermannian function
 * Half-life
 * Hyperbolic function
 * Inflation, inflation rate
 * Interest
 * Lifetime (physics)
 * Limiting factor
 * Lindemann–Weierstrass theorem
 * List of integrals of exponential functions
 * List of integrals of hyperbolic functions
 * Lyapunov exponent
 * Malthusian catastrophe
 * Malthusian growth model
 * Matrix exponential
 * Moore's law
 * Nachbin's theorem
 * p-adic exponential function
 * Power law
 * Proof that e is irrational
 * Proof that e is transcendental
 * Q-exponential
 * Radioactive decay
 * Rule of 70, Rule of 72
 * Spontaneous emission
 * Super-exponentiation
 * Tetration
 * Versor
 * Wilkie's theorem
 * Zenzizenzizenzic

Index to Logarithmic Articles
This is a **list of logarithm topics**, by Wikipedia page.
 * Acoustic power
 * Antilogarithm
 * Apparent magnitude
 * Baker's theorem
 * Bel
 * Benford's law
 * Binary logarithm
 * Bode plot
 * Henry Briggs
 * Cologarithm
 * Common logarithm, Complex logarithm, Discrete logarithm
 * e (mathematical constant)
 * El Gamal discrete log cryptosystem
 * Harmonic series (mathematics)
 * Iterated logarithm
 * Law of the iterated logarithm
 * Linear form in logarithms
 * Linearithmic
 * List of integrals of logarithmic functions
 * Logarithmic growth
 * Logarithmic timeline
 * Log-likelihood ratio
 * Log-log graph
 * Log-normal distribution
 * Log-periodic antenna
 * Log-Weibull distribution
 * Logarithmic algorithm, derivative, differential, differentiation, distribution, form, graph paper, identities, scale, spiral
 * Logarithmic timeline
 * Logit
 * //Mantissa// is a disambiguation page; see common logarithm for the traditional concept of //mantissa//; see significand for the modern concept used in computing.
 * Mel scale
 * Mercator projection
 * Moment magnitude scale
 * John Napier
 * Natural logarithm
 * Neper
 * Offset logarithmic integral
 * pH
 * Polylogarithm
 * Polylogarithmic
 * Richter magnitude scale
 * Schnorr signature
 * Significand
 * Slide rule
 * Sound intensity level
 * Table of logarithms
 * Weber-Fechner law