Sunday, February 07, 2016

Maximum Entropy in Elections

The maximum entropy approach to electoral predictions is a departure from the conventional approach of identifying voter sympathies and projecting their likelihood to vote for certain candidates. Where the conventional Bayesian approach makes use of inferred structures in voter behavior, the maximum entropy approach is a rejection of structure.

There is discernible knowledge in the voting public and not all elections end in ties. How should we model this information-theoretic structure?

In the maximum entropy approach it is critical to separate what is known from what is unknown. The known is modeled and the unknown is distributed equally, without any presumption of structure. The problem with conventional models is that there is an unmanaged chance that, in a dynamic environment, the predictions could be severely amiss. On the other hand, the problem with the maximum entropy approach is that unknown but potentially decisive structures will not be incorporated in the models, and the resulting predictions will not be sharp.

To eliminate errors in models, the conventional approach can be described as top-down pruning and iterative refinement while the maximum entropy approach may be characterized as bottom-up. In a system awash in data, the potential for making false inferences without substantiation is significant. Thus, I prefer the maximum entropy approach.

Let's take a look at the 2016 election from the GOP side:

First, polls measure stated preference. When averaged over days and various likely-voter models, they give a pretty good estimate of the proportion of people willing to state their preference for a candidate to a pollster. This is not the same as actually voting, and the difference matters most when there are significant dynamics in the learning environment of the electorate.

The maximum entropy approach does not seek to extract rationales from the polls. It simply recognizes the results of a poll as incomplete estimates of a state of the electorate. The unmeasured dimensions are the bugaboo.

In New Hampshire, all polls show that there is a significant number of voters who remain undeclared and undecided. What does it mean to be undecided in the presence of enough information to be able to characterize the particular attractiveness of each individual candidate? Was it not known ahead of the last debate that Rubio was a candidate who was principally beholden to big donors and was likely to renege on campaign promises? I would say not. Each candidate has enough of a history for the engaged voter to discern their character IF that voter was intelligent and willing to make the determination.

The problem with voters is that they choose ignorance--but for intelligent reasons. And so, on ensemble, their actions can be predicted using a model built on ignorance, i.e. the maximum entropy model.

Secondly, there is some discernible structure to the electorate overall. For example, Democrats are more "feeling" and Republicans are more "thinking". In the 2016 election, the long term consequences of crony capitalism finally weighed so hard on the electorate that the traditional control measures of the establishment lost hold of all but their most well conditioned followers. Trump emerged to champion the disenfranchised working class and Cruz emerged as a champion of traditional conservatism as a governing philosophy. Rubio is presently the leading candidate of the rump establishment core.

So there is a three player game. Each group is more repulsed by the other two groups than by the foibles of its own candidate figurehead. Each group self-associates for a mixture of rational and irrational factors. The membership of each group can change as the figureheads interact, but the cores of each group remain separate and distinct for the very same reasons that gave rise to the discernible dynamic of the Trump phenomenon. There is something rotten in Denmark.

In New Hampshire, the ideological and establishment cores were somewhat fragmented in the polling because the voters perceive that they have a choice in the election outcome. Many have met individually with the candidates, so this exaggerated perception of self-importance looms large. In reality, Carson, Christie, Kasich, and Fiorina are not viable candidates by any stretch of the imagination if one were to look at their organizations and nationwide polling. Bush was likewise thoroughly rejected as a national candidate due to his family's poor record of voter fidelity and his readily apparent ineffectualness as a leader. (His nickname is "Rabbit".)

However, the realities of the three player game are in conflict with the New Hampshire voter's boutique sensitivities. All three major players have qualities that grate to various degrees against their cores. Yet, each of the three has a certain perceived utility. In polls, people are eager to voice their personality preference and much less likely to censor themselves with rational considerations in their projections of utility. Judging that personality preferences are already registered in the poll results, that leaves us with a calculation on utilities.

Thus, I discount the minor players by a rate that is something of a measure of the ability of the electorate to place utility in balance with personality preference. In the case of Carson, I realized by his demeanor and manner that his followers behaved more like a cult than like voters. Thus, his learning rate is set at zero.

In the case of the other minor players, I set the learning rate at 1/2. This may be too high. I don't know and I can't well judge the rationality of the New Hampshire voters to vote utility over ego. My sneaking suspicion is that I am too optimistic. Nonetheless, once dedicated to a maximum entropy method I need to stick to it. I have no reason to know that ego is more important to the New Hampshire voter than utility so I split the difference.

Without further apologies.

Thursday, February 04, 2016

Simplistic New Hampshire

The RCP polls for New Hampshire are shown on the left, with the dynamic adjustments shown after the slash:

Cruz:      12/27
Trump:     33/27
Rubio:     11/27
Carson:     3/3
Bush:       9/5
Kasich:    11/6
Christie:   6/3
Fiorina:    4/2

The three major players each represent a core of the dynamic learning model.

The adjustments were achieved by keeping the Carson percentage (due to the fact that he attracts an emotionally committed following), halving the remaining minor players, and setting Trump to his baseline support of 27 (a little better than what we found was his baseline in Iowa.)

Cruz and Rubio benefit from the remainder in equal shares because they are nearly equal in declared preference in the polls.

Think of this as the adjusted maximum entropy model from baseline projections. Interestingly, it shows massive changes and a dead heat for win, place, and show.
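For concreteness, the adjustment rule can be written out in a few lines of Python. This is only a sketch of the bookkeeping described above; routing the undeclared/undecided share into the same equal split is an assumption I make so the totals reach 100, and rounding accounts for the small differences from the figures listed.

```python
# RCP poll numbers from the table above (they sum to 89; the missing
# 11 points are the undeclared/undecided share).
RCP = {"Cruz": 12, "Trump": 33, "Rubio": 11, "Carson": 3,
       "Bush": 9, "Kasich": 11, "Christie": 6, "Fiorina": 4}

MINORS = ("Bush", "Kasich", "Christie", "Fiorina")

def adjust(polls, trump_baseline=27):
    adj = dict(polls)
    adj["Trump"] = trump_baseline              # assumed hard baseline of support
    freed = polls["Trump"] - trump_baseline
    freed += 100 - sum(polls.values())         # fold in the undecided share
    for m in MINORS:
        adj[m] = polls[m] / 2                  # learning rate 1/2
        freed += polls[m] / 2
    # Carson is held fixed (learning rate 0) simply by leaving his number
    # alone; the freed share splits equally between the two non-Trump cores.
    adj["Cruz"] += freed / 2
    adj["Rubio"] += freed / 2
    return adj

print(adjust(RCP))   # totals normalize to 100; Cruz/Rubio land near 27 each
```

Up to rounding, this reproduces the dead heat shown after the slashes in the table.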

5 Feb 12:05 update---------------------------------------------------------------------------

Checking the 4 Feb percentages at RCP, Carson was at 3, not 5 so those two points were distributed by the basic rule and the variation for Fiorina was eliminated.

The dynamic concerns the three cores and the learning process applied especially to Trump.

The learning process predicts a hard limit to his support, set by intelligence and its education correlate. It also predicts that the pragmatic (establishment) core will coalesce faster than the ideological core. These cores are settled on Rubio and Cruz respectively.

The Carson splinter group of the ideological core is more emotional than rational, thus their learning rate is stagnant. (After consideration of the Myers-Briggs results of political affiliation and the Iowa-Carson anomaly from the earlier prediction, this became apparent as the correct application of learning to the dynamic model).

The splitting of the remainder vote between the ideological and establishment cores was done by maximum entropy. Halving the minor players is the application of a learning rate that is balanced between personal preference and rational considerations in picking a winner. Again, this is essentially a maximum entropy approach.

Sunday, January 24, 2016

Fear and Loathing in D.C.

The U.S. political system can be thought of as a distributed system with relationships between voter and candidate stand-ins for government (and its prospective alternative forms). When distributed systems fail, they fail in a catastrophe. Relationships fail successively in a wave of destruction because the mechanism of failure in one relationship is very similar to that in another within its basin of influence. All that is necessary is to bring the system to saturation and then give it a tiny shove. After this destruction, there is a space minimizing contest to fill the void.
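A toy sketch of this failure mode (purely illustrative; every parameter here is an assumption, not a measurement): a chain of relationships, each carrying a stress load. Near saturation, a tiny shove at one point topples the whole chain; well below saturation, the same shove fizzles.

```python
import random

def cascade(n=200, threshold=1.0, stress=0.99, coupling=0.5, shove=0.2, seed=1):
    """When a relationship's load exceeds the threshold it fails and
    sheds load onto its neighbors, which can then fail in turn.
    Returns the number of failed relationships."""
    random.seed(seed)
    load = [stress * random.uniform(0.95, 1.0) for _ in range(n)]
    failed = [False] * n
    load[n // 2] += shove          # the tiny shove, at one point
    frontier = [n // 2]
    while frontier:
        i = frontier.pop()
        if failed[i] or load[i] < threshold:
            continue
        failed[i] = True
        for j in (i - 1, i + 1):   # shed load onto the neighbors
            if 0 <= j < n and not failed[j]:
                load[j] += coupling * load[i]
                frontier.append(j)
    return sum(failed)

print(cascade(stress=0.99))   # near saturation: the whole chain fails
print(cascade(stress=0.50))   # well below saturation: the shove does nothing
```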

Both political parties in the U.S. are in a state of collapse. After years of steadily increasing pressures, the average voter has reached a level of stress and frustration that is near some saturation point. The polarized system fails at the weakest point and the destruction spreads outward leaving the functional ideological extremes and the remaining locally functional rump of the corrupt structure as cores. In this region of destruction, nearly anything can evolve untethered from the ideologies of the core factions and the quid pro quo relationships of the establishment rumps. Thus we have the Trump phenomenon.

The failure mechanism here is a vote of no confidence in the fiduciary relationship between voter and representative. The bipartisan corruption of crony capitalism, awash in trillions of dollars of taxpayer-leveraged debt, has simultaneously destroyed the faith and trust of both factions in their government. Primitive sources of failure produce inchoate symptoms. When individual trust is lost, the disruption is manifested in ways that are difficult to identify and impossible to control--until the system learns.

Given that there is no readily known fix, the voter is drawn towards the visceral satisfactions of delusion--democracy without responsibility, ends without means, and rhyme without reason. What other possibility does he have in the near term?

In other systems, we would have civil war. In the U.S., civil war is no longer a possibility. The political forces are too intertwined. So instead we will have turmoil. What comes out of it cannot be predicted in excruciating detail. There are significant chaotic effects. However, it is true in general that the faction that makes the necessary corrections the fastest without destroying itself in the process will dominate the U.S. political landscape until its competitor does likewise. Until then, both will be thrashed by the groundswell that their failures have unleashed.

Trump rose out of the gap between the two parties that allowed crony capitalism to flourish. After the ideological battles of the 60s were roughly settled (circa 1985 by my estimation), the system reached a stable bipolar arrangement such that thereafter the two poles became more divided, to the point that personality rather than ideology was the defining characteristic of the separatrix.

In the 2016 election, Bush and Clinton are the rumps of the old crony capitalistic parties. (Bush is presently losing the establishment rump to Rubio). Cruz and Sanders represent the ideological political extremes. Trump could not come from nowhere and he judged that the right was more fertile territory to launch a populist campaign. It could be said that he sprang from the GOP like Athena from Zeus's head ready to fight (Zeus's head was temporarily split apart in the process).

What can be predicted by this? Assuming that the system learns ahead of its decisions:

First, in the GOP primary race, Trump's popularity as a candidate is strictly bounded and he will not be able to attract many more to his nebulous cause than those already immediately enthused. Reason is persuasive. Stupidity, while exhilarating, has a finite shelf life.

Secondly, of the two sentient cores of support in the GOP race, the conservative core led by Cruz will ultimately dominate the establishment rump led by Rubio. In a three way race where Trump is bounded to below 45%, the two other cores will ultimately coalesce. Between the ideological and pragmatic cores, where the two are not mutually exclusive, the ideological core wins what is seen as an insurgent battle.

If Trump were to fail early, the establishment core could reassert itself, since the factional threat would once again loom large. This was the traditional game of the establishment core that led to the crony capitalistic situation originally. Alternatively, if the establishment core were to fail early, then the Trump core would lose part of its reason for existence and the ideological core should prevail. If the ideological core were to fail, it is anyone's guess, since then the relative percentages could push the Trump core over 50%. None of these scenarios are likely.

Third, when the race is contested and the weak muddled core is a minority, the fight ultimately goes to the principled core.  In other words, the Trump phenomenon will fail unless there is a significant influx of liberal partisans into the mix to sustain his bandwagon.  At this point, that possibility seems unlikely.

To summarize: The Trump core exists by its own momentum due to the failure of government. The Cruz core wins by drawing off support of the Rubio core against Trump and simultaneously from Trump against Rubio. The Rubio core wins by some annihilation of the Trump core. In no case does the Trump core win in a three way race although it could potentially be the strongest of a collection of minor players in a fragmented field. But even then, this is a temporary situation.

It's a race to learn now or fall to a demagogue.

Sunday, October 11, 2015

Thoughts on the Riemann Hypothesis

The Riemann Hypothesis is a mathematical conjecture that the (nontrivial) zeros of the Riemann zeta function all have real part 1/2 on the complex plane. Its proof (or disproof) is a Millennium Prize Problem worth a fair amount of money and everlasting fame. Thus, over the 150+ years since Bernhard Riemann proposed it, the hypothesis has become a landmark of mathematical pursuit, but despite the efforts of the greatest minds in mathematics, it remains a conjecture.
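The claim is easy to probe numerically. Here is a rough sketch in plain Python (using the alternating Dirichlet eta series to extend zeta left of Re(s) = 1; the truncation length is an arbitrary choice): the function vanishes at s = 1/2 + 14.134725...i, the first nontrivial zero.

```python
import math

def zeta(s, terms=100_000):
    """Riemann zeta via the alternating (Dirichlet eta) series,
    which converges for Re(s) > 0, s != 1.  Averaging two consecutive
    partial sums damps the oscillating tail."""
    eta = 0j
    for n in range(1, terms + 1):
        eta += (-1) ** (n + 1) / n ** s
    next_term = (-1) ** (terms + 2) / (terms + 1) ** s
    eta += next_term / 2                  # average of partial sums N and N+1
    return eta / (1 - 2 ** (1 - s))

z2 = zeta(2)                       # should recover pi^2/6
z0 = zeta(0.5 + 14.134725j)        # first nontrivial zero: magnitude near 0
print(abs(z2 - math.pi ** 2 / 6), abs(z0))
```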

So I offer a naïve suggestion.

In Principia Mathematica, Whitehead and Russell spend several pages developing the notion of cardinal and ordinal couples to the conclusion that 1 + 1 = 2. That is, there is a dimension of numbers with order arising out of the process of addition together with the concept of unity. Multiplication is another dimension related inextricably to that of addition as the replication of the now existent numbers by each other along the dimension of order. The primes are simply byproducts of this relation.

The Riemann Hypothesis seems complex, but it is a result of the structure of multiplication over addition and nothing more. Two dimensions are required to solve polynomials--the expression of addition and multiplication together--and so the complex plane is two-dimensional (the imaginary number, i,  is just a symbol that marks the relation). No more and no less. The fact that multiplication is an operation quasi-independent of addition necessitates two dimensions for the expression of the solution of a polynomial expression once we have allowed that numbers have a structure that is ordinal.

My guess is that we will eventually find that the density of the primes is nothing more than a representation of the structure of addition and multiplication expressed in spatial dimensions similarly to how we view fractals--apparently complex, but exactly only the generating formula and nothing more in its essence.

The Riemann Hypothesis could not exist without the Euler product formula, which is itself an expression of the Sieve of Eratosthenes, which in turn is nothing more than an expression of the artifacts of multiplication over the generated dimension of addition.
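To make that chain concrete, a small Python sketch: sieve the primes, then check that the truncated Euler product over them reproduces the zeta sum. At s = 2 the target is the known value zeta(2) = pi^2/6. (The sieve limit of 1000 is an arbitrary truncation.)

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes: cross off multiples; what's left is prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [n for n, flag in enumerate(is_prime) if flag]

# Euler product over the sieved primes vs. the zeta sum, at s = 2:
s = 2
product = 1.0
for p in sieve(1000):
    product *= 1 / (1 - p ** -s)

print(product, math.pi ** 2 / 6)   # the truncated product approaches zeta(2)
```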

In short, the Riemann Hypothesis is nothing more and nothing less than a statement that there is a midpoint created by the new ordinal relation of 1 + 1 and it occurs at 1/2 the distance between the unity of multiplication and the unity of addition. It cannot be anything different and it cannot be anything more or less.

Of course, this is just a thought.

Wednesday, June 10, 2015

Against the Export-Import Bank

The Export-Import (Ex-Im) Bank is a taxpayer-sourced fund whose ostensible purpose is to give the federal executive power to improve the trading prospects of selected US companies with respect to their foreign competition. The benefit to the taxpayer is then supposed to be a return on investment through localizing the creation of the exported wealth (keeping that economic activity local) while focusing the receipt of the trade for that export into a discrete stream of foreign currency which can be attributed directly to that trade.

The benefit of subsidizing free trade for local economic benefit is nonsense on its face. We can consider the fallacy of the benefit of the transaction using the ideas of conservation of mass (price) and the efficiency of the information streams.

1. Subsidizing a trade is a reduction of price of the traded good below the actual market price. By itself, the subsidy transmits economic inefficiencies to the supposed beneficiaries of the trade and sets up structures within the local creation of the traded wealth which result in an apparent price below the market price. This difference in wealth is dissipated through inefficient consumption (e.g. an extra yacht for the CEO of Boeing that he lets rot in the bay through lack of use together with the idea that the extra unused yacht is a good thing).  In short, it makes the US in actuality less competitive. This is the free trade principle in conservation of mass (price) terms.

2. However, there is an argument that if a central authority can direct a game against a foreign competitor, he may be able to direct the transactions so as to win market share and increase the leverage of the local (although necessarily inefficient) company by an effective monopoly. Therefore the immediate inefficiency would be worthwhile in order to create a subsequent greater net positive increase in wealth once the foreign competition is weakened through lack of capital investment. The best example of this is a technical efficiency in the production of a high tech asset such as an aircraft that the foreign competition cannot match through economies of scale (e.g. the US can build planes with fiber composites at less per airframe in actuality than the foreign competitor because of the US investment in the process).

Unfortunately, in this game there can be no long term winner. If either side gains a monetized technological advantage, then the monetary value of that edge will induce that player to maximize profits at the expense of technological advancement. Eventually, that player will become technologically stagnant as focus on the measurable good (money) attributed directly to the trade dominates the allocation of its resources. Meanwhile, because technology is interconnected and nonlinear in its advancement, other players may discover the means by which to nullify the measured technological edge of the leading player by happenstance if not by design. With this undisclosed technological revolution, a trailing player may take the lead.

For the leading player to reliably maintain its position of advantage, it must continue to innovate in necessarily unprofitable ways at a rate that entirely swamps the innovation rate of its competitors. Thus, technological gains are ephemeral, since this cost is usually exorbitant and cannot be rationalized for the discrete benefits of the trade. The transitory cost of these gains together with the indirect benefits of the transaction that is subsidized minus the inefficiencies of the subsidy is the equilibrium price difference of the transaction, which ultimately goes against the subsidy.
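The dynamic can be caricatured in a few lines (a toy model of my own construction; the rates and the decay factor are illustrative assumptions, nothing more): the leader's innovation rate decays as profit-taking dominates, while the challenger plods along at a steady rate and eventually overtakes.

```python
def innovation_race(steps=100, lead=2.0, leader_rate=0.10,
                    challenger_rate=0.06, decay=0.95):
    """The leader starts ahead but monetizes its edge, letting its
    innovation rate decay toward profit-taking; the challenger keeps a
    steady rate.  Returns the step at which the challenger overtakes,
    or None if it never does."""
    leader, challenger = lead, 1.0
    rate = leader_rate
    for t in range(steps):
        leader *= 1 + rate
        challenger *= 1 + challenger_rate
        rate *= decay            # profit focus erodes innovation spending
        if challenger > leader:
            return t
    return None

print(innovation_race())   # the challenger overtakes after a few dozen steps
```

With no decay (a leader that never stops reinvesting), the overtaking never happens; that is the exorbitant, unprofitable cost described above.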

In short, even in the best case, the subsidy reinforces measurable failure at the expense of immeasurable innovations and gains in other areas of the economy. The best strategy is one that is naive in the directed allocation of profits as it is that strategy that maximizes innovation across all areas of the economy. The paradox is that in the face of technological innovation, long term investment in the future should be divorced from the optimization of short term gains.

Thus I am against the use of taxpayer funds in the Ex-Im bank except for the case of preserving a strategic defense capability.

Tuesday, May 05, 2015

On "isms"

An "ism", such as racism, sexism, ageism, nationalism, etc., is a shorthand term we use to describe the thinking of people as if that thinking placed them in a group by itself. The people in these groups, which we ourselves created by the use of the shorthand, are then typically labeled as "ists", as if they functioned as automatons according to our categorization. When we do this, we are very often attempting to justify ourselves by describing what we reject. Unfortunately, when we extend concepts that we created for our own convenience to the categorization of other thinking people, we are enforcing a relation that is counterproductive to the elimination of the very "ism" that we criticize.

In short, calling others racists or sexists or communists etc. tends to sharpen the boundaries against free thinking that we are against. Why?

Useful information is like a virus. It spreads and affects us to the degree that we value its utility. If the information is contradicted by experience, it loses its veracity and hence some degree of its utility. Curiously, humans do seek out and value thoughts which are often in contradiction with empirical reality when those ideas are useful to our emotional well-being. This last statement is simply saying that denial is a natural stage in learning, just as it is in the stages of grief.

However, unlike religious convictions that are neither provable nor refutable, the utility of beliefs in contradiction to experience is transitory for any learning being that evolves towards its own tangible benefit. Denial yields to anger, to bargaining, and ultimately to acceptance of the new experienced truth. That is, of course, while we allow ourselves to evolve. The ideology of the "isms" acts as a learning impairment. When we label others with a derogatory group membership as "racists" etc., we are actually defining in our mind a justification against learning. What we are not doing is teaching the object of our derision. Whatever good we might offer is effectively nullified by the symmetrical onus that we confer, since all people want to feel good about themselves while they think about the things around them.

Learning is its own joy. To the degree that our information is useful, the new found utility of the information is a source of empowerment and so naturally appeals to all. The solution of all the pathologies of the isms is found through mutual learning--not by name calling.

So when you next hear of someone being called an "ist" ask whether one is solving a problem or contributing to it.

Sunday, January 05, 2014

Protection and Privacy

As we discuss here, privacy is the property of the individual to exist in a state of personal knowledge without that knowledge being controlled or influenced by forces outside that individual. Privacy is not a state of knowledge that is necessarily unknown to others, it is simply that they have no means of utilizing that private knowledge in directing a force that affects the individual. This knowledge can be shared outside of the individual, but it cannot be acted upon.

If it were not for privacy, the system of the whole would utilize the useful knowledge of the individuals to reach a more optimal overall state. This state would be cemented in equilibrium and impervious to adaptation except for the influences of knowledge and forces entirely external to that system. In other words, the system could not self-regulate, it could only respond.

However, in systems that allow for individual privacy, adaptation can occur within the system as information is released by will of the individual at states and times of that individual's choosing. Thus a small dose of information at the proper moment might swing the system into an entirely different trajectory from which it might have ever evolved if all information was shared. The momentum of the initial change in state could see the evolution through.

In order for this form of privacy to exist within the system, the individual must be protected from influences within the system that tend towards system-wide equilibrium. In the least, freedom of conscience must be allowed to exist. The individual must be in a way sovereign to himself, but not necessarily independent of others. He must be allowed to self-organize, i.e. to think, learn, feel, and forget autonomously. So that the system of the whole might be more sensitive to learning and evolving itself, the individual must have freedom of action in addition to freedom of conscience---all while remaining interdependent on others.

It follows that such sovereignty cannot exist without a shared respect for original life, liberty, and the pursuit of "fill in the blank", whether it be happiness, industry, love, or whatever are the shared values of the system. But always, there must be respect for life. Without that, there is nothing.