Wednesday, December 17, 2025

The Unknown Unknowns Are What Get You Killed: Mallory and Irvine

As posted at Everest Mystery:


Postmortem: My guess is that Irvine achieved his weight reductions by breaking the seals between the backup bottles and the regulator, with the idea that he could reseal a bottle to a single regulator (one at a time) with a good spanner. This is why they broke down the rigs and "saved" weight: he got rid of the "redundancy" of multiple regulators or a manifold and lines, about 5 lbs. worth per rig. The real reason, I suspect, was to enable caching of the bottles.

However, after the first bottle, he found that in the extreme cold he could not achieve the sort of seal that he could at base camp (probably due to stiction/Blish effect), something outside his undergraduate education. There is some evidence that they had exactly this problem when testing things out.

This would have given them 4 hours of O2, and not much more. Thus, they would have reached the point where the No. 9 bottle was found and spent the next three hours beating the hell out of the remaining bottles with their spanner. At 1:27 PM, in a snow squall, out of O2 for some time and frustrated beyond comprehension, Mallory took one wrong step.

And that was that. Odell may have seen or heard it--I think so, because he knew enough to try to poke around at that time, but also enough to sense that they were dead or about to be.

Note--we know that Irvine disassembled the rig, so this idea can be tested before running off and accusing anyone of lying (or, worse yet, indulging the arrogance of undergraduate ignorance). I am only proposing this idea, but it answers everything as far as I can see. The bottles of that era could not be resealed under the conditions of the death zone.

The fact that many of these bottles were already breaking seals in Tibet is a testament to the limits of a threaded seal at that time and to the variability of the threads under extreme conditions, not to some bottles being "bad" and others "good". It's all about confidence limits and statistics: at two sigma you may still have a number of good ones; the number of outliers falls off exponentially by three sigma. Irvine would not have known this.
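The sigma figures here can be sketched numerically. Assuming seal quality follows a normal distribution (an assumption of this sketch, not something the 1924 equipment records can confirm), the fraction of bottles whose threads fall outside a k-sigma tolerance drops off sharply:

```python
import math

def outside_tolerance(k: float) -> float:
    # Two-sided probability that a normally distributed seal
    # dimension falls more than k standard deviations from the mean
    return math.erfc(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"{k} sigma: {outside_tolerance(k):.3%} of bottles out of spec")
```

Roughly 4.6% of bottles fall outside two sigma but only about 0.27% outside three sigma, which is the sense in which a batch can look mostly "good" while still guaranteeing a few bad seals.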


They never came close to the summit.


Saturday, November 15, 2025

The Real A.I.

By the data processing inequality, we know that AI will never achieve greater information (intelligence) than what goes into it. As noted in the previous post, a local instance of an AI may learn from communications with another, but the data processing inequality still holds.
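The inequality can be demonstrated on a toy Markov chain X → Y → Z (the binary symmetric channels and their error rates below are arbitrary choices for illustration): however Z post-processes Y, it can never carry more information about X than Y did.

```python
import math

def bsc(eps):
    # Binary symmetric channel: p(out | in) with crossover probability eps
    return {(a, b): (1 - eps if a == b else eps) for a in (0, 1) for b in (0, 1)}

def mutual_info(joint):
    # I(A;B) in bits from a joint distribution dict {(a, b): p}
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

p_x = {0: 0.5, 1: 0.5}   # source: e.g. the training data
ch_xy = bsc(0.1)         # X -> Y: what the model absorbs
ch_yz = bsc(0.2)         # Y -> Z: further processing of the model's state

joint_xy = {(x, y): p_x[x] * ch_xy[(x, y)] for x in p_x for y in (0, 1)}
joint_xz = {(x, z): sum(p_x[x] * ch_xy[(x, y)] * ch_yz[(y, z)] for y in (0, 1))
            for x in p_x for z in (0, 1)}

i_xy = mutual_info(joint_xy)
i_xz = mutual_info(joint_xz)
print(f"I(X;Y) = {i_xy:.3f} bits, I(X;Z) = {i_xz:.3f} bits")
assert i_xz <= i_xy  # the data processing inequality
```

Each extra processing stage can only preserve or destroy information about the source, never create it, which is the formal content of the claim above.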

So, our objective is the utility of AI. In other words, to use it as a handy, accessible repository of helpful (true) information, not as a font of new knowledge or as a replacement of human thought. The key in its use then is to tune it with human insight to correct mistakes and add surgically to the structure of the AI knowledge base. Perhaps the best method of augmentation is through counterfactuals.

Counterfactuals have the benefit of being built over a human-understandable Structural Causal Model (SCM) and so are very pointed. Not only that, the change induced in the AI knowledge base can be very directed, i.e., systematized with little or no confounding.
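A minimal sketch of how an SCM answers a counterfactual, via Pearl's three steps (abduction, action, prediction). The mechanisms here, X := U_x and Y := 2X + U_y, are invented purely for illustration:

```python
# Toy structural causal model:
#   X := U_x
#   Y := 2 * X + U_y
# Counterfactual query: having observed (X=1, Y=3),
# what would Y have been had X been 2?

def abduct(x_obs, y_obs):
    # Step 1 (abduction): infer the exogenous noise
    # consistent with the factual observation
    u_x = x_obs
    u_y = y_obs - 2 * x_obs
    return u_x, u_y

def predict(u_y, x_do):
    # Step 2 (action): replace the mechanism for X with do(X = x_do)
    # Step 3 (prediction): re-run the model with the inferred noise
    return 2 * x_do + u_y

u_x, u_y = abduct(x_obs=1, y_obs=3)  # u_y = 1
y_cf = predict(u_y, x_do=2)
print(y_cf)  # 5
```

Because the exogenous noise is held fixed, the update is surgical: only the intervened mechanism changes, which is the "little or no confounding" property mentioned above.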

The automation of increased AI knowledge base efficacy is the challenge of the future of AI. Human insight and direction are the one method by which this can be done consistently and reliably.

Friday, May 30, 2025

Chaotic Interactions and AGI

Gödel's First Incompleteness Theorem tells us that a sufficiently expressive formal system cannot be both complete and consistent. Or, equivalently by the data processing inequality, the information generated by a computer will be something less than the sum of its own complexity and that of its input. In other words, computers have no ability to innovate outside their programming and (static) environment. Thus, they do not "think".

But what if a computer were in communication with at least two other computers? The environment of any one computer might be considered to be the other two in that closed communication system. In such a system, "chaotic interactions" might occur. The reason three computers would be necessary is that, as in the three-body problem, one computer might instantaneously serve as an entropy sink for the communication between the other two, allowing them to explore a range of innovation outside their natural limitations. Those two computers might then, with respect to themselves and given this informationally dynamic environment, appear to innovate, i.e., to think.