Incompleteness Theorem Doesn't Mean "Stop Trying"
It just means, "Revise Your Objective"

Posted by Michael E Karpeles on Sunday, May 3, 2015

Kurt Gödel's incompleteness theorem is a lot like John McCarthy's
Frame Problem, which essentially postulates that "the world changes
behind your back"[1]. A more Gödelian phrasing might be: a system
cannot account for (be resilient to) Frames it hasn't seen. One
relevant demonstration was the insufficiency of the Euclidean
postulates (the axioms of Euclidean geometry) to capture alternate
(e.g. spherical) geometries, which matter in practice because
spherical geometry better models our Earth (i.e. an advancement in
our understanding).
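To make that concrete (my own illustration, not one from the original
discussion): on a sphere, the angles of a triangle sum to more than
180°, which no theorem derivable from the plane Euclidean axioms will
tell you. A minimal Python sketch, using the "octant" triangle on a
unit sphere (the north pole plus two equator points 90° of longitude
apart):

```python
import numpy as np

def spherical_angle(vertex, p, q):
    """Angle at `vertex` between the great-circle arcs vertex->p and vertex->q."""
    # Tangent directions: project p and q onto the plane orthogonal to `vertex`.
    tp = p - np.dot(p, vertex) * vertex
    tq = q - np.dot(q, vertex) * vertex
    cos_angle = np.dot(tp, tq) / (np.linalg.norm(tp) * np.linalg.norm(tq))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# The "octant" triangle: north pole plus two equator points 90 degrees apart.
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])

total = (spherical_angle(A, B, C)
         + spherical_angle(B, A, C)
         + spherical_angle(C, A, B))
print(total)  # 270.0 -- well beyond the Euclidean 180 degrees
```

The Euclidean axioms weren't wrong within their own Frame; they were
simply silent about a geometry they had never been asked to see.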

Rather than appealing to a specific anecdote, I'll generalize: the
reasons we cannot derive comprehensive systems (for logic, math, etc.)
which are free of inconsistency and achieve verifiable "correctness"
are (a) Occam's Razor, search space & expense: there's only one way to
be right, an infinite number of ways to be wrong, and a potentially
combinatorially explosive search space of possibilities and
inter-dependencies to verify; (b) this rightness can only be measured
against the currently known use cases (known/visible Frames); and
(c) there will always be views or functions (as we can contrive
infinite permutations of abstract cases) which are not considered by
our current models.
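A hedged sketch of point (a), with numbers of my own choosing: even in
the tiniest discrete setting, the space of candidate behaviors dwarfs
the single intended one, and the cost of exhaustive checking grows
right along with it:

```python
# For n Boolean inputs there are 2**(2**n) possible truth tables; exactly one
# of them is the behavior you actually intend, and verifying any single
# candidate exhaustively already costs 2**n test cases.

def candidate_count(n_inputs: int) -> int:
    """Number of distinct Boolean functions over n_inputs variables."""
    return 2 ** (2 ** n_inputs)

for n in range(1, 6):
    print(n, candidate_count(n), 2 ** n)
# n=5 already gives 2**32 (~4.3 billion) candidate functions to distinguish,
# each needing 32 test cases to verify exhaustively.
```

And that is the friendly case: a finite, fully enumerable domain. The
open-ended world this post is about doesn't even grant us the 2**n
test cases.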

The Frame problem & Gödel's incompleteness theorems can be somewhat
mitigated in one of two ways. First, by addressing or eliminating each
inconsistency as it arises (derive a new model completely,
monkey-patch or ignore the "edge-cases") or, by listening to the
wisdom left by John McCarthy and championed by Monica Anderson and
others, who I believe have a productive mindset: that the very nature
of Artificial Intelligence and systems (like math) which try to model
the intricacies of the real, evolving world *need not*, and in fact as
proof by Gödel' and McCarthy, cannot be both verifiable and
comprehensive. That we should instead, (where necessary) limit
ourselves to scoped, safe mostly-deterministic environments wherein
uncertainty, stochasticism, and inconsistencies cannot harm us, or
alternatively, build systems which attempt to continuously improve as
inconsistencies (Frame changes) are discovered with the best accuracy
we can achieve, and not rely on unrealistic promises of verifiability.
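Here is a toy sketch of that second mindset (the class and names are
mine, purely illustrative): a model that never claims completeness,
but revises itself when an observation contradicts its current rule
rather than halting.

```python
class RevisableModel:
    """A model that patches itself as Frame changes are discovered."""

    def __init__(self, rule):
        self.rule = rule          # current working hypothesis: input -> prediction
        self.exceptions = {}      # observed cases where the rule turned out wrong

    def predict(self, x):
        # Known inconsistencies override the general rule.
        return self.exceptions.get(x, self.rule(x))

    def observe(self, x, actual):
        # A "Frame change": reality disagreed with the model, so revise locally
        # instead of abandoning the whole system.
        if self.predict(x) != actual:
            self.exceptions[x] = actual

# Toy use: the rule "all birds fly" meets a penguin and gets revised, not discarded.
model = RevisableModel(rule=lambda animal: True)
model.observe("penguin", False)
print(model.predict("sparrow"), model.predict("penguin"))  # True False
```

The point isn't the code, which is deliberately trivial; it's that the
system's objective is "stay useful and keep improving," not "be
provably complete."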

For further insight on this topic, I recommend watching Monica's video from 330 seconds in:
https://vimeo.com/monicaanderson/dualprocesstheory#t=330s

The important TL;DR? Because the world (even the "assumed", limited
world we model in our math) has not been made fully
visible/known/exhaustively explored or proven deterministic (other
than in basic, highly restrictive, self-contained universes of
discourse, i.e. domains dealing with discrete and finite components
under specific functions, constraints, and use cases), we shouldn't
expect fully verifiable systems, *nor should we let this arbitrary
reality deter progress*.
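For contrast, a small sketch (again my own example) of the one setting
where exhaustive verification *is* available: a finite, self-contained
universe of discourse, here addition modulo 7.

```python
from itertools import product

# In a finite domain we can literally enumerate every case: here we verify
# that addition mod 7 is commutative and associative by brute force.
N = 7
add = lambda a, b: (a + b) % N

assert all(add(a, b) == add(b, a) for a, b in product(range(N), repeat=2))
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a, b, c in product(range(N), repeat=3))
print(f"verified exhaustively: {N**2} pairs and {N**3} triples checked")
```

Outside such toy domains the enumeration simply isn't possible, which
is the whole point.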

Disclaimer: This is my first time really thinking about the Frame
Problem as a parallel to the Incompleteness Theorem, and I have not
read works such as Gödel, Escher, Bach (which is why Mark is cc'd for
his thoughts), only Gödel's Proof
(http://www.amazon.com/G%C3%B6dels-Pro…/…/ref=pd_bxgy_b_img_y).

cc: Mark P Xu Neyer, Jessy Exum, Kartik Agaram, Anthony Di Franco

Follow-up: Reconciling Risks & Rewards of Decentralized Web

https://www.facebook.com/jackson.kernion/posts/10156426135140132?comment_id=10156426282080132&reply_comment_id=10156427360425132&comment_tracking=%7B%22tn%22%3A%22R%22%7D