
Hide and Seek with Our Robot Overlords


To Be Human Is to Be Unbound by Math


It is not emotion that separates humans from artificial intelligence, but mathematics.

More specifically, the ability to ignore mathematics. Not just because being “bad at math” is the reason many enter the liberal and creative arts, but because humans are capable of recognizing limits that mathematics itself cannot.


Before you run away, gentle reader, rest assured that you shall not be required to do math. An attempt will be made to explain a few concepts regarding the fundamentals of mathematics – stuff that you probably didn’t learn in school – to help you recognize what this means for modern technology, cybersecurity, and AI, as well as the laws that attempt to regulate it.


Mathematical Incompleteness and the Halting Problem

Let’s start with some key people: Kurt Gödel and Alan Turing. Turing you’ve probably heard of (if not, go watch The Imitation Game or, if you want to know more about his mathematical genius, read Alan Turing: The Enigma). Gödel was an earlier mathematician and logician whom you may not have heard of. Famous among academics, his theorems were instrumental in Turing’s work and remain a limit on what we can do, no matter how clever our inventions.


In 1930, mathematician Kurt Gödel demonstrated that there are limits to our mathematical knowledge by proving that mathematics is “incomplete.” What does “incomplete” mean when it comes to mathematics? At the most basic level, it means that there are true mathematical statements – statements built from arithmetic no more exotic than 3 + 3 – that mathematics cannot prove, AND that there is no way to identify these statements ahead of time.[1] For those more familiar with Rumsfeld than Riemann, these are the “unknown unknowns” that are – Gödel proved – unknowable. These results are known as the Incompleteness Theorems.


Alan Turing was studying Gödel’s Incompleteness Theorems while working on problems regarding infinite loops in early computer programming. Turing wanted to know if he could write an algorithm that could tell, ahead of time, whether an instruction like “keep looking for X until you find it” would ever finish – or whether the machine should give up and stop looking (i.e., halt). This became known as the Halting Problem. In trying to solve it, Turing discovered that the limits of computer science were bound by the Incompleteness Theorems. In short: not only couldn’t he solve the problem himself, he proved that no one could. The problem could not – and cannot – be solved.


What does this mean? Simply put, it is not mathematically possible to know, in general, whether a computer program that takes input will halt. Not because of malware, bugs in code, or threat actors, but because of mathematics. Just as the conservation of energy is a fundamental law of physics, the incompleteness of mathematics is a rule that no amount of computing power can overcome. And because computer science is – at its core – mathematics, this means that even when computer code is perfectly executed, it is provably impossible to guarantee that it will be free from harm.
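For the curious, the heart of Turing’s proof can be sketched in a few lines of modern code. This is a simplified illustration (the names below are invented for the example), and the halts() oracle it imagines is precisely the function Turing proved cannot exist:

```python
def halts(program, data):
    """Hypothetical oracle: returns True if program(data) eventually
    finishes, False if it would run forever. Turing proved no such
    function can exist; this stub is here only for the argument."""
    raise NotImplementedError("provably impossible in the general case")

def contrarian(program):
    """Does the opposite of whatever halts() predicts it will do."""
    if halts(program, program):
        while True:   # halts() said we'd stop, so loop forever
            pass
    return            # halts() said we'd loop forever, so stop

# The contradiction: feed contrarian to itself.
# If halts(contrarian, contrarian) returns True, contrarian loops forever.
# If it returns False, contrarian halts immediately.
# Either answer is wrong, so no correct halts() can ever be written.
```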


Mathematical Limits to Modern Technology

In fact, recent events show us how this happens. The inability to halt was the underlying cause of the “blue screen of death” created when CrowdStrike released an update to its Falcon endpoint detection solution. In that case, the Falcon update included a 21st field in the kernel package while the Microsoft-certified “csagent.sys” driver only defined 20 fields, resulting in an out-of-bounds memory read. The CrowdStrike software went off desperately searching for the 21st field and fell victim to the Halting Problem. While Delta characterized the flaw as “unauthorized access” in its $500 million lawsuit, the access in question was sought because the computer did not know to stop looking.
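As a loose illustration – the real incident involved kernel-mode C code reading raw memory, not Python lists, and the names below are invented – the class of error looks something like this:

```python
# A driver built to handle records with exactly 20 fields (indexes 0-19).
EXPECTED_FIELDS = 20
record = [f"field_{i}" for i in range(EXPECTED_FIELDS)]

def read_field(record: list, index: int) -> str:
    """Reads one field, trusting the caller's index -- no bounds check."""
    return record[index]

try:
    read_field(record, 20)  # a content update references a 21st field
except IndexError:
    print("Python raises a clean error; kernel-mode C instead reads "
          "memory it doesn't own, and the operating system crashes.")
```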


It is possible that process improvements could have reduced the number of customers impacted by the CrowdStrike error. But the mathematical limitations of process-bound software releases are not limited to the Halting Problem. Combining complex systems (i.e., “systems of systems”) has been proven to be non-deterministic (math-speak for “you can’t figure out the outcome from the inputs ahead of time”). This is part of the field of study known as “Complexity Theory.” And the ultimate craze best described as a “system of systems” is, of course, Artificial Intelligence.
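A tiny, standard concurrency example (not drawn from any particular product) shows how components that are each perfectly predictable on their own can combine into a system whose behavior varies from run to run:

```python
import threading
import time

events: list[str] = []

def worker(name: str) -> None:
    """Each worker, run alone, is perfectly deterministic."""
    for i in range(5):
        events.append(f"{name}{i}")
        time.sleep(0)  # politely yield, letting the OS pick who runs next

threads = [threading.Thread(target=worker, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The combined system is not deterministic: the interleaving of A, B,
# and C depends on OS scheduling and can differ on every run.
print(events)
```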


AI is created through a system or systems of deep learning, neural networks, machine learning, and even other AI systems. The non-deterministic nature of “systems of systems” is what creates the “black box” of many generative or general-purpose AI systems. This holds regardless of whether the underlying models are built on other AI systems, as the model combinations can be so complex that the AI system’s developers themselves do not understand how it gets from inputs to outputs. And the growth of the AI-powered digital economy is only going to compound this complexity.
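To make the “black box” point concrete, here is a deliberately toy neural network in plain Python. Even at this scale, with every weight laid bare, the only reliable way to learn the output is to run the thing:

```python
import math
import random

random.seed(42)

# A toy network: 3 inputs -> 4 hidden units -> 1 output.
# Every weight is fully visible, yet no individual number "means"
# anything a human can point to -- the behavior lives in the whole.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs: list[float]) -> float:
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

print(forward([0.5, -1.0, 2.0]))   # the only way to know the answer: run it
print(forward([0.5, -1.0, 2.01]))  # a tiny input change, a fresh opaque answer
```

Production models have billions of such parameters instead of sixteen, which is why even their developers cannot trace a given output back to a human-readable reason.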

Mathematical Limits to Digital Law

Theoretical mathematics doesn’t usually mesh well with lawmaking. Consider that, in 1897, the Indiana legislature considered a bill that would have effectively set the value of pi to 3.2 (it failed).[2] Then again, that legislature was probably not under 24/7 media pressure to be outraged at the existence of a number that was “irrational.”


Today’s lawmakers may not be so immune, and that should concern us all. Swift action was taken to ensure the passage of the EU AI Act, and it is expected to perpetuate the Brussels Effect in regulated global markets. The U.S. – slower, thanks to both our system of federalism and the pace at which common law is made in the courts – is moving forward through state legislatures and civil lawsuits under old laws. Regardless of the legal mechanism, though, the risk of developing outcome-based standards for lawfulness remains high.


For example, the EU AI Act will require high-risk AI systems and models to pass a conformity assessment before being allowed into EU markets. The European Commission has mandated that European standardization organizations develop and publish these standards by March 2025, so we don’t yet know what they are (a bit of a “black box” process in and of itself). California’s Generative Artificial Intelligence Accountability Act (SB-896) requires a risk analysis of the threats that the use of AI poses to California’s critical infrastructure, but it is not yet clear whether those risks will have to be mitigated, or to what extent. Civil lawsuits and regulatory enforcement actions – including potential precedent created by the Delta v. CrowdStrike suit – may similarly develop common law “standards of care” governing the development of AI models and AI systems (analysis of this case is almost complete – subscribe to be the first to know!).


The need for lawyers, lawmakers, and judges to understand the limitations of computer science and mathematics will only grow as artificial intelligence and other technologies evolve. These complex systems of systems will, unfortunately, make the limitations of mathematics ever more obvious to us.


Hide and Seek with Our Robot Overlords

If we cannot demand through the law that harm be avoided, are we resigned to being enslaved by our robot overlords? Absolutely not.


We can take steps to acknowledge that AI will, at some point, cause harm. We can prepare for it and plan to reduce it by implementing qualified human oversight, feedback loops, model correction mechanisms, and other processes designed to identify and mitigate harm (a rough sketch of what that might look like appears below). Politically “easy” legal rules cannot offset the risk any more than we can legislate away the risk associated with gravity. A duty of care cannot be to produce faultless code, because code will never be perfect. The test for “lawful” AI cannot be that it produces no harmful output. That would be like lawmakers mandating that the climate crisis be solved by requiring energy to be created from nothing.
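Purely for illustration – the class, names, and threshold below are invented, not drawn from any statute, standard, or product – such a process-based (rather than outcome-based) safeguard might be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class OversightPipeline:
    """Hypothetical harm-mitigation wrapper around an AI model. It cannot
    guarantee harmless output (nothing can), but it builds in the human
    oversight, feedback loops, and correction hooks described above."""
    review_queue: list = field(default_factory=list)
    feedback_log: list = field(default_factory=list)

    def generate(self, model, prompt: str, risk_score) -> str:
        output = model(prompt)
        if risk_score(output) > 0.8:          # invented threshold
            self.review_queue.append(output)  # route to a qualified human
            return "[held for human review]"
        return output

    def record_feedback(self, output: str, harmful: bool) -> None:
        # Feedback loop: flagged outputs become model-correction data.
        self.feedback_log.append((output, harmful))

# Usage with stand-in components:
pipeline = OversightPipeline()
echo_model = lambda prompt: f"response to: {prompt}"
low_risk = lambda output: 0.1
print(pipeline.generate(echo_model, "hello", low_risk))
```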


And if all else fails, tell the robots to play hide and seek.  They won’t know when to stop looking.
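Or, in code (a toy search with no guarantee of ever stopping – exactly the trap no general-purpose checker can always catch in advance):

```python
def hide_and_seek(found_it) -> int:
    """Checks spot 0, 1, 2, ... until found_it(spot) is True. If no spot
    ever qualifies, this runs forever -- and, per the Halting Problem,
    no general-purpose checker could have warned us in advance."""
    spot = 0
    while not found_it(spot):
        spot += 1
    return spot

# hide_and_seek(lambda spot: False)  # the robot never stops looking
```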


[1] This is not an easy concept to grasp for the non-mathematician (basically, 99.99% of lawyers). The best lay-person explanation the author found is here: https://youtu.be/I4pQbo5MQOs.  Slightly more in-depth is at https://medium.com/p/b59be67fd50a, which is well worth the Medium premium account. 
