This article describes the physical problems with predicting future events.
One issue that is rarely dealt with is the computational problem with such predictions. By this I do not mean their complexity, but whether such predictions can be made at all.
Take Newcomb’s paradox, a description of which can be found here. Essentially it is a game where, if the machine can guess what you are going to do, you lose money.
Now say the predictor is a program that is fed all sorts of information about you and then uses that to make a prediction. Such programs exist: Shannon made one that guessed which number you would pick next.
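To give a feel for how such a predictor might work, here is a toy sketch in Python. This is not Shannon's actual machine, and the class and parameter names are my own; it simply counts which pick tends to follow your recent picks and guesses the most common continuation.

    from collections import Counter, defaultdict

    class ToyPredictor:
        """Guesses your next pick from patterns in your previous picks."""

        def __init__(self, context=2):
            self.context = context             # how many past picks to condition on
            self.history = []                  # everything you have picked so far
            self.counts = defaultdict(Counter) # context -> counts of what you picked next

        def predict(self):
            key = tuple(self.history[-self.context:])
            if self.counts[key]:
                return self.counts[key].most_common(1)[0][0]
            return 1                           # default guess before any data

        def observe(self, pick):
            key = tuple(self.history[-self.context:])
            self.counts[key][pick] += 1        # learn: after this context you picked `pick`
            self.history.append(pick)

    predictor = ToyPredictor()
    for pick in [1, 2, 1, 2, 1, 2, 1]:         # feed it an alternating history
        predictor.observe(pick)
    print(predictor.predict())                 # guesses 2, continuing the pattern

Even this crude scheme catches anyone who falls into a habit like alternating 1, 2, 1, 2.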
Worried about how this first program could be used to rip you off, you decide to figure out how it is going to guess. You get hold of the same program and copy it to make a second program. Then you find out what it is going to say you will do, and you do the opposite. This is the same idea Turing used to show that no program can tell, in general, whether another program will terminate. Essentially his argument boils down to this: if you could write a DoesTerminate() program, then you could feed it the following program as input
If (DoesTerminate(this program) == True)
    Loop forever
Else
    Stop
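Written out as a Python sketch, with a hypothetical does_terminate() oracle standing in for the impossible DoesTerminate() program (the names are mine), the trick looks like this:

    def does_terminate(program):
        """Hypothetical oracle: returns True if `program` eventually halts.
        Turing's argument shows no such function can actually be written."""
        raise NotImplementedError

    def contrarian():
        # Ask the oracle about this very program, then do the opposite.
        if does_terminate(contrarian):
            while True:        # the oracle said we halt, so loop forever
                pass
        else:
            return             # the oracle said we loop forever, so halt immediately

    # Whatever does_terminate(contrarian) answers, contrarian() makes that answer wrong,
    # so a correct does_terminate() cannot exist.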
In the same way, you could take a predictor P() and use a copy of it inside your own program:
If (P() == 2)
    Pick 1
Else
    Pick 2
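A minimal Python sketch of the same move, assuming you can call your copied predictor as predictor_copy() (a placeholder name; the real predictor would be far more elaborate):

    def predictor_copy(player):
        """Your copy of the predictor: returns 1 or 2, its guess for `player`'s pick.
        Stands in for whatever program P() the real predictor runs."""
        return 2  # placeholder guess so the sketch runs; the real P would be much smarter

    def my_pick():
        # Run the copied predictor on myself, then do the opposite of its guess.
        guess = predictor_copy(my_pick)
        return 1 if guess == 2 else 2

    print(my_pick())  # always differs from what the copied predictor said I would pick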
You could argue that the predictor could know you are using a copy of itself and take this into account. But that kind of objection does not rescue the halting problem, and it does not rescue the "prediction problem" either: whatever adjusted predictor results, you can copy that one instead and still do the opposite. This means that no computation can ever predict every action you are going to take.