06 Nov '20 20:02
Unlike in BS science fiction, where the AI decides it doesn't want to obey humans anymore (which doesn't make any sense, for reasons far too tedious to explain here, though I may eventually write a whole book about it), the real current problem with AI is not that it fails to do what we tell it to do. Rather, it does do what we tell it to do, but we keep accidentally telling it to do something we don't actually want.
We humans often fail to fully take into account that AI always does EXACTLY what we tell it to do: very literally and EXACTLY! This isn't the AI's fault, because all it knows is the instructions we give it, which it follows, and the data it is given.
To see what I mean, watch the lecture below (YouTube video):
One thing I hope to do in my AI research is to tackle this exact problem by teaching the AI to recognize it. The AI must somehow come to know the difference between what we tell it to do and what we would want it to do.
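A toy sketch can make this tell-versus-want gap concrete. Suppose we "tell" a learner to maximize accuracy on a dataset where 99% of examples are negative and the one positive example is the case we actually care about catching. (The data and names below are my own illustration, not taken from the lecture.)

```python
# Hypothetical imbalanced data: 99 negatives, 1 positive (the case we care about).
labels = [0] * 99 + [1]

def accuracy(predict):
    """Fraction of examples where the predictor matches the label --
    the literal objective we stated."""
    return sum(predict(i) == y for i, y in enumerate(labels)) / len(labels)

def always_negative(i):
    """Degenerate 'solution' that satisfies the stated objective."""
    return 0

# 99% accuracy -- exactly what we asked for...
print(accuracy(always_negative))  # 0.99
# ...yet it never detects the one positive case, which is what we wanted.
```

The optimizer isn't disobeying; it is following the stated objective to the letter. The failure is ours, for stating an objective that doesn't capture what we want.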
More generally, and contrary to common layperson belief, the real problem (and potential danger) with AI is not that it's too smart but the exact opposite: it is not nearly smart enough, i.e. it is too stupid.