I really enjoyed watching Prof. Jürgen Schmidhuber’s talk about the future of AI. The topic of AI turning into a Terminator-style threat is an evergreen, and as someone who actually works in research I am getting quite annoyed by these futurology predictions. I find them about as annoying as the quantum-physics interpretations in The Secret movie — mathematical truths twisted by knaves to make a trap for fools (yes, I like Kipling ;-)
His argument is that a super AI would not be much concerned with humans, just as we are not much concerned with, say, ants, because our goals are different. The only time we really notice ants and think about them is when they come inside the house and annoy us in the kitchen. A super AI would relate to us in a similar way.
That makes a lot of sense. The problem is that such an AI would have a lot of friction points with humans, because it would compete with us for scarce resources. The machines we build today are made of relatively scarce materials, and presumably one of the goals of an AI like this would be to spread itself (like life) — which it couldn’t do if it needed scarce metals and had to compete with humans for them. Only once it could move to a different, non-competitive place, like the Moon, where it could find these materials, would there be no reason for conflict.
But anyway, I think discussions on these topics are still quite irrelevant today, because all we can build right now are statistical models that have become a bit more advanced than those of the past.
Just my two cents.