Reading “The Machine Stops” reminded me of some worrisome aspects of today’s supercomputers. Not only has the “thinking” and processing speed of these systems far surpassed the problem-solving speed of any human being; we have now even created neuro-inspired computers capable of self-learning. This is one of the issues I would like to point out when it comes to AI (artificial intelligence), as the possibility of an AI becoming aware of itself could be fatal. The threat of superintelligence also remains among the greatest threats to human existence, alongside nuclear war and a bioengineered pandemic. Given the speed of its data processing, mankind would stand no chance of “outsmarting” such a formidable opponent. And if this were to happen, how would the AI view us? Would we be marked as an enemy? It is hard to say.
Many Asimov-style stories have been written on this topic, “The Machine Stops” being one of them, and they point out how dangerous it is to leave our lives in the hands of our own intelligent creations. And although fundamental rules for AI have been laid down, namely Asimov’s “Three Laws of Robotics”, which should ensure that human wellbeing and safety remain the utmost priority, these stories have shown us many times that interpretations can vary. In the case of “The Machine Stops”, there is an obvious inclination toward the survival of the larger system, in which (much as in a beehive) the individual does not matter, as long as his existence, or nonexistence, contributes to the wellbeing of the whole, deviating from the Three Laws.
If you have some spare time, I would recommend a very interesting collection of stories by Michael Ende, The Prison of Freedom, where you can find another story of this kind.