NBN on AI: the help or hindrance of humanity

    Image licensed under Creative Commons.

    What is the future of humanity?

    Everyone is entitled to their own opinion, with some seeing humanity eventually colonizing the stars and others predicting that the human race will dwindle to extinction. According to the Future of Humanity Institute at the University of Oxford, the key to humanity’s success or failure might come in the form of artificial intelligence, or AI.

    AI is one of the most commonly explored subjects in science fiction, appearing frequently in films like Terminator and Interstellar. Because of its complexity, some misunderstanding still lingers as to the definition and capabilities of AI.

    One big category of AI technology is known as narrow AI. This form of AI can perform a specific task, such as running a calculation, often with an efficiency that far exceeds that of its human counterpart. While it may sound impressive, this technology is already commonly used in many fields, including on college campuses. Many students and faculty are even creating their own projects using narrow AI. For example, Professor Kristian J. Hammond, co-director of the Intelligent Information Laboratory, conducts research on AI as well as machine-generated content like news articles written by algorithms.

    Besides narrow AI, strong AI is another type that generates much excitement and even controversy. Some see this type of AI as a source of fear, since it would be potent enough to beat humans in a variety of ways. Already, AI has bested humanity at a variety of games, from chess several decades ago to poker more recently. In a survey of AI researchers conducted by the University of Oxford and Yale University, respondents estimated a 50 percent chance that AI will outperform humans at all tasks within 45 years. Would AI take over? It is unknown. The advent of strong AI could lead to an event called the Singularity, in which technological change becomes so rapid that predicting what happens afterwards is impossible.

    Perhaps just as interesting as the potency of strong AI are the controversies associated with it, which lead to serious discussions about ethical and philosophical issues. Some arguments stand out on both sides of the debate.

    AI does not have consciousness, so there is no risk of it “becoming evil.” The goals of AI could differ from our own, however, and this could spell disaster for humanity.

    One of the most famous thought experiments on this subject is the grey goo scenario. In this situation, nanotechnology capable of replicating itself begins to spiral out of control. If nothing is done to stop the replication process, humanity could be driven to extinction as the AI relentlessly consumes the resources in the environment to complete its task of replication.

    To combat the problems of superintelligent AI, the Three Laws of Robotics, created by famous writer and Boston University professor Isaac Asimov, could be employed as an initial step to protect humanity. The laws are the following:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Maybe the repercussions of AI will not be as dramatic as the end of the world, but a negative impact on employment as AI takes over many jobs is very real and perceptible. About 47 percent of all U.S. employment is at risk of being lost to automation in the coming years, according to Carl Benedikt Frey and Michael A. Osborne of the University of Oxford, and that estimate does not even factor in AI.

    What’s more, if AI becomes more efficient than humans in the service sector, a college education might see its value diminished. There is over a 95 percent chance that jobs like cashiers, secretaries and clerks will be automated within 20 years. Even jobs requiring more advanced skills are at risk; Goldman Sachs recently replaced hundreds of equity traders with automation. As a result, some forms of college education might not be so valuable if a computer can outperform a human at the same task.

    Despite all these concerns, one can’t ignore AI’s vast potential benefits. Very soon, computers may be able to improve themselves instead of requiring tedious programming for each update. Additionally, AI could play a major role in education through personalized learning, which could drastically change the way the classroom, and perhaps Northwestern, looks in the coming years.

