The Weak AI

Not the strong AI we fear

The term “Artificial Intelligence” is used more frequently now, often with an undertone of an impending threat to replace human intelligence. Recent achievements in AI research, paired with infrastructure-level progress in computing power, certainly promise a larger role for machines in our economy. But to assume that this replacement of jobs equals a replacement of general intelligence may yet be unfounded, and a large part of the reason is language.

First of all, there is a clear distinction between artificial general intelligence, or strong AI, and narrow AI. Most commercialized AI today is “narrow” in that it is good at one specific task, such as playing Go, but cannot easily transfer that specific competence to other, even simpler, tasks. This is the “illusion” of intelligence that the AI’s Language Problem article points out: today’s AI lacks common sense, let alone emotional and social intelligence, yet humans often overlook the gap. The problem lies in the myopic ways we quantify intelligence. Deep learning, as demonstrated in AlphaGo’s win against Lee Sedol, has enabled machines to capture a new dimension of humans’ instinctive knowledge, but it remains limited by the medium in which information is represented. Image and video recognition, for example, has reached very impressive accuracy despite a vector space far larger than that of words. Yet deciphering language is much harder than identifying the object in an image, because language carries three very different layers of meaning, syntactic, semantic, and pragmatic, and the latter two are very hard to represent as vectors.
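To make that last point concrete, here is a toy sketch in Python of why vector representations capture some semantic structure but say nothing about pragmatics. The vocabulary and the four-dimensional vectors below are made up purely for illustration; a real embedding model such as word2vec or GloVe would learn hundreds of dimensions from large corpora.

```python
import numpy as np

# Hypothetical 4-dimensional "embeddings" -- invented values, for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.9, 0.0]),
    "apple": np.array([0.1, 0.0, 0.2, 0.9]),
    "bank":  np.array([0.5, 0.4, 0.3, 0.2]),  # one vector, whether river bank or money bank
}

def cosine(a, b):
    """Cosine similarity: the standard closeness measure in embedding spaces."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantic relatedness falls out of the geometry...
print(cosine(vectors["king"], vectors["queen"]))  # high: related concepts
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated concepts

# ...but pragmatics does not: "bank" occupies a single point in space,
# so "I sat by the bank" and "I robbed the bank" look identical to the model.
```

The failure is structural, not a matter of more data: a fixed point in vector space cannot encode meaning that depends on who is speaking, to whom, and in what situation.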

Another problem is the question of whether AI can ever achieve creative intelligence. Training data is the fundamental necessity of any machine learning: every algorithm learns by training on data, and its accuracy depends on how much data it gets. An algorithm can attempt to generate a concept instead of simply comprehending one, but it has not succeeded much, as the color naming example demonstrates. Human creativity is an interesting mix of learning from examples, randomness, and some creative consciousness. To humans, that consciousness comes almost naturally, even with the very little “training data” we encounter. Capturing this creative consciousness may not be as simple as the left-right brain dichotomy mentioned in Weizenbaum’s writing suggests.
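For a sense of what “learning from examples plus randomness” produces in practice, here is a minimal character-level Markov chain in Python, in the spirit of the color naming experiments. The tiny training list and all function names are my own illustration, not the code behind the example the article cites, which used neural networks trained on thousands of paint-color names.

```python
import random
from collections import defaultdict

# A tiny, made-up training set of color names.
names = ["sea green", "sea blue", "sky blue", "blush pink", "dusty rose"]

def build_model(corpus, order=2):
    """Map each `order`-character context to the characters observed after it."""
    model = defaultdict(list)
    for name in corpus:
        padded = "^" * order + name + "$"  # ^ marks the start, $ marks the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Sample a new 'color name' one character at a time."""
    context, out = "^" * order, []
    for _ in range(max_len):
        nxt = random.choice(model[context])  # randomness on top of learned statistics
        if nxt == "$":
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

model = build_model(names)
print(generate(model))  # e.g. "sea blush pink": recombination of fragments, not invention
```

The output can look novel, but every character transition was copied from the training data; the model recombines what it has seen, which is exactly the gap between statistical generation and the creative consciousness the paragraph above describes.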

Beyond the technical feasibility of how, the philosophical question of why still remains. Many of us have come to equate jobs with purpose in life, and thus we fear AI’s increasing involvement in strictly economic activities as a threat to our existence. As Weizenbaum pointed out, organisms are shaped by the problems they face, and in that sense machines will never be humans. When we emulate a specific piece of human knowledge in a machine, the machine is still a tool that humans use. To grant a machine intelligence more weight than a tool deserves, and then to fear it, seems unintelligible on our part.