▲ A.I. Superpowers
Artificial intelligence has been a hot topic in recent years. The interactions between Iron Man's Tony Stark and his A.I. assistant “J.A.R.V.I.S.” are amazing; could that ever happen in the near future? Can human beings be freed from the shackles of intensive labor to explore the meaning of life?
In his book “A.I. Superpowers”, Dr. Lee Kai-Fu discusses the past and future of A.I. development, and the respective advantages and disadvantages of China and America. It offers a window into the whole industry. Here we summarize some key points from the book.
US Started Early, China Is Close Behind
By the early 1990s, the US had already made achievements in A.I. As the Internet in China developed rapidly, officials in Mainland China announced that the country aims to become the world's A.I. incubator by 2030:
Zhongguancun in Beijing is the hub for A.I. development in China. Relevant resources, policies, and talent are all key areas of investment.
In 2017, venture capital for A.I. startups in China accounted for 48% of global A.I. investment, surpassing the US for the first time.
The rapidly developing Internet in China offers massive amounts of rich data, a great advantage for A.I. development.
There are two main kinds of A.I. methods
1. Rule-based methods (symbol system/expert system)
A series of logical rules instructs the computer on how to think. For instance, “if x then y” can encode expert knowledge from a relevant field as a rule, helping the A.I. software adapt to the real world. However, this method suits only simple, well-defined problems; it cannot handle the explosion of possible choices as operations scale up.
Rule-based methods are limited by their lack of self-learning capability and their inability to handle issues that fall outside the scope considered in advance.
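The “if x then y” idea above can be sketched as a toy expert system. Everything here (the symptom rules, the `diagnose` function) is invented for illustration and is not from the book:

```python
# A toy rule-based (expert-system) classifier: each rule is an
# "if condition then conclusion" pair written down by a human expert.

def diagnose(symptoms):
    """Apply hand-written rules in order; return the first matching conclusion."""
    rules = [
        (lambda s: "fever" in s and "cough" in s, "possible flu"),
        (lambda s: "sneezing" in s, "possible cold"),
    ]
    for condition, conclusion in rules:
        if condition(symptoms):
            return conclusion
    # The system fails on anything outside its pre-defined rules --
    # exactly the limitation described above.
    return "unknown"

print(diagnose({"fever", "cough"}))  # possible flu
print(diagnose({"headache"}))        # unknown
```

Note how every behavior must be anticipated by a human; the system never improves from new data.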
2. Neural network methods (deep learning)
Since complex problems cannot be resolved by listing every rule, why not just rebuild the human brain in a machine? After all, the human brain is an expert at dealing with complex issues.
Neural network methods let machines “learn” through “neural networks”: we feed a large number of examples of a phenomenon (such as images or audio files) into an artificial neural network, so that the network can learn from the data to identify patterns.
“Deep learning” can only make decisions, predictions, and categorizations within a specific field. That is why it requires a massive amount of data from that field in order to make the best decision for the intended result.
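As a contrast to hand-written rules, here is a minimal sketch of learning from labeled examples: a single artificial neuron (a perceptron, a toy ancestor of deep networks) that infers the logical-AND pattern from data instead of being told the rule. The code and numbers are illustrative, not from the book:

```python
# A single-neuron "network" that learns a pattern from labeled examples
# (perceptron update rule). Real deep learning stacks many such units.

def train(samples, epochs=20, lr=0.1):
    """Adjust weights and bias whenever a prediction misses its label."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples of the AND pattern: the network infers the rule itself.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The key difference from the rule-based approach: no one wrote “if both inputs are 1 then output 1”; the weights were adjusted from examples until the pattern emerged.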
Why has “deep learning” become so popular in recent years? Two reasons:
It requires massive computing power => computing power is now strong and cheap
It requires massive amounts of data => the rise of the Internet has produced massive data
For instance, take my team’s “A.I. Skin Test” as an example:
▲ Results of an A.I. Skin Test
First, we must decide which ethnicities this algorithm will inspect for skin issues, because skin issues and skin colors vary across ethnicities, and that determines the data source we use to train the algorithm. => Suppose we choose East Asians, and we need to detect dark eye circles, pigmentation patches, pimples, and other issues.
Secondly, a massive number of photos is collected, and each photo is manually labeled with its skin issues (yes, they have to be labeled one by one). This is the highest standard for machine learning algorithms: consistency with real human judgment. The more accurate the manual labeling, the more precise the algorithm's output will be.
Thirdly, design and then refine the algorithm. We first design an algorithm to run on the data, then output results and accuracy. For instance, the algorithm tells me that a photo has 10 pimples at certain positions on the left and right cheeks. We compare that to the manual labeling and discover there are only 5 pimples, all on the left cheek. This means the algorithm is not accurate enough and makes errors in its judgments. We then revise the algorithm (tell it that those on the right cheek are not pimples~) until the accuracy reaches our intended level.
Finally, if we take a new photo, one that was not among the training photos, and the algorithm accurately identifies its skin issues, with results that nearly coincide with human identification, then we have obtained an A.I. skin test expert!
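The compare-to-manual-labels step above can be sketched as a simple accuracy check. The photo counts below are hypothetical, invented purely for illustration; a real system would compare detected positions photo by photo, not just counts:

```python
# Toy evaluation loop for the skin-test workflow: compare the algorithm's
# per-photo pimple counts against the manual labels and compute accuracy.

def accuracy(predicted, labeled):
    """Fraction of photos where the algorithm matches the human label."""
    matches = sum(1 for p, h in zip(predicted, labeled) if p == h)
    return matches / len(labeled)

# Hypothetical counts for five photos: human labels vs. algorithm output.
human_labels = [5, 0, 2, 1, 3]
algo_output  = [10, 0, 2, 1, 3]  # over-counts the first photo, as in the text

print(accuracy(algo_output, human_labels))  # 0.8
# Revise the algorithm and re-measure until accuracy reaches the intended level.
```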
The four waves of AI applications
The first wave: Internet A.I.
The main beneficiaries of this wave are the Internet companies. By labeling the user behavioral data they acquire, algorithms can understand and learn personal preferences, and then recommend content tailored to each of us, as TikTok, Taobao, Google, and others do.
The second wave: Business A.I.
Traditional companies label the massive amounts of professional data they have accumulated, tagging strong and weak features, to discover hidden correlations and provide a basis for corporate decisions. For example, insurance companies can better identify fraudulent claims, and lenders can find that borrowers who receive their loans on Wednesdays repay faster.
The US is still ahead of China in business A.I., because the American financial industry has a better data storage framework, which benefits A.I. development. Most Chinese companies have their own idiosyncratic data storage systems and are less willing to invest in third-party professional services. It is difficult for China to profit from the second wave of A.I. technology, because most traditional Chinese corporations do not yet have structured data; they also retain old corporate cultures, among other issues.
The third wave: Perception A.I.
This wave extends A.I. into our living environment: massive numbers of sensors and smart devices transform the real world into data that deep learning algorithms can analyze and optimize. The algorithms simulate the workings of a human brain, grouping the pixels of an image or video into meaningful clusters to identify the objects within, or extracting words and phrases from voice data to analyze the meaning of a full sentence. Examples: Xiao Ai speakers, face-scan payments, and others.
This A.I. wave blurs the boundary between online and offline and greatly increases the points of interaction between us and the Internet, creating online-merge-offline (OMO): it brings the convenience of the online world offline, and the sensory content of the offline world online. E.g. KFC's face-scan payment, customized education (courses dynamically designed around each person's level of learning), and others.
▲ Alipay and KFC jointly release face scan payment
China is developing faster than the US in this wave: the country has a higher tolerance regarding public data and personal privacy, people are willing to trade some privacy for convenience, and Shenzhen has an advantage in A.I. hardware production.
The fourth wave: Autonomous A.I.
This refers to familiar applications such as unmanned aerial vehicles and autonomous vehicles. Notably, China is factoring autonomous vehicles directly into its urban planning, so as to give them room to develop.
Reflections on A.I.: Back to human value
The four waves of A.I. are transforming the world. They will have a huge impact on the global economy and society, and may cause large-scale unemployment.
Dr. Lee Kai-Fu proposes “human-robot collaboration” and “jobs of compassion” as ways to adapt to future employment demands, so that A.I.'s enormous impact on jobs can be mitigated while human beings fully express their unique potential.
“[Book Excerpt] A.I. Superpowers: Dr. Lee Kai-Fu’s Deep Analysis of A.I. Trends for the Next Ten Years” by Livia Yang. Reprinted with permission from the author.