Stanford's Annual Report on AI: China Publishes the Second-Largest Number of AI Papers in the World



The report is dedicated to tracking, collating, distilling, and visualizing AI-related data. It has become a reference that policymakers, researchers, executives, journalists, and the general public use to build a more intuitive picture of the complex field of AI.

Report summary

This year's report pursues two goals: first, it refreshes last year's indicators; second, it examines the progress of AI technology in a global context. The former is crucial to the report's mission of grounding the conversation about AI in data as the technology continues to advance. The latter is just as essential: without a global perspective, there is no complete AI story.

The 2017 report focused heavily on North American activity, which reflected the project's limited number of global partnerships rather than any inherent bias. This year, that global gap begins to narrow. There is still a long way to go before the report is fully comprehensive, and further cooperation and outside participation are needed. Even so, one thing can be asserted with confidence: AI is a global technology.

In 2017, 83% of the AI papers in the Scopus database originated outside the United States; 28% came from Europe, the highest share of any region. Enrollment in university AI and machine learning (ML) courses is rising worldwide, most strikingly at Tsinghua University in China, where combined AI+ML enrollment in 2017 was 16 times that of 2010.

Progress is not confined to the United States, China, and Europe: South Korea and Japan were the second- and third-largest producers of AI patents in 2014, behind only the United States. In addition, South Africa hosted the second Deep Learning Indaba, one of the world's largest ML teaching events, which drew more than 500 participants from over 20 African countries.

AI's diversity is not just geographic. Today, more than 50% of the members of the Partnership on AI are non-profits, including the American Civil Liberties Union (ACLU), Oxford's Future of Humanity Institute, and the United Nations Development Programme. At the same time, awareness of the importance of gender and ethnic diversity to AI's progress is growing. For example, organizations that encourage the participation of underrepresented groups, such as AI4ALL and Women in Machine Learning (WiML), have expanded.

Activity metrics and technical performance metrics

This article mainly covers the first part of the main report - Data: Volume of Activity and Technical Performance

Activity metrics reflect the engagement of academia, industry, entrepreneurs, and the public in AI. The data range widely, from the number of university students studying AI, to the share of women applying for AI jobs, to the growth of venture capital flowing into AI startups.

Technical performance metrics capture how AI performance changes over time; for example, how well systems answer questions or how quickly computers detect objects in benchmark tests. The 2018 report adds country-level granularity to many of last year's indicators, such as robot installations and AI conference attendance. It also adds new metrics and research areas, such as patents, Robot Operating System downloads, the GLUE benchmark, and the COCO leaderboard.

Overall, the major trends from last year continue: AI activity is increasing almost everywhere, and technical performance is improving across the board. Even so, some results this year deserve special attention, including significant progress in natural language and the limited gender diversity in the classroom.

Volume of Activity

I. Overview of AI Paper Publication

1. Classification by discipline

The chart below shows the growth in annual publication of academic papers relative to 1996, comparing papers across all fields of science, in computer science (CS), and in artificial intelligence (AI). From 1996 to 2017, annually published AI papers grew faster than CS papers, which suggests that the growth in AI publishing is driven by more than a general rise in interest in computer science.

Figure 1: Annual growth of AI papers published by discipline from 1996 to 2017

2. Classification by Region

The figure below shows the number of AI papers published annually by region. Europe has consistently been the largest producer of AI papers, accounting for 28% of the AI papers in the Scopus database in 2017. Meanwhile, despite fluctuations around 2008, the number of AI papers published in China grew by 150% between 2007 and 2017.

Figure 2: Number of AI papers published annually by region between 1996 and 2017

3. Classification by Subcategory

The following figure shows the number of AI papers in the Scopus database by subcategory. These subcategories are not mutually exclusive.

Of the AI papers published in 2017, 56% fell under machine learning and probabilistic reasoning, compared with 28% in 2010. For most subcategories, papers were published faster between 2014 and 2017 than between 2010 and 2014. Most notably, the compound annual growth rate (CAGR) of papers on neural networks was only 3% from 2010 to 2014 but reached 37% from 2014 to 2017.

Figure 3: Number of AI papers published annually by subarea between 1998 and 2017
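For reference, a compound annual growth rate like the 37% quoted above is computed from a start value, an end value, and the number of intervening years. A minimal sketch in Python, using made-up paper counts rather than figures from the report:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical paper counts, for illustration only (not data from the report).
papers_2014, papers_2017 = 1_000, 2_570
print(f"CAGR 2014-2017: {cagr(papers_2014, papers_2017, 3):.0%}")  # ~37%
```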

4. AI Papers on arXiv

The figure below shows the number of AI papers on arXiv, categorized by each paper's primary subcategory. arXiv is a repository that collects preprints of papers in fields including physics, mathematics, computer science, biology, and economics. The right axis corresponds to the total of all AI papers on arXiv (shown as a gray dashed line).

The total number of AI papers on arXiv, and the number in many subcategories, are increasing. That authors post these papers regardless of whether they have been peer reviewed or accepted at AI conferences indicates that AI researchers are eager to disseminate their work, which also reflects the competitiveness of the field. Since 2014, computer vision (CV) and pattern recognition has been the largest AI subcategory on arXiv; before 2014, its growth tracked closely with the artificial intelligence and machine learning subcategories. Beyond showing growing interest in computer vision (and its broad applications), the figure also shows the growth of other AI application areas, such as computation and language, and robotics.

Figure 4: Number of AI papers by subcategory on arXiv between 2010 and 2017

5. Classification by Regional Research Focus

The chart below shows the relative activity index (RAI) for the United States, Europe, and China. RAI approximates regional specialization by comparing a region's AI research activity with global AI research activity: it is defined as a region's share of publications devoted to AI divided by the global share of publications devoted to AI. A value of 1.0 indicates that a region's research activity on AI matches the global pattern; values above 1.0 indicate a stronger focus, and values below 1.0 a weaker one.
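Read this way, RAI is just a ratio of shares. A minimal sketch of the calculation, using placeholder publication counts rather than figures from the report:

```python
def relative_activity_index(region_ai: int, region_total: int,
                            world_ai: int, world_total: int) -> float:
    """RAI = (region's share of its own papers that are AI)
             / (world's share of all papers that are AI).
    A value of 1.0 means the region's focus on AI matches the global average."""
    return (region_ai / region_total) / (world_ai / world_total)

# Placeholder counts, for illustration only.
print(relative_activity_index(5_000, 100_000, 60_000, 2_000_000))  # ~1.67, above-average focus
```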

Chinese AI papers lean toward engineering and the agricultural sciences, while American and European AI papers lean toward the humanities and the medical and health sciences. Compared with 2000, the 2017 data show that all three regions have become more specialized, with China's focus shifting toward agriculture. This is in line with expectations, since China is the world's largest food producer and tends to emphasize applied AI.

Figure 5: Focuses of AI research in different regions between 2000 and 2017

6. Classification by Institution

The following five charts show the number of AI papers in the Scopus database affiliated with governments, corporations, and medical institutions. The first three charts directly compare the number of AI papers by institution type in China, the United States, and Europe, while the last two show the number of papers published by corporations and by governments across regions.

In 2017, government-affiliated institutions in China published nearly four times as many AI papers as Chinese corporations. Since 2007, government AI papers in China have increased by 400%, while corporate AI papers have grown by only 73% over the same period.

In the United States, corporate papers make up a relatively large share of all AI papers. In 2017, the share of AI papers affiliated with corporations was 6.6 times higher in the United States than in China and 4.1 times higher than in Europe.

Figure 6: Number of AI papers published annually by institution in China between 1998 and 2017

Figure 7: Number of AI papers published annually by institution in the United States between 1998 and 2017

Figure 8: Number of AI papers published annually by institution in Europe between 1998 and 2017

Figure 9: The growth of AI papers published by enterprises in different regions between 2009 and 2017

Figure 10: Growth of AI papers published by regional governments between 2009 and 2017

7. Overview of citation of AI papers

The following figure shows the field-weighted citation impact (FWCI) of AI authors by region. The region's citation impact is the average number of citations received by AI authors in that region divided by the average number of citations received by all AI authors worldwide. In this figure, FWCI is rebased, meaning citation counts are shown relative to the world average: a rebased FWCI of 1 indicates that publications are cited exactly as often as the world average.

An FWCI of 0.85, for example, means papers are cited 15% less than the world average. Although Europe publishes the largest number of AI papers each year, its FWCI has remained relatively stable and close to the world average. China, by contrast, has substantially increased its FWCI: AI authors in China were cited 44% more in 2016 than in 2000. Even so, AI authors in the United States are still cited more than those in other regions, 83% above the global average.

Figure 11: Field-weighted citation impact of AI authors by region from 1998 to 2016
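As a reading aid, an FWCI value is simply a region's average citations per AI paper normalized by the world average. A minimal sketch with placeholder citation counts (not data from the report):

```python
def fwci(avg_citations_region: float, avg_citations_world: float) -> float:
    """Field-weighted citation impact relative to the world average (1.0 = world average)."""
    return avg_citations_region / avg_citations_world

# Placeholder averages, for illustration only.
print(fwci(11.0, 6.0))  # ~1.83 -> cited about 83% more than the world average
print(fwci(5.1, 6.0))   # 0.85  -> cited 15% less than the world average
```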

8. Mobility of AI Paper Authors

The figure below shows the impact of international mobility on the publication and citation rates of AI papers. Four types of author are considered: sedentary, transitory, migratory inflow, and migratory outflow. Sedentary authors are active researchers who have not published outside their home region. Transitory authors publish outside their home region for no more than two years. Migratory authors contribute to another region for two or more years.

Figure 12: Publication rate and field-weighted citation impact of AI authors' papers in China, the United States, and Europe from 1998 to 2017

The x-axis in the figure represents the relative publication rate, i.e., the average number of publications by authors in each mobility category divided by the average number of publications in the region as a whole. The y-axis represents the field-weighted citation impact, i.e., the average number of citations received by authors in each category divided by the average number of citations in the region as a whole.

An author is counted as an AI author if at least 30% of their papers involve AI. Transitory authors have the lowest publication rates in the United States, China, and Europe, while migratory authors have the highest FWCI in all three regions; in other words, mobile authors tend to be cited more often.

Among the three regions, China has the highest share of sedentary AI authors (76%), followed by Europe (52%) and the United States (38%). Although the proportion of sedentary authors in China is relatively large, China's non-sedentary authors tend to publish at higher rates than non-sedentary authors elsewhere. In other words, relatively few Chinese authors are geographically mobile, but those who are tend to be more productive than mobile authors in other regions.

9. AAAI Papers by Country

The figure below shows the number of papers submitted to and accepted at the 2018 conference of the Association for the Advancement of Artificial Intelligence (AAAI), by country. The 2018 AAAI conference was held in February 2018 in New Orleans, Louisiana, USA. About 70% of the papers submitted came from the United States or China. Although China contributed the most submissions, the United States and China had nearly the same number of accepted papers, 268 and 265 respectively. This corresponds to an acceptance rate of 29% for US papers and 21% for Chinese papers. Papers from Germany and Italy had the highest acceptance rate, at 41%.

Figure 13: Papers submitted and accepted at AAAI 2018

II. Enrollment in University AI Courses

1. Number of students

The figure below shows the percentage of undergraduates enrolled in AI and machine learning (ML) courses. Although the share of undergraduates taking AI courses tends to be slightly higher than for ML courses (5.2% versus 4.4% on average), enrollment in ML courses is growing faster, suggesting that machine learning is becoming increasingly important as a subfield of AI.

Figure 14: The proportion of undergraduates enrolled in AI and ML courses between 2010 and 2017

2. AI Courses in the United States

The following chart shows the growth in AI and ML course enrollment at several leading computer science universities in the United States. In 2017, AI course enrollment was 3.4 times its 2012 level, and ML course enrollment was 5 times its 2012 level. At the University of California, Berkeley, ML course enrollment in 2017 was 6.8 times that of 2012.

Figure 15: Growth of students enrolled in AI and ML courses between 2012 and 2017

3. AI Courses Outside the United States

The following two graphs show enrollment in AI and ML courses at several leading computer science universities outside the United States. In 2017, combined AI+ML enrollment at Tsinghua University was 16 times its 2010 level, the highest growth among the non-US universities studied. Across the schools studied, growth in AI course enrollment depends more on the individual school than on geography.

Figure 16: Growth of AI+ML course enrollment outside the United States between 2010 and 2017

III. AI Conferences

1. Large-scale academic conferences

The following chart shows attendance at large AI conferences and its growth relative to 2012. Large AI conferences are defined as those with more than 2,000 attendees in 2017. NeurIPS (formerly NIPS), CVPR, and ICML were the most attended AI conferences, and their attendance has also grown fastest since 2012: NeurIPS attendance in 2018 was 3.8 times its 2012 level, and ICML attendance 5.8 times. This shows continued strong interest in ML as a subset of AI. Meanwhile, conferences focused on symbolic reasoning continue to show relatively little growth.

Figure 17: Participation in large AI conferences from 1984 to 2017

2. Small academic conferences

The figure below shows attendance at small AI conferences and its growth relative to 2012. Small AI conferences are those with fewer than 2,000 attendees in 2017. ICLR attendance in 2018 was 20 times its 2012 level, growth that likely reflects the field's current focus on deep learning.

Figure 18: Participation in small AI conferences from 1995 to 2017

3. Diversity Organizations

The figure below shows attendance at the annual workshop run by Women in Machine Learning (WiML), an organization dedicated to supporting women in machine learning, and the number of alumni of AI4ALL, an educational nonprofit designed to improve diversity and inclusion in AI. Both programs have seen enrollment grow in recent years: WiML workshop participation is up 600% since 2014, and AI4ALL alumni are up 900% since 2015. These increases indicate ongoing efforts to include women and underrepresented groups in AI.

Figure 19: Growing participation of women and underrepresented groups in AI and ML programs

IV. Robot Software Downloads

The figure below shows the number of Robot Operating System (ROS) binary packages downloaded from ROS.org. ROS is a widely used open-source robotics software stack, adopted by many commercial manufacturers and academic researchers. The left axis shows average total monthly downloads, while the right axis shows average monthly downloads from unique IP addresses only. Since 2014, total downloads and unique downloads have increased by 352% and 567%, respectively, showing growing interest in robotics and robotic systems. Because unique downloads are growing faster than total downloads, we can infer that there are more ROS users, not just more active ones.

Among the five regions with the most ROS.org page views since 2012, the United States and Europe rank highest. China has the fastest growth of any major region, with 18 times as many visitors in 2017 as in 2012.

Figure 20: The number of downloads of Robot Operating System (ROS) increased between 2011 and 2018

V. AI Startups and Venture Investment

1. AI Startups

The figure below shows the number of active venture-backed private startups in the United States in a given year. The blue line (left axis) shows AI startups only, while the gray line (right axis) shows all venture-backed startups, including AI startups. The chart counts the total number of startups in January of each year. From January 2015 to January 2018, the number of active AI startups grew 2.1 times, while all active startups grew 1.3 times. Overall, the number of active startups has grown relatively steadily, while the number of AI startups has grown much more steeply.

Figure 21: Growth in the number of AI start-ups in the United States from January 1995 to January 2018

2. Venture Capital

The figure below shows the annual amount of funding that venture capital (VC) firms provide to active US startups across all funding stages. The blue line (left axis) shows funding to AI startups only, while the gray line (right axis) shows funding to all venture-backed startups, including AI startups. These are annual figures, unlike the previous chart, which is cumulative year over year. From 2013 to 2017, VC funding for AI startups increased 4.5 times, while funding for all startups increased 2.08 times. The venture capital boom from 1997 to 2000 is explained by the dot-com bubble, and the smaller booms in 2014 and 2015 reflect a period of relatively strong economic growth.

Figure 22: The annual venture capital funds acquired by AI start-ups between 1995 and 2017

VI. AI Talent and Patents

1. Talent demand

The following chart shows the number of job openings per year that require AI skills, as well as the relative growth in such openings. AI skill categories are not mutually exclusive. Although ML is the most commonly required skill, demand for deep learning (DL) is growing fastest: from 2015 to 2017, the number of job openings requiring DL skills increased 35-fold.

Figure 23: Vacancies requiring AI skills between 2015 and 2017

2. Gender Diversity of Applicants

The figure below shows the share of male and female applicants for AI job openings in 2017. The data are grouped by required skill and are not mutually exclusive. In the United States, men make up 71% of AI job applicants on average; because machine learning positions attract the most applicants, this average is driven largely by ML applicants. The gender gap is wider for positions requiring deep learning and robotics skills than for other categories.

Figure 24: Application for AI positions by gender in 2017

3. Patents

The figure below shows the number and growth of AI patents by the region of the inventor. AI patents are aggregated using IPC codes relating to cognition and meaning understanding and to human-machine interface technologies. Patents are difficult to track over time. In 2014, about 30% of AI patents originated in the United States, followed by South Korea and Japan, which accounted for 16% of the total. Among the top inventor regions, South Korea and Taiwan grew fastest, with nearly five times as many AI patents in 2014 as in 2004.

Figure 25: AI patents by inventor region, 2004 to 2014

VII. AI adoption

1. AI Capabilities Embedded, by Region

The chart below shows results from a McKinsey & Company survey of 2,135 respondents, each answering on behalf of their organization. It shows the proportion of respondents whose organizations have embedded AI in at least one function or business unit; respondents could select multiple AI capabilities. Although some regions embed a wider range of AI capabilities than others, overall adoption levels are broadly similar across regions.

Figure 26: In 2018, the proportion of companies embedding AI functionality in at least one function

2. Industry and Function

The chart below shows results from the same McKinsey survey of 2,135 respondents, each answering on behalf of their organization. It shows the proportion of respondents who have piloted or embedded AI in specific business functions. Organizations tend to apply AI in the functions that provide the most value in their industry: financial services firms, for example, apply AI heavily to risk, automotive companies to manufacturing, and retailers to marketing and sales. This suggests that the pace of AI progress in a specific application area, such as manufacturing, may be tied to how heavily the industries that specialize in that function adopt AI.

Figure 27: Proportion of enterprises testing or embedding AI functions in specific business functions in 2018

VIII. Corporate and Government Attention

1. Mentions of AI and ML in Earnings Calls

The figure below shows how often the terms artificial intelligence (AI) and machine learning (ML) were mentioned in corporate earnings calls, by industry. The first chart shows mentions in the IT sector's earnings calls only, since that industry is most closely tied to AI and ML; the second shows mentions by industries other than IT. Mentions of AI and ML by IT companies picked up in 2015, while for most other industries the growth began in 2016. Outside the technology sector, the companies mentioning AI most often in earnings calls were concentrated in the consumer, financial, and healthcare industries.

Figure 28: Mentions of AI in the earnings calls of technology companies and companies in other industries, 2007 to 2017

2. Robot Installation

The following figure shows annual industrial robot installations by region. The first chart covers the five regions with the most installations, and the second covers the remaining regions. Since 2012, annual robot installations in China have increased by 500%, compared with 105% in South Korea and 122% in Europe.

Figure 29: Robot installation in different parts of the world from 2012 to 2017

3. GitHub stars

The following figure shows the number of GitHub stars received by various AI and ML software packages, a rough measure of the popularity of AI programming frameworks. The recent trend is that frameworks backed by large companies, such as Google's TensorFlow, Facebook's PyTorch, and Amazon-backed MXNet, are becoming more popular than the rest.

Figure 30: GitHub stars for popular AI frameworks between 2015 and 2018
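Star counts like those behind Figure 30 can be retrieved from GitHub's public REST API via the `stargazers_count` field of the repository endpoint. A minimal sketch, assuming the `requests` package is installed and that the repository paths below are still current (they may have moved since the report was written):

```python
import requests

def star_count(owner: str, repo: str) -> int:
    """Return the current number of GitHub stars for owner/repo."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["stargazers_count"]

# Frameworks mentioned in the text; the repo paths are assumptions.
for owner, repo in [("tensorflow", "tensorflow"), ("pytorch", "pytorch"), ("apache", "mxnet")]:
    print(f"{owner}/{repo}: {star_count(owner, repo)} stars")
```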

4. Sentiment of Media Coverage

The following figure shows the share of mainstream media articles mentioning AI that are classified as positive, negative, or neutral. Coverage of AI has become less neutral and more positive, particularly since early 2016: the share of positive articles rose from 12% in January 2016 to 30% in July 2016 and has hovered around 30% ever since.

Figure 31: Sentiment analysis of articles mentioning AI from 2013 to 2018

5. Government Attention

The following chart shows how often the terms AI and ML appear in the US Congressional Record and in the Canadian and British parliamentary records. Since 2016, mentions of these terms have increased significantly in all three governments. Before 2016, however, ML was rarely mentioned compared with AI.

Note that methodological differences make it difficult to compare countries.

Figure 32: Number of references to AI and ML in Canadian and British parliamentary proceedings

Technical performance

I. Image Recognition - The ImageNet Competition

The following figure shows how accuracy on ImageNet has improved over time. The ImageNet competition ran until 2017 and scored models on competition-specific test sets. Since the competition has ended, the report tracks continued progress on ImageNet through research papers, and the results show that performance keeps improving. This metric also highlights an inherent challenge in measuring AI progress: when a research metric is built around a competition, ending the competition can make progress harder to track. Fortunately, because the dataset is openly available, continuity can be preserved with some careful handling.

Figure 33: Performance of ImageNet has been improving from 2010 to 2018

II. ImageNet Training Time

The following figure shows the time needed to train a network to classify images from the ImageNet corpus with high accuracy. This measures how quickly well-resourced participants in the AI field can train large networks to perform tasks such as image classification. Because image classification is a fairly typical supervised learning task, progress on this indicator also translates into faster training for other AI applications. In a year and a half, the time required to train such a network dropped from about an hour to about four minutes. ImageNet training time also reflects the industrialization of AI research: the factors driving training time down include algorithmic innovation and infrastructure investment (e.g., the underlying training hardware and the software that ties that hardware together).

Figure 34: ImageNet training time change chart from June 2017 to November 2018

III. Instance Segmentation - COCO

After computer vision algorithms reached high performance on the object detection and image classification tasks posed by ImageNet, the ImageNet challenge ended in 2017 and the field turned its attention to Microsoft's COCO benchmark and its harder semantic and instance segmentation tasks. The community has shifted toward visual tasks that require more complex reasoning, such as locating objects with pixel-level accuracy (object instance segmentation) and dividing a scene into regions with pixel-level accuracy (semantic segmentation). Over the past four years, average precision on the COCO instance segmentation challenge has increased by 0.2, a 72% improvement over 2015. It still remains below 0.5, however, leaving ample room for progress.

Figure 35: Image segmentation precision on the COCO dataset improved continuously between 2015 and 2018
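Segmentation quality on COCO is scored with overlap-based metrics; the basic building block is the intersection-over-union (IoU) between a predicted mask and the ground-truth mask, which the benchmark aggregates into average precision. A minimal IoU sketch over boolean pixel masks (illustrative only, not the official COCO evaluation code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean pixel masks of the same shape."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 1.0

# Tiny illustrative masks (True = object pixel, False = background).
pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
print(f"IoU = {mask_iou(pred, truth):.2f}")  # 0.50
```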

IV. Syntactic Parsing

The following figure shows the performance of AI systems on the task of determining the syntactic structure of sentences. Parsing is a first step toward natural language understanding in tasks such as question answering. Parsers were originally built with algorithms similar to those used for parsing programming languages; today, deep learning approaches are nearly universal. Since 2003, the F1 score across all sentences has risen by 9 percentage points (about 10%).

Figure 36: Constituency parsing - Penn Treebank, 1995-2018
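The F1 score used for constituency parsing is the harmonic mean of labeled-bracket precision and recall (how many predicted constituents are correct, and how many gold constituents are recovered). A minimal sketch of the arithmetic, with toy constituent counts rather than Penn Treebank results:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Toy example: the parser proposes 10 constituents, 9 of which are correct,
# and the gold tree contains 12 constituents (illustrative numbers only).
precision = 9 / 10
recall = 9 / 12
print(f"F1 = {f1(precision, recall):.3f}")  # 0.818
```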

V. Machine Translation

The following figure shows the performance of AI systems on translating news from English to German and from German to English. English-to-German translation performance today is 3.5 times its 2008 level, and German-to-English performance has improved 2.5 times. Because a different test set is used each year, BLEU scores are not strictly comparable across years; nevertheless, they show the tremendous progress made in machine translation.

Figure 37: News Translation - WMT Challenge from 2008 to 2018
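BLEU, the metric behind these scores, measures n-gram overlap between a system translation and one or more reference translations, with a penalty for overly short outputs. A minimal sketch using NLTK's implementation, assuming `nltk` is installed (toy sentences, not WMT data):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]   # list of reference token lists
candidate = ["the", "cat", "is", "on", "the", "mat"]      # system output tokens

# Smoothing avoids zero scores when some higher-order n-grams have no overlap.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.2f}")
```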

VI. Question Answering - ARC

The following figure shows progress on the AI2 Reasoning Challenge (ARC) over time. The ARC dataset contains 7,787 genuine grade-school-level science questions, curated to encourage research on advanced question answering. The questions are split into a Challenge Set (2,590 questions) and an Easy Set (5,197 questions); the Challenge Set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. The questions are text-only English exam questions spanning several grade levels, each in multiple-choice format (usually with four answer options). The dataset is accompanied by the ARC corpus, which contains 14 million unordered science-related sentences with knowledge relevant to ARC, though there is no guarantee that the answer to any given question appears in the corpus. The ARC benchmark was released in April 2018. Over 2018, performance rose from 63% to 69% on the Easy Set and from 27% to 42% on the Challenge Set.

Figure 38: ARC leaderboard results from April 2018 to November 2018
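To make the question format concrete, a hypothetical ARC-style record is sketched below; the field names are illustrative assumptions, not necessarily the dataset's exact schema:

```python
# A hypothetical ARC-style multiple-choice record (field names are illustrative).
question = {
    "id": "example-001",
    "question": "Which property of a mineral can be determined just by looking at it?",
    "choices": [
        {"label": "A", "text": "luster"},
        {"label": "B", "text": "mass"},
        {"label": "C", "text": "weight"},
        {"label": "D", "text": "hardness"},
    ],
    "answerKey": "A",
    "set": "Challenge",  # or "Easy"
}

def is_correct(predicted_label: str, record: dict) -> bool:
    """Score a single multiple-choice prediction against the answer key."""
    return predicted_label == record["answerKey"]

print(is_correct("A", question))  # True
```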

VII. Language Understanding - GLUE

The following figure shows results from the GLUE benchmark leaderboard. The General Language Understanding Evaluation (GLUE) is a new benchmark for testing the performance of natural language understanding (NLU) systems on a range of tasks and for encouraging the development of systems that are not tailored to any single task. It consists of nine sub-tasks: two single-sentence tasks (measuring linguistic acceptability and sentiment), three sentence-similarity and paraphrase tasks, and four natural language inference tasks, including the Winograd Schema Challenge. The underlying corpora range in size from fewer than 1,000 examples to more than 400,000. Metrics include accuracy, F1, and correlation coefficients. Although the benchmark was released only in May 2018, performance has already improved.

Figure 39: GLUE Benchmark Ranking from May 2018 to October 2018

(Compiled from the Stanford University report by Netease Intelligent. Source: Netease Intelligent. Responsible editor: Yao Dili_NBJS7522)
