
CHAPTER 2: LITERATURE REVIEW

2.4. Technologies of the Fourth Industrial Revolution

A key innovation that is still at the forefront of driving the digital revolution is the microprocessor. This quintessential general purpose technology (GPT) can be applied to a myriad of processes that involve the processing, storage, and exchange of information. Steady decreases in production costs, coupled with increases in the processing power and capabilities of microprocessors, continue to expand the range of possible technological combinations (OECD, 2017:2).

This section will explore three broad and prominent technological trends: industrial robots, computer software applications, and AI.

2.4.1. Robots

An industrial robot is an automatically controlled, reprogrammable, multipurpose manipulator which is programmable along three or more axes (Rao et al., 2011:1). Robots may be mobile or fixed in position and are utilised in industrial automation applications (Webb, 2019:17). In general, such robots are able to perform difficult, often hazardous, and repetitive tasks with absolute precision, thereby dramatically increasing productivity and output quality (Datta et al., 2015:2).

The most common metric used to indicate a country’s level of robot adoption is the number of actively operational industrial robots relative to its number of manufacturing workers, typically expressed per 10 000 workers.

The International Federation of Robotics (IFR) reports that the global average number of industrial robots per 10 000 labourers increased from 66 in 2015 to 85 in 2017, indicating that companies are rapidly increasing their use of robots (Atkinson, 2019:1). South Korea was the world’s largest adopter with 710 industrial robots per 10 000 workers.

The adoption of robots is also driving productivity growth: investment in robotic technology contributed around 10% of Organisation for Economic Co-operation and Development (OECD) countries’ per capita GDP growth over the period 1993 to 2016. Atkinson (2019:2) found that a one-unit increase in robot-density, defined as the number of operational robots per million hours worked, increases labour productivity by 0.04%.
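
The two adoption metrics above can be made concrete with a short calculation. The figures below are hypothetical illustrations, not data from the cited studies; only the 0.04% elasticity comes from Atkinson (2019:2).

```python
# Illustrative calculation of the two robot-adoption metrics discussed above.
# All input figures are hypothetical examples.

def robots_per_10k_workers(robots: int, manufacturing_workers: int) -> float:
    """IFR-style adoption metric: operational robots per 10 000 workers."""
    return robots * 10_000 / manufacturing_workers

def productivity_gain(density_increase: float, elasticity: float = 0.04) -> float:
    """Atkinson (2019): each one-unit rise in robot-density (robots per
    million hours worked) raises labour productivity by roughly 0.04%."""
    return density_increase * elasticity

# A hypothetical economy with 3 550 robots and 500 000 manufacturing workers:
print(robots_per_10k_workers(3_550, 500_000))   # 71.0 robots per 10 000 workers

# A five-unit increase in robot-density would then imply a 0.2% productivity gain:
print(productivity_gain(5))                      # 0.2
```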

The routine task of each robot must still be specified in advance by a human operator. As such, variability in the task itself, as well as in the physical environment in which the robot operates, must be kept to a minimum (Webb, 2019:38). Hitherto, the industrial robotic technologies applied in production processes have tended to be rigid, with the functionality of the machine physically encoded in its mechanical design. A change in the function of the robot would therefore require a physical change in the design of the machine. When unforeseen circumstances arise, a human operator is still required to provide the necessary flexibility and appropriate response (Fernández-Macías, 2018:15).

Even though machine automation predates even the First Industrial Revolution, the potential of modern digital technologies radically increases the possibilities for different processes to be automated, as machines can be algorithmically controlled to perform increasingly abstract tasks. The accelerated diffusion of robots enabled with digital software and AI is expected to significantly intensify the effect of technological unemployment (OECD, 2017:2).

2.4.2. Digital Software

A software application can be described as a computer programme that executes manually codified “if-then” sequences. These instructions are labelled as “routine” and must likewise be specified in advance by a human. The programmer must therefore be able to anticipate the various outcomes relevant to the task at hand (Webb, 2019:28).
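
The “if-then” character of such routine software can be sketched in a few lines. The thermostat scenario and its thresholds are invented purely for illustration; the point is that every contingency must be anticipated and codified by the programmer in advance.

```python
# A minimal sketch of conventional "routine" software: every case the
# programme can handle is codified in advance as an explicit if-then rule.
# The thermostat scenario and threshold values are hypothetical.

def thermostat_action(temperature_c: float) -> str:
    if temperature_c < 18.0:        # anticipated case 1: too cold
        return "heat"
    elif temperature_c > 24.0:      # anticipated case 2: too hot
        return "cool"
    else:                           # anticipated case 3: in range
        return "idle"

print(thermostat_action(15.0))  # heat
print(thermostat_action(30.0))  # cool
```

Any situation the programmer did not anticipate (a faulty sensor, say) falls outside these rules, which is precisely the limitation the text describes.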

Digital technologies ensure that information is more available and transparent throughout all levels of an economic process, thereby reducing transaction costs, facilitating more complex organisational structures, expanding market opportunities, and effectively rendering geographical location increasingly irrelevant. These technologies also have low to zero marginal costs as they are generally non-rival and infinitely expandable (Fernández-Macías, 2018:15).


In contrast with the rigidity of industrial robots and machinery, a digitally enabled (production) process is controlled by algorithms which can be reprogrammed and recalibrated at will, as they are not embodied in fixed mechanisms. As a result of this algorithmic control, production processes are effectively more flexible than prior methods based on purely mechanically controlled devices (Webb, 2019).

2.4.3. Artificial Intelligence

Webb (2019) uses the term Artificial Intelligence to refer to machine learning algorithms, which can be described as algorithms that computers use to identify statistical patterns in large volumes of data. These patterns are then used to make predictions and take actions in complex and unstructured environments (Ernst et al., 2019:2). Machine learning techniques therefore enable machines to learn in an automated manner, through inferences and patterns drawn from data rather than instructions given by a human (Lane & Saint-Martin, 2021:21). AI software thus differs from conventional software in that, once a programmer has defined a learning algorithm, the algorithm teaches itself, through experimentation and gathered data, to complete varying objectives (Webb, 2019:35). The current wave of machine learning applications is driven by sophisticated statistical modelling techniques commonly referred to as “neural networks” (Lane & Saint-Martin, 2021:21).

In his study Webb (2019:35) makes specific reference to the dramatic practical advances in supervised and reinforcement learning algorithms:

Supervised learning algorithms learn functions that map inputs into outputs from training data consisting of example input-output pairs. Such an algorithm may learn a function that maps an image (input) into a textual description of that image (output); with the appropriate training data, it can map a person’s financial history to their likelihood of repaying a loan, or translate a sentence from one language to another (Webb, 2019).
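
The idea of learning a function from example input-output pairs can be illustrated with a deliberately tiny sketch: fitting a line y = w·x + b by gradient descent. This toy stands in for the far richer mappings (image to caption, financial history to default risk) described above; the data and learning rate are invented for illustration.

```python
# Toy supervised learning: recover y = 2x + 1 from example (input, output)
# pairs by gradient descent on the mean squared error. All values invented.

pairs = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # generated by y = 2x + 1

w, b = 0.0, 0.0     # initial guesses for slope and intercept
lr = 0.02           # learning rate
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in pairs) / len(pairs)
    grad_b = sum(2 * (w * x + b - y) for x, y in pairs) / len(pairs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

The algorithm is never told the rule “double and add one”; it infers it from the example pairs, which is the essence of supervised learning.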

Reinforcement learning algorithms learn how to operate in dynamic environments in order to perform functions and achieve objectives. Such algorithms learn by trial-and-error execution, and therefore require experiments in the environments in which they are intended to operate (or accurate simulations thereof), as well as a way to evaluate performance (Webb, 2019). Examples include controlling the finger movements of a robot hand in a dynamic environment.
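
The trial-and-error character of reinforcement learning can be sketched with a minimal example: an epsilon-greedy agent that learns, by experimenting in a simulated environment, which of three actions pays off most often. The reward probabilities are hypothetical; real problems such as robot finger control involve vastly larger state and action spaces.

```python
import random

# Minimal reinforcement-learning sketch: an epsilon-greedy agent learns by
# trial and error which of three actions is best. All numbers are invented.

random.seed(0)
pay_prob = [0.2, 0.5, 0.8]   # hidden reward probability of each action
value = [0.0, 0.0, 0.0]      # agent's running estimate of each action's value
counts = [0, 0, 0]
epsilon = 0.1                # fraction of moves spent exploring

for _ in range(5000):
    if random.random() < epsilon:                      # explore at random
        a = random.randrange(3)
    else:                                              # exploit best estimate
        a = max(range(3), key=lambda i: value[i])
    reward = 1.0 if random.random() < pay_prob[a] else 0.0
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]        # incremental mean update

best = max(range(3), key=lambda i: value[i])
print(best)
```

Nothing tells the agent which action is best; it discovers this only through repeated experiments and a performance signal (the reward), exactly the two requirements Webb (2019) identifies.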

Applications that are developed with structures that support deep learning algorithms and AI applications are able to observe their environment, learn, and adapt to whatever function they are tasked to execute with minimal human interaction. Theoretically, these applications could be as flexible and innovative as a human being; in some cases, even more so (Fernández-Macías, 2018:10).

Furthermore, two distinctly significant non-human capabilities that AI networks possess are connectivity and updateability. For example, should the Ministry of Transport decide to change a traffic regulation, an entire integrated network of self-driving automobiles could be updated to abide by the new regulation at the same moment, thereby preventing non-compliance and potentially saving passengers’ lives in the process (Harari, 2018:20).
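
The updateability point can be sketched as a single broadcast reaching every networked agent at once. The class and rule names below are invented for illustration; real vehicle fleets use far more elaborate update protocols.

```python
# Toy sketch of fleet-wide updateability: one central change propagates to
# every connected agent simultaneously. Names and values are hypothetical.

class Vehicle:
    def __init__(self):
        self.speed_limit = 120   # current regulation known to this vehicle

class Fleet:
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def push_regulation(self, new_limit):
        # A single broadcast updates the entire networked fleet at once.
        for v in self.vehicles:
            v.speed_limit = new_limit

fleet = Fleet([Vehicle() for _ in range(1000)])
fleet.push_regulation(100)
print(all(v.speed_limit == 100 for v in fleet.vehicles))  # True
```

By contrast, human drivers must each learn and adopt a new rule individually, which is the asymmetry Harari (2018) highlights.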

The potential for application across a range of occupations and sectors indicates that AI is associated with increased potential for output and welfare gains (Lane & Saint-Martin, 2021:20). As such, and due to its ability to produce predictions that can be utilised as inputs for decision-making across a range of occupations, such as teaching or radiology, AI effectively qualifies as a GPT (Agrawal et al., 2019:5).

A great deal of the existing economics literature on AI centres around the technology’s potential to increase overall productivity and output by reducing costs, complementing manual labour input, and stimulating complementary innovations (Lane & Saint-Martin, 2021:27).

However, despite significant progress in the capabilities of AI (machine learning in particular), productivity growth has lagged over the past decade. There are a number of possible explanations for this so-called productivity paradox:


Firstly, the potential of AI to improve productivity may have been overestimated. Researchers such as Gordon (2018:37) argue that much of the impact of AI has already been realised, e.g. in customer service robots or legal text search engines, and that further innovations are likely to be incremental improvements on previous ones rather than the significant leaps seen thus far.

In contrast, researchers such as Brynjolfsson et al. (2017) maintain that AI has the potential to increase productivity substantially, as previous GPTs did. They attribute the productivity paradox to mere lags in the implementation of AI applications.

Secondly, Byrne and Sichel (2017) argue that, owing to the difficulty of capturing quality improvements in high-tech products, mismeasurement in productivity statistics has contributed to the productivity paradox.

Thirdly, the winner-takes-all market dynamics (discussed in section 2.4) give a small number of large firms the market power to block smaller players’ access to certain technologies. These efforts are considered anti-competitive, anti-innovation, and wasteful (Lane & Saint-Martin, 2021:28).

Finally, should automation be introduced at excessive rates, the possibility of deploying the wrong types of AI applications, along with mismatches between skills and new technologies, increases substantially, further undermining productivity growth (Acemoglu & Restrepo, 2018).

However, it remains difficult to forecast future productivity growth at any time. This is particularly so when that growth relies upon the invention (and application) of entirely new technologies, as in the case of AI. Notwithstanding these challenges, consultancies such as Accenture (2017) and McKinsey (Bughin et al., 2018) have estimated the value that AI could potentially contribute to global economic output by 2030 at between roughly $13 trillion and $15.7 trillion (Lane & Saint-Martin, 2021:27).