

2.8 Knowledge Management and Business Intelligence Systems

Knowledge Management and Business Intelligence are enabled by Information Systems (Kebede, 2010). Building from this are ‘KM and BI systems’: Information Systems designed specifically for the intensive analytical processing of data and information with the aim of generating useful ‘knowledge’ reports (Maria, 2005; Elbashir, Collier and Michael, 2008). A variety of KM and BI systems exist; some of the most widely used include data mining, data warehouses, online analytical processing (OLAP), predictive analytics and digital dashboards (Sharman, 2010).

Brief definitions of some of the main KM/BI systems are presented below.

- Data mining - also known as ‘knowledge discovery’, this is an inductive type of analysis that uncovers relevant patterns, relationships and trends hidden within a dataset or database (Corne, Dhaenens and Jourdan, 2012). It is a highly mathematical process that uses various statistical methods, which can include cluster analysis, artificial intelligence and/or neural network techniques (Sharman, 2010). Data mining is used primarily to discover key knowledge from data, allowing businesses to make proactive decisions based on that knowledge (a brief illustrative sketch is given after this list).

- Data warehouse - this is more than just a data storage facility: it is a specialised data repository used to support decision making (Ariyachandra and Watson, 2010). It provides easy and convenient access to large volumes of internal and/or external data because it acts as a central repository for data that may come from various smaller databases within or outside the organisation (Mannino, Hong and Choi, 2008). This data is integrated, cleaned and archived into the warehouse to support decision-making according to management requirements (Mannino, Hong and Choi, 2008); a simple load-and-integrate sketch follows this list.

- OLAP - this is a popular KM/BI tool that allows for the effective and rapid analysis of information from multiple databases/data sources (Sharman, 2010). In other words, data from various databases can be analysed together rather than in a serial fashion. As a result, OLAP is regarded as a multi-dimensional analysis tool: it analyses and compares information in a variety of ways using operations such as slice and dice, roll-up and drill-down (Prat, Comyn-Wattiau and Akoka, 2011), which are illustrated in a sketch after this list. OLAP is also used to generate knowledge reports derived from the various data sources and is often used in conjunction with data warehouses and data mining (Hsu and Li, 2011).

- Predictive analytics - this KM/BI tool allows one to predict future trends by using both current and past data. This is supported by Eckersen (2007), who asserts that predictive analytics is a set of forward-looking BI technologies that can discover future patterns, trends and relationships from current and past data available in large datasets, databases and/or data sources. Similarly, IBM (2010) posited that predictive analytics allows one to connect data with strategic action by making reliable conclusions, based on data about current conditions, that can influence future events (a minimal forecasting sketch follows this list).

- Digital dashboard - this is an Executive Information System with an interface that provides knowledge to management in the form of numbers, charts and graphics, and is designed to present the overall organisational picture on a single page (Wu and Phillips, 2012). According to Sharman (2010), digital dashboards are easy to read and allow management/executives to continuously monitor the performance of their organisation via key performance indicators. This gives them a visual snapshot of overall performance, with special emphasis on areas that may reflect poor performance and need attention. Digital dashboards present various benefits (Sharman, 2010), including:

- Ability to view performance instantaneously.

- Identification and correction of negative trends, and identification of new trends.

- Measurement of both efficiencies and inefficiencies.

- Promotion of more informed decisions based on collected knowledge.

- Assistance in developing organisational strategies.

- Performance scorecard - these are similar to dashboards in terms of graphical representation. However, the major difference is that dashboards display the status of an organisation at a ‘specific point in time’, while scorecards indicate ‘progress over time’ (Rouse, 2010). Scorecards are made up of two essential concepts: key performance indicators (KPIs) and targets. KPIs are metrics used to measure factors that are critical to organisational success, such as performance, efficiency and quality, while targets are the specific goals set for each KPI (Rouse, 2010).
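
To make the idea of inductive pattern discovery in data mining more concrete, the following minimal sketch (assuming Python with the scikit-learn library; the customer records and column meanings are purely hypothetical) applies cluster analysis to uncover a segmentation hidden in a small dataset.

```python
# Minimal illustration of cluster analysis as used in data mining.
# Assumes scikit-learn is installed; the customer data is hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer records: [annual_spend, visits_per_month]
customers = np.array([
    [200, 2], [220, 3], [250, 2],      # low-spend, infrequent visitors
    [900, 10], [950, 12], [880, 11],   # high-spend, frequent visitors
])

# Group the records into two clusters; the labels reveal the hidden segmentation
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.labels_)           # e.g. [0 0 0 1 1 1]
print(model.cluster_centers_)  # centroids summarising each discovered segment
```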
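
The data warehouse description above centres on integrating and cleaning data from several sources into one central repository. The sketch below illustrates that load-and-integrate step in miniature, using Python's built-in sqlite3 module to stand in for a warehouse; the source systems, table and records are hypothetical.

```python
# Minimal extract-transform-load (ETL) sketch: records from two hypothetical
# source systems are cleaned and integrated into a single central repository.
# sqlite3 (Python standard library) stands in for the warehouse here.
import sqlite3

sales_system_a = [("2023-01-05", "widget", 120.0), ("2023-01-06", "gadget", None)]
sales_system_b = [("2023-01-05", "gizmo", 75.5)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (sale_date TEXT, product TEXT, amount REAL)")

# Transform: drop incomplete records; Load: insert into the central fact table
for source in (sales_system_a, sales_system_b):
    clean = [row for row in source if row[2] is not None]
    conn.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", clean)

# The integrated repository can now answer decision-support queries
for row in conn.execute("SELECT product, SUM(amount) FROM sales_fact GROUP BY product"):
    print(row)
```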
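
The slice-and-dice, roll-up and drill-down operations mentioned under OLAP can be illustrated with a small multi-dimensional example. The sketch below assumes Python with the pandas library; the sales figures and dimension names (region, quarter, product) are hypothetical.

```python
# Minimal illustration of OLAP-style multi-dimensional operations using pandas.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "product": ["A", "A", "B", "B"],
    "revenue": [100, 150, 80, 120],
})

# Build a small data cube: revenue by region and quarter
cube = sales.pivot_table(values="revenue", index="region",
                         columns="quarter", aggfunc="sum")

print(cube)                              # the full cube
print(cube.sum(axis=1))                  # roll-up: total revenue per region
print(sales[sales["quarter"] == "Q1"])   # slice: Q1 only
print(sales.groupby(["region", "product"])["revenue"].sum())  # drill-down by product
```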
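
Predictive analytics, as described above, draws on past and current data to anticipate future values. The following minimal sketch (assuming Python with scikit-learn; the monthly sales figures are hypothetical) fits a simple trend model to historical data and projects the next period.

```python
# Minimal sketch of predictive analytics: learn a trend from past data and
# project a future value. The monthly sales figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)   # the past 12 months
sales = np.array([10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26])  # e.g. in £000s

model = LinearRegression().fit(months, sales)   # learn the historical trend
forecast = model.predict(np.array([[13]]))      # predict the coming month
print(round(float(forecast[0]), 1))
```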

‘Big Data’ is a recent development in KM and BI. The term is the latest buzzword encapsulating the explosion of large datasets, both structured and unstructured, that exist globally and are present in business, the public sector and society as a whole (SAS, 2014).

As also asserted by Manyika et al. (2011), digital data is prevalent in every economy, sector and organisation, as well as among all users of technology. As posited by IBM (2014), over 2 quintillion bytes of data are created daily, generated from avenues such as mobile phones/devices, Information Systems, social media, digital devices, equipment, GPS, sensors and so forth. In other words, any device that uses data is a contributor to ‘Big Data’. All of this data dominates organisations’ virtual resources, including networks, servers, storage and IS as a whole. In simple terms, organisations around the world face an ‘information overload’ due to ‘Big Data’.

The challenge for organisations is that the data is too large, arrives too fast and comes in too many forms. This relates to SAS (2014), whereby the mainstream definition of ‘Big Data’ is framed in terms of volume, velocity and variety. The major challenge is that common data processing tools and technologies are no longer adequate for dealing with these vast amounts of data, along with their speed and variety (IBM, 2014; Webopedia, 2014). These include traditional databases, data warehouses, storage devices, networks and, even more so, data processing and analysis tools.

There is substantial value in ‘Big Data’. Research conducted by the McKinsey Global Institute, as asserted by Manyika et al. (2011), found that data created substantial value for the global economy by boosting the productivity and competitiveness of both business and the public sector, which in turn created a significant economic surplus for consumers. This goes to show that ‘Big Data’ should not be ignored, under-analysed or under-utilised. The only way to harness its considerable value is to strategically utilise more specialised IS that can handle ‘Big Data’ efficiently and effectively, a view supported by IBM (2014). Accordingly, the advent of ‘Big Data’ has spurred the development of more sophisticated IS that are highly specialised in the analysis and processing of ‘Big Data’.

One of the main systems is Hadoop. Apache Hadoop is specialised data analysis software equipped to handle the distributed processing of large datasets across clusters of servers (IBM, 2014). It provides sophisticated development tools, analytics, accelerators, visualisation, performance and security features; it can be scaled up to thousands of machines and has a very high level of fault tolerance. It is therefore an effective solution for handling ‘Big Data’ due to its analytical ability, scalability, flexibility and fault tolerance (IBM, 2014). A minimal sketch of the map/reduce processing pattern it builds on is shown below.
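
The example below is illustrative only: it shows the map/reduce pattern (emit key-value pairs, then aggregate per key) as ordinary Python over a tiny in-memory ‘shard’. In Hadoop itself, equivalent logic would run in parallel over data distributed across many servers; the input text here is hypothetical.

```python
# Illustrative map/reduce sketch of the processing pattern Hadoop distributes
# across a cluster; shown here as plain Python over a tiny hypothetical shard.
from itertools import groupby

def mapper(lines):
    # Map phase: emit (word, 1) pairs independently for each input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    # Reduce phase: group the pairs by key and sum the counts per key
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (key, sum(count for _, count in group))

shard = ["big data needs big systems", "data in motion"]
print(dict(reducer(mapper(shard))))   # e.g. {'big': 2, 'data': 2, ...}
```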

Another specialised system designed to deal effectively with ‘Big Data’ is ‘Stream Computing’. This is an analytical processing system designed to handle frequently changing data (data in motion). It uses predictive analytics to support real-time decisions, capturing and analysing data at any given time while working on a just-in-time basis. One of its main benefits is the ability to store less and analyse more, which in turn promotes better and faster decision making (IBM, 2014); a simple sketch of this style of processing is given below. Other systems that can satisfy the processing of ‘Big Data’ include Content Management Systems, which effectively manage documents and data content, allowing them to be properly controlled.
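
The ‘store less, analyse more’ idea behind stream computing can be illustrated with a minimal sketch: each value is analysed as it arrives and only a small rolling summary is retained. The readings, window size and alert threshold below are hypothetical.

```python
# Minimal sketch of stream-style processing: analyse each reading as it arrives
# and keep only a small rolling window rather than storing the full stream.
from collections import deque

window = deque(maxlen=5)   # rolling summary instead of the full data history

def on_reading(value):
    window.append(value)
    rolling_avg = sum(window) / len(window)
    if rolling_avg > 80:    # hypothetical threshold triggers a real-time decision
        print(f"ALERT: rolling average {rolling_avg:.1f} exceeds threshold")

for reading in [70, 75, 82, 88, 91, 95]:   # simulated data in motion
    on_reading(reading)
```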

Advanced or high-performance databases and data warehouses, which operate at high speed and have intensive analytical capabilities, are also used to deal with large-scale data (IBM, 2014).

These are among the main systems used in KM and BI. There is little evidence of their use in HE in Africa; however, substantial evidence exists of their use in developed countries.

This will be covered in greater detail in section 2.13.