
14.3 DATA AND TECHNOLOGY INFRASTRUCTURE


Data and technology comprise the backbone of risk infrastructure. We feel that without the right data and technology – featuring sufficient breadth, flexibility and granularity – a firm faces an almost impossible task in understanding, and then managing, its risks.

14.3.1 Data

Data is the primary way of generating information about a firm’s risk positions. The data process must be complete, accurate, uniform and flexible. It should also be surrounded by regular audit checks that ensure it is functioning as intended. If the data process is flawed a firm will ultimately lose track of its exposures and be unable to make risk decisions or confirm whether it is adhering to the risk framework. A good data process is also vital to keeping process risk to a minimum.

Data has to be accurate in order to be useful; data errors at the position level lead to bad information at an aggregate or portfolio level, and might then lead to bad risk decisions. Even the simplest error, such as reflecting a position of $10 million instead of $1 million, or long instead of short, can radically skew the risk of a portfolio or create fictitious credit exposure.
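To make the point concrete, here is a minimal sketch (in Python, with purely hypothetical positions and trade IDs) of how a single mis-keyed trade distorts the aggregate picture:

```python
# Hypothetical book of three positions, keyed by trade ID.
# Sign convention: positive = long, negative = short.
positions = {
    "T1": +1_000_000,   # long $1mm
    "T2": -2_000_000,   # short $2mm
    "T3": +3_000_000,   # long $3mm
}

def net_exposure(book: dict) -> int:
    """Net exposure is just the signed sum of position notionals."""
    return sum(book.values())

print(net_exposure(positions))   # +2,000,000: the book is modestly net long

# The same book with T1 mis-keyed as a $10mm short instead of a $1mm long:
bad_book = dict(positions, T1=-10_000_000)
print(net_exposure(bad_book))    # -9,000,000: the book now appears heavily net short
```

One bad field flips the book from net long to heavily net short; every report built on top of that aggregate inherits the error.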

A firm should therefore verify the accuracy of its data through trade-level reconciliation, collateral mark checks, error reports and controller verification. The integrity of the process must be tested/audited regularly as an additional “check and balance”.
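At its simplest, trade-level reconciliation means comparing two systems' records key by key and escalating every break. The sketch below assumes a simplified field layout (trade ID mapped to a record of attributes); it is illustrative, not any particular vendor's API:

```python
# Minimal trade-level reconciliation between two systems, e.g. the
# trading blotter and the books-and-records store. Field names assumed.

def reconcile(blotter: dict, books: dict) -> list[str]:
    breaks = []
    for trade_id in blotter.keys() | books.keys():
        a, b = blotter.get(trade_id), books.get(trade_id)
        if a is None:
            breaks.append(f"{trade_id}: missing from blotter")
        elif b is None:
            breaks.append(f"{trade_id}: missing from books and records")
        elif a != b:
            breaks.append(f"{trade_id}: field mismatch {a} vs {b}")
    return breaks

# Usage: any non-empty result feeds the daily error report for follow-up.
blotter = {"T1": {"notional": 1_000_000, "side": "long"}}
books   = {"T1": {"notional": 10_000_000, "side": "long"}}
print(reconcile(blotter, books))
```

The same pattern extends naturally to collateral mark checks: replace trade records with collateral valuations from the two sources being compared.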

A data template – effectively a list of required data items – should feature relevant information on a transaction (a minimal schema sketch appears after this discussion), including details on:

• Business unit,
• Desk,
• Location,
• Trader,
• Notional size,
• Market,
• Underlying product,
• Counterparty/client identifier,
• Rate/yield/coupon,
• Maturity.

It should also include information that touches other control processes, such as:

• Legal documentation status flags,
• Confirmation status,
• Funding rate,
• Collateral detail.

The template might feature pre-computed risk information (e.g. risk sensitivities, desk VAR, and so on), or simply act as the feedstock into risk analytics that compute risk information.

Defining the template with as much granularity as possible (but not so much that it becomes burdensome) is a good idea. Once defined, this template should serve as the model for all businesses.
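One way to make the template concrete is a typed record that every business must populate. The field names below mirror the lists above; the types and the structure itself are assumptions for illustration, not a prescription from the text:

```python
# A minimal sketch of the data template as a typed, immutable record.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TradeRecord:
    business_unit: str
    desk: str
    location: str
    trader: str
    notional: float
    market: str
    underlying_product: str
    counterparty_id: str
    rate: float                  # rate/yield/coupon, as applicable
    maturity: date
    # Fields touching other control processes:
    legal_docs_signed: bool      # legal documentation status flag
    confirmed: bool              # confirmation status
    funding_rate: float
    collateral_detail: str
```

A typed record of this kind makes gaps visible immediately: a trade that cannot populate every required field fails at entry rather than surfacing later as a reconciliation break.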

Data uniformity is vital, particularly in large organizations, and using a standard data template is one way of ensuring uniformity. Processes that receive, manipulate, analyze and report back information are far more efficient if they use a common template. Since all businesses have core information they need in order to conduct business, these details should remain consistent throughout the organization. Naturally, businesses have unique risk characteristics and relatively specialized risk reporting requirements, so maintaining some flexibility is important. Uniformity in the data template does not mean rigidity; the best processes can handle new data dimensions, new structures, products, markets and counterparties without breaking down.

Uniformity should also extend to data sources. Processes that reference data to perform particular tasks, such as updating market prices, getting transaction or product reference data, computing risk exposures, generating daily P&L, confirming funding requirements, valuing collateral, and so forth, should draw from the same reliable underlying source. This leads to greater accuracy, and greater confidence in the results. If the data used by distinct, though related, processes (e.g. VAR backtesting of P&L) is drawn from multiple sources, there will always be some question as to whether the same trade details are being used; to be absolutely certain, separate reconciliation processes will be required, meaning incremental resources will have to be employed (and even these probably won't be able to guarantee with 100% certainty that everything matches perfectly).
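The single-source principle can be sketched as follows: both the daily P&L and the VAR backtest read from one canonical position loader, so there is no second data population to reconcile. The function names and stubbed loader below are assumptions for illustration:

```python
# Sketch of the single-source principle: one loader feeds both processes.
from dataclasses import dataclass

@dataclass
class Position:
    trade_id: str
    quantity: float
    price_change: float   # one-day price move

def load_positions(business_date: str) -> list[Position]:
    """Single authoritative source of trade-level detail (stubbed here)."""
    return [Position("T1", 1_000_000, 0.004), Position("T2", -2_000_000, -0.001)]

def daily_pnl(business_date: str) -> float:
    return sum(p.quantity * p.price_change for p in load_positions(business_date))

def backtest_pnl(business_date: str) -> float:
    # Same loader, same trade population: any difference versus daily_pnl
    # would reflect methodology, never divergent trade data.
    return sum(p.quantity * p.price_change for p in load_positions(business_date))

print(daily_pnl("2024-06-28") == backtest_pnl("2024-06-28"))  # True by construction
```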

Good data leads to good reporting and an accurate picture of a firm’s risks. It permits confident decision making. Bad data skews the appearance and magnitude of the firm’s risk profile and leads to bad decisions.

14.3.2 Technology

Good technology is the second essential dimension of infrastructure. In today's financial world it is virtually impossible to manage a risk business without proper technology that covers front-end trading, middle-office operations, valuation and risk, and back-end processing. A cohesive platform permits trades to be entered, priced, executed, hedged/risk managed, valued, cleared, settled and reconciled, and related collateral or other credit support to be handled accurately. If a company cannot perform these functions it is likely to be burdened with incomplete records, misvaluations, settlement errors and large process risk losses; any, or all, of these problems could result in a material misstatement of risk and lead to risk-related losses or bad decisions being made.
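Viewed from the platform's side, these functions form an ordered trade lifecycle. The enumeration below is a hypothetical sketch of that lifecycle (the stage names follow the text; the structure is assumed):

```python
# Hypothetical trade lifecycle a cohesive platform must support end to end.
from enum import Enum

class TradeState(Enum):
    ENTERED = 1
    PRICED = 2
    EXECUTED = 3
    HEDGED = 4
    VALUED = 5
    CLEARED = 6
    SETTLED = 7
    RECONCILED = 8

def advance(state: TradeState) -> TradeState:
    """Move a trade to the next lifecycle stage; RECONCILED is terminal."""
    return state if state is TradeState.RECONCILED else TradeState(state.value + 1)

# A trade stuck short of SETTLED/RECONCILED is exactly where incomplete
# records, misvaluations and settlement errors accumulate.
print(advance(TradeState.ENTERED))   # TradeState.PRICED
```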

A firm's technology platform has to be flexible, scalable and capable of communicating with other internal/external platforms. By sticking to these basic characteristics a firm should be able to handle new, next-generation risk products (as well as large transaction volumes), pass common information between platforms (which is key for firm-wide aggregation) and construct a tree of information such that summary reports are a mere aggregation of easily accessible underlying data. Ideally, every risk-taking unit and desk within a firm would use the same platform; practically, this is sometimes not possible. Therefore, some flexibility is necessary.
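The "tree of information" idea can be shown in a few lines: firm-level summaries are nothing more than roll-ups of desk-level detail. The hierarchy and exposure figures below are hypothetical:

```python
# Sketch of hierarchical aggregation: desk detail rolls up to business
# unit, which rolls up to the firm total.
from collections import defaultdict

# (business_unit, desk) -> exposure, fed from the common data template
desk_exposure = {
    ("FICC", "rates"): 5_000_000,
    ("FICC", "credit"): 2_000_000,
    ("Equities", "cash"): 3_000_000,
}

def roll_up(detail: dict) -> tuple[dict, float]:
    by_unit = defaultdict(float)
    for (unit, _desk), exposure in detail.items():
        by_unit[unit] += exposure
    return dict(by_unit), sum(by_unit.values())

print(roll_up(desk_exposure))
# ({'FICC': 7000000.0, 'Equities': 3000000.0}, 10000000.0)
```

Because every summary number is derivable from the leaves, a questioned firm-wide figure can always be traced back to the underlying trades.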

Naturally, every risk-taking unit must enter all trades/risk-related business into an authorized trading system – under no circumstances should risk be permitted to live in "off-system" environments that are not linked to settlement processes and "books and records" (the official repository of the firm's accounts). If a firm lets traders use "off-system" technology, it is probably just a question of time before "rogue" traders start writing tickets and putting them in the drawer. Systems that are used for P&L, risk or operations purposes have to be under the direction and control of the information technology (IT) department, not the individual business units. This protects against a situation where a risk taker knowingly, or unknowingly, manipulates pricing or reporting software. A regular audit of all technology platforms is good practice.

Risk analytics embedded in technology platforms (rather than those contained in independent risk and financial control systems) have to provide the market, credit, liquidity and process risk measures defined by the risk management group. For instance, if the firm requires VAR to be computed based on specific parameters, each desk supplying VAR must adhere to those requirements. The risk management function should independently review and test analytics and approve any changes. If the risk analytics reside within the independent risk function rather than the businesses (being fed with predefined data from each business), it is still good practice to benchmark them against third-party models or valuations.
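To illustrate what "specific parameters" means in practice, here is a minimal parametric VAR sketch under assumed firm-wide settings (99% confidence, one-day horizon, normally distributed returns). This is not the authors' model; it simply shows every desk computing against the same mandated parameters:

```python
# Minimal parametric VAR under assumed, firm-mandated parameters.
from statistics import NormalDist

CONFIDENCE = 0.99    # firm-mandated confidence level
HORIZON_DAYS = 1     # firm-mandated holding period

def parametric_var(position_value: float, daily_vol: float) -> float:
    """VAR = position * volatility * z-score * sqrt(horizon)."""
    z = NormalDist().inv_cdf(CONFIDENCE)
    return position_value * daily_vol * z * HORIZON_DAYS ** 0.5

# Every desk supplying VAR uses the same CONFIDENCE and HORIZON_DAYS,
# so firm-wide numbers are comparable and auditable.
print(round(parametric_var(10_000_000, 0.012)))  # ≈ 279,162 for $10mm at 1.2% daily vol
```

Because the parameters are centralized constants rather than per-desk choices, an independent review can verify compliance by inspecting one definition instead of auditing each desk's model separately.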

A toolbox containing the right nuts and bolts is vital to effective risk management. Investment in effective risk policies and limits, reporting, data and technology can create a far more secure and efficient environment – leading to more confident risk management. As with any dynamic process, these have to be reviewed regularly and enhanced as needed.

15

Ongoing Diagnostics and Transparency: Knowing if the Risk Process is Working
