
Use of Big Data & Groundwater Databases - Mapping, Cleansing & Maintenance

In document 1 | Page (Pages 140-147)

Groundwater intelligence: applying data analytics and visualisation tools to process, analyse and communicate groundwater data

Alice L. Drummond 1, Christian M. Borovac 1

1. DiscoverEI, Melbourne, VIC, Australia

Objectives: In this age of ‘Big Data’ and real-time monitoring, the groundwater industry is drowning in data. This study presents a range of case studies which apply innovative ways of processing, analysing and visualising groundwater data, to help bridge the communication gap between scientists and decision makers and facilitate a data-driven culture.

Design and Methodology: This project combines the latest business intelligence tools and computer animations to create a shared understanding of the complex groundwater systems we manage. Groundwater data were sourced from publicly available databases (such as the Victorian Water Management Information System).

Interactive dashboards were developed within Microsoft Power BI, the world’s leading Business Intelligence platform. One-page reports combining dynamic maps, geological bore logs, groundwater level hydrographs, water quality data and statistics were developed, providing water managers with key information in an accessible format to drive decisions. Animated conceptual site models were developed in the Adobe Creative Cloud to help communicate how these complex groundwater systems operate and describe the key findings from groundwater studies.
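As an illustration of the kind of processing that sits behind a dashboard hydrograph panel, the sketch below aggregates a daily water level record to monthly means with pandas. The bore record and column names are invented for the example; they are not the WMIS schema or the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical bore record: daily standing water levels for one bore.
# Column names are illustrative only.
levels = pd.DataFrame({
    "date": pd.date_range("2019-01-01", periods=90, freq="D"),
    "swl_m": [10.0 + 0.01 * i for i in range(90)],  # depth to water (m)
})

# Resample to monthly means -- the kind of aggregation that feeds a
# one-page hydrograph report.
monthly = (
    levels.set_index("date")["swl_m"]
    .resample("MS")
    .mean()
)
print(monthly.round(2))
```

The same aggregated series can then be plotted or pushed into a dashboard tool alongside bore logs and water quality tables.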

Original data and results: The results from this study are a suite of interactive dashboards, infographics and computer animations which can be used to help manage and communicate groundwater data, providing an alternative to traditional written reports.

Conclusions: The data analytics and visualisation tools presented in this study have wide-ranging applications across the groundwater and environmental industries. These tools can be used both to streamline data processing and to improve communication, facilitating a culture of data-driven decision making to help manage the future sustainability of our groundwater resources.

Managing an extensive regional groundwater monitoring network in the Surat Basin – key challenges, opportunities and innovation

Steve C. Flook 1, Ben Cairns 1, Lynne Ford 1, Peter J. Khor 1, Sanjeev K. Pandey 1

1. Office of Groundwater Impact Assessment, Brisbane, QLD, Australia

The Surat and southern Bowen basins are complex multilayered aquifer systems, extensively developed for private water use and more recently for petroleum and gas (P&G) development. The Office of Groundwater Impact Assessment (OGIA) is responsible for the design of a regional monitoring network to support hydrogeological research, system conceptualisation, impact assessment and regional groundwater modelling in the Surat Cumulative Management Area (CMA).

In this area, OGIA produce an Underground Water Impact Report (UWIR) every three years which sets out monitoring obligations for petroleum tenure holders across the Surat CMA. These include the construction and installation of groundwater monitoring equipment, and the measurement of groundwater pressure and chemistry, aquifer injection and associated water extraction volumes. As of late 2018, there are around 500 monitoring bores – with up to 1,000 individual monitoring points – and more than 7,000 CSG extraction bores.

There are a range of unique challenges in managing a network extending across an area of 100,000 km2 encompassing more than 20 hydrostratigraphic units, varying hydrochemical conditions, multiple fluid phases, and monitoring depths of up to 1,500 m. Across the area, bore construction and instrumentation differ significantly, necessitating careful data treatment and density corrections.

Responsible tenure holders capture and compile four groundwater datasets – pressure, chemistry, extraction, and injection – and submit them to OGIA every six months. OGIA complete an extensive series of checks on the received data. Given the scale of data received, OGIA have developed a number of in-house data assessment tools to verify data format and quality.
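A minimal sketch of what such an automated format-and-range check might look like. The field names, bore identifiers and plausibility limits are illustrative assumptions, not OGIA's actual schema or tooling.

```python
# Minimal format/range checks on a submitted pressure dataset.
# Field names and limits are illustrative, not OGIA's actual schema.
REQUIRED = {"bore_id", "date", "pressure_m"}

def check_record(rec):
    """Return a list of issues found in one monitoring record."""
    issues = []
    missing = REQUIRED - rec.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
        return issues
    if not isinstance(rec["pressure_m"], (int, float)):
        issues.append("pressure_m is not numeric")
    elif not -100.0 <= rec["pressure_m"] <= 1500.0:  # plausible-range gate
        issues.append(f"pressure_m out of range: {rec['pressure_m']}")
    return issues

batch = [
    {"bore_id": "RN12345", "date": "2018-10-01", "pressure_m": 325.4},
    {"bore_id": "RN12346", "date": "2018-10-01", "pressure_m": 9999.0},
    {"bore_id": "RN12347", "date": "2018-10-01"},
]
report = {r.get("bore_id"): check_record(r) for r in batch}
print(report)
```

In practice such checks would run per submission and feed a rejection or query-back workflow with the data provider.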

There are unique challenges in the management of OGIA’s dataset: multiple data providers, data types, large volumes of transient data, quality control of data format and content, and preparing data for internal and external release. This presentation will provide a summary of the challenges and unique data management tools developed by OGIA.

Moving towards near real-time groundwater level data. Providing an automated and consistent groundwater level dataset for Australia

Brendan Dimech 1, Todd Lovell 1, Mario Mirabile 1, Elisabetta Carrara 1

1. Bureau of Meteorology, Melbourne, VIC, Australia

Objectives: Under the Water Act 2007, the Bureau of Meteorology is required to collect, hold, interpret and disseminate Australia's water resource data. One of these datasets is a nationwide groundwater levels dataset, containing 230,000 bores with a recorded water level. These data are made available through the Bureau's Groundwater Explorer (bom.gov.au/water/groundwater/explorer) for download at bore level, or at state or river catchment scale.

The Bureau was publishing this data on the Groundwater Explorer twice a year.

Following customer feedback requesting increased data currency, the Bureau is now increasing the frequency of groundwater level ingestion and publication. A project was developed to automate the ingestion of data and its publishing to the Groundwater Explorer.

Design and Methodology: Under the Water Regulations 2008, all state and territory water agencies are required to deliver water resource data to the Bureau. These data are ingested into the Australian Water Resources Information System (AWRIS), a database developed to hold all water resource information submitted to the Bureau. Currently, the majority of the data relates to surface water.

This project was developed to complete several tasks:

• Working closely with lead water agencies to enhance and facilitate their data submission.


• Extending AWRIS to better accommodate groundwater information.

• Further automation of data processing and quality checking to allow automatic publishing to the Groundwater Explorer.

Results: The Bureau's Groundwater Explorer now publishes water levels within days of their being read, at sites where such data are available. This standardised dataset is available at a national scale for most states and territories.

Lead water agencies are required to deliver data monthly; in most cases, it is delivered more frequently. Automation has allowed the data to be ingested and published as it is delivered, with minimal latency. About 500 sites are telemetered, with readings updated weekly; a further 5,000 logged sites have readings within 2019.
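The publication step described above amounts to a per-site data-currency check. A minimal sketch, with an assumed freshness threshold rather than actual Bureau policy:

```python
from datetime import date, timedelta

# Illustrative currency check: flag sites whose latest reading is
# recent enough for near real-time publication. The 7-day threshold
# and site names are assumptions for the sketch.
def is_current(last_reading: date, today: date, max_age_days: int = 7) -> bool:
    return (today - last_reading) <= timedelta(days=max_age_days)

today = date(2019, 6, 30)
sites = {
    "bore_A": date(2019, 6, 28),   # telemetered, updated this week
    "bore_B": date(2019, 1, 15),   # logged site, stale
}
current = {s: is_current(d, today) for s, d in sites.items()}
print(current)
```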

Internally, Bureau staff now have access to automated and consistent water level data for use in analysis and assessment across Australia. For example, these data have been used to provide up-to-date and relevant groundwater information to the NSW Drought Taskforce and the MDBA/CEWO's Climate and Water Briefing.

When groundwater strikes: mapping shallow groundwater risks

Jeremy Bennett 1, Tara Forstner 1

1. Tonkin & Taylor Ltd, Newmarket, Auckland, New Zealand

Objectives: Hydrogeological studies typically focus on groundwater as a resource for both human and ecological activities. Shallow groundwater is often neglected as these resources may be considered too vulnerable to surface contamination, or the yield may be too variable. Despite shallow groundwater often not being a viable extractive resource, it can present a significant risk to infrastructure. These risks are likely to be exacerbated under changing climatic conditions (flooding, sea-level rise) and are applicable to a variety of risk mechanisms.

Design and methodology: Although shallow groundwater is subject to the same governing laws of flow as deeper groundwater resources, there is often far less information available about the spatial distribution of shallow groundwater surfaces.

This lack of information makes traditional groundwater flow modelling difficult, as data scarcity leads to greater model parameter uncertainty. As many shallow groundwater risks depend on hydraulic head rather than flow or aquifer yield, simplified maps of shallow groundwater head can be used to understand these risks.

We have developed a workflow for mapping shallow groundwater levels that incorporates a range of spatial and temporal information. The information is obtained from regional authority databases and environmental data sets. We used geostatistical relationships between variables to improve estimates of shallow hydraulic head in data-sparse areas.
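One simple way to realise this idea is a trend-plus-residual estimator: regress head on a covariate such as ground surface elevation, then interpolate the residuals between observation bores. The sketch below uses synthetic values and inverse-distance weighting as a simplified stand-in for the geostatistical step; it is not the authors' workflow.

```python
# Synthetic observations: (x_km, y_km, ground_elev_m, head_m).
obs = [
    (0.0, 0.0, 20.0, 18.5),
    (1.0, 0.0, 25.0, 23.0),
    (0.0, 1.0, 30.0, 27.5),
    (1.0, 1.0, 35.0, 32.0),
]

# Least-squares fit of head = a * elev + b (the covariate trend).
n = len(obs)
sx = sum(o[2] for o in obs); sy = sum(o[3] for o in obs)
sxx = sum(o[2] ** 2 for o in obs); sxy = sum(o[2] * o[3] for o in obs)
a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
b = (sy - a * sx) / n

def predict(x, y, elev):
    """Trend from elevation plus an IDW-interpolated residual."""
    trend = a * elev + b
    w = res = 0.0
    for ox, oy, oelev, ohead in obs:
        d2 = (x - ox) ** 2 + (y - oy) ** 2
        if d2 == 0:
            return ohead  # honour the observation exactly
        wi = 1.0 / d2
        w += wi
        res += wi * (ohead - (a * oelev + b))
    return trend + res / w

print(round(predict(0.5, 0.5, 27.5), 2))
```

Because residuals are interpolated exactly at data points, the mapped surface honours the observations, in the spirit of the results described below.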

Results: The results honour groundwater observation data and provide a continuous shallow groundwater surface across the area of interest. The workflow is flexible and allows different scenarios (e.g. seasonal variation) to be mapped efficiently.

Conclusions: The shallow groundwater level maps produced in this study can be used to quantify risks to infrastructure, including liquefaction risk and groundwater inundation. Regional shallow groundwater maps are also applicable as design criteria for water-sensitive urban design and construction. The automation of mapping workflows allows maps to be produced more efficiently and offers the opportunity for further development using more advanced approaches, such as machine learning.

Validating and scaling metered groundwater use data for the development of the Central Condamine groundwater flow model

Leon L. M. Leach 1

1. Dept Environment and Science, Dutton Park, QLD, Australia

This presentation describes the process used to pre-process groundwater use data for the MODFLOW unstructured grid (MFUSG) model for the Central Condamine and Tributaries. The objective was to validate and scale groundwater use data to the model's temporal and spatial scales where metered use data existed, and to derive and infill data where metered use data did not exist. The model domain is vast compared to other alluvial systems in Queensland, covering an area of approximately 7,720 square kilometres with approximately 8,950 registered bores. Of these, approximately 3,340 bores have a water entitlement (licence). Metering of some bores commenced in 1979, and to date 1,340 bores have been metered.

Groundwater use data were captured at various time scales, ranging from fortnightly to annual, and at different time intervals. Since metering commenced, water use data were stored in a variety of systems, ranging initially from paper records to various databases, and with a variety of index systems, ranging from registration number (one-to-one) to property number (many-to-one).

One of the challenges was to derive groundwater use at the day scale for all 3,340 bores for the period from 1960 to 2017. The first step in this process was to collate information on when and where the bores were drilled, and to establish whether each bore is presently being pumped or when it ceased to be pumped. The second step was to determine the likely extraction rate. For bores drilled before 1979, information on use and extraction rates was also obtained from property surveys and interviews with landholders.

The next step was to establish the time intervals when bores may have been pumped, at the day scale. For most bores, and particularly since 2005, the granularity of metered use data at the annual scale precludes the identification of individual pumping sequences. For some irrigation bores close to observation bores equipped with data loggers, pumping sequences could be readily identified in the water level behaviour. For more distant bores, pumping sequences were derived from an irrigation scheduling model.

Pre-processing software was developed using a hierarchical approach to derive daily groundwater use data, with a data quality index identifying reliability and method. The index was used to assign weights for calibration purposes. Where possible, the quantum of the metered use was retained.
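Where only an annual metered total exists, one basic infilling step is to disaggregate that total across an assumed pumping season. A minimal sketch, with an assumed October-March season and uniform daily rates rather than the calibrated irrigation scheduling model:

```python
from datetime import date, timedelta

# Illustrative disaggregation of an annual metered volume to daily use.
# The season window and the uniform-rate assumption are simplifications
# for the sketch, not the model's derived pumping sequences.
def daily_use(annual_ml: float, year: int) -> dict:
    """Spread an annual volume uniformly over an assumed Oct-Mar season."""
    start = date(year, 10, 1)
    end = date(year + 1, 3, 31)
    days = (end - start).days + 1
    rate = annual_ml / days
    return {start + timedelta(d): rate for d in range(days)}

use = daily_use(365.0, 2016)  # 365 ML metered for the water year
print(len(use), round(sum(use.values()), 1))
```

A quality index attached to such derived values (metered vs. infilled vs. interviewed) can then drive calibration weights, as the abstract describes.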

Porosity and permeability of the Springbok Sandstone, Surat Basin – integrating wireline and laboratory data

Oliver Gaede 1, Mitchell Levy 1, David Murphy 1, Les Jenkinson 2, Thomas Flottmann 2

1. Queensland University of Technology, Brisbane, QLD, Australia
2. Origin Energy, Brisbane, QLD, Australia

Objectives: The Late Jurassic Springbok Sandstone in the Surat Basin is highly heterogeneous in terms of lithology and hydrogeological properties. This heterogeneity is poorly defined in well logs, due, in part, to clay phases that do not exhibit a prominent gamma ray signature. The resulting uncertainties in the hydrogeological properties are transferred to uncertainties in groundwater models of the Springbok Sandstone. Further, only a small amount of porosity and permeability data is publicly available, and no petrophysical model of the Springbok Sandstone has been published in the peer-reviewed literature. At the same time, accurately predicting the potential groundwater impact of coal seam gas production from the underlying Walloon Subgroup is of significant societal and economic importance.

Design and Methodology: We present new porosity and permeability data from more than 50 core samples from the Springbok Sandstone, alongside a review of existing data. Based on this dataset and wireline data from five study wells, a new petrophysical model for the formation is proposed.
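A petrophysical model of this kind typically builds on standard log transforms. As a minimal illustration, the conventional density-porosity relation applied to a bulk density log, using textbook matrix and fluid densities rather than values calibrated for the Springbok:

```python
# Standard density-porosity relation:
#   phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)
# Matrix and fluid densities below are textbook values (quartz, fresh
# water), not the calibrated Springbok model.
RHO_MATRIX = 2.65  # g/cm3
RHO_FLUID = 1.00   # g/cm3

def density_porosity(rho_bulk: float) -> float:
    phi = (RHO_MATRIX - rho_bulk) / (RHO_MATRIX - RHO_FLUID)
    return max(0.0, min(phi, 1.0))  # clamp to the physical range [0, 1]

for rho_b in (2.65, 2.40, 2.15):
    print(rho_b, round(density_porosity(rho_b), 3))
```

A full triple-combo model would combine this with neutron and resistivity responses and a clay-volume correction, which is where the weak gamma ray signature of the Springbok's clay phases becomes important.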

Original Data and Results: The results show that (a) the Springbok Sandstone is highly variable in terms of hydrogeological parameters, (b) this variability can be captured with a petrophysical model that draws on a full log suite (i.e. triple-combo) and (c) electrofacies classifications based on gamma ray and bulk density log cut-offs do not reflect this variability.

Conclusion: Ultimately, our results can be used in combination with 3D geological models to predict the presence of rock units with sufficient transmissivity to constitute aquifers. This work also provides a platform which, in combination with future detailed (sequence) stratigraphic analyses, will allow geobodies and their facies affiliations to be defined in order to predict aquifer properties and their spatial distribution in the Springbok. Our results therefore provide key inputs into a potential regional aquifer characterisation of the Springbok.


Field-scale downscaling of passive microwave soil moisture retrievals using a neural network trained on integrated hydrological model predictions

Steven J. Berg 1,2, Graham Stonebridge 1,2, David Hah 1,2, Steven Frey 1,2

1. Aquanty Inc., Waterloo, Ontario, Canada

2. University of Waterloo, Waterloo, Ontario, Canada

Passive microwave satellite remote sensing systems (e.g., SMAP and SMOS) can provide reliable near-real-time observations of surficial soil moisture at a coarse resolution of 9-to-30 km. Recent efforts to downscale these observations have focused on the fusion of Visible-Infrared or Synthetic Aperture Radar imagery.

However, these methods are limited by the availability of the ancillary satellite imagery datasets, and their resolution is limited to 1-to-2 km. The state-of-the-art in neural network downscaling methods is also limited by the extreme sparseness of soil moisture probe datasets. We present an alternative downscaling procedure that leverages fully integrated groundwater-surface water models for their insights into the spatial distributions of soil moisture.

We constructed a feedforward neural network with 30-m aggregated input parameters including topographic wetness index, hydraulic conductivity, land cover class, and soil moisture observations interpolated from daily passive microwave soil moisture products. The neural network contained one input layer, two hidden layers, and one output layer. The novel aspect is that the neural network was trained on nodal soil moisture values predicted at daily intervals by a HydroGeoSphere model spanning Southern Ontario. With ~900,000 total nodes at surface, this vast training dataset spans ~75,000 km2 at a resolution of 10-to-500 m.
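A forward pass through a network of the stated shape can be sketched as follows. The layer widths, activation function and random weights here are purely illustrative; the actual model was trained on HydroGeoSphere output rather than initialised this way.

```python
import math
import random

# Minimal forward pass for a network with one input layer, two hidden
# layers and one output, matching the stated architecture. Weights are
# random and illustrative -- no training is performed here.
random.seed(0)

def layer(x, n_out):
    """Dense layer with tanh activation and random illustrative weights."""
    return [math.tanh(sum(random.uniform(-0.5, 0.5) * xi for xi in x))
            for _ in range(n_out)]

# Example inputs for one 30-m cell: TWI, log10(K), land-cover class id,
# coarse-resolution soil moisture (values are invented).
features = [8.2, -5.0, 3.0, 0.27]
h1 = layer(features, 8)
h2 = layer(h1, 8)
soil_moisture = layer(h2, 1)[0]  # tanh keeps the output in (-1, 1)
print(-1.0 < soil_moisture < 1.0)
```

In the trained model this output would be rescaled to a physical soil moisture value for each fine-resolution cell.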

The trained neural network was able to delineate sharp features such as wetland boundaries and produced plausible soil moisture patterns over various geophysical features such as ravines and moraines. Neural network predictions were comparable to in-situ soil moisture probe data.

We demonstrate that a neural network can be trained using the outputs from a fully integrated hydrological model and applied in practice for the downscaling of passive microwave soil moisture retrievals.


The importance of quality assurance and quality control for making the most out of hydrogeochemistry data

Ivan Schroder 1, Joanna Tobin 1, Patrice de Caritat 1, Luke Wallace 1

1. Geoscience Australia, Canberra, ACT, Australia

Geoscience Australia’s Northern Australia Hydrogeochemistry Survey (NAHS) has been collecting groundwater samples across the north-eastern Northern Territory, investigating water-rock interaction to identify regional mineral prospectivity and establish geochemical baselines. Given the sensitivity of groundwater composition to a range of confounding variables, the program adopted robust Quality Assurance and Quality Control (QA/QC) protocols to minimise, and capture, as much as possible of the uncertainty introduced by the sampling, processing and analysis stages of a groundwater survey. This presentation shares a systematic approach that can be adopted for future sampling campaigns, useful scripted methods for quickly visualising QA/QC data to make judgements on quality, and examples from the NAHS of major problems caught through our QA/QC process.

QA/QC begins before the survey commences, with a plan (and budget) for the additional samples that need to be collected. We follow a triplicate sampling approach, with field and lab duplicates every 10 sites. Our field duplicates capture the errors introduced through the sampling process and field heterogeneity, while our lab duplicates capture variance in the laboratory analysis. Additionally, water and filter blanks are collected on every sampling trip to measure any systematic contamination resulting from sampling, storage, transport and processing. For non-isotope systems, standards are included to assess accuracy of response as well as to track batch effects.

Overlapping samples are used to check for consistent performance when a new laboratory or method is trialled. Consideration is given both to how these blind QA/QC samples perform and, more holistically, to whether the batch chemical results make sense, using both charge balance and element ratios.
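The charge-balance check mentioned here follows the standard formula CBE (%) = 100 × (Σ cations − Σ anions) / (Σ cations + Σ anions), with concentrations in meq/L. A minimal sketch; the ±5% screening threshold and the example concentrations are assumptions, not NAHS acceptance criteria:

```python
# Batch-level sanity test on a water analysis via charge balance.
# Concentrations are in meq/L; example values are invented.
def charge_balance_error(cations_meq: float, anions_meq: float) -> float:
    """CBE (%) = 100 * (sum cations - sum anions) / (sum cations + sum anions)."""
    return 100.0 * (cations_meq - anions_meq) / (cations_meq + anions_meq)

cbe = charge_balance_error(cations_meq=7.9, anions_meq=8.3)
print(round(cbe, 2), abs(cbe) <= 5.0)  # a common screening gate is +/-5%
```

Samples failing such a gate would be flagged for re-analysis or queried with the laboratory before interpretation.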

Using this range of QA/QC samples and semi-automated scripts, this project has been able to quickly calculate statistics and visualise the performance of each new analysis batch. Worryingly, in several instances lab duplicates were found to have much poorer agreement than field duplicates. As a result, instrument-specific problems, changes in an instrument or calibration within and between batches, sample number mix-ups, dilution errors, and systematic offsets attributed to instrument software errors were caught. Identifying these problems at an early stage, which is only possible with independent and blind QA/QC samples, afforded an opportunity to work with the laboratory to deduce and resolve issues quickly. The result is greater confidence both in the true uncertainty of our datasets and that interpretations are being made from a validated view of the groundwater system.


Data Assimilation & Metrics for Models in Decision
