Document Title: D2.1.2 Data Acquisition Specification 2
Document Version: 1.0
Project Number: FP7-2011-7-287661
Project Acronym: GAMBAS
Project Title: Generic Adaptive Middleware for Behavior-driven Autonomous Services
Contract Delivery Date: M18
Actual Delivery Date: July 31st, 2013
Deliverable Type: R (Report)
Deliverable Access: PU (Public)
Document Authors and Reviewers (Organization, Work Package):
Umer Iqbal (UDE, WP2)
Marcus Handte (UDE, WP2)
Josiane Xavier Parreira (NUIG, WP2)
Abstract
This document is the second version of the data acquisition specification. The first version of this document provided the design details of the adaptive data acquisition framework. As specified in the first version, the framework consists of a component system and an activation system. The component system provides a component-based approach for performing context recognition. The activation system enables the automatic, state-based activation of different context recognition configurations.
This version of the deliverable extends the first version by providing details about the external interfaces of the Data Acquisition Framework (WP2) with the other GAMBAS system components, namely the Privacy Preserving Framework (WP3) and the Semantic Data Storage (WP4).
Revision History:
0.2 UDE Added details about DQF external interfaces with PRF and SDS.
0.3 UDE Added innovation sections due to review comments.
0.4 UDE Updated conclusions.
0.5 UDE Added section on configuration folding.
0.6 UDE Internal review.
0.7 UDE Integrated review comments.
0.8 UDE Internal review.
0.9 NUIG Review.
1.0 UDE Addressed review comments.
4.2.1 Privacy Preservation Framework Interface
5 Requirements Coverage Analysis
5.1 Framework-related Requirements
5.2 Developer-related Requirements
5.3 Data-related Requirements
6 Conclusion
7 Acronyms
8 Bibliography
List of Figures
Figure 1 – Data Acquisition Framework Overview
Figure 2 – Component System Overview
Figure 3 – Speech Detection Configuration Example
Figure 4 – Component System Structure
Figure 5 – Component System Tool Support
Figure 6 – Activation System Overview
Figure 7 – Examples of Activation System States
Figure 8 – Example for Rule-based Transitions between States
Figure 9 – Activation System Structure
Figure 10 – Example Mapping of State Machines to Components
Figure 11 – Example for Active Graph Structures in Folded Configuration
Figure 12 – Activation System Tool Support
Figure 13 – Folded Configuration with Transformation Component
Figure 14 – Folded Configuration with Delayed Transformation
Figure 15 – Context Recognition Components Overview
Figure 16 – Integration of DQF, SDS, PRF
Figure 17 – Integrated GAMBAS Middleware
Figure 18 – DQF Interaction with PRF
Figure 19 – Local and Remote Components for Storing Data in SDS
1 Introduction
This deliverable is the second version of the GAMBAS data acquisition specification. Like the first version, it describes the adaptive data acquisition framework as part of the WP2 specification. The description covers the system architecture of the framework, including the component system for developing context recognition applications and the activation system for enabling automatic, state-based activation of different configurations. The document also provides insight into the design rationale for the system and gives details on how specific objectives will be achieved, including the motivation behind the component-based approach to context recognition, the chosen component model, and energy-efficient techniques for performing context recognition on resource-constrained mobile devices. Furthermore, it explains the rationale behind the state machine abstraction of the activation system and how the energy-efficient techniques used in the framework interact with the other architectural components of the GAMBAS middleware. Like the first version, the intended audiences for this document are primarily the GAMBAS stakeholders as well as the GAMBAS research and development teams of the consortium. However, as a public document, this WP2 specification is freely available to other interested parties as well.
1.2 Scope
This deliverable is the second of two versions of the WP2 specification on the adaptive data acquisition framework. It extends the first deliverable, which was produced based on the project details carved out in the requirements specification (D1.1) and the use case specification (D1.2), by describing the external interfaces of the Data Acquisition Framework (WP2) with the other GAMBAS system components, namely the Privacy Preserving Framework (WP3) and the Semantic Data Storage (WP4). The details about these external interfaces are accompanied by example applications that have been developed during the second year of the project. The details are provided in Section 4 of this document.
work that is to be done during the course of the project. The details of the Innovations are provided in the respective sections of the document.
1.4 Innovations
The GAMBAS data acquisition framework is responsible for providing a platform for the development of context recognition applications that can acquire data from physical and virtual sensing sources, extract features from the acquired data, and deduce meaningful context information from it. In GAMBAS, we employ a component-based abstraction for the development of these applications. This abstraction not only provides a generic way of developing such applications and enhances the reusability of already implemented code, but also makes the structure of an application amenable to analysis so that energy-efficiency techniques can be applied to it.
The two main innovations of the data acquisition framework in GAMBAS are therefore (1) the use of a component-based abstraction for achieving generic context recognition and (2) the use of this abstraction to achieve energy-efficient execution of context recognition applications.
1.4.1 Generic Context Recognition
The component-based abstraction models every context recognition functionality as an independent piece of code called a component. A component is an atomic building block that encapsulates specific recognition logic. It has input and output ports which allow it to communicate with other components, and it offers parameterization support: a component can be configured according to developer-specific requirements. One key advantage of using a component abstraction is that the same components can be used to recognize different types of context, thus enabling generic context recognition. By using the component-based abstraction, the GAMBAS data acquisition framework provides context recognition at two levels.
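To illustrate the reuse argument, the following minimal sketch shows how one parameterizable component type can serve two different recognition tasks. The class and parameter names are illustrative only and are not part of the actual framework API.

```java
// Hypothetical sketch: a single classifier component type reused for two
// different context types, differing only in its parameterization.
public class ThresholdClassifier {
    private final String contextLabel; // parameter: context reported by this instance
    private final double threshold;    // parameter: decision boundary on the feature

    public ThresholdClassifier(String contextLabel, double threshold) {
        this.contextLabel = contextLabel;
        this.threshold = threshold;
    }

    // Input port: one feature value. Output port: a context label or null.
    public String classify(double feature) {
        return feature >= threshold ? contextLabel : null;
    }

    public static void main(String[] args) {
        // Same component type, two parameterizations, two recognized contexts.
        ThresholdClassifier speech = new ThresholdClassifier("SPEECH", 0.6);
        ThresholdClassifier motion = new ThresholdClassifier("WALKING", 0.3);
        System.out.println(speech.classify(0.8)); // SPEECH
        System.out.println(motion.classify(0.1)); // null
    }
}
```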
The lower level is the component system, which is used to develop the components required for performing context recognition. A set of components linked together to perform a specific context recognition task is called a configuration. The developer can create configurations either by creating new components or by reusing existing ones and linking them according to the requirements of the configuration. The higher level is the activation system, in which configurations are associated with states (using the same component abstraction described for the component system) and the transitions between states are realized as if-else rules.
The use of the component model as a fundamental abstraction in the two levels of the data acquisition framework helps in providing context recognition in a generic manner. Further details on the component system and the activation system are provided in Section 2 of this document.
1.4.2 Energy-Efficient Recognition
single but meaningful context recognition. However, it is also possible that multiple types of context characteristics are detected by executing more than one such application simultaneously. The state machine abstraction, on the other hand, allows GAMBAS applications to detect not just one type of context but several, as may be required by the user during the course of a day or during a particular activity.
In both of the above-mentioned cases, i.e., when multiple independent applications are executed simultaneously to detect different context characteristics, or when a single application using the state machine abstraction is executed to detect different context characteristics at different points in time, the execution poses challenges for the energy-efficient operation of these applications.
The configuration folding technique removes redundancies between different context recognition applications running simultaneously by analyzing their structures and producing a single configuration that is valid for all of them. The resulting configuration is then instantiated by the runtime system and used to perform context recognition. Experiments with sample applications have shown that up to 48% of energy can be saved if configuration folding is applied.
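The core idea of folding can be sketched as follows: if components are identified by their type and parameterization, identical components appearing in several configurations need to be instantiated only once. This is a simplified illustration of the redundancy-removal step, not the actual algorithm from [IQBA12]; the component names are invented for the example.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class ConfigurationFolding {
    // A component is identified here by its type and parameterization; two
    // components with identical identity are redundant across configurations.
    public static List<String> fold(List<List<String>> configurations) {
        LinkedHashSet<String> folded = new LinkedHashSet<>();
        for (List<String> config : configurations)
            folded.addAll(config); // identical components are kept only once
        return new ArrayList<>(folded);
    }

    public static void main(String[] args) {
        List<String> speech = List.of("Mic(8kHz)", "FFT(256)", "SpeechClassifier");
        List<String> music  = List.of("Mic(8kHz)", "FFT(256)", "MusicClassifier");
        // Folding shares the microphone sampling and FFT components, so they
        // are instantiated and executed only once for both applications.
        System.out.println(fold(List.of(speech, music)));
        // → [Mic(8kHz), FFT(256), SpeechClassifier, MusicClassifier]
    }
}
```

In the real framework the configurations are graphs rather than flat lists, and folding must also preserve the wiring between components, but the energy saving stems from exactly this kind of sharing of sampling and preprocessing stages.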
One limiting factor for configuration folding arises when different context recognition applications share redundant components that are parameterized differently, which requires the use of transformation components in the final folded configuration. A transformation prohibits the further removal of redundancies between the applications, as described in the relevant sections, and thus causes suboptimal energy savings. In order to deal with such cases, we are investigating possibilities for enhancing energy savings when transformations are used; a brief discussion is given in Section 2.3 of this document.
1.5 Structure
The structure of the remainder of this document is as follows. In Section 2, we give a high level description of the data acquisition framework followed by the design rationale and various building blocks of the component system and the activation system. In Section 3, we describe the context recognition components and intent recognition components to be developed using the component system. Besides providing a test case for the framework, these components are also required for creating the prototype applications of the GAMBAS project. In Section 4, we discuss the integration of the framework with other work packages such as WP3 and WP4. In Section 5, we perform a requirement coverage analysis for the requirements related to the data acquisition framework and thereafter, we conclude the document.
2 Data Acquisition Framework
The data acquisition framework (DQF) is one of the fundamental building blocks of the GAMBAS middleware. Conceptually, the DQF is responsible for context recognition on personal mobile devices including smartphones, PDAs and laptops. The DQF supports various platforms including Android, Windows and Linux. The DQF is a multi-stage system: on the one hand, it allows developing reusable context recognition applications; on the other hand, it automatically enables the relevant applications at a particular point in time. Specifically, the DQF consists of a component system and an activation system, as shown in Figure 1.
Figure 1 – Data Acquisition Framework Overview
The component system uses a component abstraction to enable the composition of different context recognition stacks that are executed continuously. A context recognition stack, or simply a configuration, refers to a set of sampling, preprocessing and classification components wired together to detect a specific context. Examples of such context include the physical activity of a person, the location of a person, etc. These configurations can be used to detect context for a multitude of purposes and have applications in areas such as smart home environments, assisted living for the elderly, proactive route planning, budget shopping, etc.
screen turns on (transition for state change), for instance. In the following sections a more detailed description of the component system and the activation system is given.
2.1 Component System
In the DQF, the user’s context and activity recognition is done using a component based approach. This approach promotes reusability and rapid prototyping. It also gives us the ability to analyze application structures in order to optimize their execution in an energy efficient manner. In the component system, each application consists of two parts: the part containing the recognition logic and the part containing the application logic. The part that contains the recognition logic consists of sampling, preprocessing and classification components that are connected in a specific manner as shown in Figure 2. The part that contains the remaining application logic can be structured arbitrarily. Upon start up, a context recognition application passes the required configuration to the component system which then instantiates the components and executes the configuration. Upon closing, the configuration is removed by the component system which eventually releases the components that are no longer required. The component system supports various platforms such as J2SE and Android. Using an Eclipse‐based tool, application developers can visually create configurations by selecting and parameterizing components and by wiring them together.
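The interaction between an application and the component system described above can be sketched as follows. The class and method names are invented for illustration and do not reflect the actual framework API; the sketch only captures the add-on-startup, remove-on-shutdown protocol.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class RecognitionApp {
    // Minimal sketch: on startup the application hands its configuration to
    // the component system, which instantiates and executes it; on shutdown
    // the configuration is removed and unused components are released.
    static class ComponentSystem {
        private final Set<String> active = new LinkedHashSet<>();
        void addConfiguration(String config)    { active.add(config); }    // instantiate + execute
        void removeConfiguration(String config) { active.remove(config); } // release components
        Set<String> activeConfigurations()      { return active; }
    }

    public static void main(String[] args) {
        ComponentSystem system = new ComponentSystem();
        system.addConfiguration("SpeechDetection");        // application startup
        System.out.println(system.activeConfigurations()); // [SpeechDetection]
        system.removeConfiguration("SpeechDetection");     // application shutdown
        System.out.println(system.activeConfigurations()); // []
    }
}
```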
Figure 2 – Component System Overview
2.1.1 Component Model
To structure the recognition logic, our component system realizes a lightweight component model which introduces three abstractions. First, components represent different operations at a developer‐defined level of granularity. Second, connectors are used to represent both the data as well as the control flow between individual components. And third, configurations are used to define a particular composition of components that recognizes one or more context characteristics.
2.1.1.1 Components
in contrast, they can be instantiated multiple times and they are parameterizable to support different application requirements. Due to the parameters, the component model is more flexible than other models. Besides parameters, all components exhibit a simple life cycle that consists of a started and a stopped state. To interact with other components, a component may declare a set of typed input and output ports that can be connected using connectors.
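The life cycle and port concepts described above can be made concrete with a small sketch. The following averaging component is hypothetical (the framework's real components are not shown in this document); it illustrates a parameter that is fixed while the component is started, a started/stopped life cycle, and a numeric input and output port.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AverageComponent {
    private int window = 4;      // parameter with a default value
    private boolean started;
    private final Deque<Double> buffer = new ArrayDeque<>();
    private double output;       // value exposed on the output port

    // Parameterization support: parameters may only change while stopped.
    public void setParameter(String name, Object value) {
        if (started) throw new IllegalStateException("parameters fixed while started");
        if ("window".equals(name)) window = (Integer) value;
    }

    // Simple life cycle consisting of a started and a stopped state.
    public void start() { started = true; }
    public void stop()  { started = false; buffer.clear(); }

    // Input port: receives one sample per invocation and updates the output port.
    public void pushSample(double sample) {
        if (!started) throw new IllegalStateException("component not started");
        buffer.addLast(sample);
        if (buffer.size() > window) buffer.removeFirst();
        output = buffer.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }

    public double getOutput() { return output; }

    public static void main(String[] args) {
        AverageComponent avg = new AverageComponent();
        avg.setParameter("window", 2);
        avg.start();
        avg.pushSample(1.0);
        avg.pushSample(3.0);
        System.out.println(avg.getOutput()); // 2.0
    }
}
```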
Figure 3 – Speech Detection Configuration Example
As depicted in Figure 3, the recognition logic of a speech recognition application consists of a number of components which can be divided into three levels. At the lowest level, the sampling components are used to gather raw data from an audio sensor. On top of sampling components, a set of preprocessing components takes care of various transformations, noise removal and feature extraction. Finally, the extracted features are fed into (a hierarchy of) classifier components that detect the desired characteristics. Depending on the purpose and extent of the application logic, it is usually possible to further subdivide the layers into smaller operators. Although our component system does not enforce a particular granularity, such operators should usually be implemented as individual components to maximize component reuse.
2.1.1.1.1 Parameters
input and output ports of different types without worrying about their memory allocation, ordering and memory de-allocation. The internal buffer management for the ports is transparent to the component abstraction. Connectors are implemented using an observer pattern [GAMM1] in which the output ports act as subjects whereas the input ports act as observers. This enables the modeling of 1:n relationships between components, which is required to avoid duplicate computations. To avoid strong coupling between components, input ports do not register themselves at the output ports; instead, the component system takes care of managing all required registrations.
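A minimal sketch of this connector scheme is given below, assuming invented port types: the output port acts as the subject, the input ports act as observers, and the registration is performed by an external wiring function standing in for the component system, so the ports themselves stay loosely coupled.

```java
import java.util.ArrayList;
import java.util.List;

public class PortWiring {
    // Input port: observer receiving data frames from an upstream component.
    interface InputPort { void receive(double[] frame); }

    // Output port: subject. Observers are registered by the component system
    // (via connect below), not by the input ports themselves.
    static class OutputPort {
        private final List<InputPort> observers = new ArrayList<>();
        void emit(double[] frame) { for (InputPort p : observers) p.receive(frame); }
        List<InputPort> observers() { return observers; }
    }

    // The component system wires one output to n inputs (1:n), so an upstream
    // result is computed once and delivered to every consumer.
    static void connect(OutputPort out, InputPort in) { out.observers().add(in); }

    public static void main(String[] args) {
        OutputPort micOut = new OutputPort();
        List<String> log = new ArrayList<>();
        connect(micOut, f -> log.add("fft:" + f.length));
        connect(micOut, f -> log.add("zcr:" + f.length));
        micOut.emit(new double[256]); // one frame reaches both consumers
        System.out.println(log); // [fft:256, zcr:256]
    }
}
```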
energy efficient techniques if more than one application is executed simultaneously. When the applications do not require the context information anymore the runtime system stops executing the associated configurations. A detailed description of the component system structure and execution of applications is given in the following sections.
2.1.2.1 System Structure
As shown in Figure 4, the main elements of the runtime system of the component system are the configuration store, the configuration folding algorithm [IQBA12] and the applications. The configuration store is used to cache the configurations associated with the applications that are active. It is also used to store their folded configuration. The configuration folding algorithm provides energy-efficient execution of context recognition applications, provided that more than one application is executed simultaneously. The entity responsible for managing the runtime system is called the component manager. The component manager will be implemented as an Android service (recognition service) and must be installed on the device separately. When creating an application, the developer must provide code to bind the application to this recognition service. When the application is finally deployed on the device and executed, the recognition service is activated, instantiates the configuration associated with the application, and executes it.
Figure 4 – Component System Structure
2.1.2.2 Configuration Execution
energy requirements, the component manager does not directly start the components contained in the configuration. Instead, it uses the set of active configurations as an input for our configuration folding algorithm. The goal of the configuration folding algorithm is to remove redundant components that are present in different applications and perform the same sampling or compute redundant results. Using the set of configurations, the configuration folding algorithm computes a single, folded configuration that produces all results required by all running applications without duplicate sampling or computation. Once the configuration has been folded, the component manager forwards it to the delta configuration activator. By comparing the running and the folded configuration, the activator determines and executes the set of life cycle and connection management operations (starting, stopping and rewiring of components) that must be applied to the running configuration in order to transform it into the folded target configuration. When executing the different operations, the delta activator takes care of ensuring that their ordering adheres to the guarantees provided by the component life cycle. To do this, it stops existing components before they are manipulated. This procedure is illustrated in Figure 4.
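The delta computation performed by the activator can be sketched as a set difference over component identities: obsolete components are stopped first, then new components are started, matching the life-cycle ordering guarantee described above. This is a simplified model; the real activator additionally rewires connections, which is omitted here.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DeltaActivator {
    // Compares the running configuration with the folded target configuration
    // and derives the life-cycle operations that transform one into the other.
    public static List<String> delta(Set<String> running, Set<String> target) {
        List<String> ops = new ArrayList<>();
        for (String c : running)
            if (!target.contains(c)) ops.add("stop " + c);   // obsolete components first
        for (String c : target)
            if (!running.contains(c)) ops.add("start " + c); // then the new ones
        return ops;
    }

    public static void main(String[] args) {
        Set<String> running = new LinkedHashSet<>(List.of("Mic", "FFT", "SpeechClassifier"));
        Set<String> folded  = new LinkedHashSet<>(List.of("Mic", "FFT", "MusicClassifier"));
        // Shared components (Mic, FFT) keep running untouched.
        System.out.println(delta(running, folded));
        // → [stop SpeechClassifier, start MusicClassifier]
    }
}
```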
2.1.2.3 Platform Support
The core abstractions of the component system as well as the component manager are implemented in Java 1.5. In order to support multiple platforms, different wrappers have been implemented that simplify the usage of the component system on platforms including Windows, Linux and Android.
2.1.2.4 Tool Support
In addition to the platform support, the component system provides offline tools to support rapid prototyping. These tools include a visual editor which is used for creating and updating configurations for the context recognition applications. The visual editor provides a user friendly interface which allows developers to drag, drop, parameterize and wire existing components to create new configurations or update existing ones. The visual editor is implemented as a plug‐in for the Eclipse IDE (Version 3.7 and above).
In addition to the visual editor, the component system also provides a large set of sampling, preprocessing and classification components as part of the component toolkit. At the sampling level, our toolkit provides components that access sensors available on most personal mobile devices. This includes physical sensors such as accelerometers, microphones, magnetometers and GPS, as well as Wi-Fi and Bluetooth scanning. In addition, we provide access to virtual sensors, for instance, personal calendars. For preprocessing, the toolkit contains various components for signal processing and statistical analysis. This includes simple components that compute averages, percentiles, variances, entropies, etc. over data frames as well as more complex components such as finite impulse response filters, fast Fourier transformations, gates, etc. Furthermore, the toolkit contains a number of specialized feature extraction components that compute features for different types of sensors, such as the spectral rolloff, entropy and zero crossing rate used in audio recognition applications [LU09], or Wi-Fi fingerprints which can be used for indoor localization. At the classification layer, the toolkit contains a number of trained classifiers which we created as part of the audio and motion recognition applications. Furthermore, there are a number of platform-specific components which are used to forward context to an application, which enables the development of platform-independent classifiers. In Android, for example, a developer can attach the output of a classifier to a broadcast component which sends results to interested applications using broadcast intents. We have also developed a number of components that are useful for application development and performance evaluation. These include components that record raw data streams coming from sensors as well as pseudo sensors that generate readings from pre-recorded data streams.
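As an example of the feature extraction components mentioned above, the zero crossing rate of an audio frame can be computed as the fraction of adjacent sample pairs whose signs differ. The following is a self-contained sketch of the computation, not the toolkit's actual component code.

```java
public class ZeroCrossingRate {
    // Zero crossing rate: fraction of adjacent sample pairs with differing
    // signs. A standard audio feature used by speech/music classifiers.
    public static double zcr(double[] frame) {
        int crossings = 0;
        for (int i = 1; i < frame.length; i++)
            if ((frame[i - 1] >= 0) != (frame[i] >= 0)) crossings++;
        return (double) crossings / (frame.length - 1);
    }

    public static void main(String[] args) {
        double[] alternating = {1, -1, 1, -1, 1}; // crosses zero at every step
        double[] constant    = {1, 1, 1, 1, 1};   // never crosses zero
        System.out.println(zcr(alternating)); // 1.0
        System.out.println(zcr(constant));    // 0.0
    }
}
```

In the component system, such a computation would be wrapped as a preprocessing component whose input port receives audio frames and whose output port emits the feature value to downstream classifiers.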
Together, these components can greatly simplify the application development process on mobile devices as they enable the emulation of sensors that might not be available on a particular device. As described earlier, such context characteristics can be detected using the component system by developing configurations with the appropriate components, parameterizations and connections. Furthermore, in order to fully identify a particular context, more than one configuration may be needed at a particular time. In real life, however, the context of an entity does not remain static, and over time the detection of different or new context characteristics is required.
automatic, state‐based activation of different configurations associated with developer defined tasks. Hence, in our activation system, the entity’s context is modeled as a state with different configurations associated with it, irrespective of its granularity. The transitions between the states are modeled using a rule based approach, e.g., if for a state “working”, the associated configurations provide negative results (e.g. the user is not present in his/her office anymore) or results not up to a certain threshold, the activation system uses this to trigger an associated state transition.
2.2.1.1 States
Figure 7 – Examples of Activation System States
2.2.1.2 Transitions
Transitions are defined by conditional changes in the configurations associated with a state. When the changes suggest that a certain condition holds, the activation system disables the current state and its associated configurations and enables the ones associated with the new state. This is done by modeling the transitions using a rule-based approach. Each transition is represented by an abstract syntax tree in which conditions or thresholds for each configuration are evaluated. Depending on the evaluation of the abstract syntax tree, the activation system decides whether a state change has occurred.
current state, the activation system evaluates the abstract syntax tree associated with Transition 21.
2.2.2 Runtime System
The main task of the runtime system is to load the state machine pertaining to a user and, as indicated by the application logic, instantiate the configurations associated with states, identify the current state, instantiate rules for the different transitions and evaluate the abstract syntax trees associated with the respective transitions. Furthermore, the activation system executes the state machines in an energy-efficient manner by applying configuration folding among all configurations across all the different states. The outcome of such a "folded" state machine is a single folded configuration. Clearly, it is possible that in such a folded configuration different configurations share the same graph structure, at least to a certain level. Therefore, the activation system provides appropriate logic for evaluating transitions between the states. Further details on how this is achieved are described in the following sections.
2.2.2.1 System Structure
Figure 9 – Activation System Structure
2.2.2.2 Configuration Mapping
In this section we describe how different configurations related to different states are folded and how the rule engine applies rules representing transitions between the states. To understand the mappings consider an example of a state machine with two states as shown in Figure 10(a). Each state has two configurations attached to it. When the activation system loads the state machine it applies the configuration folding algorithm on all configurations associated with both states and the result is shown in Figure 10(b).
Let’s assume the following rules for the two transitions:
Transition 12: IF result of Config. A OR result of Config. B EQUALS FALSE then State 2
Transition 21: IF result of Config. C OR result of Config. D EQUALS FALSE then State 1
Their abstract syntax trees and how they are integrated with folding is shown in Figure 11.
Figure 11 – Example for Active Graph Structures in Folded Configuration
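The evaluation of such transition rules can be sketched as a small abstract syntax tree over the current results of the configurations. The node types below are illustrative, not the framework's actual rule engine; they implement Transition 12 from the example above, which fires when the result of Config. A or Config. B equals false.

```java
import java.util.Map;

public class TransitionRules {
    // A transition rule is an abstract syntax tree evaluated over the
    // current boolean results of the configurations of the active state.
    interface Node { boolean eval(Map<String, Boolean> results); }

    // Leaf: "result of <config> EQUALS FALSE".
    static Node equalsFalse(String config) {
        return r -> Boolean.FALSE.equals(r.get(config));
    }

    // Inner node: logical OR over two subtrees.
    static Node or(Node a, Node b) { return r -> a.eval(r) || b.eval(r); }

    public static void main(String[] args) {
        // Transition 12: IF result of Config. A OR result of Config. B
        // EQUALS FALSE then State 2.
        Node transition12 = or(equalsFalse("A"), equalsFalse("B"));
        System.out.println(transition12.eval(Map.of("A", true, "B", true)));  // false: stay in State 1
        System.out.println(transition12.eval(Map.of("A", true, "B", false))); // true: switch to State 2
    }
}
```

Transition 21 is structured identically over the results of Config. C and Config. D, so the rule engine only needs the one tree shape instantiated with different leaves.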
2.2.2.3 Platform Support
The core abstractions of the activation system will be implemented in Java 1.5. In order to support multiple platforms, different wrappers will be implemented that simplify the usage of the activation system on platforms including Windows, Linux and Android.
2.2.2.4 Tool Support
In addition to the platform support, the activation system will provide offline tools to support rapid prototyping. These tools will include a visual editor for creating and updating configurations for context recognition applications. The visual editor will provide a user-friendly interface which allows developers to drag, drop, parameterize and wire existing configurations to create new state machines or update existing ones. The visual editor will be implemented as a plug-in for the Eclipse IDE.
Figure 12 – Activation System Tool Support
In addition to the visual editor, the activation system will provide a set of configurations as part of the configuration toolkit for detecting different contexts such as location, speech, motion, etc. With such a toolkit available, the developer does not have to create configurations from scratch, train classifiers for them and test them, thus saving a lot of development effort. The tool support for the activation system is depicted in Figure 12.
2.3 Optimized Configuration Folding
The energy savings achieved by the configuration folding algorithm for the component system and the activation system described earlier are limited when the components to be folded have different parameterizations. A difference in parameterization requires the use of transformation components as described in [IQBA12]. Figure 13 highlights this limitation: the two configurations contain the same component A, but the two instances differ in their parameterization, indicated as A and A'. The resulting folded configuration contains a transformation component to perform the folding for A', but this prevents the further folding of the other components, as shown in Figure 13.
Figure 13 – Folded Configuration with Transformation Component
As can be seen in Figure 13, further folding of components would be beneficial, as both configurations have the same components at the higher levels. This can only be achieved if the transformation introduced for component A' is applied later, in other words, at a higher level of the folded configuration, as shown in Figure 14.
Figure 14 – Folded Configuration with Delayed Transformation
transformation component does not affect the correctness of the configuration. These characteristics are being investigated and will be further explored during the course of the project.
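To make the role of a transformation component concrete, consider the common case where A and A' differ only in sampling rate. A transformation can then derive the lower-rate stream from the shared higher-rate one, so only a single sampling component has to run. The following decimation sketch is an invented illustration of such a transformation, under the assumption that A' needs every k-th value of A's output.

```java
import java.util.ArrayList;
import java.util.List;

public class Decimator {
    // Hypothetical transformation component: adapts the output of a shared
    // component A (sampling at rate r) for consumers that expect A'
    // (sampling at rate r/k) by keeping every k-th value.
    public static List<Double> decimate(List<Double> samples, int k) {
        List<Double> out = new ArrayList<>();
        for (int i = 0; i < samples.size(); i += k) out.add(samples.get(i));
        return out;
    }

    public static void main(String[] args) {
        List<Double> atFourHz = List.of(0.1, 0.2, 0.3, 0.4, 0.5, 0.6);
        // A' consumers receive the 2 Hz stream derived from A's 4 Hz stream.
        System.out.println(decimate(atFourHz, 2)); // [0.1, 0.3, 0.5]
    }
}
```

Whether such a transformation can be delayed to a higher level of the folded configuration, as discussed above, depends on whether the intermediate components commute with it without affecting correctness.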
3 Context Recognition Components
The context recognition components are the basic building blocks of a context recognition application. The component toolkit provided with the component system consists of a large number of sampling, preprocessing and classification components. These components can be used to create new applications. Moreover, with the toolkit support, developers can implement their own components with little effort. For the scope of this project and the test applications to be developed, we mostly focus on physical activity recognition, location recognition and audio recognition components, as shown in Figure 15. During the second iteration of the implementation, location prediction and duration components will also be developed. These components will be able to predict the next location of the user as well as the intended duration of stay at a particular location. With the help of such prediction components, applications built on top of the GAMBAS middleware will be able to provide users with timely and relevant services. Moreover, service providers will be able to better plan their business and service models.
Figure 15 – Context Recognition Components Overview
3.1 Activity Recognition Components
3.1.2 Trip Recognition
Knowing the location of a user is an important piece of information both for users and for transport service providers. Having information about the mode of locomotion between two locations can also be beneficial for service providers. Knowing how a trip was made, i.e. whether the user
thereby ensure improvement in quality over time. Concerning the part of the speech input that does not contain the location information, the target is to allow the user to phrase his or her intention as freely as possible. This is guaranteed by "semantically inflating" the recognition grammar, i.e., by including linguistic phenomena such as synonymous expressions in the speech recognition grammar.
4 Data Acquisition Framework Integration
This section describes the integration of the data acquisition framework (DQF) with the other system elements of the GAMBAS middleware. The two other system elements with which the DQF directly collaborates are the privacy preserving framework (PRF) and the semantic data storage (SDS), the latter being part of the interoperable data representation and query processing work package. As described in [GCAD12] and depicted in Figure 16, data collection by the user's device, either for personal or collaborative use, requires the device to store the data either locally or remotely, depending on the services required from the service providers. In either case, local and remote interfaces are needed between these two system elements to store the data in the appropriate manner and in the right format. Another key feature of the GAMBAS architecture is the privacy-preserving exchange of the user's context data with the service providers. For this reason, the PRF resides on the user's device. In order to ensure that only data in accordance with the privacy policy of the user is sent to the service providers, control interfaces are needed between these two system elements.
Figure 16 – Integration of DQF, SDS, PRF
The local and remote interfaces between the DQF and the PRF and between the DQF and the SDS are part of the GAMBAS middleware and are managed through middleware components, namely the GAMBAS software development kit (SDK) and the GAMBAS core service. In the following, a brief overview of the GAMBAS middleware is given, followed by details on the integration of the DQF with the other system components.
4.1 GAMBAS Middleware
The application programming interface (API) and the service programming interface (SPI) together constitute the GAMBAS software development kit. In order to make use of GAMBAS functionality, GAMBAS applications need to interact with the GAMBAS system components. GAMBAS applications make these interactions by calling the methods provided by the API. Depending on the application requirements, these calls may request access to the data acquisition framework, the semantic data storage or the intent-aware user interface components.
Figure 17 – Integrated GAMBAS Middleware
As mentioned earlier, the DQF has two direct interfaces, one with the PRF and one with the SDS system component. If the DQF requires access to the local PRF or the local SDS, this interaction is handled solely by the core service running in the GAMBAS middleware and the methods provided by the GAMBAS API. If the DQF requires access to a remote SDS, this interaction additionally involves the BASE middleware component for the communication between the local GAMBAS middleware and the GAMBAS components running remotely.
4.2 DQF Components
As specified earlier in this document, the DQF consists of two sub-components, namely the component system and the activation system. The component system provides a component-based abstraction for GAMBAS applications, whereas the activation system provides a state machine abstraction for them. Both of these components are equipped with component and configuration toolkits, and a GAMBAS application using either of the abstractions requires access to these toolkits in order to instantiate its required configurations. The access to the toolkits is handled by the core service and the GAMBAS SDK. Upon receiving requests from the applications, the SDK and the core service pass the request to the appropriate toolkit in the DQF.
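The routing of application requests to the appropriate toolkit can be sketched as follows. All class and method names below are illustrative assumptions for the purpose of this sketch, not the actual GAMBAS SDK API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of how the core service might dispatch an
// application's request to the matching DQF toolkit.
public class ToolkitRouter {

    // Maps an abstraction name to the toolkit that serves it.
    private final Map<String, String> toolkits = new HashMap<>();

    public ToolkitRouter() {
        // The component system and the activation system each expose a toolkit.
        toolkits.put("component", "ComponentToolkit");
        toolkits.put("activation", "ConfigurationToolkit");
    }

    // Resolve which toolkit handles a request for the given abstraction.
    public String route(String abstraction) {
        String toolkit = toolkits.get(abstraction);
        if (toolkit == null) {
            throw new IllegalArgumentException("Unknown abstraction: " + abstraction);
        }
        return toolkit;
    }
}
```

In this sketch, an application that uses the state machine abstraction would be routed to the configuration toolkit of the activation system, while one that uses the component-based abstraction would be routed to the component toolkit.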
4.2.1 Privacy Preservation Framework Interface
Privacy-related aspects of the GAMBAS middleware are handled by the PRF. The main purpose of the PRF in the GAMBAS middleware is to allow controlled access to the personal and private information of the user. Since the potential services targeted in GAMBAS require service providers to access context information about the user, it is of utmost importance that only the relevant information is shared with them. The DQF performs this check through the control interfaces provided by the PRF. Specifically, the PRF provides different methods that allow the DQF to check if a certain data type is allowed to be exported. In this way, the middleware ensures that the context recognition applications can gather and export only the allowed context features. In order to achieve this, the DQF checks for permissions with the privacy preserving framework whenever a new application is started.
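The permission check described above can be illustrated with the following minimal sketch; the `PrivacyGate` class and its methods are assumptions made for illustration and do not reflect the actual PRF control interface:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a PRF-style control interface: the DQF asks
// whether a data type may be gathered and exported before activating
// the corresponding context recognition components.
public class PrivacyGate {

    // Data types the user's privacy policy permits for export.
    private final Set<String> allowedTypes = new HashSet<>();

    public void allow(String dataType) {
        allowedTypes.add(dataType);
    }

    // Control-interface style check: may this data type leave the device?
    public boolean isExportAllowed(String dataType) {
        return allowedTypes.contains(dataType);
    }

    // Called when a new application starts: keep only the requested
    // context features that the user's policy actually permits.
    public Set<String> grantedFeatures(Set<String> requested) {
        Set<String> granted = new HashSet<>(requested);
        granted.retainAll(allowedTypes);
        return granted;
    }
}
```

With such an interface, a newly started application requesting, say, location and audio features would only receive those features that intersect with the user's policy.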
4.2.2 Semantic Data Storage Interface

Both SDS interfaces manage all database operations such as the addition, deletion and modification of data according to the data models. The local SDS interface is realized as an Android service for GAMBAS applications running on Android, and as a Java service for GAMBAS applications running on J2SE devices. For both types of devices, the remote interface is realized as a BASE [BASE] service.
Figure 19 – Local and Remote Components for Storing Data in SDS
4.2.2.1 Application Example
An example of an application in which interaction between the DQF and the SDS is required is a sensor data sampling application, which samples data from sensors such as the audio sensor, the accelerometer, the Wi-Fi sensor, the GPS sensor, etc., and stores the samples either locally on the device or remotely at a server for further processing or analysis. If such an application uses the GAMBAS middleware and is required to store the data in the SDS, the sampled data acquired by the application is passed as parameters to either the local SDS component or the remote SDS component. In either case, the sampled data passed as parameters should be in accordance with the semantic data model of the storage where it is intended to be stored. Possible fields for the sensing data to be stored may consist of the sensor type, the sensor reading, the reading timestamp, etc. If the sensor data is to be stored in the local SDS, it is first passed to the local SDS component, the data fields are serialized as string objects, converted to RDF triples and then stored in the SDS. If the sensor data is to be stored in a remote SDS, the BASE middleware is additionally used to communicate with the remote site and the remote storage.
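The serialization step described above can be sketched as follows. The class, the predicate URIs and the triple format are illustrative assumptions; the actual SDS data model is defined in [GCDDSQP13]:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of serializing a sensor sample into simple
// N-Triples-style strings before handing it to the SDS component.
public class SensorSample {

    private final String sensorType;
    private final double reading;
    private final long timestamp;

    public SensorSample(String sensorType, double reading, long timestamp) {
        this.sensorType = sensorType;
        this.reading = reading;
        this.timestamp = timestamp;
    }

    // Serialize the sample fields as string triples; the predicate URIs
    // here are placeholders, not the GAMBAS vocabulary.
    public List<String> toTriples(String subjectUri) {
        List<String> triples = new ArrayList<>();
        triples.add("<" + subjectUri + "> <http://example.org/sensorType> \"" + sensorType + "\" .");
        triples.add("<" + subjectUri + "> <http://example.org/reading> \"" + reading + "\" .");
        triples.add("<" + subjectUri + "> <http://example.org/timestamp> \"" + timestamp + "\" .");
        return triples;
    }
}
```

For local storage, such triples would be inserted directly via the local SDS component; for remote storage, they would additionally be transported to the remote site through BASE.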
5 Requirements Coverage Analysis
In order to check whether the requirements for the data acquisition framework specified in D1.1 are covered by this specification, we perform a requirements coverage analysis to validate the design and details of the data acquisition framework given in the above sections. In the requirements specification document, the requirements are categorized according to the various work packages. Moreover, the requirements are further sub-classified into different categories. Below we perform the requirements coverage analysis according to the subcategories of requirements related to the adaptive data acquisition framework.
5.1 Framework-related Requirements
The framework related requirements define different requirements on the adaptive data acquisition framework. In the following, we briefly discuss whether they are covered and if so, where the associated discussion can be found.
ID AA_001
Description The data acquisition framework shall allow multimodal data acquisition.
Type Functional and data requirements

ID AA_002
Description The data acquisition framework shall be extensible.
Type Functional and data requirements
Priority High

AA_002: This requirement has been covered in Section 2.1. The design description of the data acquisition framework given in the previous sections ensures that the framework is extensible with regard to the addition of new components, configurations, applications, etc.
ID AA_003
Description The data acquisition framework shall allow automated acquisition of context data.
Type Functional and data requirements
Priority High
AA_003: This requirement has been covered in Section 2.2. The adaptive data acquisition framework ensures this requirement through the use of the activation system. The activation system enables automatic execution of context recognition configuration without any input from the user.
ID AA_012
Description The data acquisition framework shall allow optimization of context recognition components by allowing configurable parameters.
Type Functional and data requirements
Priority High
AA_012: This requirement has been covered in Section 2.1.1.1.1. The component model described allows developers to add/update parameters in the components to optimize their use.
ID AA_004
Description The data acquisition framework shall be lightweight.
Type Performance requirements

ID AA_007
Description The data acquisition framework shall enable energy efficient context recognition for several applications.
Type Performance requirements
Priority Medium

AA_007: This requirement has been covered in Section 2.1.2. The component system and the activation system apply configuration folding to simultaneously executed configurations such that a single folded configuration serves all of them with a minimized number of redundant computations.
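The idea behind configuration folding, described in detail in [IQBA12], can be illustrated with a deliberately simplified sketch in which each configuration is a pipeline of named components and folding deduplicates the shared stages:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Simplified illustration of configuration folding: several
// simultaneously executed configurations are folded into one set in
// which shared components (e.g. a common audio sampler or FFT stage)
// appear only once, so redundant computations are avoided. The real
// folding operates on component graphs, not flat name lists.
public class ConfigurationFolding {

    public static Set<String> fold(List<List<String>> configurations) {
        // LinkedHashSet keeps pipeline order while removing duplicates.
        Set<String> folded = new LinkedHashSet<>();
        for (List<String> configuration : configurations) {
            folded.addAll(configuration);
        }
        return folded;
    }
}
```

For example, folding a speech detection pipeline ["audio", "fft", "speech"] with a sound classification pipeline ["audio", "fft", "sound"] yields four components instead of six, since the audio sampling and FFT stages are executed only once for both consumers.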
ID AA_009
Description The data acquisition framework shall provide a control interface to the privacy framework.

The DQF interacts with the privacy framework to ensure that the user’s context data is categorized appropriately with associated metadata when exported to the SDS. With the journey prediction and destination prediction components, the data acquisition system will be able to enable the co-use of public transport by friends or colleagues. The framework will also provide speech and sound recognition components for detecting ambient sounds, announcements, discussions, etc. Moreover, with these components, the user can speak his preferences to the system instead of typing them.
5.2 Developer-related Requirements
The developer-related requirements define different requirements on the adaptive data acquisition framework from an application developer’s perspective. In the following, we briefly discuss whether they are covered and if so, where the associated discussion can be found. The component-based design allows developers to reuse existing components in their applications, which can have totally different requirements from the applications for which the components were first developed.
ID AA_013
Description The data acquisition framework shall provide tools to simplify the offline training for recognizing different types of context.
ID AA_022
Description The data acquisition framework shall provide tool support to simplify the development of new methods to recognize context.
Type Functional and data requirements
Priority Low
AA_022: This requirement has been covered in Sections 2.1.2.4 and 2.2.2.4. Both the component system and the activation system will be equipped with toolkits and visual editors, which will allow the development of new components and configurations as well as the update of existing ones.
5.3 Data-related Requirements
The data-related requirements define different requirements on the adaptive data acquisition framework from a data acquisition perspective. In the following, we briefly discuss whether they are covered and if so, where the associated discussion can be found.
ID AA_023
Description The intent recognition component must record context histories in a storage‐efficient manner.
Description The sound recognition components shall enable the classification of different environmental sounds.
Description The speech recognition components shall support server-less operation with a limited vocabulary.
Type Functional and data requirements
Priority High
Since mobile phones are resource-constrained devices, the sound processing engine achieves this using a limited vocabulary and minimal device resources.
6 Conclusion
This document has described the details of the adaptive data acquisition framework to be developed as part of WP2 of the GAMBAS project. The document has described the different building blocks of the framework, which mainly include the component system and the activation system. The document has also highlighted different context recognition components to be developed during the course of the project. This document builds on the requirements specification, use case and architecture overview documents of GAMBAS released in the previous months.
Due to the iterative development process followed by the GAMBAS project, the WP2 deliverables are refined in two iterations. As the second of the two versions, this document not only provides details about the adaptive data acquisition framework but also extends the first version with details on the innovations being carried out in WP2 and on the integration of this framework with the parts of the GAMBAS middleware developed in other work packages such as WP3 and WP4.
7 Acronyms
Acronym Explanation
DQF Data Acquisition Framework
PRF Privacy Preserving Framework
SDS Semantic Data Storage
GAMBAS Generic Adaptive Middleware for Behavior‐driven Autonomous Services
GPS Global Positioning System
GSM Global System for Mobile Communication
Wi‐Fi Wireless Fidelity
FFT Fast Fourier Transform
8 Bibliography
[ANDR] Android Project, Project Homepage, September, 2012, online at http://developer.android.com/
[BASE] BASE Project, Project Homepage, September, 2012, online at http://code.google.com/p/pppc‐base/
[GAMM1] E. Gamma, R. Helm, R. Johnson, J. Vlissides, “Design Patterns: Elements of Reusable Object‐Oriented Software,” Addison‐Wesley.
[GCRS12] GAMBAS Consortium, Requirement Specification, Public Deliverable D1.1, May, 2012, online at http://www.gambas‐ict.eu
[GCUC12] GAMBAS Consortium, Use Case Specification, Public Deliverable D1.2, May, 2012, online at http://www.gambas‐ict.eu
[GCAD12] GAMBAS Consortium, Architecture Design, Public Deliverable D1.2.1, July, 2012, online at http://www.gambas‐ict.eu
[IQBA12] Iqbal, M. U., Handte, M., Wagner, S., Apolinarski, W., Marron, P. J., “Enabling energy‐efficient context recognition with configuration folding,” Pervasive Computing and Communications (PerCom), 2012.
[GCIS13] GAMBAS Consortium, Integrated System, Private Deliverable D1.4.1, May, 2013.
[GCDDSQP13] GAMBAS Consortium, Distributed Data Storage And Query Processing Approaches for Dynamic Data, Public Deliverable D4.3.1, April, 2013, online at http://www.gambas‐ict.eu