This document gives a short overview of the iMinds wireless testbed facilities and describes how these testbeds can be used during the EVARILOS open challenge for the evaluation of indoor localization solutions.
The goal of the open challenge is to evaluate localization solutions in a realistic environment. Track 2 of the open challenge focuses on benchmarking complete localization solutions from contestants who deploy their own hardware or want to install custom software on the testbed. As such, this track will evaluate a full (commercial or non-commercial) localization system, including aspects such as installation time, energy consumption, etc.
To participate, contestants can either:
The EVARILOS challenge will provide the contestants with opportunities to:
2. Open Challenge: Track 2 technical annexes
The solutions can be tested on one of the two available test facilities. Full technical information is available on the w-iLab.t website.
The office environment testbed is deployed in the iMinds office spaces: offices, meeting rooms, student lab rooms, corridors, etc. The office testbed spans three floors that are actively used during the day. It is a dynamic environment: people move around the premises, doors are constantly opened and closed, and slight movement of furniture (chairs, tables, etc.) is expected and usual. Furthermore, uncontrolled wireless interference typical of office environments can be expected, including Wi-Fi, DECT, etc.
Devices from the contestants can be installed in the office testbed, or custom software can be deployed on approximately 200 wireless node locations.
Each node location is equipped with the following hardware.
More information is available at the w-iLab.t office configuration web site.
A floorplan of the third office floor (90m by 18m) is shown below (floorplans of the other two floors are very similar and are available on request).
A figure of the available node locations (on all three floors) is shown below.
The Zwijnaarde testbed is located in an unmanned utility room above a cleanroom. Very little outside interference is present in this testbed. Due to the presence of many metal objects, the environment resembles certain manufacturing environments. No persons are present in the environment, and as such the environment is very stable.
Devices from the contestants can be installed in the Zwijnaarde testbed, or custom software can be deployed on approximately 200 wireless node locations.
The w-iLab.t Zwijnaarde hosts 60 fixed wireless nodes, each equipped with:
Mobile wireless nodes are available for repeated remote testing. More information is available on the ilabt website.
A floorplan of the Zwijnaarde testbed (60m by 20m) is shown below:
Finally, a figure of the available node locations is shown below:
Participants are allowed to deploy their own hardware and/or software. All competing systems have to be either RF-based or, in the case of multimodal systems, have to possess a strong RF-based component. Hybrid systems that additionally use other technologies, such as infrared, ultrasound, or RFID, are allowed.
Software can be sensor firmware (TinyOS, Contiki) or PC software (preferably Linux, although Windows is also available). This software can be installed remotely on the available hardware platforms.
If hardware is installed, the contestants are invited to visit the test facility and perform the deployment with the support of the EVARILOS consortium. This ensures a correct installation, which remains the responsibility of the contestant. The deployment process is limited to at most two days. A separate time slot will be allocated to each contestant to preserve confidentiality. For powering additional hardware, Power over Ethernet (max. 30 W, 12 V), USB, or power plugs are available. A wireless/wired backbone is available. Location information should be made available through an easily accessible software interface.
During the challenge, the performance of the localization solution will be evaluated at multiple locations and under different interference conditions (without interference, with ZigBee interference, with WiFi interference, etc.).
For testing the performance of the localized device of the system under test at the different evaluation points, either a test person (w-iLab.t Ghent) or a remote-controlled robot (w-iLab.t Zwijnaarde) will be used.
No action will be required from the participants to choose the evaluation points or to create interference. The only requirement from the system under test is that it can provide location estimates on request (see next section).
This section gives instructions for competitors on how to interact with the benchmarking framework.
Competitors have to provide an HTTP Uniform Resource Identifier (URI) on which their algorithm listens for requests for location estimation. Upon request, the algorithms must be able to provide the location estimate as a JSON response in the following format:
JSON parameters coordinate_x and coordinate_y are required and as such must be reported upon request. Parameter coordinate_z is optional, due to the 2D evaluation environment; if it is provided by a SUT, the evaluation team will also calculate the 3D localization error, although this information will not be used in the final scoring. Finally, parameter room_label is optional; if it is not provided, the EVARILOS Benchmarking Platform will automatically derive the room estimate from the estimated x and y coordinates.
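As an illustration, a minimal sketch of such an endpoint is given below, assuming Python 3 and its standard library; the host, port, URI path and coordinate values are hypothetical, and only the JSON field names follow the description above.

```python
# Minimal sketch of a SUT-side HTTP endpoint (hypothetical host/port/path).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocationEstimateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real SUT would query its localization engine here;
        # the values below are purely illustrative.
        estimate = {
            "coordinate_x": 12.3,   # required
            "coordinate_y": 4.7,    # required
            "coordinate_z": 1.5,    # optional (2D evaluation environment)
            "room_label": "3.21",   # optional, derived automatically if absent
        }
        body = json.dumps(estimate).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LocationEstimateHandler).serve_forever()
```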
The technical team will support the competitors in deploying their algorithms on the desired hardware, interfacing the SUT with the EVARILOS Benchmarking Platform, controlling the robotic mobility platform, generating the different interference scenarios, and monitoring interference. Furthermore, each competitor will be given 4 hours to train their algorithms in the testbed environment before the evaluation process starts. During that time the competitors will also be supported by the technical team.
This chapter presents the interference scenarios that will be artificially generated in the w-iLab.t testbeds in order to evaluate the different indoor localization algorithms. The goal is to determine if, and to what extent, different types and amounts of RF interference influence indoor localization performance. The text below presents the reference scenario and describes the three interference scenarios that will be used for the evaluation in the testbeds.
2.3.1 Reference Scenario
This reference scenario is instantiated either on the 3rd floor of the w-iLab.t testbed in Ghent or on the w-iLab.t Zwijnaarde testbed. It is called the “reference scenario” since no artificial interference is generated, while the presence of uncontrolled interference is monitored and minimized.
At multiple evaluation points, the indoor localization SUT will be requested to estimate the location. The SUT device will be carried to each evaluation point by the robotic platform (Zwijnaarde) or by a person (Ghent). The positions provided by the robotic platform are highly accurate, with mean localization errors of only a few cm, so they can be considered the ground truth for the indoor localization experiments.
The experiments will be performed in the afternoons, so that the influence of uncontrolled interference is minimized. Furthermore, the wireless spectrum will be monitored using WiSpy devices, and all measurements during which the interference exceeds a certain threshold will be repeated. Finally, during each experiment a measurement of the wireless spectrum will be taken with a spectrum analyser at a predefined location.
2.3.2 Interference Scenario 1
In this interference scenario, interference is created using IEEE 802.15.4 Tmote Sky nodes. The interference type is jamming on one IEEE 802.15.4 channel with a constant transmit power of 0 dBm. Five of these jamming nodes will be present in the testbed environment. A summary of this interference scenario is given below.
2.3.3 Interference Scenario 2
The second interference scenario utilizes interference types that are typical for office and home environments. Namely, interference is emulated using 4 Wireless Fidelity (WiFi) embedded Personal Computers (PCs): a server, an access point (AP), a data client, and a video client. The server acts as a gateway for the emulated services. The data client is emulated as a TCP client continuously sending data over the AP to the server. Similarly, the video client is emulated as a continuous UDP stream source of 500 kbps with a bandwidth of 50 Mbps. The AP operates on a WiFi channel overlapping with the SUT’s channel, with the transmission power set to 20 dBm (100 mW). A summary of the described interference scenario is given below.
2.3.4 Interference Scenario 3
For the third interference scenario, a signal generator will be used to generate synthetic interference. The generated synthetic interference will have the envelope characteristic of WiFi signals, but without any Carrier Sensing (CS). A summary of interference scenario 3 is given below.
This chapter describes the evaluation procedure that will be followed for Track 2 of the challenge. In order to objectively compare and evaluate the different solutions, the following methodology will be applied:
2.4.1 Evaluation points
The indoor localization algorithms will be evaluated at 10 different evaluation points (w-iLab.t Ghent) or 25 evaluation points (w-iLab.t Zwijnaarde) under four interference scenarios. The evaluation points will be selected by the evaluation team and will be the same for all evaluated algorithms. In the first run, all algorithms will be evaluated in the environment without controlled interference and the metrics will be calculated. The following three evaluation runs will be done in the environment with the three different interference scenarios described before. The locations of the interference sources will be selected by the evaluation team and will be the same for all evaluated algorithms. At each evaluation point, the EVARILOS Benchmarking Platform will request a location estimate from the SUT. The data collected at each point will be automatically stored, and the metrics will be calculated and presented in real time.
2.4.2 Evaluation Metrics
For Track 2 of the challenge, the following metrics will be calculated:
Performance metrics - obtained from the experiment:
Derived metric - calculated from the performance metrics:
Deployment metrics - obtained during the experiment:
Point level accuracy at one evaluation point is defined as the Euclidean distance between the ground truth provided by the robotic platform (xGT, yGT) and the location estimated by the indoor localization algorithm (xEST, yEST), given by the following equation:
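The corresponding equation, reconstructed here from the definition above (the original equation figure is not reproduced), is:

$$ \text{accuracy}_{\text{point}} = \sqrt{(x_{GT} - x_{EST})^2 + (y_{GT} - y_{EST})^2} $$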
Room Level Accuracy
Room level accuracy of a location estimate is a binary metric stating the correctness of the estimated room, given by the following equation:
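A formulation consistent with this definition (a reconstructed sketch, not the original figure) is:

$$ \text{accuracy}_{\text{room}} = \begin{cases} 1, & \text{if the estimated room equals the ground-truth room} \\ 0, & \text{otherwise} \end{cases} $$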
Latency of Location Estimation
Latency, or delay, of location estimation is the time that the SUT needs to report the location estimate when requested. The time that will be measured in the evaluation is the difference between the moment the request for indoor localization is sent to the SUT (trequest) and the moment the response arrives (tresponse), given by the following equation:
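The corresponding equation, reconstructed from the definition above, is:

$$ \text{latency} = t_{\text{response}} - t_{\text{request}} $$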
Energy efficiency of the localized node
This information needs to be provided by the contestant. The energy efficiency is expressed in watts, based on the available datasheets.
Interference robustness of an indoor localization algorithm is a metric that reflects the influence of different interference types on the performance of the indoor localization algorithm. In this evaluation, interference robustness will be expressed as the percentage of change of the other metrics in the scenarios with interference, in comparison to the performance in the scenario without interference (the reference scenario). For a generalized metric M, the interference robustness is given by the following equation:
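A formulation consistent with this description is sketched below; it assumes a metric for which lower values are better (e.g. localization error or latency), and the sign of the difference would be inverted for metrics where higher values are better:

$$ IR_M = \max\left(0,\; \frac{M_{\text{interference}} - M_{\text{reference}}}{M_{\text{reference}}} \cdot 100\,\% \right) $$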
where Mreference is the value of metric M in the reference scenario and Minterference is the value of metric M in the scenario with interference. Note that if an algorithm performs better on the performance metric M in the scenario with interference than in the reference scenario, the interference robustness metric will be set to 0 %.
Setup overhead - physical installation time
This metric measures the time that is needed to install the complete system. The time is measured from the moment the installation of the SUT starts until all physical components are installed correctly. The number of people performing the installation will also be taken into account. This includes infrastructure, software, set-up and configuration time.
Setup overhead - configuration complexity
To capture relevant configuration data, a questionnaire will be used. This questionnaire will be used to evaluate the complexity of the configuration of the system. Example questions will include:
2.4.3 Capturing the Evaluation Metrics
The evaluation procedure will be performed in four steps, one per scenario (the reference scenario and the three interference scenarios). In each step, for each of the evaluation points, the set of metrics (point accuracy, room accuracy, latency) will be obtained. For each set, the 75th percentile of point level accuracy and latency will be calculated, together with the percentage of correctly estimated rooms, as shown in the figure below.
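As an illustration, the per-scenario aggregation could be computed as in the following sketch, assuming the metrics are available as plain per-point lists and using numpy's default percentile method (the official calculation may differ):

```python
# Sketch of the per-scenario aggregation of the performance metrics.
import numpy as np

def aggregate_scenario(point_errors_m, latencies_s, rooms_correct):
    """rooms_correct is a list of 0/1 flags, one per evaluation point."""
    return {
        "point_accuracy_p75_m": float(np.percentile(point_errors_m, 75)),
        "latency_p75_s": float(np.percentile(latencies_s, 75)),
        "room_accuracy_pct": 100.0 * sum(rooms_correct) / len(rooms_correct),
    }

# Hypothetical values for one scenario with four evaluation points:
print(aggregate_scenario([1.2, 0.8, 2.5, 1.9], [0.4, 0.6, 0.5, 0.7], [1, 1, 0, 1]))
```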
Interference robustness is calculated as follows. For each interference scenario, the interference robustness is calculated for each performance metric. The overall interference robustness is the interference robustness averaged over all interference scenarios and all performance metrics, given by the following equation:
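A formulation consistent with the description below, i.e. an average over the 3 × 3 = 9 per-scenario, per-metric robustness values (reconstructed, not the original figure), is:

$$ IR_{\text{overall}} = \frac{1}{9} \sum_{i=1}^{3} \left( M_1(i) + M_2(i) + M_3(i) \right) $$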
In the equation, the sum goes over all three interference scenarios (i = 1, 2, 3), and M1(i), M2(i) and M3(i) are the interference robustness of the 75th percentile of point accuracy, the interference robustness of the percentage of room level accuracy, and the interference robustness of the 75th percentile of latency for scenario i, respectively. This process is illustrated in the figure below.
2.4.4 Calculation of Final Score
Final scores will be calculated according to the approach described in the EVARILOS Benchmarking Handbook (EBH) and presented in the figure below.
The EBH proposes calculating the score for each metric according to a linear function that is defined by specifying the minimal and maximal acceptable values for the metric. Furthermore, the EBH proposes the use of weighting factors to define the importance of each metric for a given use case.
In general, the linear translation function for calculating the score of each particular metric is given in the figure below, where score can vary from 0 to 10.
The minimal and maximal acceptable values are denoted Mmin and Mmax, respectively. Note that Mmin can be bigger than Mmax: e.g., when defining acceptable point accuracy values one reasons in terms of acceptable localization error margins, so Mmin is the biggest acceptable error, while Mmax is the desired average localization error.
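A clipped linear translation function consistent with this description (a sketch, not the official definition) is:

$$ \text{score}(M) = \min\left(10,\; \max\left(0,\; 10 \cdot \frac{M - M_{\min}}{M_{\max} - M_{\min}} \right) \right) $$

This form also covers the case where Mmin is bigger than Mmax: for error-like metrics the fraction decreases as the measured value grows, so the score drops from 10 (at the desired value) to 0 (at the largest acceptable value).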
The following marginal values will be used for the different metrics.
For the calculation of the final score, the intermediate scores will be weighted. The individual weights depend on the actual use case one is interested in. Therefore different categories will be introduced (see next section).
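As an illustration, the weighted combination could look like the following sketch; the metric names and weights are hypothetical, since the actual weights are defined per category by the organizers:

```python
# Sketch of a per-category final score as a weighted combination of the
# intermediate (0-10) metric scores.
def final_score(scores, weights):
    total_weight = sum(weights.values())
    return sum(weights[m] * scores[m] for m in scores) / total_weight

# Hypothetical category that emphasizes point accuracy:
scores = {"point_accuracy": 8.0, "room_accuracy": 6.5, "latency": 9.0}
weights = {"point_accuracy": 0.6, "room_accuracy": 0.2, "latency": 0.2}
print(final_score(scores, weights))  # ≈ 7.9
```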
2.4.5 Evaluation in different categories
Depending on the application, different metrics will have a different level of importance. Therefore, different categories are introduced. Based on a single measurement set, a final score will be calculated for each category.
For each of the categories, the following values will be used for the weights:
Winners will be declared per category, provided there is a sufficient number of candidates. The prize money will be divided equally among the winners of all categories.
By varying the acceptable and desired values, as well as the weight factors, a number of categories will be defined (most accurate, best room accuracy, etc.). Only the top three of the commercial solutions will be mentioned by name in each category. After the competition, participants will have the opportunity to remain anonymous or to have their name indicated in the rankings.
To participate in the open challenge, an extended abstract (not exceeding 4 pages, including figures and tables) should be submitted before May 1st, 2014.
The submission details reported in the abstract should at least include:
Additional technical information is available upon request.