DAVIS Driving Dataset 2020 (DDD20) [repost]
DDD20 is an excellent dataset. The DDD20 readme is reposted here for the convenience of researchers in China.
[Repost source] https://docs.google.com/document/d/1Nnyjo4j0rvdgHQ0cS8z0Q1QBRLHDtwh7XW1u9ygmQEs/edit#
DAVIS Driving Dataset 2020 (DDD20)
Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
March 2020
Web: This document | analytics
DDD20 (DDD20 website) is an expanded release of the first public end-to-end training dataset of automotive driving using an inilabs DAVIS event+frame camera that was developed in the Sensors Group of the Inst. of Neuroinformatics, UZH-ETH Zurich.
DDD20 includes car data such as steering, throttle, GPS, etc. It can be used to evaluate the fusion of frame and event data for automobile driving applications.
See more Inst. of Neuroinformatics Sensors Group datasets here.
Change history
- Feb 2020: retitled to DDD20, adjusted layout, added slides, replaced header image with heatmap of driving locations
- Sept 2018: added folders for lens calibration and camera settings
- Jun 2018: added RAL submission, added get_stats script usage
- Nov 2017: added Ford Focus dataset to original DDD17
- Sept 4, 2017: added wasabi download option, added DAVIS description
- Aug 19, 2017: added more tips on recording
- June 25, 2017: added more detailed dataset description
- June 21, 2017: added initial info on recording new data
- June 5, 2017: created
Table of contents
2. Download via a set of URLs from wasabi cloud storage
DDD20 dataset files spreadsheet
Preliminary jAER aedat recordings
Using HDFView to see structure of data files
Note for python newbies regarding PATH
view.py: Get help for the viewer
view.py: Play a file from the beginning
Play a file, starting at X percent
Play a file starting at second X
Rotate the scene during the playback
get_stats.py: Export statistics in a file
export.py: Export data for training a network (reduce data to frame-based representation):
Example: make HDF5 file with 10k event frames
Detecting corrupted recordings
Citing DDD20
The main DDD20 citation is
Hu, Y., Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2018). "DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction". Special session Beyond Traditional Sensing for Intelligent Transportation, The 23rd IEEE International Conference on Intelligent Transportation Systems, September 20 – 23, 2020, Rhodes, Greece. arXiv [cs.CV]. arXiv. http://arxiv.org/abs/2005.08605
Previous publications include:
- Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). DDD17: End-To-End DAVIS Driving Dataset. arXiv:1711.01458 [cs]. Available at: http://arxiv.org/abs/1711.01458 (available online).
- Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). DDD17: End-To-End DAVIS Driving Dataset. In ICML'17 Workshop on Machine Learning for Autonomous Vehicles (MLAV 2017), Sydney, Australia. Available at: https://openreview.net/forum?id=HkehpKVG-&noteId=HkehpKVG- (only available via OpenReview). ICML Workshop page: https://sites.google.com/site/ml4autovehicles2017/home
The sensor used for DDD17 is the DAVIS, based on the original paper below (about a previous-generation sensor IC):
- Berner, Raphael, Christian Brandli, Minhao Yang, Shih-Chii Liu, and Tobi Delbruck. 2014. "A 240x180 10mW 12us Latency Sparse-Output Vision Sensor for Mobile Applications." IEEE J. Solid-State Circuits 49(10), p. 2333-2341. doi:10.1109/JSSC.2014.2342715. Get PDF here.
The DAVIS is based on a seminal DVS paper:
- Lichtsteiner, Patrick, Christoph Posch, and Tobi Delbruck. 2008. "A 128 x 128 120 dB 15 us Latency Asynchronous Temporal Contrast Vision Sensor." IEEE Journal of Solid-State Circuits 43(2), p. 566-576. doi:10.1109/JSSC.2007.914337. Get PDF here.
Screen shot of view.py output from one DDD17 recording
Video: https://www.youtube.com/embed/L_CFgfgsn7I
Samples of DDD20 videos
Contact
For questions related to the use of this dataset or the associated tools, please write [email protected] or [email protected]
Getting the dataset
The complete DDD20 dataset consists of two parts, the original DDD17 part and the additional DDD20 part.
There are two methods to get the data:
- Use Resilio Sync (a private BitTorrent protocol) to get the data. This method spreads the bandwidth and doesn't cost us anything.
- Contact us if the Resilio Sync method is not possible; you can then download via a set of URLs from wasabi cloud storage. Each download costs us about $50, so we will disable this method if it exceeds our budget. We appreciate use of the BitTorrent method above.
- md5 checksums of all the files are available in this file.
1. Using Resilio Sync
Use Resilio Sync to get the DDD17/DDD20 datasets:
- DDD17: Resilio Sync DDD17 dataset
- DDD20: Resilio Sync DDD20 dataset
Clicking on either link above will pop up a result like this:
Resilio Sync link result
On Windows, clicking the “I already have Sync” produces this result:
Selecting “Open Resilio Sync” results in this dialog:
Once you install and run Resilio Sync, you can select the option to “Selective Sync” to synchronize only part of the data.
On Linux, copy the link above and paste it into the Resilio web GUI by clicking the + button and selecting the "Enter a key or link" item.
Site hosting this and other data: http://sensors.ini.uzh.ch/databases.html
2. Download via a set of URLs from wasabi cloud storage
Each download of the DDD17/DDD20 dataset costs us about $50 so we have disabled this method.
Getting ddd20-utils software
We provide the ddd20-utils python software for recording, viewing, and exporting the data. Code and further instructions are available at https://github.com/SensorsINI/ddd20-utils. Clone it with
git clone https://github.com/SensorsINI/ddd20-utils
See the README.md for python library dependencies (opencv-python and h5py).
Useful command line
pip install opencv-contrib-python h5py
Branches
- master branch is the main one.
See ddd20-utils usage for further information.
Dataset contents
DDD20 dataset files spreadsheet
The spreadsheet “DVS Driving Dataset 2017 (DDD20) description” describes each recording (see next section).
Maps
Maps of each recording route recorded by GPS waypoints (when available) are in DDD17plus-maps.zip. Map links to HTML hosted on our institute website (Google Docs does not serve raw HTML) are included in the DDD20 file descriptions spreadsheet. The maps use our Google Maps API key under our "argo" project.
The USA recordings cover this heat map. The HTML of this heat map is in our DDD20 gdrive and can also be accessed via our institute website here.
DAVIS camera
The cameras used for these recordings are advanced (at time of recording) 346x260 pixel DAVIS vision sensors. The DAVIS outputs conventional global shutter gray scale image frames and dynamic vision sensor (DVS) events. The DAVIS also includes its own inertial measurement unit (IMU) that measures camera rotation and acceleration.
Camera settings
Typical values for camera settings used in these recordings are stored in the folder fordFocus/camerasettings:
The caer-config.xml file should be placed in the caer startup folder to set bias currents and all other camera parameters.
Lens calibration
The camera body used for the recordings melted during a hot day and had to be jury-rigged back together. After the recordings were completed, the body was replaced by a metal body and the original case was discarded. To provide a camera calibration that is indicative of the actual calibration, we recorded calibration data with the new body using the same Kowa 4.5mm C-mount lens type. This camera calibration data is stored in the folder fordfocus/lenscalibration:
Calibration was performed using the jAER SingleCameraCalibration class
The human-readable calibration data (cameraMatrix and distortionCoefs) is stored in OpenCV XML format:
cameraMatrix.xml
<?xml version="1.0"?>
<opencv_storage>
<cameraMatrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
2.5075316722003879e+002 0. 1.8456304977790828e+002 0.
2.5078290094673741e+002 1.1789773837487046e+002 0. 0. 1.</data></cameraMatrix>
</opencv_storage>
distortionCoef.xml
<?xml version="1.0"?>
<opencv_storage>
<distortionCoefs type_id="opencv-matrix">
<rows>1</rows>
<cols>5</cols>
<dt>d</dt>
<data>
-3.4983978901537061e-001 2.2939425998194787e-001
1.5441863394185178e-004 6.8956256994944149e-004
-1.0553679782358873e-001</data></distortionCoefs>
</opencv_storage>
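For reference, here is a minimal sketch (not part of ddd20-utils) of how these two files could be loaded with OpenCV's FileStorage API and used to undistort a frame; the file and node names match the XML above, and the 346x260 frame is a placeholder:
import cv2
import numpy as np

# Load the intrinsic matrix and distortion coefficients from the
# OpenCV-XML files in fordfocus/lenscalibration.
fs = cv2.FileStorage("cameraMatrix.xml", cv2.FILE_STORAGE_READ)
camera_matrix = fs.getNode("cameraMatrix").mat()
fs.release()

fs = cv2.FileStorage("distortionCoef.xml", cv2.FILE_STORAGE_READ)
dist_coefs = fs.getNode("distortionCoefs").mat()
fs.release()

# Placeholder 346x260 APS frame; replace with a frame read from a recording.
frame = np.zeros((260, 346), dtype=np.uint8)

# Remove the lens distortion using the calibration.
undistorted = cv2.undistort(frame, camera_matrix, dist_coefs)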
Vehicle data
The following vehicle-related variables have been recorded together with the visual DAVIS data.
| Variable name | Range | Units | Frequency (approx.) |
|---|---|---|---|
| steering_wheel_angle | -600 to +600 | degrees | max 10 Hz |
| torque_at_transmission | -500 to 1500 | Nm | max 10 Hz |
| engine_speed | 0 to 16382 | RPM | max 10 Hz |
| vehicle_speed | 0 to 655 | km/h | max 10 Hz |
| accelerator_pedal_position | 0 to 100 | % | max 10 Hz |
| parking_brake_status | boolean (1 = engaged) | | 1 Hz and immediately on change |
| brake_pedal_status | boolean (1 = pressed) | | 1 Hz and immediately on change |
| transmission_gear_position | states: -1, 0, 1, 2, 3, 4, 5, 6, 7, 8 | | 1 Hz and immediately on change |
| odometer | 0 to 16777214 | km | max 10 Hz |
| ignition_status | 0, 1, 2, 3 | | 1 Hz and immediately on change |
| fuel_level | 0 to 100 | % | max 2 Hz |
| fuel_consumed_since_restart | 0 to 4294967295.0 | L | max 10 Hz |
| headlamp_status | boolean (1 = on) | | 1 Hz and immediately on change |
| high_beam_status | boolean (1 = on) | | 1 Hz and immediately on change |
| windshield_wiper_status | boolean (1 = on) | | 1 Hz and immediately on change |
| latitude | -89 to 89 | degrees | max 1 Hz |
| longitude | -179 to 179 | degrees | max 1 Hz |
The dataset is divided into various recording files, generated under different weather, driving, and street conditions.
There are some slightly tricky aspects to the data fields. The Mondeo and Focus have different automatic transmissions; for example, the Mondeo has additional settings related to downhill mode. See the OpenXC docs for firmware types, car models, and available data types.
Folder structure
The overall folder structures are shown below.
DDD17/DDD20 folder structures
Preliminary jAER aedat recordings
Several preliminary recordings in DDD17 run1_test were done using jAER and these recordings are supplied in AER-DAT2.0 format as aedat files. The associated OpenXC data are supplied as .dat files. These preliminary recordings can be played in jAER’s AEViewer using the AEChip class eu.seebetter.ini.chips.davis.Davis346B and the EventFilter ch.unizh.ini.jaer.projects.e2edriving.FordVIVisualizer. A screenshot is shown below. The aedat file should be opened after the .dat file is loaded using the LoadFordVIDataFile button.
Screen shot showing 80ms DVS frame and associated OpenXC data from the recording “Davis346B-2016-12-15T11-45-08+0100-00INX006-0 to airport.aedat” with OpenXC data file “FordVI-2016-12-15-rec01-to-airport-and-back.dat”
HDF5 file contents
Using HDFView to see structure of data files
We recommend HDFView for exploring the file container structure. Here is the steering wheel data for one file.
Note: attempting to explore the dvs data throws an exception in HDFView, but this is expected because of the variable-length cAER data contained in the dvs container.
HDFView inspecting steering wheel angle (Ford Mondeo) and Vehicle speed (Ford Focus) data for one recording
Data structures in HDF5 files
The data structure is defined by the record.py code, which writes HDF5 files using caer.py methods to pack the DAVIS data into HDF5 arrays.
The code fragment that stores a packet of DAVIS data is
def save_aer(ds, data):
The rows are indexed by the DAVIS camera timestamp in seconds (dvs_timestamp). The individual DVS events, IMU samples, and APS frames have their own camera-local timestamps as hardware int32 microseconds from an arbitrary starting point (power-on or timestamp reset).
Each row has the packet timestamp (again), the packet header, and the data payload.
Car data like steering_wheel_angle shown above are straightforward tables of rows of sensor readings, each with a timestamp in double seconds in the first column and the data value in the second column. Data is saved with this code fragment
ds.save({
The additional timestamp vector stored with each topic (e.g. steering_wheel_angle/timestamp or dvs/timestamp) is common to each data type and is the timestamp in long microseconds since 1970.
The dvs data holds all the DAVIS camera data, packed in (awkward) variable-length containers that each contain either DVS, APS, or IMU data. Each packet has only one type of data. The dvs data is unpacked by the caer.py library. See export.py below for more information.
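As an illustration, a minimal sketch (not part of ddd20-utils) of inspecting a recording with h5py follows; the file name is the example used elsewhere in this document, and the data/timestamp dataset names are assumptions based on the topic layout described above:
import h5py

# Open one recording and list its topics (steering_wheel_angle, vehicle_speed, dvs, ...).
with h5py.File("rec1487355025.hdf5", "r") as f:
    print(list(f.keys()))

    # Per-topic timestamp vector, in long microseconds since 1970.
    ts_us = f["steering_wheel_angle/timestamp"][:]
    # Per-topic data rows (timestamp in seconds followed by the value, per the description above).
    rows = f["steering_wheel_angle/data"][:]
    print("first sample at", ts_us[0] * 1e-6, "s since epoch:", rows[0])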
ddd20-utils usage
The following examples are from a terminal/console command prompt.
Note for python newbies regarding PATH
ddd20-utils should be on the PATH so that python can find scripts like view.py. It should also be on PYTHONPATH so that the scripts can import the libraries. Examples from .bashrc:
export PATH="~/git/ddd20-utils:~/bin:~/jdk1.8.0_131/jre/bin:$PATH"
If the script is on the PATH, and it is executable, then a leading python is not needed to launch the script.
Extra python libraries needed
The following may need to be run
pip install openxc h5py opencv-python numpy
view.py: Get help for the viewer
$ view.py --help
view.py: Play a file from the beginning
$ view.py <recorded_file.hdf5>
Example
Assumes data is stored in /media/driving
$ view.py /media/driving/run3/rec1487355025.hdf5
Play a file, starting at X percent
$ view.py <recorded_file.hdf5> -s X%
Play a file starting at second X
$ view.py <recorded_file.hdf5> -s Xs
Rotate the scene during the playback
There is an option rotate180 in view.py that applies a 180 degree rotation to the APS and DVS displays. Set it to true like this:
$ view.py -r True <recorded_file.hdf5>
Controlling view.py
- Left-click the mouse on the timeline window to select a new time.
- Right-click to bring up the opencv menu and freeze playback.
- The mouse wheel zooms the image in all opencv windows.
- Key i toggles the car info display.
- Keys f and s play faster/slower by 20% (decrease or increase the DVS frame duration).
- Keys b and d brighten/darken the DVS image by decreasing/increasing the full-scale event count.
Troubleshooting
- If you get a gtk error when trying to run view.py, you may need to use a different opencv or download/install opencv yourself, since pip install opencv-python may not install a working opencv. One example of downloading, building, and installing opencv is:
wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.1.0.zip
unzip opencv.zip
cd opencv-3.1.0/
mkdir build
cd build/
cmake -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON ..
make -j 4
make install
- If you get the following error:
$ python view.py
Traceback (most recent call last):
File "view.py", line 21, in <module>
import numpy as np
ImportError: No module named numpy
You need to install numpy. Do pip install numpy
- If you get the following error:
[email protected]:~/ddd20-utils$ ./view.py
/home/tobi/anaconda/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "./view.py", line 28, in <module>
from interfaces.caer import DVS_SHAPE, unpack_header, unpack_data
File "/home/tobi/ddd20-utils/interfaces/__init__.py", line 3, in <module>
import oxc as openxc
File "/home/tobi/ddd17-utils/interfaces/oxc.py", line 15, in <module>
from openxc.tools import dump as oxc
ImportError: No module named openxc.tools
then install the openxc libraries (pip install openxc) to access the car data.
get_stats.py: Export statistics in a file
To output statistics for a particular variable, try this:
$ python get_stats.py attribute_name recording_name_1.hdf5 recording_name_2.hdf5 ...
The attribute name can be one of the following:
- steering_wheel_angle
- brake_pedal_status
- accelerator_pedal_position
- engine_speed
- vehicle_speed
- windshield_wiper_status
- headlamp_status
- transmission_gear_position
- torque_at_transmission
- fuel_level
- high_beam_status
- ignition_status
- latitude
- longitude
- odometer
- parking_brake_status
- fuel_consumed_since_restart
export.py: Export data for training a network (reduce data to frame-based representation):
export.py creates another hdf5 file, containing the frame-based data.
$ export.py [-h] [--tstart TSTART] [--tstop TSTOP] [--binsize BINSIZE]
- All times are in seconds.
- If BINSIZE is positive, events will be binned into time slices of BINSIZE seconds (typical values are tens of milliseconds, e.g. 0.050 for 50 ms). The default binsize is 0.1 seconds.
- If BINSIZE is a negative integer, frames will be generated every abs(BINSIZE) events (e.g. BINSIZE = -5000 for a constant event count of 5000).
- EXPORT_APS and EXPORT_DVS are integer values, 1 for true (default is 1). They determine whether DVS and/or APS frames are generated.
- UPDATE_PROG_EVERY is the progress bar update interval (in percentage points); the default is 0.01.
- If --out_file is not supplied, the original filename is used with _export appended (i.e. rec01.hdf5 -> rec01_export.hdf5).
- The combinations of options can result in the different kinds of DVS and APS frame output shown in the figure below.
Example: default export
Export a file with default settings
$ cd /media/sf_ddd17/run3
Example: make HDF5 file with 10k event frames
$ export.py --binsize -10000 --update_prog_every 10 --export_aps 0 --out_file /media/driving/run3/dvs1487355025.hdf5 /media/driving/run3/rec1487355025.hdf5
Figure: Different exporting modes explained. The mode (fixed time interval or fixed event count) is selected by positive (time interval in seconds) or negative (number of events) --binsize argument, respectively.
The output dvs_frame is a 3-tensor, with number_of_frames x width x height elements.
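To illustrate the two binning modes, here is a simplified sketch of the idea (this is not the actual export.py code; the event arrays below are random placeholders standing in for the DVS event stream of a recording):
import numpy as np

# Placeholder DVS event stream: sorted timestamps (s) and pixel coordinates for a 346x260 sensor.
ts = np.sort(np.random.uniform(0.0, 1.0, 50000))
xs = np.random.randint(0, 346, ts.size)
ys = np.random.randint(0, 260, ts.size)

def accumulate(sel):
    # Return a 2D histogram of event counts for the selected events.
    frame = np.zeros((260, 346), dtype=np.int32)
    np.add.at(frame, (ys[sel], xs[sel]), 1)
    return frame

# Positive binsize: one frame per fixed time slice (e.g. 50 ms).
binsize = 0.05
time_frames = [accumulate((ts >= t0) & (ts < t0 + binsize))
               for t0 in np.arange(ts[0], ts[-1], binsize)]

# Negative binsize: one frame per fixed event count (e.g. 5000 events).
count = 5000
count_frames = [accumulate(slice(i, i + count))
                for i in range(0, ts.size - count + 1, count)]

dvs_frames = np.stack(time_frames)  # number_of_frames x 260 x 346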
Example networks and tools
You should be able to access dvs_frame with h5py, and you can call .dtype on it to determine the data type or .shape to get its multidimensional shape.
Before you run export.py, however, there won't be a dvs_frame entry.
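For example, a quick check of an exported file might look like this (the file name matches the export example above, and the dvs_frame dataset name follows the description of the export output):
import h5py

# Open the exported file and inspect the DVS frame tensor.
with h5py.File("dvs1487355025.hdf5", "r") as f:
    frames = f["dvs_frame"]
    print(frames.dtype)   # element data type
    print(frames.shape)   # (number_of_frames, width, height)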
We recommend looking at the ddd17-utils ipython notebooks Data Visualization.ipynb and Model Evaluation and Movie Render.ipynb. These have examples of how to view the data and render the movies.
To start a notebook, type "ipython notebook" in a terminal. You can pass the notebook name as an additional argument.
- Data Visualization.ipynb walks the user through loading the data and visualizing it.
- Model Evaluation and Movie Creation loads a trained network, plots the results, and renders one of the driving movies using whatever network is supplied.
- If you want to see how it's done, check process_data.sh, which calls export, then preprocesses the data to resize it to something reasonable, then trains and tests a network on whatever data files are listed in the work list.
- The latest version supports multiple hdf5 input files. Check multiprocess_data.sh to see the script for running all the data from day 5.
Troubleshooting
$ ipython notebook
Could not start notebook. Please install ipython-notebook
$ pip install ipython-notebook
Collecting ipython-notebook
Could not find a version that satisfies the requirement ipython-notebook (from versions: )
No matching distribution found for ipython-notebook
$ sudo pip install --upgrade ipython[notebook]
Collecting ipython[notebook]
Downloading ipython-5.5.0-py2-none-any.whl (758kB)
100% |████████████████████████████████| 768kB 950kB/s
Collecting pickleshare (from ipython[notebook])
…..
Successfully uninstalled ipython-2.4.1
Successfully installed backports.shutil-get-terminal-size-1.0.0 bleach-2.1.1 certifi-2017.7.27.1 decorator-4.1.2 entrypoints-0.2.3 functools32-3.2.3.post2 html5lib-1.0b10 ipykernel-4.6.1 ipython-5.5.0 ipython-genutils-0.2.0 ipywidgets-7.0.3 jsonschema-2.6.0 jupyter-client-5.1.0 jupyter-core-4.3.0 mistune-0.7.4 nbconvert-5.3.1 nbformat-4.4.0 notebook-5.2.0 pandocfilters-1.4.2 pathlib2-2.3.0 pexpect-4.2.1 pickleshare-0.7.4 prompt-toolkit-1.0.15 ptyprocess-0.5.2 python-dateutil-2.6.1 pyzmq-16.0.2 scandir-1.6 setuptools-36.6.0 six-1.11.0 terminado-0.6 testpath-0.3.1 tornado-4.5.2 traitlets-4.3.2 wcwidth-0.1.7 webencodings-0.5.1 widgetsnbextension-3.0.5
$ ipython notebook
Data Visualization.ipynb
Steering wheel prediction
Statistics of steering wheel prediction as reported in the RAL paper are in the spreadsheet below. The smallest error for each recording is shown in boldface.
DDD20 steering angle prediction errors: RMSE of steering wheel angle (deg)

| Condition | Recording | DVS+APS | stddev | DVS | stddev | APS | stddev |
|---|---|---|---|---|---|---|---|
| night | 1 | **6.18** | 0.17 | 7.49 | 0.50 | 6.41 | 0.16 |
| | 2 | 5.82 | 0.16 | **4.50** | 0.16 | 6.34 | 0.09 |
| | 3 | 4.86 | 0.56 | **4.31** | 0.05 | 5.80 | 0.34 |
| | 4 | 3.49 | 0.67 | **2.57** | 0.81 | 3.17 | 0.73 |
| | 5 | **2.75** | 0.03 | 2.78 | 0.02 | 2.77 | 0.02 |
| | 6 | **2.94** | 0.26 | 8.36 | 1.25 | 3.81 | 0.20 |
| | 7 | **3.34** | 0.34 | 4.12 | 0.32 | 3.57 | 0.20 |
| | 8 | **2.87** | 0.09 | 5.41 | 0.43 | 3.26 | 0.13 |
| | 9 | **3.11** | 0.18 | 3.69 | 0.24 | 3.40 | 0.25 |
| | 10 | 10.18 | 1.54 | **9.96** | 0.19 | 12.18 | 0.55 |
| | 11 | **7.59** | 0.60 | 7.89 | 0.55 | 8.25 | 0.26 |
| | 12 | **3.27** | 0.05 | 3.69 | 0.10 | 3.43 | 0.29 |
| | 13 | **3.51** | 0.07 | 3.55 | 0.02 | **3.51** | 0.04 |
| | 14 | 7.82 | 0.53 | **7.56** | 0.57 | 9.17 | 0.35 |
| | 15 | 2.53 | 0.12 | **2.52** | 0.05 | 2.57 | 0.11 |
| | night avg | 4.68 | 0.52 | 5.23 | 0.48 | 5.18 | 0.31 |
| day | 16 | **3.25** | 0.01 | 3.27 | 0.02 | 3.26 | 0.03 |
| | 17 | 9.88 | 0.84 | **9.59** | 0.48 | 11.33 | 0.05 |
| | 18 | **1.80** | 0.01 | 1.82 | 0.00 | **1.80** | 0.02 |
| | 19 | **8.61** | 0.33 | 9.80 | 0.78 | 10.67 | 0.23 |
| | 20 | **5.71** | 0.49 | 9.23 | 0.86 | 6.42 | 0.33 |
| | 21 | **13.03** | 1.47 | 14.14 | 0.33 | 15.60 | 0.70 |
| | 22 | **31.40** | 0.74 | 34.73 | 1.02 | 59.57 | 1.29 |
| | 23 | 3.76 | 0.09 | **3.72** | 0.04 | 3.99 | 0.11 |
| | 24 | 10.54 | 0.64 | **9.28** | 0.64 | 11.89 | 0.65 |
| | 25 | 12.63 | 2.73 | **9.85** | 0.54 | 13.37 | 0.98 |
| | 26 | **11.05** | 0.28 | 11.29 | 0.17 | 11.14 | 0.26 |
| | 27 | **2.11** | 0.11 | 4.26 | 0.13 | 2.33 | 0.13 |
| | 28 | 2.14 | 0.06 | **2.06** | 0.05 | 2.08 | 0.12 |
| | 29 | 1.33 | 0.01 | 1.32 | 0.02 | **1.31** | 0.02 |
| | 30 | **2.58** | 0.13 | 2.74 | 0.23 | **2.58** | 0.12 |
| | day avg | 7.99 | 0.88 | 8.47 | 0.49 | 10.49 | 0.51 |
| | overall avg | 6.34 | 0.73 | 6.85 | 0.48 | 7.83 | 0.42 |
Recording new data
In addition to software for playback and export of the DDD17 dataset files, we provide tools for recording new data. The recording framework can be modified / extended to record data from arbitrary sources by creating a new ‘interface’ class.
The following instructions focus on recording visual data from a DVS/DAVIS camera and vehicle diagnostic/control data from an OpenXC vehicle interface.
Requirements
Hardware
- A DAVIS event camera with AER output
- A compatible vehicle; see this useful spreadsheet of cars and data types available
- Tested on Ubuntu 16 LTS Linux. It also runs in a VirtualBox Linux guest under Windows, but graphics slows it down, probably too much for recording.
Software
For a virgin Ubuntu 16 LTS system, the script setup.sh can help guide the specific installation steps needed for recording.
In addition to the software required for viewing / exporting (opencv, opencv python bindings, hdf5 libraries, h5py; see Getting the software), a working installation of cAER is required, as well as the openxc tools for readout.
Specifically,
- libcaer needs to be installed and functional (see the libcaer README for installation instructions)
- cAER should be built with ENABLE_NETWORK_OUTPUT=1 (see the cAER README for details and additional software dependencies)
- The OpenXC python tools can be installed with "pip install openxc" (note that libusb needs to be present on the system for OpenXC to work)
Firmware
- Firmware needs to be installed to the OpenXC interface for the specific vehicle, or the emulation firmware can be used for testing a setup. Vehicle firmware requires registering as a developer with Ford at https://developer.ford.com/register/ . The firmware is then available at https://developer.ford.com/pages/openxc
- The open firmware repo (only for testing, not for actual vehicles) is https://github.com/openxc/vi-firmware . The latest 7.2 Ford release is here. See http://openxcplatform.com/vehicle-interface/firmware.html
- Extract the emulation firmware from the zip file; it is vi-emulator-firmware-FORDBOARD-ctv7.2.0.bin
- In the case of the Ford reference interface, firmware is installed by holding the interface in reset (with a paperclip in the reset button hole) while the USB cable is plugged in. The interface then appears as a flash drive, and the firmware.bin file can be replaced by the one specific to the vehicle or by the test emulation firmware.
Recording checklist
- Is the correct car firmware loaded to the interface (and not the emulator used for debugging)?
- In a terminal, run caer-bin to start the camera.
- In a 2nd terminal, cd to the target directory for the recording files, and ensure it is writeable.
- In the 2nd terminal, run record.py to start recording.
- If it dies complaining that the Ford VI USB interface is not present:
  - Is the car ignition turned on?
  - Does some other process still have the interface open?
  - Is the cable plugged in? Sometimes it is necessary to unplug both ends of the interface cable, plug the USB into the computer, then plug the interface into the car.
- Hit enter to start recording, and enter again to stop recording.
- Check the recording with view.py, where the file argument is the latest recorded HDF5 file.
Usage record.py
python record.py
- records to a new file recXXX.hdf5
python record.py /media/sf_recordings
- records to a new file recXXX.hdf5 in the folder /media/sf_recordings
- Run cAER (cd <caer>; ./caer-bin) to capture DVS visual data.
- From ddd20-utils, test the interface with python interfaces/caer.py.
- Run python record.py. A preview of the captured data is displayed; this can be used to align the camera etc. Press enter to start writing to a file.
- Terminate the recording with ctrl-c.
- You might prefer to run "./record.py", so that in the event of a crash it is not necessary to "killall record.py". You may need to "chmod +x record.py".
- A single argument is allowed that specifies the directory to save the hdf5 file to.
Troubleshooting
You can test the DAVIS with
python interfaces/caer.py
and test OpenXC with
python interfaces/oxc.py
You can also test the interface using
openxc-dashboard
This utility is installed with openxc.
If the OpenXC interface appears in lsusb but cannot be opened, add udev rules: make a new file
/etc/udev/rules.d/98-fordvi.rules
and in this file put the line
SUBSYSTEM=="usb", ATTR{idVendor}=="1bc4", ATTR{idProduct}=="0001", MODE="666"
Then replug the interface.
On a virgin install, you may need to install udev rules for the DAVIS camera as well. See the inilabs documentation for these rules.
Sensor alignment
- Most files in the run2 directory contain the 'alignment device'.
- Try rec1487333001.hdf5 or rec1487337800.hdf5, for instance (very dark; auto exposure didn't exist back then...).
- You can skip to a brighter section by clicking the timeline window of the viewer (the third window).
APS exposure control
Exposure control is done by ExposureCtl in interfaces/caer.py. The options should be checked for the camera mounting; e.g. if the camera is mounted upside down, then the options
exposure = ExposureCtl(cutoff_bot=80, cutoff_top=0) # set for upside down camera where sky will be at bottom
at line 311 of caer.py are appropriate.
Detecting corrupted recordings
The utility h5check can be used to detect corrupted hdf5 files (these can occur any time the HDF5 file is not properly closed). Once h5check is installed, use this shell command to move corrupted files to a subfolder "corrupted"; first make the folder with "mkdir corrupted", then
for i in rec*.hdf5; do h5check $i || mv $i corrupted/; done
h5check is not installed by default. To obtain it, see https://support.hdfgroup.org/products/hdf5_tools/h5check.html
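If h5check is not available, a rough equivalent can be sketched in Python with h5py; this simply tries to walk every object in each file and is not a substitute for a full format check:
import glob
import os
import shutil

import h5py

os.makedirs("corrupted", exist_ok=True)
for name in sorted(glob.glob("rec*.hdf5")):
    try:
        with h5py.File(name, "r") as f:
            f.visit(lambda _name: None)  # touch every group/dataset
    except (OSError, RuntimeError):
        print("moving corrupted file", name)
        shutil.move(name, "corrupted/")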
Useful aliases for recording
These aliases and environment variables can be modified and put in your .bashrc file
#DATA=/media/tobi/MYLINUXLIVE/data
#DATA='/media/tobi/F0E885B6E8857C1A/Users/Tobi Delbruck/Resilio Sync/DDD17-DavisDrivingDataset2017/fordfocus'
DATA='/mnt/F0E885B6E8857C1A/Users/Tobi Delbruck/Resilio Sync/DDD17-DavisDrivingDataset2017/fordfocus'
PATH="/home/tobi/setups/ddd20-utils:/home/tobi/bin:/home/tobi/jdk1.8.0_131/jre/bin:$PATH"
#PATH="/home/tobi/setups/ford:$PATH"
alias r="killall record.py;record.py \"$DATA\"" # starts a recording to the DATA folder
alias cdd='cd "$DATA"' # cd to recordings folder
alias cdc='cd ~/setups/ddd20-utils' # cd to code folder
bind TAB:menu-complete # TAB cycles through completions
alias va='for i in rec*.hdf5; do view.py $i; done' # view all recordings in current folder
alias v='view.py' # alias for view
alias vl='view.py `ls -t rec* | head -1`' # view last recording in current folder
alias d='pushd'
alias u='popd'
Useful commands
Produce CSV file of recordings
find . -name 'rec*' -printf '"%P","%Tc","%TD","%TT","%s",\n' > listing.csv