data sets

Definition and preparation of the data sets

deci seconds

Source: etl_telemetry_deci Query: network_log.sql

To analyze the deci-second data we can use only a reduced set of features (because of the sampling rate), such as the computing features and vehicle_ping; the rtp and modem features have to be removed. We collect the other networking features from the network_log table, subset the telemetry data into a series of spikes, and merge the network information using the timestamp and the vehicle_id. We pivot the network_log table with respect to the modem name. We also discard ttl and interval_duration because of their low information content, and we discard packets because it has a correlation of 0.98 with bytes.
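A minimal pandas sketch of this preparation (not the actual network_log.sql / etl_telemetry_deci code); timestamp, vehicle_id, modem, bytes, arrival_time, ttl, interval_duration and packets come from the text above, everything else is illustrative:

```python
import pandas as pd

def prepare_network_log(network_log: pd.DataFrame, spikes: pd.DataFrame) -> pd.DataFrame:
    """Pivot network_log per modem and merge it onto the spike telemetry."""
    # Drop low-information and redundant columns (packets ~ bytes, corr 0.98).
    net = network_log.drop(columns=["ttl", "interval_duration", "packets"])

    # Pivot with respect to the modem name: one column per (feature, modem) pair.
    pivoted = net.pivot_table(
        index=["timestamp", "vehicle_id"],
        columns="modem",
        values=["bytes", "arrival_time"],
        aggfunc="mean",
    )
    pivoted.columns = [f"{feature}_{modem}" for feature, modem in pivoted.columns]
    pivoted = pivoted.reset_index()

    # Merge onto the spike subset of the telemetry on timestamp and vehicle_id.
    return spikes.merge(pivoted, on=["timestamp", "vehicle_id"], how="left")
```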

network_log_series time series of the most relevant features

We see that the quantity of transmitted information correlates slightly between modems, signaling similar networking issues across operators. The correlation between bytes and arrival time is lower than expected.

network_log_corr correlation between the network log features

In general, however, the modems behave differently; cellular 3 is the most stable and transmits the most data.

network_log_joyplot joyplot of network_log features

datasets

We prepare different datasets for the different parts of the analysis.

The sets sample events differently: the latency set has few spike events, while the spike set has all the perturbations before and after a disruption event.

From the two datasets we can see the different feature distributions and feature importances.

spike preprocessing

Query: resample_1sec Code: etl_spikes

We resample by averaging over 1 second worth of data, but some fields remain incomplete around 1% of the time.
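A minimal sketch of this resampling step (not the actual resample_1sec / etl_spikes code), assuming the raw telemetry is a DataFrame with a timestamp and a vehicle_id column:

```python
import pandas as pd

def resample_1sec(telemetry: pd.DataFrame) -> pd.DataFrame:
    """Average the telemetry over 1-second windows, one series per vehicle."""
    telemetry = telemetry.set_index("timestamp")
    numeric = telemetry.select_dtypes("number").columns.drop("vehicle_id", errors="ignore")
    resampled = (
        telemetry.groupby("vehicle_id")[numeric]
                 .resample("1s")
                 .mean()
                 .reset_index()
    )
    # Fraction of values still missing after resampling (around 1% for some fields).
    print(resampled[numeric].isna().mean().sort_values(ascending=False).head())
    return resampled
```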

feature_missing percentage of missing data in certain features

We identify the peak and split each time series into sub-series where all peaks are aligned at minute 4, leaving one minute after the spike.

We artificially exaggerate the peak to help the model recognize it, setting the peak maximum value to a plateau.
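A sketch of the window extraction under the 1-second resampling above; the helper name and the plateau width are illustrative, and holding a short plateau at the peak value is one possible reading of the exaggeration step:

```python
import pandas as pd

def cut_spike_window(series: pd.Series, peak_minute: int = 4, tail_minutes: int = 1,
                     plateau_width: int = 5) -> pd.Series:
    """Cut a sub-series with its peak aligned at `peak_minute`, keeping `tail_minutes` after it."""
    peak_pos = int(series.values.argmax())
    start = max(peak_pos - peak_minute * 60, 0)   # 1 Hz samples after the 1-second resampling
    stop = peak_pos + tail_minutes * 60
    window = series.iloc[start:stop].copy()

    # Exaggerate the peak: hold a short plateau at the peak maximum value.
    peak_in_window = int(window.values.argmax())
    lo = max(peak_in_window - plateau_width, 0)
    hi = peak_in_window + plateau_width + 1
    window.iloc[lo:hi] = window.iloc[peak_in_window]
    return window
```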

spike_events series of spikes

We then abstract the time and create a stack of series where the peaks are in phase.

spike_stack stack of spikes where the spike happens at minute 4

spike inspection

It is interesting to analyze some spike series to understand qualitatively what might happen during incidents. In this case we see two concurrent spikes in two different sessions

spike_simultaneous two spikes at the same time on different cars (24, 76)

Averaging the two series, it seems that room_cpu shows a clear variation prior to the spike.

series_simultaneous average of both time series prior to the spike

We see three such pairs: 2020-08-17T11, 2020-08-24T18, 2020-08-20T08.

feature selection

At first we discard poor features, analyzing how rare they are and whether they discriminate high-latency events.

Spikes are rare events, and some features might be as rare as spikes and therefore an indication of them; but in the case of ['td_brake', 'td_throttle', 'steering_interval'] we see that the data are rare and do not occur in high-latency cases.

poor_feature poor features

In comparison we see that rtp_lost and rtp_late have clearly distinct distributions, although most of the time the data are NaNs.

rich_feature poor features but discriminators for latency

feature statistics

Code: stat_telemetry, stat_latency.

We visualize the time series of the most important features.

series_log visualization of the time series

We collect a day of data and subset the time series prior to a spike

time_series time series of the most relevant features prior to a spike; we can see that after the disruption the vehicle brakes

We removed some features because of obvious cross-correlations (e.g. brake_pressure ~ wheel_speed).

computing_correlation feature correlation, obvious cross correlation

We see a dependency between the vehicle and control_room cpu and ram usage.

vehicle_room vehicle and control room interdependency on computation

We see that not all the features are distributed regularly and some are pretty noisy.

feat_boxplot distribution of the latency features: boxplots

In the case of the spike subset the distributions get a bit broader.

feat_boxplot distribution of the spike features: boxplots

We then take the logarithm of the most dispersed features (e.g. _force_) to gain more meaningful information. Changing the latencies to their logarithmic values doesn't change the distributions much.
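A small sketch of this transform, assuming the features sit in a DataFrame; the column selection and the helper name are illustrative:

```python
import numpy as np
import pandas as pd

def log_transform(df: pd.DataFrame, columns: list[str]) -> pd.DataFrame:
    """Replace the most dispersed features with their logarithm (log1p keeps zeros finite)."""
    out = df.copy()
    out[columns] = np.log1p(out[columns].clip(lower=0))
    return out

# e.g. log_transform(features, [c for c in features.columns if "force" in c])
```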

feat_boxplot logarithmic transformation for some features

We see that some quantities are bimodal, interestingly the latencies too. It seems there are clearly distinguished operating regimes.

feat_joyplot distribution of the latency features: joyplots

Another feature selection gives more weight to the networking features; the different time frames change the data distribution considerably. In this case we consider only events in the proximity of a spike, and the features become even more bimodal.

feat_joyplot distribution of the spike features: joyplots

Looking at the pair plot we see that only ram_usage has an interesting dependency on camera_latency; the purple set is the one corresponding to the largest bin of camera_latency.

We analyze how the networking features are distributed in different regimes of camera latency.
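One way to produce such pairplots, assuming the features are in a DataFrame with a camera_latency column; the quantile binning into regimes and the helper name are illustrative rather than the bins actually used:

```python
import pandas as pd
import seaborn as sns

def pairplot_by_latency(df: pd.DataFrame, features: list[str], n_bins: int = 3):
    """Pairplot of the selected features, colored by camera_latency regime."""
    data = df[features + ["camera_latency"]].dropna().copy()
    data["latency_bin"] = pd.qcut(data["camera_latency"], q=n_bins,
                                  labels=[f"bin_{i}" for i in range(n_bins)])
    return sns.pairplot(data, vars=features, hue="latency_bin", corner=True)
```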

feat_pairplot pairplot of the networking features

Same for the computing features

feat_pairplot pairplot of the computing features

The pairplot for the vehicle dynamics features doesn't show a clear dependency on latency.

feat_pairplot pairplot of the vehicle dynamics features

latency statistical properties

We want to understand the periodicity and autocorrelation of camera_latency in order to separate the different regimes.
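A sketch of how the autocorrelation and the power spectrum shown in the following figures could be computed, assuming a camera_latency segment as a NumPy array; the sampling frequency and function name are illustrative:

```python
import numpy as np
from scipy import signal

def latency_statistics(latency: np.ndarray, fs: float = 1.0):
    """Autocorrelation and power spectrum of a camera_latency segment (fs in Hz)."""
    x = latency - latency.mean()

    # Autocorrelation, normalized so that lag 0 equals 1.
    acf = signal.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]

    # Power spectral density via Welch's method.
    freqs, psd = signal.welch(x, fs=fs, nperseg=min(256, x.size))
    return acf, freqs, psd
```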

latency_decay latency decays and frequency behaviour

We analyze the power spectrum for normal latency and during a spike.

latency_spectrum latency power spectrum for normal and spike regimes

During the spike event autocorrelation is a bit larger and more stable than in the normal regime

latency_autocor Autocorrelation for camera latency in normal and spike regimes

feature normalization

For fast and reliable training, a good normalization is essential for prediction performance. We need to find a stable normalization to keep across all predictions; if we need to change the normalization, we need to retrain the model.
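A sketch of one such stable normalization, flattening outliers to fixed quantiles before a z-score; the quantile levels and the class name are illustrative:

```python
import pandas as pd

class FlattenAndScale:
    """Flatten outliers to fixed quantiles, then z-score with stored parameters.

    The fitted parameters are kept so the same normalization can be applied to
    every later prediction; changing them would require retraining the model.
    """

    def __init__(self, lower: float = 0.01, upper: float = 0.99):
        self.lower, self.upper = lower, upper

    def fit(self, df: pd.DataFrame) -> "FlattenAndScale":
        self.lo_, self.hi_ = df.quantile(self.lower), df.quantile(self.upper)
        clipped = df.clip(self.lo_, self.hi_, axis=1)
        self.mean_, self.std_ = clipped.mean(), clipped.std()
        return self

    def transform(self, df: pd.DataFrame) -> pd.DataFrame:
        clipped = df.clip(self.lo_, self.hi_, axis=1)
        return (clipped - self.mean_) / self.std_
```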

flatten_outlier flattening of outliers to let the normalization operate in a consistently populated regime

denoise

To have a clearer understanding of what is happening we stack many time series where the spike happens at second 0. We take 160+ time series and calculate the average to denoise them and get a more statistical picture of the process.
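A sketch of the stacking and averaging, assuming each event has already been cut into a window of equal length with the spike at the same sample; the array layout and helper name are illustrative:

```python
import numpy as np

def stack_and_average(windows: list):
    """Stack aligned spike windows and average across events to denoise.

    Each window is a (time, features) array with the spike at the same index.
    """
    stack = np.stack(windows)                                   # (n_events, time, features)
    mean = np.nanmean(stack, axis=0)                            # denoised average response
    sem = np.nanstd(stack, axis=0) / np.sqrt(stack.shape[0])    # uncertainty band
    return mean, sem
```

The same stack also gives the per-offset variance used further below (e.g. np.nanvar(stack, axis=0)).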

series_avExp average on spike events where the spike happens at second 0

If we take a closer look at the moment when the spike happens, we clearly see that some features show a change before the spike and others after it.

series_spike detail on the few seconds around the spike event

Similarly we look at the signal differences

series_spike series evolution of the feature differences

and the close up

series_spike series evolution of the feature differences

We analyze the variance as a function of the time to the spike.

series_latencyVar variance of camera latency

Another important indicator of causal dependency is the drop in variance before a spike, which tells us that ram is clearly affected by spikes.

series_variance variance of time series

For some features the variance is so high that we would consider the averages to be just random fluctuations.

series_binned variance and mean for time series

We analyze the time series for the features with a higher firing rate; we see that on average the byte volume drops across all modems.

series_deci time series on deci seconds

If we look at the few seconds close to the spike we don't see clear signs of an incoming spike.

series_deci time series on deci seconds, detail

Nor do we see a clear trigger when looking at the derivative.

series_deci time series on deci seconds, derivative

We then smooth the series to get a better picture and start recognizing some patterns.
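A sketch of this smoothing applied to the derivative series; the window length and helper name are illustrative:

```python
import pandas as pd

def smoothed_derivative(series: pd.Series, window: int = 9) -> pd.Series:
    """First difference of a deci-second series, smoothed with a centered rolling mean."""
    derivative = series.diff()
    return derivative.rolling(window, center=True, min_periods=1).mean()
```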

series_deci time series derivative on deci seconds, smoothed