
In the previous blog post (link) I talked about how to interpret places from journey records, proposing a visualization-based means to identify key places ("home", "work") and then validating them using common mapping services like Google. In this post I am going to explore analytic insights from the place information, addressing questions about the likelihood of journeys and their habitual nature.
Preparing the data
The data set I have used for the place frequency analysis so far is basically a list of journeys with start and stop places, plus time and date information. A snippet is shown below:
Next I want to understand how often, during the sample period, the car in question went from one place to another. The data set contains the actual journeys, but what is missing is the days and times when there were no journeys. For example, if a car went from place 1 to place 2 on Monday 3 times in the data set, I don't immediately know how habitual that is unless I know how many "Mondays" were present in the sample time-frame.
What I do know at this point is that, depending on the car, I have about 1.5 to 2 months of data (between 6 and 8 weeks), which, when considering insights into journey habits, may not be a lot. My instinct at this point is to first consider coarse temporal patterns, like daily events, before moving to more granular analysis, like hourly or time-of-day patterns.
I processed the actual journeys to fill in the non-travel dates and to sum the number of journeys between places on each day of the week. You can see a snippet of the resulting file below, focusing on Monday and Friday journeys for a particular car:
What you can see is that (on row 2) on "Mondays" there were a total of 13 journeys from place 3 to place 3, which occurred over 4 days (journey.days), with 2 days (non.journey.days) when there was no such travel. If you add the journey.days and non.journey.days features together, you can figure out that there were a total of 6 "Mondays" in my sample set (i.e. 6 weeks of data). You can also see (on row 64) that there were no journeys on Mondays from place 1 to place 1.
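For illustration, here is a minimal sketch of how this expansion could be done (not the exact script I used): it assumes the raw journeys sit in a data frame df.raw with columns from, to and date (one row per journey), and that every weekday appears 6 times in the sample, as in this car's data.
# derive the day of week for each journey
df.raw$day <- weekdays(as.Date(df.raw$date))
n.places <- max(c(df.raw$from, df.raw$to))
v.days <- c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday")
# every from/to/day-of-week combination, including pairs never travelled
grid <- expand.grid(from = 1:n.places, to = 1:n.places, day = v.days,
                    stringsAsFactors = FALSE)
# total journeys per combination
journeys <- aggregate(list(journeys = rep(1, nrow(df.raw))),
                      by = df.raw[, c("from", "to", "day")], FUN = sum)
# number of distinct dates with at least one journey per combination
journey.days <- aggregate(list(journey.days = df.raw$date),
                          by = df.raw[, c("from", "to", "day")],
                          FUN = function(d) length(unique(d)))
# merge onto the full grid, fill gaps with zero, and derive the non-journey days
df.x <- merge(grid, journeys, all.x = TRUE)
df.x <- merge(df.x, journey.days, all.x = TRUE)
df.x[is.na(df.x)] <- 0
n.weeks <- 6  # number of times each weekday appears in this sample
df.x$non.journey.days <- n.weeks - df.x$journey.days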
After the expanded data set of journeys was built, I created two 3-dimensional matrices:
- Total number of trips from one place to another on each day of the week (journey.count.x)
- Number of days in the sample on which at least one trip occurred from one place to another (pred.journey.x)
The matrix structure for each is:
(journey start place "from") X (journey end place "to") X (day of the week)
With cardinality:
(total number of places X total number of places X 7)
I have included code snippets at the end of the blog showing the transformations needed to build these matrices from the data frame above.
So now the fun can begin! Let's consider some questions related to temporal journey patterns.
Where does the car usually go?
This is really two questions hiding inside one:
- What journeys does the car take on a regular basis?
- What journeys does the car NOT take on a regular basis?
Both of these questions relate to habitual patterns in the data, in contrast to non-habitual (i.e. unpredictable) patterns. Both habitual and non-habitual behaviour are interesting to know about, since they allow us to form quantitative statements on the likelihood of journeys on certain days.
Take an example: assume a particular car has gone from place 3 to place 6 on five Tuesdays out of six. This seems to be a "usual" journey, but how usual is it?
This question can be formulated as a probabilistic hypothesis that we can test against the sample data. Given we have trial information (5 occurrences out of 6), this sounds like an application of binomial probability, but in this case we don't know the probability (p) of travelling from place 3 to place 6. What we can do, however, is calculate the probability of getting 5/6 occurrences for different p values and flag which p values are "unlikely". I have rather arbitrarily set "unlikely" as less than 10%, meaning I want a 90% level of likeliness that my p value is plausible.
To be a bit more precise, I want to find the values of p (pi) for which I can reject the following (null) hypothesis at the 90% level:
Journeys between place 3 and place 6 on Tuesdays follow a binomial distribution with p value pi
Below are the probability values for different values of p and different numbers of journeys out of a maximum of 6.
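A table like this can be computed with dbinom; the sketch below is my reconstruction, where the 0.05 grid of p values and the "0T" to "6T" column labels are assumptions based on the values quoted in the discussion that follows.
# binomial probability of k journey days out of 6 for a grid of p values
p.values <- seq(0.05, 0.95, by = 0.05)
k.values <- 0:6
prob.table <- outer(p.values, k.values,
                    function(p, k) dbinom(k, size = 6, prob = p))
rownames(prob.table) <- p.values
colnames(prob.table) <- paste0(k.values, "T")
round(prob.table, 3)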
We can see that for 5 journey days out of 6 ("5T"), a p value ("p") of 0.50 has a probability of 0.094, and a p value of 0.55 has a probability of 0.136. If the value of pi in the above hypothesis was 0.50, I could reject the hypothesis, since there is only a 9.4% chance of getting 5/6 journeys with that p value. In fact, I can reject the hypothesis for all values less than 0.50 as well, because their probabilities are also lower than 10%. What I am left with is the statement that the journey between place 3 and place 6 on Tuesdays is "usual", where the p value is 0.55 or greater, or in other words there is at least a 55% chance that the car will go from place 3 to place 6 on a given Tuesday.
What about trips not taken? In our data set there are lots of non-journey days (i.e. days of the week with no journeys between a given pair of places). For any location pair on any day with no journeys (0 out of 6, column "0T"), we can see that a p value of 0.35 or less cannot be rejected at the 90% level of likeliness. So, in our case, "unusual" means a p value of less than 0.35.
One takeaway from this analysis is that a sample of 6 weeks does not give us a lot of information with which to make really significant statements about the data. Even with no observed travel between two places on a given day of the week, we can only be confident that the probability of travel between them is less than 35%. If we had 10 weeks of information (see below for a table I worked out with 10 weeks of journey info), then the binomial probability of taking no trips over 10 weeks would be less than 0.25 at a 90% confidence level. Similarly, if there was a journey in 9 out of 10 weeks, we could say the binomial probability would be greater than 0.8 at a 90% confidence level.
A bit unsatisfying, but having some data is better than having no data, and being able to turn our intuition about "usual" into a probabilistic statement is something.
In terms of scripting, once the matrices have been built, R provides some simple ways to extract the relevant car information from them, arguably more conveniently than from the original data frame.
To find all the place pairs and days that have 4, 5 or 6 journey days, we can use the syntax:
which(pred.journey.x > 3, arr.ind = TRUE)
The result is shown below, where "dim1" is the row, "dim2" is the column and "dim3" is the day from the original matrix.
For example, we can see that on Mondays ("dim3" = 1) the car had more than 3 journey days out of 6 for journeys from 3 to 3, 10 to 3 and 3 to 10. Similarly, on Tuesdays there were 4 or more journey days from 3 to 6 and from 6 to 7.
You can play with the conditional statement in the "which" command to select exact journey values or to build other conditional queries you can imagine.
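For instance, a couple of illustrative variants of the same query (using the pred.journey.x matrix built in the code at the end of the post):
which(pred.journey.x == 6, arr.ind = TRUE)                      # journeys that occurred every week
which(pred.journey.x >= 2 & pred.journey.x <= 4, arr.ind = TRUE) # between 2 and 4 journey days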
You can also dig a little deeper into the journey days for specific locations or days. Here is a command to show all the journey days per day of the week that occurred between location 3 and 6.
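A minimal version of such a query, again assuming the pred.journey.x matrix from the code at the end of the post, would be:
pred.journey.x[3, 6, ]  # journey days from place 3 to place 6, broken out by day of week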
Are there places that the car goes to frequently?
On examining the data, I could see that for most place pairs and days there is 1 or 0 journeys. However, there are a few places and days with more frequent journeys. Below is a report (code at the bottom to generate it) that shows, for a specific car, the journey locations and days ("dim1", "dim2", "dim3") that had more journeys than days travelled, together with the journey count ("journey.count"), the days travelled ("journey.days") and the ratio of journeys to days ("ratio.v").
What you can observe is that on Mondays ("dim3" = 1) there are some regular trips from 3 to 3 (ratio 3.25) and from 10 to 3 (ratio 1.25), with a trip probability of between 0.20 and 0.85, based on the fact that travel occurred on 4 of the 6 weeks.
Summing up
While this has been an interesting exercise, the less-than-satisfying part for me is the lack of profound (or even relatively profound) statements that can be made about the data, given the relatively small amount of information available. However, in the land of the blind the one-eyed man is king, so some information is better than none, and the ability to put some precision to our intuition about patterns is, I think, valuable.
If I had more data, I would have liked to expand the analysis to cover time as well, both hourly and time-of-day (e.g. morning, afternoon, evening) patterns. The structure of the analysis matrices would be the same, except I would have to add another dimension for hour or time of day (thus making a 4-dimensional matrix). Based on the hypothesis testing I have done with the days, I would think considerably more than 10 weeks of data would be needed before considering this level of detail.
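As a sketch of what that structure could look like (the four time-of-day buckets below are my own assumption; x.max.cluster and v.days come from the code at the end of the post):
v.tod <- c("Morning", "Afternoon", "Evening", "Night")
# 4-dimensional count matrix: from x to x day of week x time of day
journey.count.tod <- array(0, dim = c(x.max.cluster, x.max.cluster, 7, length(v.tod)),
                           dimnames = list(NULL, NULL, v.days, v.tod))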
Happy location playing!
Creating the location and frequency matrices
# Section 1. Load the data from journey file
# this section reads the journey data frames in from CSV files
l.journeys <- list.files(pattern = "*_journey.csv")
# select a particular car's journey file to examine
x.journey.number <- 4
df.x <- read.csv(l.journeys[x.journey.number],
                 stringsAsFactors = FALSE)
#
# > str(df.x)
# 'data.frame': 700 obs. of 6 variables:
# $ from            : int 1 1 1 1 1 1 1 1 1 1 …
# $ to              : int 1 1 1 1 1 1 1 2 2 2 …
# $ day             : chr "Monday" "Tuesday" "Wednesday" "Thursday" …
# $ journeys        : int 0 0 0 0 0 0 0 0 1 1 …
# $ journey.days    : int 0 0 0 0 0 0 0 0 1 1 …
# $ non.journey.days: int 6 6 6 6 6 6 6 6 5 5 …
# > head(df.x)
#   from to       day journeys journey.days non.journey.days
# 1    1  1    Monday        0            0                6
# 2    1  1   Tuesday        0            0                6
# 3    1  1 Wednesday        0            0                6
# 4    1  1  Thursday        0            0                6
# 5    1  1    Friday        0            0                6
# 6    1  1  Saturday        0            0                6
#
# find the max cluster (place) number in the data frame
x.max.cluster <- max(c(max(df.x$from), max(df.x$to)))
# create a label vector of ordered days of week (1 = Monday)
v.days <- c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday")

# Section 2. Journey Count
# initialize a matrix to calculate frequency and probability with
# day, start and end places
journey.count.x <- array(rep(0, x.max.cluster * x.max.cluster * 7),
                         dim = c(x.max.cluster, x.max.cluster, 7))
# load the journey data from the data frame into the matrix
for (i in 1:7) {
  for (j in 1:x.max.cluster) {     # from
    for (k in 1:x.max.cluster) {   # to
      journey.count.x[j,k,i] <- df.x[df.x$from == j &
                                     df.x$to == k &
                                     df.x$day == v.days[i], "journeys"]
    }
  }
}
# add names to the matrix dimensions:
# row = "from" place for a journey
# column = "to" place for a journey
# plane = day of week the journey occurred
rownames(journey.count.x) <-
  rownames(journey.count.x, do.NULL = FALSE, prefix = "From.")
colnames(journey.count.x) <-
  colnames(journey.count.x, do.NULL = FALSE, prefix = "To.")
dimnames(journey.count.x)[[3]] <- v.days
# here is an example query on the matrix to find which to/from/day
# combinations have more than 5 journeys
# dim 1 is row, dim 2 is column, dim 3 is days
which(journey.count.x > 5, arr.ind = TRUE)

# Section 3. Days of travel count
#
# Initialize a matrix to examine predictability of travel (using the count of
# days where at least 1 journey occurred between 2 locations) with
# day, start and end places
pred.journey.x <- array(rep(0, x.max.cluster * x.max.cluster * 7),
                        dim = c(x.max.cluster, x.max.cluster, 7))
for (i in 1:7) {
  for (j in 1:x.max.cluster) {     # from
    for (k in 1:x.max.cluster) {   # to
      pred.journey.x[j,k,i] <- df.x[df.x$from == j &
                                    df.x$to == k &
                                    df.x$day == v.days[i], "journey.days"]
    }
  }
}
# add names to the matrix dimensions:
# row = "from" place for a journey
# column = "to" place for a journey
# plane = day of week the journey occurred
rownames(pred.journey.x) <- rownames(pred.journey.x, do.NULL = FALSE, prefix = "From.")
colnames(pred.journey.x) <- colnames(pred.journey.x, do.NULL = FALSE, prefix = "To.")
dimnames(pred.journey.x)[[3]] <- v.days
# here are example queries on the matrix to get insights on journeys
# dim 1 is row, dim 2 is column, dim 3 is days
which(pred.journey.x > 5, arr.ind = TRUE)  # journeys that occurred on 6 out of 6 days
which(pred.journey.x > 4, arr.ind = TRUE)  # journeys that occurred on 5 or 6 out of 6 days
which(pred.journey.x > 3, arr.ind = TRUE)  # journeys that occurred on 4 or more out of 6 days
Script for multilocation report
# Section 1. Multiple Journey Days
# this section generates a report of the to, from and days that have more
# journeys than days travelled (i.e. there was on average more than 1 trip per day)

# select the days/to/from that have more journeys than journey days
out.m <- which(journey.count.x > pred.journey.x, arr.ind = TRUE)
# select the journey count for the multi-trip dates
journey.count <- journey.count.x[journey.count.x > pred.journey.x]
# select the journey days for the multi-trip dates
journey.days <- pred.journey.x[journey.count.x > pred.journey.x]
# make a ratio of journeys to journey days
ratio.v <- round(journey.count / journey.days, 2)
# append the journeys, journey days and ratio as columns to
# the from, to, day information
out.m <- cbind(out.m, journey.count, journey.days, ratio.v)
out.m