Estimating Activity Using Mobility Data#

Understanding population movement can provide valuable insights for public policy and disaster response efforts, particularly during crises when less movement often correlates with reduced economic activity.

Similar to initiatives such as the COVID-19 Community Mobility Reports, Facebook Population During Crisis, and Mapbox Movement Data, we have developed a range of crisis-relevant indicators. These include baseline and subsequent device densities, as well as metrics like percent change and z-score. These indicators are derived by aggregating device counts within specific geographical tiles and across various time periods, utilizing longitudinal mobility data.

It’s important to note the inherent limitations associated with this approach, as detailed in Limitations. Notably, mobility data is typically collected through convenience sampling methods and lacks the controlled methodology of randomized trials.

Data#

In this section, we import the data from its sources, which are available either publicly or via Datasets.

Area of Interest#

In this step, we import the clipping boundary and the H3 tessellation defined by the area(s) of interest below.

import geopandas

# Load the clipping boundary and H3 tessellation for the area(s) of interest
AOI = geopandas.read_file("../../data/final/tessellation/SYRTUR_tessellation.gpkg")

# Interactive map of the tiles, colored by distance bin to the epicenter
AOI[["geometry", "hex_id", "distance_bin", "distance"]].explore(
    column="distance_bin",
    cmap="seismic_r",
    style_kwds={"stroke": True, "fillOpacity": 0.05},
)

Fig. 1 Visualization of the area of interest centered at the earthquake’s epicenter. The distance (in km) to the epicenter is calculated for each H3 (resolution 6) tile.#
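The distance and distance_bin columns ship with the tessellation. For reference, here is a minimal sketch of how such a per-tile distance could be derived, assuming the h3 package (v4 API) and illustrative epicenter coordinates (not the authoritative values used by the project):

import math

import h3  # h3-py v4 API assumed

# Illustrative approximation of the Mw 7.8 epicenter: (lat, lon)
EPICENTER = (37.2, 37.0)


def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))


# Distance from each H3 tile's centroid to the epicenter, in km
AOI["distance_check"] = AOI["hex_id"].map(
    lambda h: haversine_km(*h3.cell_to_latlng(h), *EPICENTER)
)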

Mobility Data#

The WB Data Lab team has acquired longitudinal human mobility data comprising anonymized, timestamped geographical points generated by GPS-enabled devices located in Türkiye and Syria. The dataset spans the timeframe reported below.

The project team has utilized the longitudinal mobility data to derive several key metrics. Specifically, we compute baseline and subsequent device densities, denoted n_baseline and count respectively, along with the percent change (percent_change) and z-score (z_score). These indicators are derived by aggregating the device count within each tile and at each time period. For further details, please refer to the documentation provided in Data and Methodology.

import dask.dataframe as dd

# PANEL names the precomputed mobility panel (see the note below)
ddf = dd.read_parquet(
    f"../../data/final/panels/{PANEL}",
    columns=["hex_id", "longitude", "latitude", "datetime", "uid", "month"],
)

Note

Due to the data volume and velocity (updated daily), the panel’s computation from the raw mobility data took place on AWS. The resulting dataset, named above, is available in the project’s folder.

First, we calculate the cardinality,

len(ddf)
372967805

Now, we calculate the temporal extent,

print(
    "From",
    ddf["datetime"].min().compute().strftime("%b %d, %Y"),
    "to",
    ddf["datetime"].max().compute().strftime("%b %d, %Y"),
)
From Jun 28, 2022 to Nov 01, 2023

And visualize the mobility data panel’s spatial density.


Fig. 2 Visualization of the mobility data panel’s spatial distribution. The panel comprises approximately 373 million points. Source: Veraset Movement.#

Methodology#

The methodology presented consists of generating a series of crisis-relevant metrics, including the baseline (sample) population density, percent change, and z-score, based on the number of devices in an area at a given time. The device count is determined for each tile and for each time period, as defined by the data standards and the spatial and temporal aggregations below. Similar approaches have been adopted elsewhere, for example in [Maas, 2019]. The metrics may reveal movement trends in the sampled population that indicate more or less activity.

Data Standards#

Population Sample#

The sampled population is composed of GPS-enabled devices drawn from the longitudinal mobility data. It is important to emphasize that the sampled population is obtained via convenience sampling and that the mobility data panel represents only a subset of the total population in an area at a time, specifically only users who turned on location tracking on their mobile device. Thus, the derived metrics do not represent the total population density.

Spatial Aggregation#

The indicators are aggregated spatially on H3 resolution 6 tiles, equivalent to an area of approximately \(36\ \mathrm{km}^2\) on average, as illustrated below.
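As a quick sanity check, the average cell area for a given resolution can be queried from the h3 package (function name per the v4 API; v3 exposed the same figure as h3.hex_area):

import h3

# Average area of a resolution-6 cell, in square kilometers (~36.1 km^2)
print(h3.average_hexagon_area(6, unit="km^2"))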


Fig. 3 Illustration of H3 (resolution 6) tiles near Gaziantep, Türkiye. Gaziantep is among the areas most affected by the 2023 Türkiye–Syria Earthquake; the 2,200-year-old Gaziantep Castle was destroyed during the seismic episodes.#

Temporal Aggregation#

The indicators are aggregated daily on the localized date in the Europe/Istanbul (UTC+3) timezone.
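For instance, assuming the raw timestamps are naive UTC, localizing them before extracting the aggregation date might look like this sketch:

# Convert UTC timestamps to Europe/Istanbul local time, then take the local date
local = ddf["datetime"].dt.tz_localize("UTC").dt.tz_convert("Europe/Istanbul")
ddf = ddf.assign(date=local.dt.date)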

Implementation#

Calculate ACTIVITY#

In this step, we compute ACTIVITY as a density metric. Specifically, we tally the total number of devices detected within each designated area of interest, aggregated on a daily basis. It’s important to highlight that this calculation is based on a spatial join approach, which determines whether a device has been detected within an area of interest at least once. This method, while straightforward, represents a simplified approach compared to more advanced techniques such as estimating stay locations and visits.

ACTIVITY = (
    # Normalize timestamps to calendar dates, then count distinct devices per tile and date
    ddf.assign(date=lambda x: dd.to_datetime(x["datetime"].dt.date))
    .groupby(["hex_id", "date"])["uid"]
    .nunique()
    .to_frame("count")
    .reset_index()
    .compute()
)
2024-03-20 14:00:00,541 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 10.51 GiB -- Worker memory limit: 16.00 GiB
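The warning above hints at memory pressure during the shuffle. A hypothetical mitigation, should the full groupby exhaust worker memory, is to aggregate month by month (using the panel's month column) and concatenate the partial results; since each date belongs to a single month, the split is exact:

import pandas as pd

parts = []
for month in sorted(ddf["month"].unique().compute()):
    part = (
        ddf[ddf["month"] == month]
        .assign(date=lambda x: dd.to_datetime(x["datetime"].dt.date))
        .groupby(["hex_id", "date"])["uid"]
        .nunique()
        .to_frame("count")
        .reset_index()
        .compute()
    )
    parts.append(part)

ACTIVITY = pd.concat(parts, ignore_index=True)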

Additionally, we create a weekday column that will come in handy later on when standardizing.

ACTIVITY["weekday"] = ACTIVITY["date"].dt.weekday

Calculate BASELINE#

In this step, we choose the period spanning July 1, 2022 to December 31, 2022 as the baseline. The baseline is calculated for each tile and for each time period, according to the spatial and temporal aggregations.

BASELINE = ACTIVITY[ACTIVITY["date"].between("2022-07-01", "2022-12-31")]

In fact, the result is 7 different baselines for each tile. We calculate the mean and standard deviation of the device density for each tile and for each day of the week (Mon–Sun).

MEAN = BASELINE.groupby(["hex_id", "weekday"]).agg({"count": ["mean", "std"]})

Taking a sneak peek,

MEAN[MEAN.index.get_level_values("hex_id").isin(["862da898fffffff"])]
count.mean count.std
hex_id weekday
862da898fffffff 0 8867.653846 9441.786543
1 8641.500000 9341.744035
2 8100.192308 8794.041446
3 8858.307692 8130.096180
4 10231.888889 10199.128712
5 10072.407407 10230.396328
6 9946.384615 9898.669483

Calculate Z-Score and Percent Change#

A z-score serves as a statistical metric indicating the deviation of a specific data point from the mean (average) of a given dataset, expressed in terms of standard deviations. It is particularly valuable for standardizing and facilitating meaningful comparisons across various datasets. By evaluating the z-scores, one can gauge the extent to which a dataset diverges from its mean, while accounting for variance. Conversely, a percent change offers a simpler interpretation but lacks the detailed information provided by z-scores.
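Concretely, for an observed device count \(x\) in a tile, with baseline mean \(\mu\) and baseline standard deviation \(\sigma\) for that tile,

\[ z = \frac{x - \mu}{\sigma}, \qquad \text{percent change} = 100 \times \left(\frac{x}{\mu} - 1\right). \]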

Creating a StandardScaler for each hex_id,

from sklearn.preprocessing import StandardScaler

scalers = {}

# Fit one scaler per tile on that tile's baseline device counts
for hex_id in BASELINE["hex_id"].unique():
    scaler = StandardScaler()
    scaler.fit(BASELINE[BASELINE["hex_id"] == hex_id][["count"]])

    scalers[hex_id] = scaler

Joining with the area of interest (AOI),

ACTIVITY = ACTIVITY.merge(AOI, how="left", on="hex_id").drop(["geometry"], axis=1)

Finally, we flatten the aggregation’s MultiIndex columns (so that ("count", "mean") becomes count.mean, the name used below) and merge with the (mean) baseline,

MEAN.columns = [".".join(col) for col in MEAN.columns]

ACTIVITY = pd.merge(ACTIVITY, MEAN, on=["hex_id", "weekday"], how="left")

Calculating the z_score for each tile,

for hex_id, scaler in scalers.items():
    try:
        predicate = ACTIVITY["hex_id"] == hex_id
        score = scaler.transform(ACTIVITY[predicate][["count"]])
        ACTIVITY.loc[predicate, "z_score"] = score.ravel()  # flatten the (n, 1) array
    except Exception:
        # Skip tiles for which the scaler cannot be applied (e.g., no matching rows)
        pass
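Equivalently, here is a vectorized sketch that avoids the per-tile loop. Note that pandas’ std defaults to ddof=1 whereas StandardScaler uses ddof=0, so the values differ slightly; this illustrates the computation rather than replacing it.

# Per-tile baseline statistics, matching what each StandardScaler was fit on
stats = BASELINE.groupby("hex_id")["count"].agg(["mean", "std"])

# Broadcast the statistics onto ACTIVITY and standardize
joined = ACTIVITY.join(stats, on="hex_id")
ACTIVITY["z_score_alt"] = (joined["count"] - joined["mean"]) / joined["std"]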

Additionally, we calculate the percent change. While the z-score is more robust to outliers and more numerically stable, the percent change can be used when interpretability is most important. Thus, preparing the columns,

ACTIVITY["n_baseline"] = ACTIVITY["count.mean"]
ACTIVITY["n_difference"] = ACTIVITY["count"] - ACTIVITY["n_baseline"]
ACTIVITY["percent_change"] = 100 * (ACTIVITY["count"] / (ACTIVITY["n_baseline"]) - 1)

Taking a sneak peek,

hex_id date count n_baseline n_difference percent_change z_score ADM0_PCODE ADM1_PCODE ADM2_PCODE
733408 862db3bafffffff 2023-10-31 1 13.923077 -12.923077 -92.817680 -3.693753 SY SY12 SY1200
733407 862db3bafffffff 2023-10-30 1 13.846154 -12.846154 -92.777778 -3.693753 SY SY12 SY1200
733406 862db3bafffffff 2023-10-29 1 11.884615 -10.884615 -91.585761 -3.693753 SY SY12 SY1200
733405 862db3bafffffff 2023-10-26 1 14.440000 -13.440000 -93.074792 -3.693753 SY SY12 SY1200
733404 862db3bafffffff 2023-10-25 1 11.961538 -10.961538 -91.639871 -3.693753 SY SY12 SY1200
... ... ... ... ... ... ... ... ... ... ...
29017 862c14807ffffff 2022-07-04 19 7.750000 11.250000 145.161290 2.617558 SY SY08 SY0803
29016 862c14807ffffff 2022-07-03 11 8.272727 2.727273 32.967033 0.908132 SY SY08 SY0803
23834 862c14807ffffff 2022-07-01 11 9.500000 1.500000 15.789474 0.908132 SY SY08 SY0803
23833 862c14807ffffff 2022-06-30 13 6.166667 6.833333 110.810811 1.335489 SY SY08 SY0803
23832 862c14807ffffff 2022-06-29 12 3.428571 8.571429 250.000000 1.121811 SY SY08 SY0803

223404 rows × 10 columns

Findings#

Less movement typically means less economic activity. A potential use of the movement “activity” indicators is to track their evolution over time and their correlation with other features. We present the results (i.e., percent_change and z_score) both by governorate and for selected areas.

Percent Change in Activity#

Percent Change in Activity by Governorate#

In this section, we present visualizations of the aggregated percent_change for each governorate.
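The plotting code below expects a wide-format frame, data, with a datetime index and one column per governorate (ADM1_PCODE). Here is a minimal sketch of how it might be built, mirroring the z-score aggregation shown later in this section (FREQ is a resampling frequency defined elsewhere, e.g. "W" for weekly):

import pandas as pd

data = ACTIVITY.groupby(["date", "ADM1_PCODE"])["percent_change"].mean().to_frame()
data = data.pivot_table(values=["percent_change"], index=["date"], columns=["ADM1_PCODE"])
data.columns = [x[1] for x in data.columns]

# Smooth by averaging within each resampling window
data = data.groupby(pd.Grouper(freq=FREQ)).mean()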

from datetime import datetime

from bokeh.models import HoverTool, Legend, Range1d, Span, Title
from bokeh.plotting import figure

p = figure(
    title="Activity Trends: Percent Change by Governorate",
    width=800,
    height=700,
    x_axis_label="Date",
    x_axis_type="datetime",
    y_axis_label="Percent change (based on device density)",
    tools="pan,wheel_zoom,box_zoom,reset,save,box_select",
)
p.y_range = Range1d(-150, 500, bounds=(-150, None))
p.add_layout(
    Title(
        text="",
        text_font_size="12pt",
        text_font_style="italic",
    ),
    "above",
)
p.add_layout(
    Title(
        text="Percent change in device density for each time window and each first-leveml administrative division",
        text_font_size="12pt",
        text_font_style="italic",
    ),
    "above",
)
p.add_layout(
    Title(
        text=f"Source: Veraset Movement. Creation date: {datetime.today().strftime('%d %B %Y')}. Feedback: datalab@worldbank.org.",
        text_font_size="10pt",
        text_font_style="italic",
    ),
    "below",
)
p.add_layout(Legend(), "right")
p.renderers.extend(
    [
        Span(
            location=datetime(2023, 2, 6),
            dimension="height",
            line_color="grey",
            line_width=2,
            line_dash=(4, 4),
        ),
    ]
)
p.add_tools(
    HoverTool(
        tooltips="Date: @x{%F}, Percent Change: @y{00.0}%",
        formatters={"@x": "datetime"},
    )
)
renderers = []
# NAMES (defined elsewhere) presumably maps governorate codes to display names;
# columns that fail (e.g., with no mapping) are skipped by the except clause
for column, color in zip(data.columns, COLORS):
    try:
        r = p.line(
            data.index,
            data[column],
            legend_label=NAMES.get(column),
            line_color=color,
            line_width=2,
        )
        r.visible = False
        renderers.append(r)
    except Exception:
        pass

renderers[0].visible = True

p.legend.location = "bottom_left"
p.legend.click_policy = "hide"
p.title.text_font_size = "16pt"
p.sizing_mode = "scale_both"

Percent Change in Activity for Specific Areas#

In this section, we present visualizations of the percent_change for specific areas, such as Aleppo, Syria, among others.

AREAS = ["Aleppo, SY", "Idlib, SY", "Sahinbey, TR", "Sehitkamil, TR"]
Aleppo, SY Idlib, SY Sahinbey, TR Sehitkamil, TR
date
2022-06-28 -85.061240 -88.611550 -10.942621 5.185608
2022-06-29 -70.404890 29.500632 107.927575 181.788463
2022-06-30 -66.838456 -10.724082 138.633859 164.734823
2022-07-01 -44.266469 100.041040 93.918238 171.873760
2022-07-02 -6.688375 -13.880445 85.930355 128.654067
... ... ... ... ...
2023-10-28 -91.808989 -87.043350 -73.099934 18.923707
2023-10-29 -92.734049 -94.128882 -87.115446 102.765045
2023-10-30 -93.881265 -90.154318 -82.134036 45.738152
2023-10-31 -93.883027 -95.050423 -86.393535 -28.208391
2023-11-01 -97.823924 -98.849558 -98.516320 -98.379058

480 rows × 4 columns
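The table above is presumably produced by mapping tiles to named areas and averaging the percent change; here is a sketch under the assumption that the panel carries a second-level administrative name column (adm2_name is a hypothetical label):

# Hypothetical sketch: mean percent_change per named area and date
SUBSET = ACTIVITY[ACTIVITY["adm2_name"].isin(AREAS)]  # "adm2_name" is assumed
data = SUBSET.pivot_table(
    values="percent_change", index="date", columns="adm2_name", aggfunc="mean"
)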

And we visualize the time series,

Z-Score#

Z-Score by Governorate#

In this section, we present visualizations of the aggregated z_score for each governorate.


Fig. 4 The map above shows the z-score for each H3 tile and each time period. The z-score shows the number of standard deviations that the data point diverges from the mean; in other words, whether the change in population for that area is statistically different from the baseline period. Click to see it on Foursquare Studio#

Alternatively, we present a time series plot of the aggregated z_score for each governorate.

# Mean z-score per date and governorate, reshaped to one column per governorate
data = ACTIVITY.groupby(["date", "ADM1_PCODE"])["z_score"].mean().to_frame()
data = data.pivot_table(values=["z_score"], index=["date"], columns=["ADM1_PCODE"])
data.columns = [x[1] for x in data.columns]

# FREQ is a resampling frequency defined elsewhere (e.g., "W" for weekly)
data = data.groupby(pd.Grouper(freq=FREQ)).mean()

Limitations#

The methodology presented is an exploratory analysis pilot aiming to shed light on the economic situation in Syria and Türkiye by leveraging alternative data, especially in the absence of traditional data and methods. Mobility data, like any other type of data, comes with limitations and underlying assumptions that should be considered when interpreting and using it.

Caution

Here are some common limitations and assumptions associated with mobility data:

Limitations:

  • Sampling Bias: Mobility data is primarily collected through convenience sampling and lacks the controlled methodology of randomized trials.

  • Selection Bias: Users who opt to share their mobility data may not be representative of the entire population, potentially introducing selection bias.

  • Privacy Concerns: The collection of mobility data may raise privacy issues, as it can sometimes be linked to individuals, potentially violating their privacy.

  • Data Quality: Data quality can vary, and errors, inaccuracies, or missing data points may be present, which can affect the reliability of analyses.

  • Temporal and Spatial Resolution: Mobility data may not capture all movements or may lack fine-grained temporal or spatial resolution, limiting its utility for some applications.

  • Lack of Contextual Information: Mobility data primarily captures movement patterns and geolocation information. It may lack other crucial contextual information, such as transactional data, business types, or specific economic activities, which are essential for accurate estimation of economic activity.

  • Private Intent Data: The methodology relies on private intent data. In other words, the input data, i.e., the mobility data, was not produced or collected with the primary objective of analyzing the population of interest or addressing the research question, but was repurposed for the public good. The benefits and caveats of using private intent data have been discussed extensively in the World Development Report 2021 [World Bank, 2021].

Assumptions:

  • Homogeneity: Mobility data often assumes that the mobility patterns of individuals or groups are relatively consistent over time and space, which may not always be the case.

  • Consistency in Data Sources: Mobility data may assume consistency in data sources and methodologies across different regions or datasets, which may not always hold true.

  • User Behavior: Assumptions about user behavior, such as the purpose of travel or preferred routes, are often made when interpreting mobility data.

  • Implicit Data Interpretation: Interpretation of mobility data often assumes that certain behaviors or patterns observed in the data have a specific meaning, which may not always be accurate without additional context.

  • App Usage as a Proxy: In some cases, the use of specific apps or devices may be used as a proxy for mobility data, assuming that it accurately represents individual movements.

It’s important to be aware of these limitations and assumptions when working with mobility data and to consider their potential impact on the conclusions drawn from the data. Additionally, researchers and analysts should explore ways to address these limitations and validate assumptions when conducting mobility data analyses.

See also

For further discussion on limitations and assumptions, please check out the Development Data Partnership Documentation on Mobility Data.

References#

Maa19

Paige Maas. Facebook disaster maps: aggregate insights for crisis response & recovery. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, 3173. New York, NY, USA, 2019. Association for Computing Machinery. doi:10.1145/3292500.3340412.

WorldBank21

World Bank. World Development Report 2021 : Data for Better Lives. World Bank, 2021. License: CC BY 3.0 IGO. URL: http://hdl.handle.net/10986/35218.