Technical Guide

How pre-mover data works in Canada

A technical guide for enterprise data teams. Methodology, freshness, delivery shape, and privacy posture — without the marketing gloss.

Published April 2026 · 12 min read

Pre-mover data is a weekly record of households that have started the move process, flagged at the listing event rather than after the move is complete. The whole discipline is built around one simple fact: by the time a household physically moves, most of the commercial decisions around that move have already been made.

The Canadian pre-mover market is small and technical. A handful of serious providers, a larger handful of resellers, and very little neutral writing about how the signal is actually produced. This page is the technical explainer we wish existed when enterprise data teams started asking us what was under the hood. It covers methodology, freshness, pipeline architecture, delivery shape, and the privacy posture — the five things that actually matter when a bank, a telecom, or an insurer is deciding whether to integrate this data into a production workflow.

It's written from the side of the table that operates a pipeline. But the framing works on any vendor, including us. If anything here doesn't hold up, that's useful to know before the procurement review.

Definition

What pre-mover data is

Pre-mover data is a record of a household that has started the observable steps of moving, flagged at the earliest public signal. In Canada, that signal is almost always the property listing event — the moment a home enters the public for-sale or for-rent market.

The defining feature of a pre-mover record is its timing. A pre-mover household is one that has signalled the intent to move but has not yet moved. The window between the signal and the physical move is typically four to twelve weeks. That window is the commercial asset. Everything else (address validation, matchkey logic, delivery format) is plumbing that makes the window usable.

A pre-mover dataset is not a list of houses for sale. It's a weekly, deduplicated, address-validated record of households transitioning through the move lifecycle — listed, re-listed, price-changed, withdrawn, sold — with enough structure for a CRM or data warehouse to join on the household rather than the property.

The commercial anchor: The value of the data is in the four-to-twelve-week window between the listing event and the move. Inside that window, retention and acquisition teams have time to act. Outside it, the customer has already been lost to the competitor who moved earlier.

Category distinction

Pre-mover data is not new-mover data

The most common source of confusion in this category is the overlap with new-mover data. The two are sold side by side, often by the same vendors, and the difference looks cosmetic until a buyer tries to run a retention campaign on the wrong one.

Pre-mover data flags the household before the move, from the listing event. New-mover data flags the household after the move, from post-move records like change-of-address filings, utility hookups, or mail-forwarding registrations.

The gap between the two is roughly eight to twenty weeks, and in most workflows that gap is the entire campaign window.

| | Pre-mover data | New-mover data |
| --- | --- | --- |
| Signal source | Public listing event | Change-of-address, utility hookups, post-move filings |
| Timing | 4–12 weeks before the move | 4–8 weeks after the move |
| Commercial use | Retention intervention, pre-move acquisition, quote capture | Post-move re-acquisition, welcome campaigns, service activation |
| Competitive window | Open — incumbent has not yet been displaced | Closed — switching decision already made |
| Best fit | Banks, telecoms, insurers, utilities defending existing customers | Home services, welcome-kit providers, new-address outreach |

A retention team using new-mover data is doing post-mortem outreach. A retention team using pre-mover data is doing pre-emptive outreach. The two workflows look similar from the outside and diverge completely on results.

Methodology

Three ways the signal gets captured

Not all pre-mover datasets are captured the same way. There are three methodologies in active use in Canada, and they produce data with different shapes, different latencies, and different failure modes.

Listing-based capture

The dominant methodology. The household is flagged when its property enters the public listing market — for sale, for lease, or in some cases withdrawn and re-listed. The capture reads listing feeds directly, validates each address, and maintains lifecycle state (active, price-changed, sold, expired) as the listing progresses.

Strengths: earliest signal, highest specificity, strongest provenance. The household is flagged at the first public indication of intent and remains tracked through the full sale lifecycle.

Weaknesses: requires continuous capture infrastructure. Missing a week of capture creates a permanent gap in the historical record. Vendors who don't operate the capture themselves can't fill those gaps later.

Mobility-based capture

Inferred movement from mobile-device location data. The household is flagged when aggregated device patterns indicate a residential move — typically a sustained change in overnight device location from one dissemination area to another.

Strengths: captures moves that never reach the listing market (rentals, family moves, corporate relocations). Useful for demographic-level migration analysis.

Weaknesses: signal is laggy (the move has usually already started) and the spatial resolution is coarser. Not suitable for household-level retention or acquisition workflows. Useful for planning and territory analysis, less useful for direct outreach.

Post-move registration capture

The traditional "new-mover" methodology. The household is flagged from post-move records: change-of-address filings, credit-bureau updates, utility hookups, voter registration updates. Technically this is new-mover data, but some vendors bundle it into their pre-mover product line.

Strengths: easy to source, legally well-understood, familiar to direct-mail infrastructure.

Weaknesses: arrives weeks after the move. The commercial window is already closed by the time the data is delivered.

The question to ask a vendor: "Which of these three is your primary methodology, and which is bundled?" Most serious Canadian pre-mover providers lead with listing-based capture. Vendors who lead with mobility-based or post-move data are delivering a different product and should be evaluated as such.

Freshness

How fast the signal arrives

Refresh cadence is the single number that most directly determines the commercial value of a pre-mover dataset. Two datasets can be drawn from the same listing source and have very different usefulness depending on how often the pipeline refreshes.

| Cadence | Median signal age at delivery | Commercial fit |
| --- | --- | --- |
| Monthly | ~15 days old | Territory planning, quarterly campaign targeting, analytical backfill |
| Weekly | ~3–4 days old | Retention intervention, pre-move acquisition, insurance quoting, mortgage defence |
| Daily | < 24 hours | Real-time trigger systems, agent-driven workflows, AI-native retention flows |

Weekly refresh is the current standard for serious enterprise use. Daily refresh adds operational cost without materially changing outcomes for most retention and acquisition workflows, because most campaign infrastructure is not itself running at daily cadence. Monthly refresh loses the window for any workflow that depends on intervening before the move.

A vendor who reports "regularly updated" or "rolling cadence" without naming a specific interval is usually delivering monthly data. The specific cadence is not a hard question to answer, and vagueness about it is almost always informative.
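
Assuming listing events arrive uniformly across the refresh interval and a fixed one-day processing lag (both simplifying assumptions, not vendor specifics), the median-age column above can be sanity-checked in a few lines:

```python
# Median signal age at delivery, assuming uniform event arrival across
# the refresh interval plus a fixed processing lag (1 day, hypothetical).
def median_signal_age(refresh_interval_days: float,
                      processing_lag_days: float = 1.0) -> float:
    # With uniform arrival, the median event is half an interval old
    # at the moment the batch is cut, plus the lag before delivery.
    return refresh_interval_days / 2 + processing_lag_days

print(median_signal_age(30))  # monthly -> 16.0 days (~15 in the table)
print(median_signal_age(7))   # weekly  -> 4.5 days (~3-4 in the table)
print(median_signal_age(1))   # daily   -> 1.5 days
```

The numbers land close to the table's figures, which is the point: cadence, not sourcing, drives signal age.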

Pipeline architecture

What a production pipeline actually does

Enterprise buyers sometimes assume pre-mover data is pulled from a feed and packaged. It isn't. A production pipeline does five things, every week of the year, without gaps. If any one of them fails, the output is compromised.

  • Source capture. Weekly ingestion of the full Canadian listing market across all ten provinces. Handling format changes, encoding issues, column additions, and the occasional bad file from the source system.
  • Address validation. Every address normalized, standardized, and flagged as valid, corrected, uncorrectable, or failed. This is the step that lets downstream joins to the client's CRM work reliably.
  • Matchkey construction. A stable join key derived from address and postal code, designed to survive the small variations that make raw address fields useless for joining. The matchkey is what lets a pre-mover record survive re-listing, re-pricing, and withdrawal without being double-counted.
  • Lifecycle state tracking. New, re-listed, price-changed, withdrawn, sold, expired. A weekly pipeline that only tracks "currently active" is missing most of the intelligence — the re-listed and price-changed events are often the strongest signals of imminent move.
  • Deduplication and historical consistency. Cross-week deduplication to prevent the same household appearing as multiple records, and back-consistency checks against the historical record to catch pipeline regressions before they ship.
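
The validation and matchkey steps above can be sketched in a few lines. The normalization rules below are illustrative assumptions, not any vendor's actual logic; the point is that a stable key derived from a normalized address plus postal code lets trivially different spellings of the same household converge:

```python
import hashlib
import re

def normalize_address(raw: str) -> str:
    """Uppercase, collapse whitespace, and expand a few common
    abbreviations so trivially different spellings converge.
    (The abbreviation list here is a small illustrative sample.)"""
    s = re.sub(r"\s+", " ", raw.strip().upper())
    replacements = {" ST ": " STREET ", " AVE ": " AVENUE ", " RD ": " ROAD "}
    s = f" {s} "  # pad so end-of-string abbreviations also match
    for short, full in replacements.items():
        s = s.replace(short, full)
    return s.strip()

def matchkey(address: str, postal_code: str) -> str:
    """Stable join key from normalized address + postal code, so the
    same household survives re-listing without being double-counted."""
    pc = postal_code.replace(" ", "").upper()
    basis = f"{normalize_address(address)}|{pc}"
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()[:16]

# Two raw spellings of the same household collapse to one key.
a = matchkey("42 Maple St", "M5V 2T6")
b = matchkey("42  maple street", "m5v2t6")
print(a == b)  # True
```

Hashing the normalized form (rather than joining on raw address strings) is what makes the key usable across weeks and across the client's own CRM.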

None of this is exotic engineering. All of it requires continuous operation to produce a dataset that doesn't decay. A pipeline missing a month of capture can never produce a clean historical view of that month — the data simply is not recoverable after the fact.
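
The lifecycle-tracking step lends itself to a small state machine. The transition table below is an illustrative assumption, not a published specification; its value in a real pipeline is that an illegal transition (a sold listing re-pricing, say) surfaces a regression before it ships:

```python
# Hypothetical lifecycle tracker. State names follow the ones in the text
# (new, re-listed, price-changed, withdrawn, sold, expired); the allowed
# transitions are an illustrative assumption, not a vendor's actual rules.
ALLOWED = {
    "new":           {"price-changed", "withdrawn", "sold", "expired"},
    "price-changed": {"price-changed", "withdrawn", "sold", "expired"},
    "withdrawn":     {"re-listed"},
    "re-listed":     {"price-changed", "withdrawn", "sold", "expired"},
    "sold":          set(),  # terminal state
    "expired":       {"re-listed"},
}

def advance(state: str, event: str) -> str:
    """Apply a weekly observed event; reject transitions that would
    indicate a pipeline regression (e.g. a sold listing re-pricing)."""
    if event not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {event}")
    return event

state = "new"
for event in ["price-changed", "withdrawn", "re-listed", "sold"]:
    state = advance(state, event)
print(state)  # sold
```

The re-listed and price-changed paths are worth the bookkeeping: as noted above, they are often the strongest signals of an imminent move.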

This is the reason pipeline continuity is one of the hardest questions for a new entrant to answer. A vendor launched in the last two years cannot show twelve years of clean weekly capture because they haven't run the pipeline for twelve years. The record is missing and no amount of back-purchasing of snapshots fills the gap cleanly.

The architectural test: Ask the vendor how their pipeline handles a specific edge case — COVID-era format changes, provincial postal code variations, a file format change from the source system in 2021. A vendor who operates the pipeline will have a story. A vendor who resells a snapshot will have a generic answer.

Delivery shape

How the data reaches the customer

The same underlying dataset can be delivered three ways. The shape matters because it determines what the buyer's team has to do to activate it.

Flat-file drop

A CSV or Parquet file delivered weekly to an SFTP location, a cloud bucket, or a direct email. The oldest shape in the category. Easy to understand, fits existing direct-marketing infrastructure, slow to activate in modern data stacks. Fine for campaign-driven teams, friction for data teams building production pipelines.

Snowflake Marketplace share

A live data share inside the buyer's own Snowflake environment. No ETL, no file drops, no schema drift. The buyer's SQL can treat the data as a native table and join it to their first-party CRM the same way they join any other table. This is the shape most new enterprise data contracts default to in 2026 because it removes a whole class of integration problems.

MCP endpoint

An AI-native access point for Claude, ChatGPT, and agent stacks. The data is available as tool calls inside an AI agent's workflow, which makes it usable for triggered retention flows, automated next-best-offer systems, and conversational analytics. New shape, specific to the agentic AI stack, rapidly becoming a table-stakes capability for buyers who are building AI-driven customer operations.

All three delivery shapes serve the same underlying data. The choice is operational, not substantive. Flat file for legacy martech, Snowflake share for modern data teams, MCP endpoint for AI-native retention and acquisition systems.

Privacy posture

Property-level data and why it matters

Canadian pre-mover data is regulated under PIPEDA federally and, in Quebec, under Law 25. The specific scrutiny a dataset receives depends heavily on what shape it arrives in.

Property-level vs person-level

Property-level data describes the property and the lifecycle event — address, postal code, listing date, sold date, price. It does not include names, phone numbers, emails, or demographic enrichment. The buyer joins the data to their first-party CRM inside their own consent framework.

Person-level data adds enriched contact records to the property record — names, phones, emails, demographic segments — usually sourced from consumer-contact databases and joined to the listing record.

Both shapes are available in Canada. The regulatory treatment is different.

| | Property-level | Person-level |
| --- | --- | --- |
| PIPEDA posture | Source signal is public; customer match happens inside the buyer's existing consent framework | Enriched personal information; requires procurement to handle consent and subprocessor disclosure explicitly |
| Quebec Law 25 | Generally straightforward; no cross-border personal-information transfer at the vendor stage | Requires explicit transfer-impact assessment and vendor privacy officer disclosure |
| Procurement friction | Lower; the buyer's existing PIPEDA framework usually covers it | Higher; regulated buyers often require additional legal and privacy review |
| Activation speed | Slower out of the gate (requires CRM join) | Faster for direct-outbound teams with existing contact infrastructure |
| Best fit | Banks, telecoms, insurers, government, regulated buyers with strong CRM | Direct-marketing shops with existing consumer-contact infrastructure |

The right shape depends on the buyer's procurement tolerance and the maturity of their first-party CRM. A regulated enterprise with a mature CRM usually clears property-level data more cleanly. A small direct-mail shop without a CRM usually prefers person-level because the activation is simpler.

Enterprise decision-making

What the data actually drives

Pre-mover data shows up in four categories of enterprise decision-making. The mechanics differ by industry, but the underlying pattern is the same: surface the household during the window, and route it to the team that has time to act.

Retention intervention

A bank, telecom, insurer, or utility matches the weekly pre-mover file against their existing customer base on address. Matched customers are flagged as at-risk and routed to a proactive-retention workflow — a transfer offer, a renewal call, a price hold. The intervention happens before the customer receives a competitor's offer tied to the new address.
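
A minimal sketch of that weekly match, in plain Python with hypothetical field names and sample records (a production system would run this as a warehouse join inside the buyer's own environment):

```python
# Weekly at-risk match: join the pre-mover file to the customer base on a
# shared address matchkey. Field names and sample records are illustrative.
pre_mover_file = [
    {"matchkey": "k1", "listed_date": "2026-04-06", "event": "new"},
    {"matchkey": "k2", "listed_date": "2026-04-06", "event": "re-listed"},
]
customer_base = {
    "k1": {"customer_id": "C-1001", "product": "mortgage"},
    "k9": {"customer_id": "C-2044", "product": "internet"},
}

def flag_at_risk(pre_movers, customers):
    """Matched customers get routed to proactive retention; unmatched
    pre-mover records belong to someone else's book and are ignored."""
    at_risk = []
    for rec in pre_movers:
        cust = customers.get(rec["matchkey"])
        if cust:
            at_risk.append({**cust,
                            "trigger": rec["event"],
                            "listed_date": rec["listed_date"]})
    return at_risk

# One at-risk customer (C-1001) is flagged before the move completes.
print(flag_at_risk(pre_mover_file, customer_base))
```

The join key, not the raw address, carries the match — which is why the matchkey step in the pipeline section matters downstream.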

Pre-move acquisition

An acquirer (mortgage lender, internet provider, insurance broker) matches the weekly file against their prospect universe and prioritizes outbound to households inside the active-listing window. The acquisition happens before the incumbent has a chance to send a retention offer.
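
The prioritization logic can be sketched the same way. The 12-week cutoff mirrors the window described earlier; the dates, records, and field names are illustrative:

```python
from datetime import date

# Hypothetical prioritization: keep prospects still inside the typical
# 4-12 week pre-move window and rank the freshest listings first, since
# they have the most runway left before the move.
TODAY = date(2026, 4, 20)

prospects = [
    {"matchkey": "k1", "listed": date(2026, 4, 13)},  # 7 days old
    {"matchkey": "k2", "listed": date(2026, 1, 5)},   # ~15 weeks: stale
    {"matchkey": "k3", "listed": date(2026, 3, 30)},  # 21 days old
]

def prioritize(records, today=TODAY, max_age_days=84):
    """Drop listings past the 12-week window, then sort newest first."""
    live = [r for r in records if (today - r["listed"]).days <= max_age_days]
    return sorted(live, key=lambda r: r["listed"], reverse=True)

print([r["matchkey"] for r in prioritize(prospects)])  # ['k1', 'k3']
```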

Underwriting and risk

Insurers use the address-change event for re-rating workflows. Mortgage lenders use it for portfolio-health monitoring and discharge prediction. The signal is joined to the customer record to refine models that are otherwise running on stale address data.

Territory and operational planning

Residential services teams (home services, telecom infrastructure, retail expansion) use aggregated pre-mover activity to monitor where demand is shifting week over week. The use is analytical rather than operational, but the frequency of the signal lets planning cycles tighten from quarterly to monthly.

Frequently asked questions

What is pre-mover data?
Pre-mover data is a weekly record of households that have started the move process, flagged at the listing event rather than after the move is complete. In Canada, the signal typically surfaces the household four to twelve weeks before the physical move, giving retention and acquisition teams a window to intervene before a competitor does.
How is pre-mover data captured in Canada?
Primarily through listing-based capture — reading public property listings directly, validating addresses, and tracking lifecycle state week over week. Secondary methodologies include mobility-based capture (inferred from aggregated mobile-device patterns) and post-move registration capture (change-of-address, utility hookups). Listing-based capture produces the earliest and most specific signal.
How is pre-mover data different from new-mover data?
Pre-mover data flags the household before the move, from the listing event. New-mover data flags the household after the move, from change-of-address filings and similar post-move records. Pre-mover gives four to twelve weeks of runway before the move. New-mover arrives four to eight weeks after the move, which is usually after the commercial window has already closed.
How often should pre-mover data be refreshed?
Weekly is the current standard for serious enterprise use. Daily refresh adds operational cost without materially changing outcomes for most retention and acquisition workflows. Monthly refresh loses the four-to-twelve-week window that makes the data commercially valuable.
Is pre-mover data compliant with PIPEDA and Quebec Law 25?
It depends on the shape of the delivered data. Property-level and address-level data sourced from public listing events is generally straightforward under PIPEDA and Quebec Law 25, because the customer match happens inside the buyer's own consent framework. Person-level enriched records (names, phones, demographic profiles) carry more weight in privacy review and typically require additional transfer-impact assessment.
How does a bank or telecom use pre-mover data operationally?
The standard pattern is a weekly address-level join between the pre-mover file and the buyer's customer database. Matched households are flagged as at-risk (for retention) or as priority prospects (for acquisition) and routed to the relevant workflow. The join happens inside the buyer's environment, inside their existing consent framework, and the outreach happens within days of the listing event rather than weeks after the move.
What does a continuous pre-mover pipeline look like?
Five recurring operations, every week: source capture, address validation, matchkey construction, lifecycle state tracking, and deduplication. Pipelines that have run continuously for many years produce datasets that resist decay. Pipelines that were assembled from acquired snapshots produce datasets that look current but degrade against the live market.
What's the difference between a pre-mover pipeline operator and a pre-mover data reseller?
An operator runs the capture, validation, and lifecycle logic directly. A reseller buys output from an operator (or scrapes a snapshot) and repackages it. The two look similar from a buyer's perspective in the first month. They diverge by month six, because a reseller cannot evolve the dataset as the market changes, and by month twelve the reseller's data no longer matches live conditions.
See it in your environment

Read the technical guide. Now run it against real data.

Free 25,000-record sample in the region of your choosing. One to three business days. Designed to be validated against your own conversion data before any contract conversation.
