Data Sources & Methodology
Last updated: 2026-03-29
Written Submissions
Public Submissions
- Counts and stances are based on written submissions to the City for the March 23, 2026 public hearing.
- "In favour", "Opposed", or "Neither" are determined from the submitter's stated position in their submission.
Communities & Population
- Community boundaries and population figures are from the 2017 Calgary Civic Census and City of Calgary spatial data.
Equity Overlays
- Equity overlays are built from census and demographic datasets, and represent the characteristics of residents in each area — not the submitters themselves.
- Colour scales are chosen to be accessible to users with colour vision deficiencies.
How We Processed the Data
- Custom scripts extracted the text of each public submission PDF and converted it into a structured record.
- Submission addresses and content were matched against Calgary's open data to determine which community each submission came from (a sketch of this matching step follows this list).
- We then overlaid equity measures from census data on the map, allowing deeper analysis of how submissions relate to community demographics.
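A minimal sketch of the community-matching step, assuming submissions have already been geocoded to longitude/latitude and the community boundaries come from a local GeoJSON export of the City's open data. File and column names such as submissions.csv, submission_id, and name are illustrative, not the project's actual scripts:

```python
import geopandas as gpd
import pandas as pd

# Structured submission records produced by the PDF-extraction step,
# assumed to already carry geocoded longitude/latitude columns.
submissions = pd.read_csv("submissions.csv")

# Community boundary polygons exported from Calgary's open data portal.
communities = gpd.read_file("community_boundaries.geojson")

# Build a point geometry for each submission (WGS84 lon/lat assumed),
# then reproject to match the community layer.
points = gpd.GeoDataFrame(
    submissions,
    geometry=gpd.points_from_xy(submissions["longitude"], submissions["latitude"]),
    crs="EPSG:4326",
).to_crs(communities.crs)

# Point-in-polygon join: each submission inherits the community it falls inside.
matched = gpd.sjoin(
    points, communities[["name", "geometry"]], how="left", predicate="within"
)
print(matched[["submission_id", "name"]].head())
```

In this sketch, a submission whose address falls outside every community polygon comes back with an empty community name and would need manual review.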
Public Hearing Analysis
Overview
This analysis covers five days of public hearings on Calgary’s 2026 rezoning bylaw (March 23–27, 2026). Individual claims were extracted from speaker testimony and classified using an AI-assisted pipeline. We used AI to help group and count ideas from the hearings, but it can make mistakes, so treat this page as a guide, not a perfect record.
Evidence types
Each claim is categorized as Data-Referencing (cites specific numbers, statistics, reports, or measurable outcomes), Experience-Based (draws on personal or community experience), or Value-Based (expresses an opinion, value, or preference).
Evidence strength
Hard Fact claims cite quantifiable data. Lived Reality reflects personal experience. Value / Fear expresses beliefs, preferences, or concerns.
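To make the two labelings above concrete, here is an illustrative claim record. The field names, and the one-to-one correspondence between evidence types and strength labels, are assumptions made for clarity rather than the pipeline's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    DATA_REFERENCING = "Data-Referencing"   # cites numbers, statistics, reports
    EXPERIENCE_BASED = "Experience-Based"   # personal or community experience
    VALUE_BASED = "Value-Based"             # opinion, value, or preference

# Assumed one-to-one correspondence with the evidence strength labels.
STRENGTH_LABEL = {
    EvidenceType.DATA_REFERENCING: "Hard Fact",
    EvidenceType.EXPERIENCE_BASED: "Lived Reality",
    EvidenceType.VALUE_BASED: "Value / Fear",
}

@dataclass
class Claim:
    speaker: str
    hearing_date: str          # e.g. "2026-03-24"
    stance: str                # "In favour", "Opposed", or "Neither"
    text: str
    evidence_type: EvidenceType

    @property
    def strength_label(self) -> str:
        return STRENGTH_LABEL[self.evidence_type]
```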
Themes
Claims are grouped into narrative themes across issue categories. Each theme represents a coherent argument or concern raised by multiple speakers.
Limitations
AI classification may contain errors. Stance normalization reduces nuanced positions to three categories. Claim boundaries are approximate — a single speaker statement may be split into multiple claims or combined. Verification sources are as cited by speakers and have not been independently confirmed.
Claim Verification
Scope
Of the 4,339 claims made by 154 speakers across five days, 895 were classified as data-referencing — meaning they assert specific facts, data points, or verifiable statements. These were deduplicated into 274 distinct factual assertions, since many speakers made the same factual claim in different words.
Step 1: Claim deduplication
Data-referencing claims were grouped by theme and consolidated using AI-assisted analysis. Claims stating the same underlying fact in different words were merged into a single assertion. Each assertion preserves full traceability: the original claim texts, speaker names, hearing dates, and stances.
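A rough sketch of what that consolidation can look like in code, reusing the claim fields sketched earlier. Here `assertion_key_for` stands in for the AI-assisted judgment of which claims state the same underlying fact, and all field names are illustrative:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Assertion:
    canonical_text: str
    claim_texts: list = field(default_factory=list)   # original claim wording
    speakers: set = field(default_factory=set)
    hearing_dates: set = field(default_factory=set)
    stances: set = field(default_factory=set)

def consolidate(claims, assertion_key_for):
    """Merge claims stating the same underlying fact into single assertions,
    keeping full traceability back to the original claims."""
    groups = defaultdict(list)
    for claim in claims:
        groups[assertion_key_for(claim)].append(claim)

    assertions = []
    for canonical_text, members in groups.items():
        a = Assertion(canonical_text=canonical_text)
        for c in members:
            a.claim_texts.append(c.text)
            a.speakers.add(c.speaker)
            a.hearing_dates.add(c.hearing_date)
            a.stances.add(c.stance)
        assertions.append(a)
    return assertions
```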
Step 2: Sub-fact decomposition
Many assertions are composite claims containing multiple checkable points. Before verification, each assertion is decomposed into specific, independently verifiable sub-facts. Each sub-fact is then assessed as ‘evidence supports’, ‘evidence contradicts’, or ‘no evidence found’.
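An illustrative structure for a decomposed sub-fact, using the three assessment labels above; the field names are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class Assessment(Enum):
    SUPPORTS = "evidence supports"
    CONTRADICTS = "evidence contradicts"
    NO_EVIDENCE = "no evidence found"

@dataclass
class SubFact:
    text: str                                    # one independently checkable point
    assessment: Assessment = Assessment.NO_EVIDENCE
    sources: list = field(default_factory=list)  # citations gathered in Step 3
```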
Step 3: Evidence collection
For each sub-fact, we search for primary data sources — not articles about the topic, but the actual data: City of Calgary open data and budget documents, Statistics Canada census data, CMHC housing market data, Alberta government data portals, council meeting minutes, and university research with original data.
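As a rough sketch, each piece of collected evidence can be recorded against the sub-fact it speaks to, tagged with one of the primary source types listed above. The record shape and field names are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

# Primary-data source types named in the methodology.
PRIMARY_SOURCE_TYPES = (
    "City of Calgary open data / budget documents",
    "Statistics Canada census data",
    "CMHC housing market data",
    "Alberta government data portals",
    "Council meeting minutes",
    "University research with original data",
)

@dataclass
class Evidence:
    sub_fact: str      # the checkable point this evidence speaks to
    source_type: str   # one of PRIMARY_SOURCE_TYPES
    url: str
    summary: str       # what the primary data actually shows
```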
Step 4: Verdict assignment
Each assertion receives one of four verdicts: ‘Evidence Found: Supporting’ (public sources contain data consistent with the claim), ‘Evidence Found: Contradicting’ (public sources contain conflicting data), ‘Evidence Found: Mixed’ (some parts supported, others not), or ‘No Public Evidence Found’ (no qualifying sources found to confirm or deny).
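One possible way to roll sub-fact assessments up into these four verdicts is sketched below. The exact precedence rules (for example, how a mix of supported and unverifiable sub-facts is handled) are an illustrative reading of the methodology, not a documented rule:

```python
SUPPORTS, CONTRADICTS, NO_EVIDENCE = (
    "evidence supports", "evidence contradicts", "no evidence found"
)

def verdict(assessments: list[str]) -> str:
    """Roll per-sub-fact assessments up into a single verdict."""
    supported = SUPPORTS in assessments
    contradicted = CONTRADICTS in assessments
    unverified = NO_EVIDENCE in assessments

    if not supported and not contradicted:
        return "No Public Evidence Found"
    if supported and not contradicted and not unverified:
        return "Evidence Found: Supporting"
    if contradicted and not supported and not unverified:
        return "Evidence Found: Contradicting"
    return "Evidence Found: Mixed"
```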
Source strength score
Each verdict includes a source strength score (0–100) based on four factors:
- Source Count (up to 40 points): no sources 0, one source 20, two 30, three or more 40.
- Source Quality (up to 30 points): government or academic sources 30 each, established media 15, averaged across sources.
- Source Agreement (up to 20 points): multiple sources agree 20, single source 10, sources disagree 5.
- Verdict Clarity (up to 10 points): clear verdict 10, ambiguous 5, unverifiable 0.
A score of 100 means multiple government or academic sources all agree, producing a clear verdict. A score of 0 means no qualifying sources were found. The score reflects how well-sourced the assessment is, not how ‘true’ the claim is.
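As a worked sketch, the point breakdown above can be computed directly. The function signature, the representation of sources as category strings, and the clarity labels are assumptions made for illustration:

```python
def source_strength(source_categories: list[str], sources_agree: bool, clarity: str) -> int:
    """Score how well-sourced a verdict is (0-100), per the breakdown above."""
    n = len(source_categories)

    # Source Count: up to 40 points.
    count_points = {0: 0, 1: 20, 2: 30}.get(n, 40)

    # Source Quality: up to 30 points, averaged across sources.
    per_source = [30 if c in ("government", "academic") else 15 for c in source_categories]
    quality_points = round(sum(per_source) / n) if n else 0

    # Source Agreement: up to 20 points.
    if n == 0:
        agreement_points = 0
    elif n == 1:
        agreement_points = 10
    else:
        agreement_points = 20 if sources_agree else 5

    # Verdict Clarity: up to 10 points.
    clarity_points = {"clear": 10, "ambiguous": 5, "unverifiable": 0}[clarity]

    return count_points + quality_points + agreement_points + clarity_points

# Maximum case: three government sources that agree, with a clear verdict.
print(source_strength(["government", "government", "government"], True, "clear"))  # 100
```

The example call shows the maximum case, 40 + 30 + 20 + 10 = 100.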
Source standards
Accepted sources include government reports and open data (.gc.ca, .calgary.ca, .alberta.ca), peer-reviewed academic research, and factual news reporting from established outlets containing specific data points. Excluded sources include opinion pieces, editorials, think tank publications, blog posts, social media, and advocacy group materials.
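A minimal sketch of the acceptance rule, assuming each candidate source arrives already labelled with a type. The government domain suffixes come from the text above; the type labels and exclusion keywords are illustrative:

```python
ACCEPTED_GOV_SUFFIXES = (".gc.ca", ".calgary.ca", ".alberta.ca")
EXCLUDED_TYPES = {
    "opinion piece", "editorial", "think tank publication",
    "blog post", "social media", "advocacy material",
}
ACCEPTED_TYPES = {
    "government report", "open data", "peer-reviewed research",
    "established media with specific data points",
}

def is_accepted(domain: str, source_type: str) -> bool:
    """Apply the source standards: exclusions first, then accepted categories."""
    if source_type in EXCLUDED_TYPES:
        return False
    if domain.endswith(ACCEPTED_GOV_SUFFIXES):
        return True
    return source_type in ACCEPTED_TYPES
```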
Verification limitations
A verdict of ‘No Public Evidence Found’ does not mean a claim is false — many legitimate facts are not captured in publicly indexed data. A verdict of ‘Evidence Found: Supporting’ does not mean a claim is definitively true — sources may be incomplete or reflect institutional perspectives. Speaker-cited data has not been independently re-analyzed; verification checks public sources, not the speaker’s specific dataset.
Contact & Feedback
For questions or feedback, please visit pixeltree.ca/contact.
Disclaimer
We have made every effort to ensure the accuracy of the data and analysis presented here. However, no guarantee is made as to the completeness or correctness of the information. The creators of this site are not liable for any errors, omissions, or any outcomes resulting from the use of this data.