The Double Standard: Blocking AI While Deploying AI

In an era when artificial intelligence threatens to displace traditional journalism, a glaring contradiction has emerged: news organizations that block AI crawlers from accessing their content are increasingly using AI to generate the very content they refuse to share. This stance not only undermines the values of transparency and fairness, but also exposes a troubling hypocrisy in the media’s engagement with AI.

Fortifying the Gates Against AI
Many established news outlets have taken concrete steps to prevent AI from accessing their content. As of early 2024, over 88 percent of top news outlets, including The New York Times, The Washington Post, and The Guardian, were blocking AI data-collection bots such as OpenAI’s GPTBot via their robots.txt files. Echoing these moves, a Reuters Institute report found that nearly 80 percent of prominent U.S. news organizations blocked OpenAI’s crawlers by the end of 2023, while roughly 36 percent blocked Google’s AI crawler.
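For context, the blocking mechanism itself is simple: a few directives in a site’s robots.txt file. A typical publisher entry looks something like the sketch below; the crawler tokens shown are ones the vendors themselves document, though the exact set each outlet blocks varies.

```
# Disallow common AI data-collection crawlers site-wide
User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended   # opt-out token for Google's AI training
Disallow: /

User-agent: CCBot             # Common Crawl, widely used in AI training sets
Disallow: /
```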

These restrictions are not limited to voluntary technical guidelines. Cloudflare has gone further, blocking known AI crawlers by default and offering publishers a “Pay Per Crawl” model, allowing access to their content only under specific licensing terms. The intent is clear: content creators want to retain control, demand compensation, and prevent unlicensed harvesting of their journalism.

But Then They Use AI To Generate Their Own Content
While these publishers fortify their content against external AI exploitation, they increasingly turn to AI internally to produce articles, summaries, and other content. This shift has real consequences: jobs are being cut and AI-generated content is being used to replace human-created journalism.
• Reach plc, publisher of the Mirror, the Express, and other titles, recently announced a restructuring that places 600 jobs at risk, including 321 editorial positions, as it pivots toward AI-driven formats like video and live content.
• Business Insider CEO Barbara Peng confirmed that roughly 21 percent of the staff were laid off to offset declines in search traffic, while the company shifts resources toward AI-generated features such as automated audio briefings.
• CNET faced backlash after it published numerous AI-generated stories under staff bylines, some containing factual errors. The fallout led to corrections and renewed pushback from newsroom employees.

The Hypocrisy Unfolds
This dissonance, blocking AI while deploying it, lies at the heart of the hypocrisy. On one hand, publishers argue for content sovereignty: preventing AI from freely ingesting and repurposing their work. On the other hand, they quietly harness AI for their own ends, often reducing staffing under the pretense of innovation or cost-cutting.

This creates a scenario in which:
• AI is denied access to public content, while in-house AI is trusted with producing public-facing content.
• Human labor is dismissed in the name of progress, even as the in-house AI continues to draw on the cultural and journalistic capital that labor built over years.
• Control and compensation arguments are asserted to keep AI out, yet the same technology is deployed strategically to reshape newsroom economics.

This approach fails to reconcile the ethical tensions it embodies. If publishers truly value journalistic integrity, transparency, and compensation, then applying those principles selectively, accepting them only when convenient, is disingenuous. The news media’s simultaneous rejection and embrace of AI reflect a transactional, rather than principled, stance.

A Path Forward – or a Mirage?
Some publishers are demanding fair licensing models, seeking to monetize AI access rather than simply deny it. Emerging frameworks like the Really Simple Licensing (RSL) standard allow websites to specify terms, such as royalties or pay-per-inference charges, in their robots.txt files, aiming for a more equitable exchange between AI firms and content creators.

Still, that measured approach contrasts sharply with using AI to cut costs internally, a strategy that further alienates journalists and erodes trust in media institutions.

Integrity or Expedience?
The juxtaposition of content protection and AI deployment in newsrooms lays bare a cynical calculus: AI is off-limits when others use it, but eminently acceptable when it serves internal profit goals. This selective embrace erodes the moral foundation of journalistic institutions and raises urgent questions:
• Can publishers reconcile the need for revenue with the ethical imperatives of transparency and fairness?
• Will the rapid rise of AI content displace more journalists than it empowers?
• And ultimately, can media institutions craft coherent policies that honor both their creators and the audience’s right to trustworthy news?

Perhaps there is a path toward licensing frameworks and responsible AI use that aligns with journalistic values. But as long as publishers keep shifting blame, objecting to others scraping their work while quietly replacing their own writers, the hypocrisy remains undeniable.

AI and the Future of Professional Writing: A Reframing

For centuries, every major technological shift has sparked fears about the death of the crafts it touches. The printing press didn’t eliminate scribes, it transformed them. The rise of the internet and word processors didn’t end journalism, they redefined its forms. Now, artificial intelligence prompts the same familiar question: is AI killing professional writing, or is it once again reshaping it?

As a business consultant, I’ve immersed myself in digital tools, from CRMs to calendars, word processors to spreadsheets, treating them not as existential threats but as extensions of my capabilities. AI fits into that lineage. It doesn’t render me obsolete. It offers capacity: specifically, the capacity to offload mechanical work and reclaim time for strategic, empathic, and creative labor.

The data shows this isn’t just a sentimental interpretation. Multiple studies document significant declines in demand for freelance writing roles. A Harvard Business Review–cited study that tracked 1.4 million freelance job listings found that, post-ChatGPT, demand for “automation-prone” jobs fell by 21%, with writing roles specifically dropping 30%. Another analysis on Upwork revealed a 33% drop in writing postings between late 2022 and early 2024, while a separate study observed that, shortly after ChatGPT’s debut, freelance job hires declined by nearly 5% and monthly earnings by over 5% among writers. These numbers are real. The shift has been painful for many in the profession.

Yet the picture isn’t uniform. Other data suggests that while routine or templated writing roles are indeed shrinking, strategic and creatively nuanced writing remains vibrant. Upwork reports that roles demanding human nuance, like copywriting, ghostwriting, and marketing content, actually surged, rising by 19–24% in mid-2023. Similarly, experts note that although basic web copy and boilerplate content are susceptible to automation, high-empathy, voice-driven writing continues to thrive.

My daily experience aligns with that trend. I don’t surrender to AI. I integrate it. I rely on it to break the blank page, sketch a structure, suggest keywords, or clarify phrasing. Yet I still craft, steer, and embed meaning, because that human judgment, that voice, is irreplaceable.

Many professionals are responding similarly. A qualitative study exploring how writers engage with AI identified four adaptive strategies, from resisting to embracing AI tools, each aimed at preserving human identity, enhancing workflow, or reaffirming credibility. A 2025 survey of 301 professional writers across 25+ languages highlighted both ethical concerns and a nuanced realignment of expectations around AI adoption.

This is not unprecedented in academia: AI is already assisting with readability, grammar, and accessibility, especially for non-native authors, but not at the expense of critical thinking or academic integrity.  In fact, when carefully integrated, AI shows promise as an aid, not a replacement.

In this light, AI should not be viewed as the death of professional writing, but as a test of its boundaries: Where does machine-assisted work end and human insight begin? The profession isn’t collapsing, it’s clarifying its value. The roles that survive will not be those that can be automated, but those that can’t.

In that regard, we as writers, consultants, and professionals must decide: will we retreat into obsolescence or evolve into roles centered on empathy, strategy, and authentic voice? I choose the latter, not because it’s easier, but because it’s more necessary.

Sources
• Analysis of 1.4 million freelance job listings showing a 30% decline in demand for writing positions post-ChatGPT release
• Upwork data indicating a 33% decrease in writing job postings from late 2022 to early 2024
• Study of 92,547 freelance writers revealing a 5.2% drop in earnings and reduced job flow following ChatGPT’s launch
• Report showing growth in high-nuance writing roles (copywriting, ghostwriting, content creation) in Q3 2023
• Analysis noting decreased demand (20–50%) for basic writing and translation, while creative and high-empathy roles remain resilient
• Qualitative research on writing professionals’ adaptive strategies around generative AI
• Survey of professional writers on AI usage, adoption challenges, and ethical considerations
• Academic studies indicating that AI tools can enhance writing mechanics and accessibility if integrated thoughtfully

Strategic Pricing Adjustment to Accelerate User Growth and Revenue

Dear OpenAI Leadership,

I am writing to propose a strategic adjustment to ChatGPT’s subscription pricing that could substantially increase both user adoption and revenue. While ChatGPT has achieved remarkable success, the current $25/month subscription fee may be a barrier for many potential users. In contrast, a $9.95/month pricing model aligns with industry standards and could unlock significant growth.

Current Landscape

As of mid-2025, ChatGPT boasts:

  • 800 million weekly active users, with projections aiming for 1 billion by year-end. (source)
  • 20 million paid subscribers, generating approximately $500 million in monthly revenue. (source)

Despite this success, the vast majority of users remain on the free tier, indicating a substantial untapped market.

The Case for $9.95/Month

A $9.95/month subscription fee is a proven price point for digital services, offering a balance between affordability and perceived value. Services like Spotify, Netflix, and OnlyFans have thrived with similar pricing, demonstrating that users are willing to pay for enhanced features and experiences at this price point.

Projected Impact

If ChatGPT were to lower its subscription fee to $9.95/month, the following scenarios illustrate potential outcomes:

  • Scenario 1: 50% Conversion Rate
    50% of the 800 million weekly active users (400 million) convert to paid subscriptions.
    400 million paying users × $9.95/month = $3.98 billion/month.
    Annual revenue: $47.76 billion.
  • Scenario 2: 25% Conversion Rate
    A 25% conversion rate yields 200 million paying users.
    200 million × $9.95/month = $1.99 billion/month.
    Annual revenue: $23.88 billion.

Even at a conservative 25% conversion rate, annual revenue would exceed current projections, highlighting the significant financial upside.
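For readers who want to stress-test these figures, the scenario arithmetic reduces to a few lines of Python; the user base and conversion rates below are simply the assumptions stated above, not OpenAI data.

```python
# Back-of-envelope revenue scenarios at a $9.95/month price point.
# Assumption: 800 million weekly active users, per the figures cited above.
WEEKLY_ACTIVE_USERS = 800_000_000
PRICE_PER_MONTH = 9.95

for conversion_rate in (0.50, 0.25):
    paying_users = WEEKLY_ACTIVE_USERS * conversion_rate
    monthly_revenue = paying_users * PRICE_PER_MONTH
    annual_revenue = monthly_revenue * 12
    print(
        f"{conversion_rate:.0%} conversion: "
        f"{paying_users / 1e6:.0f}M subscribers, "
        f"${monthly_revenue / 1e9:.2f}B/month, "
        f"${annual_revenue / 1e9:.2f}B/year"
    )
```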

Strategic Considerations

  • Expand the user base: Attract a broader audience, including students, professionals, and casual users.
  • Enhance user engagement: Increased adoption could lead to higher usage rates and data insights, further improving the product.
  • Strengthen market position: A more accessible price point could solidify ChatGPT’s dominance in the AI chatbot market, currently holding an 80.92% share. (source)

Conclusion

Adopting a $9.95/month subscription fee could be a transformative move for ChatGPT, driving substantial revenue growth and reinforcing its position as a leader in the AI space. I urge you to consider this strategic adjustment to unlock ChatGPT’s full potential.

Sincerely,
The Rowanwood Chronicles

#ChatGPT #PricingStrategy #SubscriptionModel #AIAdoption #DigitalEconomy #OpenAI #TechGrowth

When 10 Meters Isn’t Enough: Understanding AlphaEarth’s Limits in Operational Contexts

In the operational world, data is only as valuable as the decisions it enables, and as timely as the missions it supports. I’ve worked with geospatial intelligence in contexts where every meter mattered and every day lost could change the outcome. AlphaEarth Foundations is not the sensor that will tell you which vehicle just pulled into a compound or how a flood has shifted in the last 48 hours, but it may be the tool that tells you exactly where to point the sensors that can. That distinction is everything in operational geomatics.

With the public release of AlphaEarth Foundations, Google DeepMind has placed a new analytical tool into the hands of the global geospatial community. It is a compelling mid-tier dataset – broad in coverage, high in thematic accuracy, and computationally efficient. But in operational contexts, where missions hinge on timelines, revisit rates, and detail down to the meter, knowing exactly where AlphaEarth fits, and where it does not, is essential.

Operationally, AlphaEarth is best understood as a strategic reconnaissance layer. Its 10 m spatial resolution makes it ideal for detecting patterns and changes at the meso‑scale: agricultural zones, industrial developments, forest stands, large infrastructure footprints, and broad hydrological changes. It can rapidly scan an area of operations for emerging anomalies and guide where scarce high‑resolution collection assets should be deployed. In intelligence terms, it functions like a wide-area search radar, identifying sectors of interest, but not resolving the individual objects within them.

The strengths are clear. In broad-area environmental monitoring, AlphaEarth can reveal where deforestation is expanding most rapidly or where wetlands are shrinking. In agricultural intelligence, it can detect shifts in cultivation boundaries, large-scale irrigation projects, or conversion of rangeland to cropland. In infrastructure analysis, it can track new highway corridors, airport expansions, or urban sprawl. Because it operates from annual composites, these changes can be measured consistently year-over-year, providing reliable trend data for long-term planning and resource allocation.

In the humanitarian and disaster-response arena, AlphaEarth offers a quick way to establish pre‑event baselines. When a cyclone strikes, analysts can compare the latest annual composite to prior years to understand how the landscape has evolved, information that can guide relief planning and longer‑term resilience efforts. In climate-change adaptation, it can help identify landscapes under stress, informing where to target mitigation measures.

But operational users quickly run into resolution‑driven limitations. At a 10 m ground sample distance (GSD), AlphaEarth cannot identify individual vehicles, small boats, rooftop solar installations, or artisanal mining pits. Narrow features – rural roads, irrigation ditches, hedgerows – disappear into the generalised pixel. In urban ISR (Intelligence, Surveillance, and Reconnaissance) work, this makes it impossible to monitor fine‑scale changes like new rooftop construction, encroachment on vacant lots, or the addition of temporary structures. For these tasks, commercial very high resolution (VHR) satellites, crewed aerial imagery, or drones are mandatory.

Another constraint is temporal granularity. The public AlphaEarth dataset is annual. This works well for detecting multi‑year shifts in land cover but is too coarse for short-lived events or rapidly evolving situations. A military deployment lasting two months, a flash‑flood event, or seasonal agricultural practices will not be visible. For operational missions requiring weekly or daily updates, sensors like PlanetScope’s daily 3–5 m imagery or commercial tasking from Maxar’s WorldView fleet are essential.

There is also the mixed‑pixel effect, particularly problematic in heterogeneous environments. Each embedding is a statistical blend of everything inside that 100 m² tile. In a peri‑urban setting, a pixel might include rooftops, vegetation, and bare soil. The dominant surface type will bias the model’s classification, potentially misrepresenting reality in high‑entropy zones. This limits AlphaEarth’s utility for precise land‑use delineation in complex landscapes.

In operational geospatial workflows, AlphaEarth is therefore most effective as a triage tool. Analysts can ingest AlphaEarth embeddings into their GIS or mission‑planning system to highlight areas of interest (AOIs) where significant year‑on‑year change is likely. These areas can then be queued for tasking with higher‑resolution, higher‑frequency assets. In resource-constrained environments, this can dramatically reduce unnecessary collection, storage, and analysis – focusing effort where it matters most.
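A minimal sketch of that triage step, written against the Earth Engine Python API, might look like the following. The collection ID, band behaviour, and similarity threshold are assumptions to be verified against the Earth Engine data catalog, and the area of interest is arbitrary.

```python
# Illustrative triage sketch: flag pixels where AlphaEarth embeddings changed
# markedly between two annual composites (Earth Engine Python API).
import ee

ee.Initialize()

# Arbitrary area of operations (lon/lat rectangle).
aoi = ee.Geometry.Rectangle([30.0, -2.0, 30.5, -1.5])

# Assumed collection ID for the annual embedding dataset.
embeddings = ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")

year_a = embeddings.filterDate("2022-01-01", "2023-01-01").filterBounds(aoi).mosaic()
year_b = embeddings.filterDate("2023-01-01", "2024-01-01").filterBounds(aoi).mosaic()

# The embeddings are reportedly unit-length vectors, so a per-pixel dot
# product behaves like a cosine similarity: low values suggest change.
similarity = year_a.multiply(year_b).reduce(ee.Reducer.sum())

# Arbitrary threshold producing a change mask to queue for VHR tasking.
change_mask = similarity.lt(0.6).selfMask().clip(aoi)
```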

A second valuable operational role is in baseline mapping. AlphaEarth can provide the reference layer against which other sources are compared. For instance, a national agriculture ministry might use AlphaEarth to maintain a rolling national crop‑type map, then overlay drone or VHR imagery for detailed inspections in priority regions. Intelligence analysts might use it to maintain a macro‑level picture of land‑cover change across an entire theatre, ensuring no sector is overlooked.

It’s important to stress that AlphaEarth is not a targeting tool in the military sense. It does not replace synthetic aperture radar for all-weather monitoring, nor does it substitute for daily revisit constellations in time-sensitive missions. It cannot replace the interpretive clarity of high‑resolution optical imagery for damage assessment, facility monitoring, or urban mapping. Its strength lies in scope, consistency, and analytical efficiency – not in tactical precision.

The most successful operational use cases will integrate AlphaEarth into a tiered collection strategy. At the top tier, high‑resolution sensors deliver tactical detail. At the mid‑tier, AlphaEarth covers the wide‑area search and pattern detection mission. At the base, raw satellite archives remain available for custom analyses when needed. This layered approach ensures that each sensor type is used where it is strongest, and AlphaEarth becomes the connective tissue between broad‑area awareness and fine‑scale intelligence.

Ultimately, AlphaEarth’s operational value comes down to how it’s positioned in the workflow. Used to guide, prioritize, and contextualize other intelligence sources, it can save time, reduce costs, and expand analytical reach. Used as a standalone decision tool in missions that demand high spatial or temporal resolution, it will disappoint. But as a mid‑tier, strategic reconnaissance layer, it offers an elegant solution to a long-standing operational challenge: how to maintain global awareness without drowning in raw data.

For geomatics professionals, especially those in the intelligence and commercial mapping sectors, AlphaEarth is less a silver bullet than a force multiplier. It can’t tell you everything, but it can tell you where to look, and in operational contexts, knowing where to look is often the difference between success and failure.

Correcting the Map: Africa and the Push for Equal Earth

As regular readers know, I often write about geomatics, its services, and products. While I tend to be a purist when it comes to map projections, favouring the Cahill-Keyes and AuthaGraph projections, I can understand why the Equal Earth projection might be more popular, as it still looks familiar enough to resemble a traditional map.

The Equal Earth map projection is gaining prominence as a tool for reshaping global perceptions of geography, particularly in the context of Africa’s representation. Endorsed by the African Union and advocacy groups like Africa No Filter and Speak Up Africa, the “Correct The Map” campaign seeks to replace the traditional Mercator projection with the Equal Earth projection to more accurately depict Africa’s true size and significance. 

Origins and Design of the Equal Earth Projection
Introduced in 2018 by cartographers Bojan Šavrič, Bernhard Jenny, and Tom Patterson, the Equal Earth projection is an equal-area pseudocylindrical map designed to address the distortions inherent in the Mercator projection. While the Mercator projection is useful for navigation, it significantly enlarges regions near the poles and shrinks equatorial regions, leading to a misrepresentation of landmass sizes. In contrast, the Equal Earth projection maintains the relative sizes of areas, offering a more accurate visual representation of continents.  
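For practitioners who want to try the projection in their own tooling, Equal Earth is already supported by the open-source PROJ library; the snippet below is a minimal Python sketch using pyproj, with an arbitrary sample point.

```python
# Minimal sketch: project geographic coordinates onto the Equal Earth plane
# using pyproj. The "+proj=eqearth" definition ships with the PROJ library.
from pyproj import Transformer

to_equal_earth = Transformer.from_crs(
    "EPSG:4326",        # WGS 84 geographic coordinates
    "+proj=eqearth",    # Equal Earth: equal-area, pseudocylindrical
    always_xy=True,     # treat input as (longitude, latitude)
)

# Arbitrary sample point near Nairobi; output is in metres on the map plane.
x, y = to_equal_earth.transform(36.82, -1.29)
print(f"Equal Earth coordinates: x = {x:,.0f} m, y = {y:,.0f} m")
```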

Africa’s Distorted Representation in Traditional Maps
The Mercator projection, created in 1569, has been widely used for centuries. However, it distorts the size of continents, particularly those near the equator. Africa, for instance, appears smaller than it actually is, which can perpetuate stereotypes and misconceptions about the continent. This distortion has implications for global perceptions and can influence educational materials, media portrayals, and policy decisions.    

The “Correct The Map” Campaign
The “Correct The Map” campaign aims to challenge these historical inaccuracies by promoting the adoption of the Equal Earth projection. The African Union has actively supported this initiative, emphasizing the importance of accurate geographical representations in reclaiming Africa’s rightful place on the global stage. By advocating for the use of the Equal Earth projection in schools, media, and international organizations, the campaign seeks to foster a more equitable understanding of Africa’s size and significance.   

Broader Implications and Global Support
The push for the Equal Earth projection is part of a broader movement to decolonize cartography and challenge Eurocentric perspectives. By adopting map projections that accurately reflect the true size of continents, especially Africa, the global community can promote a more balanced and inclusive worldview. Institutions like NASA and the World Bank have already begun to recognize the value of the Equal Earth projection, and its adoption is expected to grow in the coming years. 

The Equal Earth map projection represents more than just a technical advancement in cartography; it symbolizes a shift towards greater equity and accuracy in how the world is represented. By supporting initiatives like the “Correct The Map” campaign, individuals and organizations can contribute to a more just and accurate portrayal of Africa and other regions, fostering a global environment where all continents are recognized for their true size and importance.

AlphaEarth Foundations as a Strategic Asset in Global Geospatial Intelligence

Over the course of my career in geomatics, I’ve watched technology push our field forward in leaps – from hand‑drawn topographic overlays to satellite constellations capable of imaging every corner of the globe daily. Now we stand at the edge of another shift. Google DeepMind’s AlphaEarth Foundations promises a new way to handle the scale and complexity of Earth observation, not by giving us another stack of imagery, but by distilling it into something faster, leaner, and more accessible. For those of us who have spent decades wrangling raw pixels into usable insight, this is a development worth pausing to consider.

This year’s release of AlphaEarth Foundations marks a major milestone in global-scale geospatial analytics. Developed by Google DeepMind, the model combines multi-source Earth observation data into a 64‑dimensional embedding for every 10 m × 10 m square of the planet’s land surface. It integrates optical and radar imagery, digital elevation models, canopy height, climate reanalyses, gravity data, and even textual metadata into a single, analysis‑ready dataset covering 2017–2024. The result is a tool that allows researchers and decision‑makers to map, classify, and detect change at continental and global scales without building heavy, bespoke image‑processing pipelines.

The strategic value proposition of AlphaEarth rests on three pillars: speed, accuracy, and accessibility. Benchmarking against comparable embedding models shows about a 23–24% boost in classification accuracy. This comes alongside a claimed 16× improvement in processing efficiency – meaning tasks that once consumed days of compute can now be completed in hours. And because the dataset is hosted directly in Google Earth Engine, it inherits an established ecosystem of workflows, tutorials, and a user community that already spans NGOs, research institutions, and government agencies worldwide.

From a geomatics strategy perspective, this efficiency translates directly into reach. Environmental monitoring agencies can scan entire nations for deforestation or urban growth without spending weeks on cloud masking, seasonal compositing, and spectral index calculation. Humanitarian organizations can identify potential disaster‑impact areas without maintaining their own raw‑imagery archives. Climate researchers can explore multi‑year trends in vegetation cover, wetland extent, or snowpack with minimal setup time. It is a classic case of lowering the entry barrier for high‑quality spatial analysis.

But the real strategic leverage comes from integration into broader workflows. AlphaEarth is not a replacement for fine‑resolution imagery, nor is it meant to be. It is a mid‑tier, broad‑area situational awareness layer. At the bottom of the stack, Sentinel‑2, Landsat, and radar missions continue to provide open, raw data for those who need pixel‑level spectral control. At the top, commercial sub‑meter satellites and airborne surveys still dominate tactical decision‑making where object‑level identification matters. AlphaEarth occupies the middle: fast enough to be deployed often, accurate enough for policy‑relevant mapping, and broad enough to be applied globally.

This middle layer is critical in national‑scale and thematic mapping. It enables ministries to maintain current, consistent land‑cover datasets without the complexity of traditional workflows. For large conservation projects, it provides a harmonized baseline for ecosystem classification, habitat connectivity modelling, and impact assessment. In climate‑change adaptation planning, AlphaEarth offers the temporal depth to see where change is accelerating and where interventions are most urgent.
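As a rough illustration of how such a thematic layer could be assembled, the sketch below samples the embeddings at labelled points and trains a simple classifier in Earth Engine. The collection ID is an assumption and the training asset is a placeholder, so treat this as a pattern rather than a documented workflow.

```python
# Illustrative sketch: land-cover classification from AlphaEarth embeddings
# using the Earth Engine Python API.
import ee

ee.Initialize()

embedding_2024 = (
    ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")  # assumed ID
    .filterDate("2024-01-01", "2025-01-01")
    .mosaic()
)

# Placeholder training data: points carrying a numeric 'class' property
# (e.g. 0 = water, 1 = forest, 2 = cropland, 3 = built-up).
labelled_points = ee.FeatureCollection("users/example/training_points")

# Sample the 64-band embedding at each training point (10 m native scale).
training = embedding_2024.sampleRegions(
    collection=labelled_points, properties=["class"], scale=10
)

# A small random forest is usually sufficient, since the embeddings already
# encode most of the spectral and temporal structure.
classifier = ee.Classifier.smileRandomForest(50).train(
    features=training,
    classProperty="class",
    inputProperties=embedding_2024.bandNames(),
)

land_cover = embedding_2024.classify(classifier)
```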

The public release is also a democratizing force. By making the embeddings openly available in Earth Engine, Google has effectively provided a shared global resource that is as accessible to a planner in Nairobi as to a GIS analyst in Ottawa. In principle, this levels the playing field between well‑funded national programs and under‑resourced local agencies. The caveat is that this accessibility depends entirely on Google’s continued support for the dataset. In mission‑critical domains, no analyst will rely solely on a corporate‑hosted service; independent capability remains essential.

Strategically, AlphaEarth’s strength is in guidance and prioritization. In intelligence contexts, it is the layer that tells you where to look harder — not the layer that gives you the final answer. In resource management, it tells you where land‑cover change is accelerating, not exactly what is happening on the ground. This distinction matters. For decision‑makers, AlphaEarth can dramatically shorten the cycle between question and insight. For field teams, it can focus scarce collection assets where they will have the greatest impact.

It also has an important capacity‑building role. By exposing more users to embedding‑based analysis in a familiar platform, it will accelerate the adoption of machine‑learning approaches in geospatial work. Analysts who start with AlphaEarth will be better prepared to work with other learned representations, multimodal fusion models, and even custom‑trained embeddings tailored to specific regions or domains.

The limitations – 10 m spatial resolution, annual temporal resolution, and opaque high‑dimensional features – are real, but they are also predictable. Any experienced geomatics professional will know where the model’s utility ends and when to switch to finer‑resolution or more temporally agile sources. In practice, the constraints make AlphaEarth a poor choice for parcel‑level cadastral mapping, tactical ISR targeting, or rapid disaster damage assessment. But they do not diminish its value in continental‑scale environmental intelligence, thematic mapping, or strategic planning.

In short, AlphaEarth Foundations fills a previously awkward space in the geospatial data hierarchy. It’s broad, fast, accurate, and globally consistent, but not fine enough for micro‑scale decisions. Its strategic role is as an accelerator: turning complex, multi‑source data into actionable regional or national insights with minimal effort. For national mapping agencies, conservation groups, humanitarian planners, and climate analysts, it represents a genuine step change in how quickly and broadly we can see the world.

Why I Always Start With Quebec When Researching Canadian Federal Projects

After decades of consulting across Canada on everything from agri-food frameworks to integrating geomatics into healthcare systems, I’ve developed a habit: whenever I’m tasked with researching a new federal project, my first instinct is to see what Quebec is doing. It’s not just a reflex; it’s a practical strategy. Time and again, Quebec has shown itself to be a few steps ahead of the rest of the country, not by accident, but because of how it approaches policy, innovation, and institutional design.

Let me explain why, using a few concrete examples that illustrate how Quebec’s leadership offers valuable lessons for any serious federal undertaking.

A Culture of Long-Term Planning and Strong Public Institutions
One of Quebec’s greatest strengths lies in its culture of policy sovereignty combined with a deep commitment to long-term planning. Unlike the often reactive or fragmented approaches seen elsewhere, Quebec’s government institutions are built with foresight. Their mandates encourage anticipating future challenges, not just responding to current problems.

Take water management, for instance. When federal policymakers started talking about a national water agency, Quebec already had a robust system in place, the Centrale de Suivi Hydrologique. This province-wide network connects sensors, real-time data, and forecasting tools to monitor freshwater systems. It’s a sophisticated marriage of geomatics, technology, and environmental science that functions as an operational model rather than a concept.

For consultants or project managers tasked with building a national water infrastructure or climate resilience framework, Quebec’s example isn’t just inspirational; it’s foundational. You start there because it shows you what is possible when policy vision meets institutional commitment.

Integration Across Sectors: Health, Geography, and Data
Quebec’s approach goes beyond individual projects. It’s about integration, the seamless connection between government ministries, academia, and industry research. This “triple helix” collaboration model is well developed in Quebec and is crucial when addressing complex, cross-sectoral challenges.

A case in point is CartoSanté, Quebec’s health geography initiative. By linking demographic data with healthcare service delivery, spatial planning, and public health metrics, this platform creates a living map of healthcare needs and capacities. It is precisely this kind of data integration that federal agencies seek today as they try to bring geomatics and health information systems together at scale.

Starting a federal health-geomatics project without examining CartoSanté would be like trying to build a house without a foundation. Quebec’s work offers a tested blueprint on data interoperability, system architecture, and stakeholder coordination.

Agri-Food Resilience as a Model of Regional Sovereignty
While Canada has traditionally focused on food safety and quality, Quebec has been pioneering food security and sovereignty strategies for years. Its Politique bioalimentaire 2018–2025 is a comprehensive framework that stretches beyond farming techniques to include local processing, distribution, and regional branding.

During the COVID-19 pandemic, “food sovereignty” suddenly became a federal priority. Quebec was already there, with initiatives like Zone Agtech that connect innovation hubs, farmers, and distributors to strengthen local food systems. Its experience provides invaluable insight into how to balance global markets with local resilience.

For any consultant or policymaker working on national agri-food strategies, Quebec offers a real-world laboratory of what works, from land-use policy to market development, rather than abstract policy drafts.

An Intellectual Independence That Drives Innovation
One factor often overlooked is Quebec’s distinct intellectual culture shaped by its French language and European influences. This has fostered a different approach to systems-thinking, less tied to U.S.-centric models and more open to integrated, interdisciplinary frameworks.

The Ouranos Consortium is a prime example. Long before climate adaptation became a nationwide buzzword, Ouranos was advancing applied climate services by blending meteorology, municipal planning, and risk insurance. Their work has influenced not just provincial but global climate resilience strategies.

This intellectual independence means Quebec often anticipates emerging challenges and responds with unique, well-rounded solutions. When federal agencies look for tested climate data platforms or governance models, Ouranos is frequently the starting point.

Institutional Continuity and Data Stewardship
Finally, Quebec benefits from a more stable and professionalized civil service in key areas like environmental monitoring and statistical data management. This continuity allows Quebec to maintain extensive, clean, and spatially tagged historical data sets, a rarity in many jurisdictions.

For example, when the Meteorological Service of Canada sought to modernize its metadata standards for weather station instruments, Quebec’s Centre d’Expertise Hydrique stood out for its meticulously curated archives and consistent protocols. This institutional memory isn’t just a bureaucratic nicety; it’s critical infrastructure for evidence-based policy.

Starting federal projects by engaging with Quebec’s institutional frameworks means tapping into decades of disciplined data stewardship and knowledge management.

Quebec’s leadership in areas like agri-food resilience, climate and water data, and health geomatics is no accident. It’s the product of a distinct political culture, strong public institutions, integrated knowledge networks, and intellectual independence. When you’re consulting or managing complex federal projects, recognizing this is key.

By beginning your research with Quebec’s frameworks and models, you gain access to tested strategies, operational systems, and a vision for long-term resilience. While other regions may still be drafting proposals or testing pilots, Quebec is often already producing data and outcomes.

So the next time you embark on a new federal initiative, whether it’s improving food security, building climate-adaptive infrastructure, or integrating spatial data into healthcare, remember this: start with Quebec. It’s where the future of Canadian innovation often begins.

Rebooting the Net: Building a User-First Internet for All Canadians

Canada stands at a pivotal moment in its digital evolution. As underscored by a recent CBC Radio exploration of internet policy and trade, the current digital ecosystem often prioritizes commercial and regulatory players rather than everyday users. To truly serve all Canadians, we must shift to an intentionally user‑centric internet: one that delivers equitable access, intuitive public services, meaningful privacy, and digital confidence.

Closing the Digital Divide: Beyond Access
While Infrastructure Canada reports 93 % national broadband availability at 50/10 Mbps, rural, Northern, and Indigenous communities continue to face significant shortfalls: just 62 % of rural households enjoy such speeds, versus 91 % of urban dwellers. Cost remains a barrier as well; Canadians pay among the highest broadband prices in the OECD, exacerbated by data caps and limited competition.

Recent federal investments in the Universal Broadband Fund (C$3.2 B) and provincial connectivity strategies have shown gains: 2 million more Canadians connected by mid‑2024, with a 23 % increase in rural speed‑test results. Yet hardware, affordability, and “last mile” digital inclusion remain hurdles. LEO satellites, with deployments already underway from Telesat and others, offer cost-effective backhaul solutions for remote regions.

To be truly user‑focused, Canada must pair infrastructure rollout with subsidized hardware, low-cost data plans, and community Wi‑Fi in public spaces, mirroring what the Community Access Program (CAP) once offered, a model worth reinvigorating.

Prioritizing Digital Literacy & Inclusion
Access means little if users lack confidence or fluency. Statistics Canada places 24 % of Canadians in “basic” or non‑user categories, with seniors especially vulnerable (62 % in 2018, down to 48 % by 2020). Further, Toronto-based research reveals that while 98 % of households are nominally connected, precarious skill levels and siloed services still keep Canada from being fully digitally inclusive.

We must emulate Ontario’s inclusive design principle: “When we design for the edges, we design for everyone”. Programs like CAP and modern iterations in schools, libraries, community centres, and First Nations-led deployments (e.g., First Mile initiatives) must be expanded to offer digital mentorship, lifelong e‑skills training, and device recycling initiatives with security support. 

Transforming Public Services with Co‑Design
The Government of Canada’s “Digital Ambition” (2024‑25) enshrines user‑centric, trusted, accessible services as its primary outcome. Yet progress relies on embedding authentic user input. Success stories from Code for Canada highlight the power of embedding designers and technologists into service teams, co‑creating solutions that resonate with citizen realities.  

Additionally, inclusive design guru Jutta Treviranus points out that systems built for users with disabilities naturally benefit all, promoting scenarios that anticipate diverse needs from launch.   Government adoption of accessible UX components, like Canada’s WET toolkit aligned with WCAG 2.0 AA, is commendable, but needs continuous testing by diverse users.

Preserving Openness and Trust
Canada’s Telecommunications Act of 1993 anchors net neutrality in law, and the CRTC has relied on it to prohibit ISPs from unjustly prioritizing or throttling traffic. Public support remains high: two‑thirds of internet users back open access. Upholding this principle ensures that small businesses, diverse news outlets, and marginalized voices aren’t silenced by commercial gatekeepers.

Meanwhile, Freedom House still rates Canada among the most open digital nations, though concerns persist about surveillance laws and rural cost differentials. Privacy trust can be further solidified through transparency mandates, public Wi‑Fi privacy guarantees, and clear data‑minimization standards where user data isn’t exploited post‑consent.

Cultivating a Better Digital Ecosystem
While Canada’s Connectivity Strategy unites government, civil society, and industry, meaningful alignment on digital policy remains uneven.   We need a human‑centred policy playbook: treat emerging tech (AI, broadband, fintech) as programmable infrastructure tied to inclusive economic goals. 

Local governments and Indigenous groups must be empowered as co‑designers, with funding and regulation responsive to community‑level priorities. Lessons from rural digital inclusion show collaborative successes when demand‑side (training, digital culture) and supply‑side (infrastructure, affordability) converge.

Canada’s digital future must be anchored in the user experience. That means:
• Universal access backed by public hardware, affordable plans, and modern connectivity technologies like LEO satellite
• Sustained digital literacy programs, especially for low‑income, elderly, newcomer, and Indigenous populations
• Public service design led by users and accessibility standards
• Firm protection of net neutrality and strengthened privacy regulations
• Bottom‑up participation, including Indigenous and local communities, in digital policy and infrastructure planning

This is not merely a public service agenda, it’s a growth imperative. By centering users, Canada can build a digital ecosystem that’s trustworthy, inclusive, and innovation-ready. That future depends on federal action, community engagement, and sustained investment, but the reward is a true digital renaissance that serves every Canadian.

Beyond the Hype: Why Your AI Assistant Must Be Your First Line of Digital Defense

The age of the intelligent digital assistant has finally arrived, not as a sci-fi dream, but as a powerful, practical reality. Tools like ChatGPT have evolved far beyond clever conversation partners. With the introduction of integrated features like Connectors, Memory, and real-time Web Browsing, we are witnessing the early formation of AI systems that can manage calendars, draft emails, conduct research, summarize documents, and even analyze business workflows across platforms.

The functionality is thrilling. It feels like we’re on the cusp of offloading the drudgery of digital life, the scheduling, the sifting, the searching, to a competent and tireless assistant that never forgets, never judges, and works at the speed of thought.

Here’s the rub: the more capable this assistant becomes, the more it must connect with the rest of your digital life, and that’s where the red flags start waving.

The Third-Party Trap
OpenAI, to its credit, has implemented strong safeguards. For paying users, ChatGPT does not use personal conversations to train its models unless explicitly opted in. Memory is fully transparent and user-controllable. And the company is not in the business of selling ads or user data, a refreshing departure from Big Tech norms.

Yet, as soon as your assistant reaches into your inbox, calendar, notes, smart home, or cloud drives via third-party APIs, you enter a fragmented privacy terrain. Each connected service, be it Google, Microsoft, Notion, Slack, or Dropbox, carries its own privacy policies, telemetry practices, and data-sharing arrangements. You may trust ChatGPT, but once you authorize a Connector, you’re often surrendering data to companies whose business models still rely heavily on behavioural analytics, advertising, or surveillance capitalism.

In this increasingly connected ecosystem, you are the product, unless you are exceedingly careful.

Functionality Without Firewalls Is Just Feature Creep
This isn’t paranoia. It’s architecture. Most consumer technology was never built with your sovereignty in mind; it was built to collect, predict, nudge, and sell. A truly helpful AI assistant must do more than function, it must protect.

And right now, there’s no guarantee that even the most advanced language model won’t become a pipe that leaks your life across platforms you can’t see, control, or audit. Unless AI is designed from the ground up to serve as a digital privacy buffer, its revolutionary potential will simply accelerate the same exploitative systems that preceded it.

Why AI Must Become a Personal Firewall
If artificial intelligence is to serve the individual, not the advertiser, not the platform, not the algorithm, then it must evolve into something more profound than a productivity tool.

It must become a personal firewall.

Imagine a digital assistant that doesn’t just work within the existing digital ecosystem, but mediates your exposure to it. One that manages your passwords, scans service agreements, redacts unnecessary data before sharing it, and warns you when a Connector or integration is demanding too much access. One that doesn’t just serve you but defends you: actively, intelligently, and transparently.
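To make the idea concrete, a personal-firewall layer could sit between the user and any Connector, stripping obvious identifiers before data ever leaves the device. The sketch below is a deliberately simple, hypothetical illustration; a real redaction layer would need proper PII detection and policy rules, not just regular expressions.

```python
import re

# Hypothetical pre-flight redaction filter: scrub obvious identifiers from
# text before it is handed to a third-party connector or API.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with labelled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    message = "Reach me at jane.doe@example.com or 613-555-0188 about the contract."
    print(redact(message))
    # -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED] about the contract.
```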

This is not utopian dreaming. It is an ethical imperative for the next stage of AI development. We need assistants that aren’t neutral conduits between you and surveillance systems, but informed guardians that put your autonomy first.

Final Thought
The functionality is here. The future is knocking. Yet, if we embrace AI without demanding it also protect us, we risk handing over even more of our lives to systems designed to mine them.

It’s time to build AI, not just as an assistant, but as an ally. Not just to manage our lives, but to guard them.

America’s Orbital Firewall: Starlink, Starshield, and the Quiet Struggle for Internet Control

This is the fourth in a series of posts discussing U.S. military strategic overreach. 

In recent years, the United States has been quietly consolidating a new form of power, not through bases or bullets, but through satellites and bandwidth. The global promotion of Starlink, Elon Musk’s satellite internet system, by U.S. embassies, and the parallel development of Starshield, a defense-focused communications platform, signal a strategic shift: the internet’s future may be American, orbital, and increasingly militarized. Far from a neutral technology, this network could serve as a vehicle for U.S. influence over not just internet access, but the very flow of global information.

Starlink’s stated goal is noble: provide high-speed internet to remote and underserved regions. In practice, however, the system is becoming a critical instrument of U.S. foreign policy. From Ukraine, where it has kept communications running amidst Russian attacks, to developing nations offered discounted or subsidized service via embassy connections, Starlink has been embraced not simply as an infrastructure solution, but as a tool of soft, and sometimes hard, power. This adoption often comes with implicit, if not explicit, alignment with U.S. strategic interests.

At the same time, Starshield, SpaceX’s parallel venture focused on secure, military-grade communications for the Pentagon, offers a glimpse into the future of digitally enabled warfare. With encrypted satellite communications, surveillance integration, and potential cyber-capabilities, Starshield will do for the battlefield what Starlink is doing for the civilian world: create reliance on U.S.-controlled infrastructure. And that reliance translates into leverage.

The implications are profound. As more countries become dependent on American-owned satellite internet systems, the U.S. gains not only the ability to monitor traffic but, more subtly, to control access and shape narratives. The technical architecture of these satellite constellations gives the provider, and by extension, the U.S. government, potential visibility into vast amounts of global data traffic. While public assurances are given about user privacy and neutrality, there are few binding international legal frameworks governing satellite data sovereignty or traffic prioritization.

Moreover, the capacity to shut down, throttle, or privilege certain kinds of data flows could offer new tools of coercion. Imagine a regional conflict where a state dependent on Starlink finds its communications subtly slowed or interrupted unless it aligns with U.S. policy. Or a regime facing domestic protest suddenly discovers that encrypted messaging apps are unusable while government-friendly media loads perfectly. These aren’t science fiction scenarios, they are plausible in a world where one nation owns the sky’s infrastructure.

To be clear, other countries are attempting to catch up. China’s satellite internet megaconstellations, Europe’s IRIS² project, and various regional efforts reflect a growing recognition that information access is the new frontier of sovereignty. But the U.S. currently leads, and its fusion of commercial innovation with military application through companies like SpaceX blurs the line between public and private power in ways few international institutions are prepared to regulate.

The result is a form of orbital hegemony, an American-controlled internet superstructure with global reach and few checks. The world must now grapple with a fundamental question: in surrendering communications infrastructure to the stars, have we handed the keys to global discourse to a single country?

Sources
• U.S. Department of Defense (2023). “DOD and SpaceX Collaborate on Starshield.”
• U.S. State Department (2024). Embassy outreach documents promoting Starlink in developing nations.
• Reuters (2023). “SpaceX’s Starlink critical to Ukraine war effort.”
• European Commission (2023). “Secure Connectivity Initiative: IRIS² Explained.”