The rise of high-speed fibre internet has done more than just make Netflix faster and video calls clearer; it has opened the door for ordinary people to run powerful technologies from the comfort of their own homes. One of the most exciting of these possibilities is self-hosted artificial intelligence. While most people are used to accessing AI through big tech companies’ cloud platforms, the time has come to consider what it means to bring this capability in-house. For everyday users, the advantages come down to three things: security, personalization, and independence.
The first advantage is data security. Every time someone uses a cloud-based AI service, their words, files, or images travel across the internet to a company’s servers. That data may be stored, analyzed, or even used to improve the company’s products. For personal matters like health information, financial records, or private conversations, that can feel intrusive. Hosting an AI at home flips the equation. The data never leaves your own device, which means you, not a tech giant, are the one in control. It’s like the difference between storing your photos on your own hard drive versus uploading them to a social media site.
The second benefit is customization. The AI services offered online are built for the masses: general-purpose, standardized, and often limited in what they can do. By hosting your own AI, you can shape it around your life. A student could set it up to summarize their textbooks. A small business owner might feed it product information to answer customer questions quickly. A parent might even build a personal assistant trained on family recipes, schedules, or local activities. The point is that self-hosted AI can be tuned to match individual needs, rather than forcing everyone into a one-size-fits-all mold.
The third reason is independence. Relying on external services means depending on their availability, pricing, and rules. We’ve all experienced the frustration of an app changing overnight or a service suddenly charging for features that used to be free. A self-hosted AI is yours. It continues to run regardless of internet outages, company decisions, or international disputes. Just as personal computers gave households independence from corporate mainframes in the 1980s, self-hosted AI promises a similar shift today.
The good news is that ordinary users don’t need to be programmers or engineers to start experimenting. Open-source projects are making AI more accessible than ever. GPT4All offers a desktop app that works much like any other piece of software: you download it, run it, and interact with the AI through a simple interface. Ollama provides an easy way to install and switch between different AI models on your computer. Communities around these tools offer clear guides, friendly forums, and video tutorials that make the learning curve far less intimidating. For most people, running a basic AI system today is no harder than setting up a home printer or Wi-Fi router.
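For readers who want to see what “no harder than a printer” looks like in practice, here is a minimal sketch of the Ollama workflow. This assumes Ollama is already installed from its website, and llama3.2 is just an example model name; the catalogue of available models changes over time.

```shell
# Download a small general-purpose model to your machine (one-time step)
ollama pull llama3.2

# Start an interactive, fully local chat session in the terminal
ollama run llama3.2

# Or ask a one-off question directly
ollama run llama3.2 "Draft a polite email rescheduling a dentist appointment."
```

Everything above runs on your own hardware; no prompt or answer leaves the machine.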
Of course, there are still limits. Running the largest and most advanced models may require high-end hardware, but for many day-to-day uses (writing, brainstorming, answering questions, or summarizing text), lighter models already perform impressively on standard laptops or desktop PCs. And just like every other piece of technology, the tools are becoming easier and more user-friendly every year. What feels like a hobbyist’s project in 2025 could be as common as antivirus software or cloud storage by 2027.
Self-hosted AI isn’t just for tech enthusiasts. Thanks to fibre internet and the growth of user-friendly tools, it is becoming a real option for everyday households. By bringing AI home, users can protect their privacy, shape the technology around their own lives, and free themselves from the whims of big tech companies. Just as personal computing once shifted power from corporations to individuals, the same shift is now within reach for artificial intelligence.
In contemporary organizational theory, the capacity to share knowledge efficiently is increasingly recognized not merely as a good practice, but as one of the central levers of influence, innovation, and competitive advantage. Influence in the workplace is no longer determined solely by formal authority or proximity to decision-makers; it hinges instead on who opens up their ideas, disseminates outcomes, and builds collective awareness. Knowledge sharing, properly conceived, is a social process that undergirds learning, creativity, and organizational agility.
Why Sharing Still Matters
Even with advances in digital collaboration tools, hybrid work environments, and more explicit knowledge management policies, many organizations continue to wrestle with information silos, “knowledge hoarding,” and weak visibility of what colleagues are doing. These behaviors impose hidden costs: duplication of work, failure to capitalize on existing insights, slow adoption of innovations, and organizational inertia.
Empirical studies confirm that when the organizational climate is supportive and when centralization and formalization are lower, knowledge sharing behavior (KSB) tends to increase. For example, a recent study of IT firms in Vietnam (n = 529) found that a positive organizational climate had a direct positive effect on KSB, while high degrees of centralization and formalization decreased knowledge-sharing intentions.
Moreover, knowledge sharing is strongly associated with improved performance outcomes. In technological companies in China, for instance, research shows that AI-augmented knowledge sharing, along with organizational learning and dynamic capabilities, positively affects job performance.
Theoretical Foundations & Diffusion of Influence
A number of established frameworks help us understand both how knowledge spreads and why sharing can shift influence within organizations.
• Diffusion of Innovations (Everett Rogers): This theory explains how new ideas are adopted across a social system over time via innovators, early adopters, the early majority, and so on. Key variables include communication channels, time, social systems, and the characteristics of the innovation itself.
• Threshold Models & Critical Mass: Recent experiments suggest that when a certain proportion of individuals (often around 20–30%) behave in a particular way (e.g. adopting or sharing an innovation), that can tip the whole system into broader adoption. For example, one study found that social diffusion leading to a change in norms becomes much more probable once a committed minority exceeds roughly 25% of the population.
• Organizational Climate & Intention/Behavior Models: Behavioral intentions (e.g. willingness to share) are shaped by trust, perceived support, alignment of individual and organizational values, and perceived risk/benefit. These mediate whether knowledge is actually shared or hidden.
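The critical-mass idea can be made concrete with a toy Granovetter-style threshold simulation. This is a sketch, not a reproduction of any cited study: agents adopt once the overall adoption level reaches their personal threshold, a committed minority adopts unconditionally, and the threshold distribution parameters below are purely illustrative.

```python
from statistics import NormalDist

def final_adoption(committed_frac, n=1000, mu=0.35, sigma=0.10):
    """Fraction of a population that ends up adopting, given a committed minority.

    Each agent adopts once overall adoption reaches its personal threshold.
    Thresholds are deterministic quantiles of a normal distribution
    (illustrative parameters, not estimates from any study).
    """
    nd = NormalDist(mu, sigma)
    thresholds = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    adopted = committed_frac
    while True:
        converts = sum(t <= adopted for t in thresholds) / n
        new = committed_frac + (1 - committed_frac) * converts
        if new == adopted:          # adoption level has stabilized
            return adopted
        adopted = new

# A 10% committed minority stalls near its own size; 30% tips nearly everyone.
print(final_adoption(0.10))
print(final_adoption(0.30))
```

The qualitative behavior, a sharp jump once the committed minority crosses a tipping point, is the pattern the threshold-model literature describes.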
Barriers & Enablers
Understanding why people don’t share is as important as understanding why they do.
Barriers include:
• Structural impediments like overly centralized decision frameworks, rigid hierarchy, and heavy formalization. These reduce the avenues for informal sharing and flatten the perceived payoff for going outside established channels.
• Cognitive or psychological obstacles, such as fear of criticism, loss of advantage (“knowledge as power”), lack of trust, or simply not knowing who might benefit from what one knows.
• Technological and process deficiencies: poor documentation practices, weak knowledge management systems, lack of standard archiving, and material that is difficult to locate. These make sharing costly in terms of effort, risk of misunderstanding, or duplication.
Enablers include:
• Cultivating a learning culture, where mistakes are not punished, experimentation is supported, and informal learning is valued. Studies of team climate show that the presence of an “organizational learning culture” correlates strongly with innovative work behavior.
• Leadership that is supportive of sharing: transformational, inclusive leadership, and openness to new ideas even when they challenge orthodoxy. Leaders who make their support for sharing visible set norms.
• Recognition, incentive alignment, and reward systems that explicitly value sharing. When sharing contributes to promotions, performance evaluations, or peer recognition, people are more likely to invest effort in it.
Influence through Sharing: A Refined Model
Putting this together, here is a refined model of how sharing translates into influence:
1. Visibility: Sharing makes one’s work visible across formal and informal networks. Visibility breeds recognition.
2. Peer Adoption & Critical Mass: Innovation often needs a threshold of peer adoption. Once enough people (often around 20–30%) accept or discuss an idea, it tends to propagate more broadly. Early informal sharing helps reach that threshold.
3. Legitimization & Institutionalization: When enough peers accept an idea, it begins to be noticed by formal leadership, which may then adopt it as part of official strategy or practice. What was once “radical” becomes “official.”
4. Influence & Reward: As an individual’s or team’s ideas get absorbed into the organizational narrative, their influence increases. They may be entrusted with leadership, provided more resources, or seen as agents of change.
Recent Considerations: Hybrid Work, Digital Tools, AI
Over the past few years, changes in how and where people work, plus the integration of AI into knowledge-sharing tools, add new dimensions:
• Remote and hybrid setups tend to magnify the problems of invisibility and isolation; informal corridor conversations or impromptu check-ins become less likely. Organizations must work harder to construct virtual equivalents (e.g. asynchronous documentation, digital forums, internal social networks).
• AI and knowledge-management platforms can help accelerate sharing and reduce friction (e.g. discovery of existing reports, automatic tagging, summarisation), but they also risk over-trust in automation or leaving behind tacit knowledge that is hard to codify.
• Given the increasing volume of information, selective sharing and curating become skills. Not every detail needs to be shared widely; knowing what, when, and how to share is part of influence.
Implications for Practice
For individuals aiming to increase their influence via sharing:
• Embed documentation and archival processes into every project (e.g. phase reports, lessons learned).
• Use both formal and informal channels: internal blogs or newsletters, but also coffee chats and virtual social spaces.
• Be willing to experiment and share preliminary findings; feedback improves ideas and increases visibility.
For organizations:
• Build a culture that rewards sharing explicitly through performance systems.
• Reduce structural barriers like overly centralized control or onerous formalization.
• Provide tools and training to lower the effort of sharing; make knowledge easier to find and use.
• Encourage cross-team interactions, peer networks, and communities of practice.
Final Word
Sharing is not just a morally good or nice thing to do; it is one of the most potent forms of influence in knowledge-based work. It transforms static assets into living processes, elevates visibility, enables innovation, and shapes organizational culture. As the world of work continues to evolve, those who master the art and science of sharing will increasingly become the architects of change.
References
Here are key sources that discuss the concepts above. You can draw on these for citations or further reading.
1. Xu, J., et al. (2023). A theoretical review on the role of knowledge sharing and … [PMC]
2. Peters, L.D.K., et al. (2024). “‘The more we share, the more we have’? Analyses of identification with the company positively influencing knowledge-sharing behaviour…”
3. Greenhalgh, T., et al. (2004). “Diffusion of Innovations in Service Organizations.” Milbank Quarterly. Literature review on spreading and sustaining innovations.
4. Ye, M., et al. (2021). “Collective patterns of social diffusion are shaped by committed minorities…” Nature Communications.
5. Bui, T. T., Nguyen, L. P., Tran, A. P., Nguyen, H. H., & Tran, T. T. (2023). “Organizational Factors and Knowledge Sharing Behavior: Mediating Model of Knowledge Sharing Intention.”
6. Abbasi, S. G., et al. (2021). “Impact of Organizational and Individual Factors on Knowledge Sharing Behavior.”
7. He, M., et al. (2024). “Sharing or Hiding? Exploring the Influence of Social… Knowledge sharing & knowledge hiding mechanisms.”
8. Sudibjo, N., et al. (2021). “The effects of knowledge sharing and person–organization fit on teachers’ innovative work…”
9. Cui, J., et al. (2025). “The Explore of Knowledge Management Dynamic Capabilities, AI-Driven Knowledge Sharing, Knowledge-Based Organizational Support, and Organizational Learning on Job Performance: Evidence from Chinese Technological Companies.” Academia preprint.
10. Koivisto, K., & Taipalus, T. (2023). “Pitfalls in Effective Knowledge Management: Insights from an International Information Technology Organization.”
In an era when artificial intelligence threatens to displace traditional journalism, a glaring contradiction has emerged: news organizations that block AI crawlers from accessing their content are increasingly using AI to generate the very content they deny to AI. This move not only undermines the values of transparency and fairness, but also exposes a troubling hypocrisy in the media’s engagement with AI.
Fortifying the Gates Against AI
Many established news outlets have taken concrete steps to prevent AI from accessing their content. As of early 2024, over 88 percent of top news outlets, including The New York Times, The Washington Post, and The Guardian, were blocking AI data-collection bots such as OpenAI’s GPTBot via their robots.txt files. Echoing these moves, a Reuters Institute report found that nearly 80 percent of prominent U.S. news organizations blocked OpenAI’s crawlers by the end of 2023, while roughly 36 percent blocked Google’s AI crawler.
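The mechanism behind these blocks is modest: a few lines in a site’s robots.txt file naming the crawler’s user agent. For example, the directives below (using OpenAI’s documented GPTBot user agent) ask the bot to stay out of the entire site. Note that robots.txt is advisory, compliant crawlers honor it but nothing technically enforces it, which is why some publishers layer network-level blocking on top.

```
User-agent: GPTBot
Disallow: /
```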
These restrictions are not limited to voluntary technical guidelines. Cloudflare has gone further, blocking known AI crawlers by default and offering publishers a “Pay Per Crawl” model, allowing access to their content only under specific licensing terms. The intent is clear: content creators want to retain control, demand compensation, and prevent unlicensed harvesting of their journalism.
But Then They Use AI To Generate Their Own Content
While these publishers fortify their content against external AI exploitation, they increasingly turn to AI internally to produce articles, summaries, and other content. This shift has real consequences: jobs are being cut and AI-generated content is being used to replace human-created journalism.
• Reach plc, publisher of Mirror, Express, and others, recently announced a restructuring that places 600 jobs at risk, including 321 editorial positions, as it pivots toward AI-driven formats like video and live content.
• Business Insider CEO Barbara Peng confirmed that roughly 21 percent of the staff were laid off to offset declines in search traffic, while the company shifts resources toward AI-generated features such as automated audio briefings.
• CNET faced backlash after it published numerous AI-generated stories under staff bylines, some containing factual errors. The fallout led to corrections and renewed pushback from newsroom employees.
The Hypocrisy Unfolds
This dissonance, blocking AI while deploying it, lies at the heart of the hypocrisy. On one hand, publishers argue for content sovereignty: preventing AI from freely ingesting and repurposing their work. On the other hand, they quietly harness AI for their own ends, often reducing staffing under the pretense of innovation or cost-cutting.
This creates a scenario in which:
• AI is denied access to public content, while in-house AI is trusted with producing public-facing content.
• Human labor is dismissed in the name of progress, even though AI is not prevented from tapping into the cultural and journalistic capital built over years.
• Control and compensation arguments are asserted to keep AI out, yet the same AI is deployed strategically to reshape newsroom economics.
This approach fails to reconcile the ethical tensions it embodies. If publishers truly value journalistic integrity, transparency, and compensation, then applying those principles selectively, accepting them only when convenient, is disingenuous. The news media’s simultaneous rejection and embrace of AI reflect a transactional, rather than principled, stance.
A Path Forward – or a Mirage?
Some publishers are demanding fair licensing models, seeking to monetize AI access rather than simply deny it. The emergence of frameworks like the Really Simple Licensing (RSL) standard allows websites to specify terms, such as royalties or pay-per-inference charges, in their robots.txt, aiming for a more equitable exchange between AI firms and content creators.
Still, that measured approach contrasts sharply with using AI to cut costs internally, a strategy that further alienates journalists and erodes trust in media institutions.
Integrity or Expedience?
The juxtaposition of content protection and AI deployment in newsrooms lays bare a cynical calculus: AI is off-limits when others use it, but eminently acceptable when it serves internal profit goals. This selective embrace erodes the moral foundation of journalistic institutions and raises urgent questions:
• Can publishers reconcile the need for revenue with the ethical imperatives of transparency and fairness?
• Will the rapid rise of AI content displace more journalists than it empowers?
• And ultimately, can media institutions craft coherent policies that honor both their creators and the audience’s right to trustworthy news?
Perhaps there is a path toward licensing frameworks and responsible AI use that aligns with journalistic values, but as long as the impulse to shift blame (“not us scraping, but us firing”) persists, the hypocrisy remains undeniable.
For centuries, every major technological shift has sparked fears about the death of the crafts it intersects. The printing press didn’t eliminate scribes; it transformed them. The rise of the internet and word processors didn’t end journalism; they redefined its forms. Now, artificial intelligence prompts the same familiar conversation: is AI killing professional writing, or is it once again reshaping it?
As a business consultant, I’ve immersed myself in digital tools, from CRMs and calendars to word processors and spreadsheets, treating them not as existential threats but as extensions of my capabilities. AI fits into that lineage. It doesn’t render me obsolete. It offers capacity: specifically, the capacity to offload mechanical work and reclaim time for strategic, empathic, and creative labor.
The data shows this isn’t just a sentimental interpretation. Multiple studies document significant declines in demand for freelance writing roles. A Harvard Business Review–cited study that tracked 1.4 million freelance job listings found that, post-ChatGPT, demand for “automation-prone” jobs fell by 21%, with writing roles specifically dropping 30%. Another analysis on Upwork revealed a 33% drop in writing postings between late 2022 and early 2024, while a separate study observed that, shortly after ChatGPT’s debut, freelance job hires declined by nearly 5% and monthly earnings by over 5% among writers. These numbers are real. The shift has been painful for many in the profession.
Yet the picture isn’t uniform. Other data suggests that while routine or templated writing roles are indeed shrinking, strategic and creatively nuanced writing remains vibrant. Upwork reports that roles demanding human nuance, like copywriting, ghostwriting, and marketing content, actually surged, rising by 19–24% in mid-2023. Similarly, experts note that although basic web copy and boilerplate content are susceptible to automation, high-empathy, voice-driven writing continues to thrive.
My daily experience aligns with that trend. I don’t surrender to AI. I integrate it. I rely on it to break the blank page, sketch a structure, suggest keywords, or clarify phrasing. Yet I still craft, steer, and embed meaning, because that human judgment, that voice, is irreplaceable.
Many professionals are responding similarly. A qualitative study exploring how writers engage with AI identified four adaptive strategies, ranging from resisting to embracing AI tools, each aimed at preserving human identity, enhancing workflow, or reaffirming credibility. A 2025 survey of 301 professional writers across 25+ languages highlighted both ethical concerns and a nuanced realignment of expectations around AI adoption.
This is not unprecedented in academia: AI is already assisting with readability, grammar, and accessibility, especially for non-native authors, but not at the expense of critical thinking or academic integrity. In fact, when carefully integrated, AI shows promise as an aid, not a replacement.
In this light, AI should not be viewed as the death of professional writing, but as a test of its boundaries: Where does machine-assisted work end and human insight begin? The profession isn’t collapsing, it’s clarifying its value. The roles that survive will not be those that can be automated, but those that can’t.
In that regard, we as writers, consultants, and professionals must decide: will we retreat into obsolescence or evolve into roles centered on empathy, strategy, and authentic voice? I choose the latter, not because it’s easier, but because it’s more necessary.
Sources
• Analysis of 1.4 million freelance job listings showing a 30% decline in demand for writing positions post-ChatGPT release
• Upwork data indicating a 33% decrease in writing job postings from late 2022 to early 2024
• Study of 92,547 freelance writers revealing a 5.2% drop in earnings and reduced job flow following ChatGPT’s launch
• Report showing growth in high-nuance writing roles (copywriting, ghostwriting, content creation) in Q3 2023
• Analysis noting decreased demand (20–50%) for basic writing and translation, while creative and high-empathy roles remain resilient
• Qualitative research on writing professionals’ adaptive strategies around generative AI
• Survey of professional writers on AI usage, adoption challenges, and ethical considerations
• Academic studies indicating that AI tools can enhance writing mechanics and accessibility if integrated thoughtfully
I am writing to propose a strategic adjustment to ChatGPT’s subscription pricing that could substantially increase both user adoption and revenue. While ChatGPT has achieved remarkable success, the current $25/month subscription fee may be a barrier for many potential users. In contrast, a $9.95/month pricing model aligns with industry standards and could unlock significant growth.
Current Landscape
As of mid-2025, ChatGPT boasts:
800 million weekly active users, with projections aiming for 1 billion by year-end.
20 million paid subscribers, generating approximately $500 million in monthly revenue.
Despite this success, the vast majority of users remain on the free tier, indicating a substantial untapped market.
The Case for $9.95/Month
A $9.95/month subscription fee is a proven price point for digital services, offering a balance between affordability and perceived value. Services like Spotify, Netflix, and OnlyFans have thrived with similar pricing, demonstrating that users are willing to pay for enhanced features and experiences at that level.
Projected Impact
If ChatGPT were to lower its subscription fee to $9.95/month, the following scenarios illustrate potential outcomes:
Scenario 1: 50% Conversion Rate
50% of current weekly active users (400 million) convert to paid subscriptions.
400 million paying users × $9.95/month = $3.98 billion/month.
Annual revenue: $47.76 billion.
Even at a conservative 25% conversion rate, annual revenue would exceed current projections, highlighting the significant financial upside.
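The arithmetic behind these projections is simple enough to check directly. The sketch below just scales the quoted 800-million weekly-active-user base by a conversion rate and the proposed price; it makes no claim about how realistic any conversion rate is.

```python
PRICE = 9.95                           # proposed monthly fee, USD
WEEKLY_ACTIVE_USERS = 800_000_000      # mid-2025 figure cited above

def annual_revenue(conversion_rate, price=PRICE, users=WEEKLY_ACTIVE_USERS):
    """Projected yearly subscription revenue at a given free-to-paid conversion rate."""
    return users * conversion_rate * price * 12

print(f"50% conversion: ${annual_revenue(0.50) / 1e9:.2f}B per year")  # $47.76B
print(f"25% conversion: ${annual_revenue(0.25) / 1e9:.2f}B per year")  # $23.88B
```

Even the 25% case is roughly four times the ~$6 billion annual run rate implied by today’s $500 million in monthly revenue.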
Strategic Considerations
Expand the user base: Attract a broader audience, including students, professionals, and casual users.
Enhance user engagement: Increased adoption could lead to higher usage rates and data insights, further improving the product.
Strengthen market position: A more accessible price point could solidify ChatGPT’s dominance in the AI chatbot market, currently holding an 80.92% share.
Conclusion
Adopting a $9.95/month subscription fee could be a transformative move for ChatGPT, driving substantial revenue growth and reinforcing its position as a leader in the AI space. I urge you to consider this strategic adjustment to unlock ChatGPT’s full potential.
In the operational world, data is only as valuable as the decisions it enables, and as timely as the missions it supports. I’ve worked with geospatial intelligence in contexts where every meter mattered and every day lost could change the outcome. AlphaEarth Foundations is not the sensor that will tell you which vehicle just pulled into a compound or how a flood has shifted in the last 48 hours, but it may be the tool that tells you exactly where to point the sensors that can. That distinction is everything in operational geomatics.
With the public release of AlphaEarth Foundations, Google DeepMind has placed a new analytical tool into the hands of the global geospatial community. It is a compelling mid-tier dataset – broad in coverage, high in thematic accuracy, and computationally efficient. But in operational contexts, where missions hinge on timelines, revisit rates, and detail down to the meter, knowing exactly where AlphaEarth fits, and where it does not, is essential.
Operationally, AlphaEarth is best understood as a strategic reconnaissance layer. Its 10 m spatial resolution makes it ideal for detecting patterns and changes at the meso‑scale: agricultural zones, industrial developments, forest stands, large infrastructure footprints, and broad hydrological changes. It can rapidly scan an area of operations for emerging anomalies and guide where scarce high‑resolution collection assets should be deployed. In intelligence terms, it functions like a wide-area search radar, identifying sectors of interest, but not resolving the individual objects within them.
The strengths are clear. In broad-area environmental monitoring, AlphaEarth can reveal where deforestation is expanding most rapidly or where wetlands are shrinking. In agricultural intelligence, it can detect shifts in cultivation boundaries, large-scale irrigation projects, or conversion of rangeland to cropland. In infrastructure analysis, it can track new highway corridors, airport expansions, or urban sprawl. Because it operates from annual composites, these changes can be measured consistently year-over-year, providing reliable trend data for long-term planning and resource allocation.
In the humanitarian and disaster-response arena, AlphaEarth offers a quick way to establish pre‑event baselines. When a cyclone strikes, analysts can compare the latest annual composite to prior years to understand how the landscape has evolved, information that can guide relief planning and longer‑term resilience efforts. In climate-change adaptation, it can help identify landscapes under stress, informing where to target mitigation measures.
But operational users quickly run into resolution‑driven limitations. At 10 m GSD, AlphaEarth cannot identify individual vehicles, small boats, rooftop solar installations, or artisanal mining pits. Narrow features – rural roads, irrigation ditches, hedgerows – disappear into the generalised pixel. In urban ISR (urban Intelligence, Surveillance, and Reconnaissance), this makes it impossible to monitor fine‑scale changes like new rooftop construction, encroachment on vacant lots, or the addition of temporary structures. For these tasks, commercial very high resolution (VHR) satellites, crewed aerial imagery, or drones are mandatory.
Another constraint is temporal granularity. The public AlphaEarth dataset is annual. This works well for detecting multi‑year shifts in land cover but is too coarse for short-lived events or rapidly evolving situations. A military deployment lasting two months, a flash‑flood event, or seasonal agricultural practices will not be visible. For operational missions requiring weekly or daily updates, sensors like PlanetScope’s daily 3–5 m imagery or commercial tasking from Maxar’s WorldView fleet are essential.
There is also the mixed‑pixel effect, particularly problematic in heterogeneous environments. Each embedding is a statistical blend of everything inside that 100 m² tile. In a peri‑urban setting, a pixel might include rooftops, vegetation, and bare soil. The dominant surface type will bias the model’s classification, potentially misrepresenting reality in high‑entropy zones. This limits AlphaEarth’s utility for precise land‑use delineation in complex landscapes.
In operational geospatial workflows, AlphaEarth is therefore most effective as a triage tool. Analysts can ingest AlphaEarth embeddings into their GIS or mission‑planning system to highlight AOIs where significant year‑on‑year change is likely. These areas can then be queued for tasking with higher‑resolution, higher‑frequency assets. In resource-constrained environments, this can dramatically reduce unnecessary collection, storage, and analysis – focusing effort where it matters most.
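The triage step boils down to comparing embedding vectors across years. The toy sketch below is not the AlphaEarth API; it simply assumes you have extracted per-pixel, unit-length embedding arrays for the same tile in two years, and flags pixels whose year-over-year cosine similarity drops below a threshold (the threshold value here is arbitrary).

```python
import numpy as np

def change_mask(emb_y1, emb_y2, sim_threshold=0.9):
    """Flag pixels whose year-over-year embedding similarity falls below a threshold.

    emb_y1, emb_y2: arrays of shape (H, W, D) holding L2-normalised embedding
    vectors for the same tile in two different years.
    """
    # Cosine similarity per pixel via an element-wise dot product
    sim = np.einsum("hwd,hwd->hw", emb_y1, emb_y2)
    return sim < sim_threshold

# Toy demo: a 2x2 tile with 3-dim embeddings; one pixel changes direction.
y1 = np.zeros((2, 2, 3))
y1[..., 0] = 1.0                      # all pixels point along axis 0
y2 = y1.copy()
y2[1, 1] = np.array([0.0, 1.0, 0.0])  # one pixel now points along axis 1
mask = change_mask(y1, y2)
print(mask)                           # only the changed pixel is flagged
```

In a real workflow, the resulting mask would be aggregated into candidate AOIs and handed to the tasking queue for higher-resolution collection.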
A second valuable operational role is in baseline mapping. AlphaEarth can provide the reference layer against which other sources are compared. For instance, a national agriculture ministry might use AlphaEarth to maintain a rolling national crop‑type map, then overlay drone or VHR imagery for detailed inspections in priority regions. Intelligence analysts might use it to maintain a macro‑level picture of land‑cover change across an entire theatre, ensuring no sector is overlooked.
It’s important to stress that AlphaEarth is not a targeting tool in the military sense. It does not replace synthetic aperture radar for all-weather monitoring, nor does it substitute for daily revisit constellations in time-sensitive missions. It cannot replace the interpretive clarity of high‑resolution optical imagery for damage assessment, facility monitoring, or urban mapping. Its strength lies in scope, consistency, and analytical efficiency – not in tactical precision.
The most successful operational use cases will integrate AlphaEarth into a tiered collection strategy. At the top tier, high‑resolution sensors deliver tactical detail. At the mid‑tier, AlphaEarth covers the wide‑area search and pattern detection mission. At the base, raw satellite archives remain available for custom analyses when needed. This layered approach ensures that each sensor type is used where it is strongest, and AlphaEarth becomes the connective tissue between broad‑area awareness and fine‑scale intelligence.
Ultimately, AlphaEarth’s operational value comes down to how it’s positioned in the workflow. Used to guide, prioritize, and contextualize other intelligence sources, it can save time, reduce costs, and expand analytical reach. Used as a standalone decision tool in missions that demand high spatial or temporal resolution, it will disappoint. But as a mid‑tier, strategic reconnaissance layer, it offers an elegant solution to a long-standing operational challenge: how to maintain global awareness without drowning in raw data.
For geomatics professionals, especially those in the intelligence and commercial mapping sectors, AlphaEarth is less a silver bullet than a force multiplier. It can’t tell you everything, but it can tell you where to look, and in operational contexts, knowing where to look is often the difference between success and failure.
As regular readers know, I often write about geomatics, its services, and products. While I tend to be a purist when it comes to map projections, favouring the Cahill-Keyes and AuthaGraph projections, I can understand why the Equal Earth projection might be more popular, as it still looks familiar enough to resemble a traditional map.
The Equal Earth map projection is gaining prominence as a tool for reshaping global perceptions of geography, particularly in the context of Africa’s representation. Endorsed by the African Union and advocacy groups like Africa No Filter and Speak Up Africa, the “Correct The Map” campaign seeks to replace the traditional Mercator projection with the Equal Earth projection to more accurately depict Africa’s true size and significance.
Origins and Design of the Equal Earth Projection
Introduced in 2018 by cartographers Bojan Šavrič, Bernhard Jenny, and Tom Patterson, the Equal Earth projection is an equal-area pseudocylindrical map designed to address the distortions inherent in the Mercator projection. While the Mercator projection is useful for navigation, it significantly enlarges regions near the poles and shrinks equatorial regions, leading to a misrepresentation of landmass sizes. In contrast, the Equal Earth projection maintains the relative sizes of areas, offering a more accurate visual representation of continents.
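The projection's forward equations are compact enough to sketch in a few lines. The version below follows the polynomial form and coefficients published with the 2018 projection; treat it as an illustrative transcription rather than production cartographic code (libraries like PROJ implement it as `+proj=eqearth`).

```python
import math

# Polynomial coefficients from the 2018 Equal Earth publication.
A1, A2, A3, A4 = 1.340264, -0.081106, 0.000893, 0.003796
M = math.sqrt(3) / 2

def equal_earth(lon_deg, lat_deg):
    """Forward Equal Earth projection: geographic degrees in,
    projected (unitless, sphere-radius-relative) x/y out."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    # Parametric latitude theta satisfies sin(theta) = M * sin(phi).
    theta = math.asin(M * math.sin(phi))
    t2 = theta * theta
    t6 = t2 * t2 * t2
    # x divides by dy/dtheta, which is what makes the projection equal-area.
    x = lam * math.cos(theta) / (M * (A1 + 3 * A2 * t2 + t6 * (7 * A3 + 9 * A4 * t2)))
    y = theta * (A1 + A2 * t2 + t6 * (A3 + A4 * t2))
    return x, y
```

Because the y-polynomial and the divisor in x are a function and its derivative, equal-area behaviour falls out of the construction; that is the property the "Correct The Map" campaign cares about.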
Africa’s Distorted Representation in Traditional Maps
The Mercator projection, created in 1569, has been widely used for centuries. However, it distorts the size of continents, particularly those near the equator. Africa, for instance, appears smaller than it actually is, which can perpetuate stereotypes and misconceptions about the continent. This distortion has implications for global perceptions and can influence educational materials, media portrayals, and policy decisions.
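The size of the Mercator distortion is easy to quantify: at latitude φ the projection stretches both map axes by sec(φ), so local area is exaggerated by sec²(φ). A quick back-of-the-envelope check (latitudes here are illustrative round numbers):

```python
import math

def mercator_area_inflation(lat_deg):
    """Local area exaggeration of the Mercator projection at a given
    latitude: each axis is stretched by sec(lat), so area scales by sec^2."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Near the equator (central Africa), areas are drawn roughly true to size...
africa = mercator_area_inflation(5)      # ~1.0
# ...while at Greenland's mid-latitudes the same square kilometre is
# drawn about ten times larger.
greenland = mercator_area_inflation(72)  # ~10.5
```

That order-of-magnitude gap is why Greenland looks comparable to Africa on a Mercator wall map even though Africa is roughly fourteen times larger in reality.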
The “Correct The Map” Campaign
The “Correct The Map” campaign aims to challenge these historical inaccuracies by promoting the adoption of the Equal Earth projection. The African Union has actively supported this initiative, emphasizing the importance of accurate geographical representations in reclaiming Africa’s rightful place on the global stage. By advocating for the use of the Equal Earth projection in schools, media, and international organizations, the campaign seeks to foster a more equitable understanding of Africa’s size and significance.
Broader Implications and Global Support
The push for the Equal Earth projection is part of a broader movement to decolonize cartography and challenge Eurocentric perspectives. By adopting map projections that accurately reflect the true size of continents, especially Africa, the global community can promote a more balanced and inclusive worldview. Institutions like NASA and the World Bank have already begun to recognize the value of the Equal Earth projection, and its adoption is expected to grow in the coming years.
The Equal Earth map projection represents more than just a technical advancement in cartography; it symbolizes a shift towards greater equity and accuracy in how the world is represented. By supporting initiatives like the “Correct The Map” campaign, individuals and organizations can contribute to a more just and accurate portrayal of Africa and other regions, fostering a global environment where all continents are recognized for their true size and importance.
Over the course of my career in geomatics, I’ve watched technology push our field forward in leaps – from hand‑drawn topographic overlays to satellite constellations capable of imaging every corner of the globe daily. Now we stand at the edge of another shift. Google DeepMind’s AlphaEarth Foundations promises a new way to handle the scale and complexity of Earth observation, not by giving us another stack of imagery, but by distilling it into something faster, leaner, and more accessible. For those of us who have spent decades wrangling raw pixels into usable insight, this is a development worth pausing to consider.
This year’s release of AlphaEarth Foundations marks a major milestone in global-scale geospatial analytics. Developed by Google DeepMind, the model combines multi-source Earth observation data into a 64‑dimensional embedding for every 10 m × 10 m square of the planet’s land surface. It integrates optical and radar imagery, digital elevation models, canopy height, climate reanalyses, gravity data, and even textual metadata into a single, analysis‑ready dataset covering 2017–2024. The result is a tool that allows researchers and decision‑makers to map, classify, and detect change at continental and global scales without building heavy, bespoke image‑processing pipelines.
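To make the embedding idea concrete: because every 10 m pixel gets a fixed-length feature vector for each year, "has this place changed?" reduces to comparing two vectors. The sketch below uses hypothetical, made-up 64-dimensional vectors and plain cosine similarity; the actual AlphaEarth workflow runs inside Earth Engine, but the underlying comparison is this simple.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 64-dimensional embeddings for the same 10 m pixel in two
# different years (values are illustrative only, not real AlphaEarth data).
emb_2020 = [math.sin(i * 0.1) for i in range(64)]
emb_2024 = [math.sin(i * 0.1 + 0.5) for i in range(64)]

# A similarity well below 1.0 flags the pixel as a change candidate worth
# tasking higher-resolution imagery against; the threshold is an
# analyst's choice, not a property of the data.
sim = cosine_similarity(emb_2020, emb_2024)
changed = sim < 0.9
```

Run over a national territory, this kind of comparison is what replaces the cloud-masking and compositing pipelines described below.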
The strategic value proposition of AlphaEarth rests on three pillars: speed, accuracy, and accessibility. Benchmarking against comparable embedding models shows about a 23–24% boost in classification accuracy. This comes alongside a claimed 16× improvement in processing efficiency – meaning tasks that once consumed days of compute can now be completed in hours. And because the dataset is hosted directly in Google Earth Engine, it inherits an established ecosystem of workflows, tutorials, and a user community that already spans NGOs, research institutions, and government agencies worldwide.
From a geomatics strategy perspective, this efficiency translates directly into reach. Environmental monitoring agencies can scan entire nations for deforestation or urban growth without spending weeks on cloud masking, seasonal compositing, and spectral index calculation. Humanitarian organizations can identify potential disaster‑impact areas without maintaining their own raw‑imagery archives. Climate researchers can explore multi‑year trends in vegetation cover, wetland extent, or snowpack with minimal setup time. It is a classic case of lowering the entry barrier for high‑quality spatial analysis.
But the real strategic leverage comes from integration into broader workflows. AlphaEarth is not a replacement for fine‑resolution imagery, nor is it meant to be. It is a mid‑tier, broad‑area situational awareness layer. At the bottom of the stack, Sentinel‑2, Landsat, and radar missions continue to provide open, raw data for those who need pixel‑level spectral control. At the top, commercial sub‑meter satellites and airborne surveys still dominate tactical decision‑making where object‑level identification matters. AlphaEarth occupies the middle: fast enough to be deployed often, accurate enough for policy‑relevant mapping, and broad enough to be applied globally.
This middle layer is critical in national‑scale and thematic mapping. It enables ministries to maintain current, consistent land‑cover datasets without the complexity of traditional workflows. For large conservation projects, it provides a harmonized baseline for ecosystem classification, habitat connectivity modelling, and impact assessment. In climate‑change adaptation planning, AlphaEarth offers the temporal depth to see where change is accelerating and where interventions are most urgent.
The public release is also a democratizing force. By making the embeddings openly available in Earth Engine, Google has effectively provided a shared global resource that is as accessible to a planner in Nairobi as to a GIS analyst in Ottawa. In principle, this levels the playing field between well‑funded national programs and under‑resourced local agencies. The caveat is that this accessibility depends entirely on Google’s continued support for the dataset. In mission‑critical domains, no analyst will rely solely on a corporate‑hosted service; independent capability remains essential.
Strategically, AlphaEarth’s strength is in guidance and prioritization. In intelligence contexts, it is the layer that tells you where to look harder — not the layer that gives you the final answer. In resource management, it tells you where land‑cover change is accelerating, not exactly what is happening on the ground. This distinction matters. For decision‑makers, AlphaEarth can dramatically shorten the cycle between question and insight. For field teams, it can focus scarce collection assets where they will have the greatest impact.
It also has an important capacity‑building role. By exposing more users to embedding‑based analysis in a familiar platform, it will accelerate the adoption of machine‑learning approaches in geospatial work. Analysts who start with AlphaEarth will be better prepared to work with other learned representations, multimodal fusion models, and even custom‑trained embeddings tailored to specific regions or domains.
The limitations – 10 m spatial resolution, annual temporal resolution, and opaque high‑dimensional features – are real, but they are also predictable. Any experienced geomatics professional will know where the model’s utility ends and when to switch to finer‑resolution or more temporally agile sources. In practice, the constraints make AlphaEarth a poor choice for parcel‑level cadastral mapping, tactical ISR targeting, or rapid disaster damage assessment. But they do not diminish its value in continental‑scale environmental intelligence, thematic mapping, or strategic planning.
In short, AlphaEarth Foundations fills a previously awkward space in the geospatial data hierarchy. It’s broad, fast, accurate, and globally consistent, but not fine enough for micro‑scale decisions. Its strategic role is as an accelerator: turning complex, multi‑source data into actionable regional or national insights with minimal effort. For national mapping agencies, conservation groups, humanitarian planners, and climate analysts, it represents a genuine step change in how quickly and broadly we can see the world.
After decades of consulting across Canada on everything from agri-food frameworks to integrating geomatics into healthcare systems, I’ve developed a habit: whenever I’m tasked with researching a new federal project, my first instinct is to see what Quebec is doing. It’s not just a reflex; it’s a practical strategy. Time and again, Quebec has shown itself to be a few steps ahead of the rest of the country, not by accident, but because of how it approaches policy, innovation, and institutional design.
Let me explain why, using a few concrete examples that illustrate how Quebec’s leadership offers valuable lessons for any serious federal undertaking.
A Culture of Long-Term Planning and Strong Public Institutions
One of Quebec’s greatest strengths lies in its culture of policy sovereignty combined with a deep commitment to long-term planning. Unlike the often reactive or fragmented approaches seen elsewhere, Quebec’s government institutions are built with foresight. Their mandates encourage anticipating future challenges, not just responding to current problems.
Take water management, for instance. When federal policymakers started talking about a national water agency, Quebec already had a robust system in place, the Centrale de Suivi Hydrologique. This province-wide network connects sensors, real-time data, and forecasting tools to monitor freshwater systems. It’s a sophisticated marriage of geomatics, technology, and environmental science that functions as an operational model rather than a concept.
For consultants or project managers tasked with building a national water infrastructure or climate resilience framework, Quebec’s example isn’t just inspirational; it’s foundational. You start there because it shows you what is possible when policy vision meets institutional commitment.
Integration Across Sectors: Health, Geography, and Data
Quebec’s approach goes beyond individual projects. It’s about integration: the seamless connection between government ministries, academia, and industry research. This “triple helix” collaboration model is well developed in Quebec and is crucial when addressing complex, cross-sectoral challenges.
A case in point is CartoSanté, Quebec’s health geography initiative. By linking demographic data with healthcare service delivery, spatial planning, and public health metrics, this platform creates a living map of healthcare needs and capacities. It is precisely this kind of data integration that federal agencies seek today as they try to bring geomatics and health information systems together at scale.
Starting a federal health-geomatics project without examining CartoSanté would be like trying to build a house without a foundation. Quebec’s work offers a tested blueprint on data interoperability, system architecture, and stakeholder coordination.
Agri-Food Resilience as a Model of Regional Sovereignty
While Canada has traditionally focused on food safety and quality, Quebec has been pioneering food security and sovereignty strategies for years. Its Politique bioalimentaire 2018–2025 is a comprehensive framework that stretches beyond farming techniques to include local processing, distribution, and regional branding.
During the COVID-19 pandemic, “food sovereignty” suddenly became a federal priority. Quebec was already there, with initiatives like Zone Agtech that connect innovation hubs, farmers, and distributors to strengthen local food systems. Their experience provides invaluable insight into how to balance global markets with local resilience.
For any consultant or policymaker working on national agri-food strategies, Quebec offers a real-world laboratory of what works, from land-use policy to market development, rather than abstract policy drafts.
An Intellectual Independence That Drives Innovation
One factor often overlooked is Quebec’s distinct intellectual culture shaped by its French language and European influences. This has fostered a different approach to systems-thinking, less tied to U.S.-centric models and more open to integrated, interdisciplinary frameworks.
The Ouranos Consortium is a prime example. Long before climate adaptation became a nationwide buzzword, Ouranos was advancing applied climate services by blending meteorology, municipal planning, and risk insurance. Their work has influenced not just provincial but global climate resilience strategies.
This intellectual independence means Quebec often anticipates emerging challenges and responds with unique, well-rounded solutions. When federal agencies look for tested climate data platforms or governance models, Ouranos is frequently the starting point.
Institutional Continuity and Data Stewardship
Finally, Quebec benefits from a more stable and professionalized civil service in key areas like environmental monitoring and statistical data management. This continuity allows Quebec to maintain extensive, clean, and spatially tagged historical data sets, a rarity in many jurisdictions.
For example, when the Meteorological Service of Canada sought to modernize its weather station instrument metadata standards, Quebec’s Centre d’Expertise Hydrique stood out for its meticulously curated archives and consistent protocols. This institutional memory isn’t just a bureaucratic nicety; it’s critical infrastructure for evidence-based policy.
Starting federal projects by engaging with Quebec’s institutional frameworks means tapping into decades of disciplined data stewardship and knowledge management.
Quebec’s leadership in areas like agri-food resilience, climate and water data, and health geomatics is no accident. It’s the product of a distinct political culture, strong public institutions, integrated knowledge networks, and intellectual independence. When you’re consulting or managing complex federal projects, recognizing this is key.
By beginning your research with Quebec’s frameworks and models, you gain access to tested strategies, operational systems, and a vision for long-term resilience. While other regions may still be drafting proposals or testing pilots, Quebec is often already producing data and outcomes.
So the next time you embark on a new federal initiative, whether it’s improving food security, building climate-adaptive infrastructure, or integrating spatial data into healthcare, remember this: start with Quebec. It’s where the future of Canadian innovation often begins.
Canada stands at a pivotal moment in its digital evolution. As underscored by a recent CBC Radio exploration of internet policy and trade, the current digital ecosystem often prioritizes commercial and regulatory players rather than everyday users. To truly serve all Canadians, we must shift to an intentionally user‑centric internet: one that delivers equitable access, intuitive public services, meaningful privacy, and digital confidence.
Closing the Digital Divide: Beyond Access
While Infrastructure Canada reports 93 % national broadband availability at 50/10 Mbps, rural, Northern, and Indigenous communities continue to face significant shortfalls: just 62 % of rural households enjoy such speeds versus 91 % of urban dwellers. Cost remains a barrier as well: Canadians pay among the highest broadband prices in the OECD, exacerbated by data caps and limited competition.
Recent federal investments in the Universal Broadband Fund (C$3.2 B) and provincial connectivity strategies have shown gains: 2 million more Canadians connected by mid‑2024, with a 23 % increase in rural speed‑test results. Yet hardware costs, affordability, and “last mile” digital inclusion remain hurdles. LEO satellite constellations, with deployments already underway by Telesat and others, offer cost-effective backhaul for remote regions.
To be truly user‑focused, Canada must pair infrastructure rollout with subsidized hardware, low-cost data plans, and community Wi‑Fi in public spaces, mirroring what CAP once offered, a model worth reinvigorating.
Prioritizing Digital Literacy & Inclusion
Access means little if users lack confidence or fluency. Statistics Canada places 24 % of Canadians in “basic” or non‑user categories, with seniors especially vulnerable (62 % in 2018, down to 48 % by 2020). Further, Toronto-based research reveals that while 98 % of households are nominally connected, precarious skill levels and siloed services still keep Canada from being digitally inclusive.
We must emulate Ontario’s inclusive design principle: “When we design for the edges, we design for everyone”. Programs like CAP and modern iterations in schools, libraries, community centres, and First Nations-led deployments (e.g., First Mile initiatives) must be expanded to offer digital mentorship, lifelong e‑skills training, and device recycling initiatives with security support.
Transforming Public Services with Co‑Design
The Government of Canada’s “Digital Ambition” (2024‑25) enshrines user‑centric, trusted, accessible services as its primary outcome. Yet progress relies on embedding authentic user input. Success stories from Code for Canada highlight the power of embedding designers and technologists into service teams, co‑creating solutions that resonate with citizen realities.
Additionally, inclusive design guru Jutta Treviranus points out that systems built for users with disabilities naturally benefit all, promoting scenarios that anticipate diverse needs from launch. Government adoption of accessible UX components, like Canada’s WET toolkit aligned with WCAG 2.0 AA, is commendable, but needs continuous testing by diverse users.
Preserving Openness and Trust
Canada’s 1993 Telecommunications Act prohibits ISPs from prioritizing or throttling traffic, anchoring net neutrality in law. Public support remains high: two‑thirds of internet users back open access. Upholding this principle ensures that small businesses, diverse news outlets, and marginalized voices aren’t silenced by commercial gatekeepers.
Meanwhile, Freedom House still rates Canada among the most open digital nations, though concerns persist about surveillance laws and rural cost differentials. Privacy trust can be further solidified through transparency mandates, public Wi‑Fi privacy guarantees, and clear data‑minimization standards where user data isn’t exploited post‑consent.
Cultivating a Better Digital Ecosystem
While Canada’s Connectivity Strategy unites government, civil society, and industry, meaningful alignment on digital policy remains uneven. We need a human‑centred policy playbook: treat emerging tech (AI, broadband, fintech) as programmable infrastructure tied to inclusive economic goals.
Local governments and Indigenous groups must be empowered as co‑designers, with funding and regulation responsive to community‑level priorities. Lessons from rural digital inclusion show collaborative successes when demand‑side (training, digital culture) and supply‑side (infrastructure, affordability) converge.
Canada’s digital future must be anchored in the user experience. That means:
• Universal access backed by public hardware, affordable plans, and modern connectivity technologies like LEO satellite
• Sustained digital literacy programs, especially for low‑income, elderly, newcomer, and Indigenous populations
• Public service design led by users and accessibility standards
• Firm protection of net neutrality and strengthened privacy regulations
• Bottom‑up participation, including Indigenous and local communities, in digital policy and infrastructure planning
This is not merely a public service agenda; it’s a growth imperative. By centering users, Canada can build a digital ecosystem that’s trustworthy, inclusive, and innovation-ready. That future depends on federal action, community engagement, and sustained investment, but the reward is a true digital renaissance that serves every Canadian.