The State of Geomatics in Paraguay

As of 2025, Paraguay’s mapping and cartography landscape is undergoing significant transformation, driven by technological advances, collaborative initiatives, and institutional reforms. Just over a decade ago, I was working in country on a USAID agri-traceability initiative when Paraguay received funding from a development bank for a national mapping project. The lack of a clear mandate, objectives, and governance limited that project’s outcomes, so it is encouraging to see how much progress has since been made.

National and Thematic Mapping

MapBiomas Paraguay
Launched in 2023, MapBiomas Paraguay is a collaborative initiative involving Guyra Paraguay and WWF Paraguay. Utilizing Google Earth Engine, it produces annual land use and land cover (LULC) maps from 1985 to 2022 at a 30-meter resolution. These maps, encompassing ten LULC classes, are instrumental for environmental monitoring, policy-making, and land management. The platform offers open access to raster maps, transition statistics, and satellite mosaics, with updates planned annually.  
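For readers who want to explore these layers directly, the sketch below shows the kind of per-class area summary an annual LULC raster supports in the Earth Engine Python API. The asset path and band name are placeholders I have invented for illustration; the published MapBiomas Paraguay collection IDs should be taken from the project’s own documentation.

```python
# Minimal sketch: per-class area summary from an annual LULC raster in Earth Engine.
# The asset ID and band name below are hypothetical placeholders, not the
# published MapBiomas Paraguay identifiers.
import ee

ee.Initialize()

lulc = ee.Image("projects/mapbiomas-placeholder/assets/paraguay/lulc")  # placeholder
classes_2022 = lulc.select("classification_2022")  # assumed band naming

# Paraguay boundary from the FAO GAUL administrative layer.
paraguay = (ee.FeatureCollection("FAO/GAUL/2015/level0")
            .filter(ee.Filter.eq("ADM0_NAME", "Paraguay")))

# Sum pixel area (in hectares) grouped by LULC class at the native 30 m scale.
areas = ee.Image.pixelArea().divide(1e4).addBands(classes_2022).reduceRegion(
    reducer=ee.Reducer.sum().group(groupField=1, groupName="class"),
    geometry=paraguay.geometry(),
    scale=30,
    maxPixels=1e13,
)
print(areas.getInfo())
```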

Historical Thematic Mapping
Historically, Paraguay’s thematic mapping has been limited. Notable efforts include a 1:500,000 scale resource mapping project in 1975, covering geology, soils, vegetation, and population, and a 1995 publication focusing on soil and land use in the Oriental Region. These maps were produced with support from international organizations and are based on the WGS84 datum.  

Urban and Regional Mapping

YouthMappersUNA and Atlas Urbano Py
Addressing the scarcity of up-to-date urban data, the YouthMappersUNA chapter at the National University of Asunción initiated the Atlas Urbano Py project. This project employs open-source tools like OpenStreetMap (OSM), Mapillary, and QGIS to map urban areas. Fieldwork includes 360° photomapping and drone-based orthophotography, resulting in detailed building use and height data for municipalities along Route PY02. To date, over 21,000 georeferenced images and 3,700 building polygons have been documented.   
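For those curious about the open data behind this work, here is a small, hedged sketch of pulling OSM building footprints through the public Overpass API; the bounding box is an arbitrary area near Asunción chosen for illustration, not the project’s actual study area.

```python
# Sketch: fetch OSM building footprints via the Overpass API.
# The bounding box is illustrative only (an arbitrary area near Asunción).
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# (south, west, north, east) in decimal degrees.
south, west, north, east = -25.32, -57.65, -25.25, -57.55

query = f"""
[out:json][timeout:60];
way["building"]({south},{west},{north},{east});
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=90)
response.raise_for_status()
buildings = response.json()["elements"]

print(f"Retrieved {len(buildings)} building ways")
# Where mapped, the building:levels tag carries the height information that
# field campaigns like Atlas Urbano Py help to complete.
with_levels = sum(1 for b in buildings if b.get("tags", {}).get("building:levels"))
print(f"{with_levels} have a building:levels tag")
```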

Cadastral Mapping and Property Fabric

Servicio Nacional de Catastro (SNC)
The SNC is responsible for Paraguay’s cadastral mapping. Recognizing the need for modernization, a comprehensive reform is underway to streamline procedures, update technological infrastructure, and enhance legal certainty. A significant development is the proposed National Unified Registry (RUN), aiming to integrate the General Directorate of Public Registries, the General Directorate of National Cadaster Services, and the Department of Surveying and Geodesy. This integration seeks to reduce processing times by at least 20% and improve transparency.   

Indigenous Land Mapping

A participatory project focused on indigenous land delimitation has been conducted in six communities of the Mbya Guaraní and Yshir peoples. Covering approximately 35,828 hectares, this initiative involved geolocating traditional boundaries, documenting land invasions, and integrating data into the SNC’s digital cadaster. The project provides legal tools for communities to assert land rights and seek regularization.  

Open Geospatial Data Infrastructure

Paraguay is part of the GeoSUR initiative, a regional network promoting free access to geospatial data across Latin America and the Caribbean. GeoSUR supports the development of spatial data infrastructures (SDIs) by providing tools for data sharing, visualization, and analysis. While progress has been made, challenges remain in ensuring data interoperability, standardization, and widespread adoption of open data practices.   

Paraguay’s cartographic landscape is evolving through collaborative efforts, technological integration, and institutional reforms. National initiatives like MapBiomas Paraguay enhance environmental monitoring, while grassroots projects such as Atlas Urbano Py address urban data gaps. Reforms in cadastral systems aim to improve land administration and legal certainty. Continued investment in open data infrastructures and capacity building will be crucial for sustaining and advancing these developments.  

When 10 Meters Isn’t Enough: Understanding AlphaEarth’s Limits in Operational Contexts

In the operational world, data is only as valuable as the decisions it enables, and as timely as the missions it supports. I’ve worked with geospatial intelligence in contexts where every meter mattered and every day lost could change the outcome. AlphaEarth Foundations is not the sensor that will tell you which vehicle just pulled into a compound or how a flood has shifted in the last 48 hours, but it may be the tool that tells you exactly where to point the sensors that can. That distinction is everything in operational geomatics.

With the public release of AlphaEarth Foundations, Google DeepMind has placed a new analytical tool into the hands of the global geospatial community. It is a compelling mid-tier dataset – broad in coverage, high in thematic accuracy, and computationally efficient. But in operational contexts, where missions hinge on timelines, revisit rates, and detail down to the meter, knowing exactly where AlphaEarth fits, and where it does not, is essential.

Operationally, AlphaEarth is best understood as a strategic reconnaissance layer. Its 10 m spatial resolution makes it ideal for detecting patterns and changes at the meso‑scale: agricultural zones, industrial developments, forest stands, large infrastructure footprints, and broad hydrological changes. It can rapidly scan an area of operations for emerging anomalies and guide where scarce high‑resolution collection assets should be deployed. In intelligence terms, it functions like a wide-area search radar, identifying sectors of interest, but not resolving the individual objects within them.

The strengths are clear. In broad-area environmental monitoring, AlphaEarth can reveal where deforestation is expanding most rapidly or where wetlands are shrinking. In agricultural intelligence, it can detect shifts in cultivation boundaries, large-scale irrigation projects, or conversion of rangeland to cropland. In infrastructure analysis, it can track new highway corridors, airport expansions, or urban sprawl. Because it operates from annual composites, these changes can be measured consistently year-over-year, providing reliable trend data for long-term planning and resource allocation.

In the humanitarian and disaster-response arena, AlphaEarth offers a quick way to establish pre‑event baselines. When a cyclone strikes, analysts can compare the latest annual composite to prior years to understand how the landscape has evolved, information that can guide relief planning and longer‑term resilience efforts. In climate-change adaptation, it can help identify landscapes under stress, informing where to target mitigation measures.

But operational users quickly run into resolution‑driven limitations. At 10 m GSD, AlphaEarth cannot identify individual vehicles, small boats, rooftop solar installations, or artisanal mining pits. Narrow features – rural roads, irrigation ditches, hedgerows – disappear into the generalised pixel. In urban ISR (intelligence, surveillance, and reconnaissance) work, this makes it impossible to monitor fine‑scale changes like new rooftop construction, encroachment on vacant lots, or the addition of temporary structures. For these tasks, commercial very high resolution (VHR) satellites, crewed aerial imagery, or drones are mandatory.

Another constraint is temporal granularity. The public AlphaEarth dataset is annual. This works well for detecting multi‑year shifts in land cover but is too coarse for short-lived events or rapidly evolving situations. A military deployment lasting two months, a flash‑flood event, or seasonal agricultural practices will not be visible. For operational missions requiring weekly or daily updates, sensors like PlanetScope’s daily 3–5 m imagery or commercial tasking from Maxar’s WorldView fleet are essential.

There is also the mixed‑pixel effect, particularly problematic in heterogeneous environments. Each embedding is a statistical blend of everything inside its 10 m × 10 m (100 m²) tile. In a peri‑urban setting, a pixel might include rooftops, vegetation, and bare soil. The dominant surface type will bias the model’s classification, potentially misrepresenting reality in high‑entropy zones. This limits AlphaEarth’s utility for precise land‑use delineation in complex landscapes.

In operational geospatial workflows, AlphaEarth is therefore most effective as a triage tool. Analysts can ingest AlphaEarth embeddings into their GIS or mission‑planning system to highlight AOIs where significant year‑on‑year change is likely. These areas can then be queued for tasking with higher‑resolution, higher‑frequency assets. In resource-constrained environments, this can dramatically reduce unnecessary collection, storage, and analysis – focusing effort where it matters most.
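As a rough illustration of that triage step, the sketch below compares two annual embedding composites and flags pixels whose signatures have drifted apart. It assumes the public Earth Engine collection ID GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL and unit-length 64-band embeddings; both assumptions, and the change threshold, should be verified against the current dataset documentation and tuned to the mission at hand.

```python
# Sketch: year-on-year change triage from annual embedding composites.
# Collection ID, unit-length embeddings, and the 0.7 threshold are assumptions.
import ee

ee.Initialize()

emb = ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
aoi = ee.Geometry.Rectangle([-57.8, -25.5, -57.3, -25.1])  # illustrative AOI

def annual(year):
    """Mosaic the embedding tiles for one calendar year over the AOI."""
    return (emb.filterDate(f"{year}-01-01", f"{year + 1}-01-01")
               .filterBounds(aoi)
               .mosaic())

e1, e2 = annual(2023), annual(2024)

# If the embeddings are unit-length, the band-wise dot product is the cosine
# similarity between the two years; low similarity suggests a changed surface.
similarity = e1.multiply(e2).reduce(ee.Reducer.sum())
change_mask = similarity.lt(0.7)

# Fraction of the AOI flagged for follow-up tasking with VHR or drone assets.
flagged = change_mask.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=10, maxPixels=1e10)
print(flagged.getInfo())
```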

A second valuable operational role is in baseline mapping. AlphaEarth can provide the reference layer against which other sources are compared. For instance, a national agriculture ministry might use AlphaEarth to maintain a rolling national crop‑type map, then overlay drone or VHR imagery for detailed inspections in priority regions. Intelligence analysts might use it to maintain a macro‑level picture of land‑cover change across an entire theatre, ensuring no sector is overlooked.
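A baseline layer of that kind could be prototyped along the lines below, treating the embedding bands as classifier features; the training asset, class labels, and collection ID are hypothetical stand-ins rather than a published workflow.

```python
# Sketch: supervised crop-type classification using embedding bands as features.
# The labelled points asset and the "crop_class" property are hypothetical.
import ee

ee.Initialize()

emb_2024 = (ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
              .filterDate("2024-01-01", "2025-01-01")
              .mosaic())

# Hypothetical field-survey points with an integer "crop_class" label
# (e.g. 0 = soy, 1 = maize, 2 = pasture).
training_points = ee.FeatureCollection("projects/my-project/assets/crop_labels")

samples = emb_2024.sampleRegions(
    collection=training_points, properties=["crop_class"], scale=10)

classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples,
    classProperty="crop_class",
    inputProperties=emb_2024.bandNames(),
)

# National-scale baseline map; priority regions can then be re-checked with
# drone or VHR imagery, as described above.
crop_map = emb_2024.classify(classifier)
```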

It’s important to stress that AlphaEarth is not a targeting tool in the military sense. It does not replace synthetic aperture radar for all-weather monitoring, nor does it substitute for daily revisit constellations in time-sensitive missions. It cannot replace the interpretive clarity of high‑resolution optical imagery for damage assessment, facility monitoring, or urban mapping. Its strength lies in scope, consistency, and analytical efficiency – not in tactical precision.

The most successful operational use cases will integrate AlphaEarth into a tiered collection strategy. At the top tier, high‑resolution sensors deliver tactical detail. At the mid‑tier, AlphaEarth covers the wide‑area search and pattern detection mission. At the base, raw satellite archives remain available for custom analyses when needed. This layered approach ensures that each sensor type is used where it is strongest, and AlphaEarth becomes the connective tissue between broad‑area awareness and fine‑scale intelligence.

Ultimately, AlphaEarth’s operational value comes down to how it’s positioned in the workflow. Used to guide, prioritize, and contextualize other intelligence sources, it can save time, reduce costs, and expand analytical reach. Used as a standalone decision tool in missions that demand high spatial or temporal resolution, it will disappoint. But as a mid‑tier, strategic reconnaissance layer, it offers an elegant solution to a long-standing operational challenge: how to maintain global awareness without drowning in raw data.

For geomatics professionals, especially those in the intelligence and commercial mapping sectors, AlphaEarth is less a silver bullet than a force multiplier. It can’t tell you everything, but it can tell you where to look, and in operational contexts, knowing where to look is often the difference between success and failure.

A Virtual Satellite for the World: Understanding the Promise and Limits of AlphaEarth

Geomatics, as my regular readers know, is a field in which I have worked for over four decades, spanning the intelligence community, Silicon Valley technology firms, and the geomatics private sector here in Ottawa. I’ve seen our discipline evolve from analog mapping and painstaking photogrammetry to real‑time satellite constellations and AI‑driven spatial analytics. This post marks the first in a new series exploring AI and geospatial data modeling, and I thought it fitting to begin with AlphaEarth Foundations – Google DeepMind’s ambitious “virtual satellite” model that promises to reshape how we approach broad‑area mapping and analysis.

Last week, Google DeepMind publicly launched AlphaEarth Foundations, its new geospatial AI model positioned as a “virtual satellite” capable of mapping the planet in unprecedented analytical form. Built on a fusion of multi-source satellite imagery, radar, elevation models, climate reanalyses, canopy height data, gravity data, and even textual metadata, AlphaEarth condenses all of this into a 64‑dimensional embedding for every 10 m × 10 m square on Earth’s land surface. The initial public dataset spans 2017 to 2024, hosted in Google Earth Engine and ready for direct analysis. In one stroke, DeepMind has lowered the barrier for environmental and land‑cover analytics at continental to global scales.
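For those who want a feel for the data structure before committing to a workflow, a minimal sketch of loading and inspecting the embeddings through the Earth Engine Python API follows; the collection ID and band naming are my assumptions and should be checked against the Earth Engine data catalog.

```python
# Sketch: load and inspect one annual embedding image.
# Collection ID and band naming (A00..A63) are assumptions to verify.
import ee

ee.Initialize()

emb = ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
point = ee.Geometry.Point([-75.70, 45.42])  # illustrative location (Ottawa)

img_2024 = (emb.filterDate("2024-01-01", "2025-01-01")
               .filterBounds(point)
               .first())

print(img_2024.bandNames().getInfo())                             # expect 64 bands
print(img_2024.select(0).projection().nominalScale().getInfo())   # expect ~10 m
```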

The value proposition is as much about efficiency as it is about accuracy. Google claims AlphaEarth delivers mapping results roughly 16 times faster than conventional remote sensing pipelines while cutting compute and storage requirements. On the accuracy side, benchmark comparisons show roughly a 23–24% improvement over comparable global embedding models. In a field where percent‑level gains are celebrated, such a margin is significant. This efficiency comes partly from doing away with some of the pre‑processing rituals that have been standard for years. Cloud masking, seasonal compositing, and spectral index calculation are baked implicitly into the learned embeddings. Analysts can skip the pixel‑level hygiene and get straight to thematic mapping, change detection, or clustering.
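As a quick illustration of skipping that pixel-level hygiene, here is a hedged sketch of unsupervised clustering run directly on the embedding bands, with no cloud masking or index calculation beforehand; the area of interest, sample size, and cluster count are arbitrary choices for demonstration.

```python
# Sketch: unsupervised clustering on the embedding bands, no pre-processing.
# AOI, sample size, and number of clusters are arbitrary illustrative values.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([-75.9, 45.2, -75.5, 45.5])  # illustrative AOI

emb_2024 = (ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
              .filterDate("2024-01-01", "2025-01-01")
              .filterBounds(aoi)
              .mosaic())

# Sample a few thousand pixels and fit a k-means clusterer on the 64 bands.
training = emb_2024.sample(region=aoi, scale=10, numPixels=5000, seed=42)
clusterer = ee.Clusterer.wekaKMeans(8).train(training)

# Each pixel is assigned to one of 8 clusters: a fast thematic segmentation
# that can later be labelled against reference data.
clusters = emb_2024.cluster(clusterer)
```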

That acceleration is welcome in both research and operational contexts. Environmental monitoring agencies can move faster from data ingestion to insight. NGOs can classify cropland or detect urban expansion without building a bespoke Landsat or Sentinel‑2 pipeline. Even large corporate GIS teams will find they can prototype analyses in days instead of weeks. The model’s tight integration with Google Earth Engine also means it sits within an established analytical environment, where a community of developers and analysts already shares code, workflows, and thematic layers.

Yet, as with any sensor or model, AlphaEarth must be understood for what it is, and what it is not. At 10 m ground sample distance, the model resolves features at the meso‑scale. It will confidently map an agricultural field, a city block, a wide river channel, or a forest stand. But it will not resolve a single vehicle in a parking lot, a shipping container, a rooftop solar array, or an artisanal mining pit. In urban contexts, narrow alleys vanish, backyard pools disappear, and dense informal settlements blur into homogeneous “built‑up” pixels. For tactical intelligence, precision agriculture at the plant or row scale, cadastral mapping, or detailed disaster damage assessment, sub‑meter resolution from airborne or commercial VHR satellites remains indispensable.

There’s also the mixed‑pixel problem. Each embedding represents an averaged, high‑dimensional signature for that 100 m² cell. In heterogeneous landscapes, say, the interface between urban and vegetation, one dominant surface type tends to mask the rest. High‑entropy pixels in peri‑urban mosaics, riparian corridors, or fragmented habitats can yield inconsistent classification results. In intelligence work, that kind of ambiguity means you cannot use AlphaEarth as a primary targeting layer; it’s more of an AOI narrowing tool, guiding where to point higher‑resolution sensors.

Another operational constraint is temporal granularity. The public dataset is annual, not near‑real‑time. That makes it superb for long‑term trend analysis (mapping multi‑year deforestation, tracking city expansion, monitoring wetland loss) but unsuitable for detecting short‑lived events. Military deployments, rapid artisanal mine expansion, seasonal flooding, or ephemeral construction activity will often be smoothed out of the annual composite. In agricultural monitoring, intra‑annual phenology, crucial for crop condition assessment, will not be visible here.

Despite these constraints, the model has clear sweet spots. At a national scale, AlphaEarth can deliver consistent, high‑accuracy land‑cover maps far faster than existing workflows. For environmental intelligence, it excels in identifying broad‑area change “hotspots,” which can then be queued for targeted VHR or drone collection. In humanitarian response, it can help quickly establish a baseline understanding of affected regions – even if building‑by‑building damage assessment must be done with finer resolution imagery. For climate science, conservation planning, basin‑scale hydrology, and strategic environmental monitoring, AlphaEarth is an accelerant.

In practice, this positions AlphaEarth as a mid‑tier analytical layer in the geospatial stack. Below it, raw optical and radar imagery from Sentinel‑2, Landsat, and others still provide the source pixels for specialists who need spectral and temporal precision. Above it, VHR commercial imagery and airborne data capture the sub‑meter world for operational and tactical decisions. AlphaEarth sits in the middle, offering the efficiency and generality of a learned representation without the cost or data‑management burden of raw imagery analysis.

One of the less‑discussed but important aspects of AlphaEarth is its accessibility. By releasing the embeddings publicly in Earth Engine, Google has created a shared global layer that can be tapped by anyone with an account: from a conservation biologist in the Amazon to a municipal planner in East Africa. The question is how long that access will persist. Google has a mixed track record in maintaining long‑term public datasets and tools, and while Earth Engine has shown staying power, analysts in mission‑critical sectors will want to maintain independent capabilities.

For the geomatics professional, AlphaEarth represents both a new capability and a familiar trade‑off. It accelerates the broad‑area, medium‑resolution part of the workflow and lowers the barrier to global‑scale thematic mapping. But it is no substitute for finer‑resolution sensors when the mission demands target‑scale discrimination or rapid revisit. As a strategic mapping tool, it has immediate value. As a tactical intelligence asset, its role is more about guidance than decision authority. In the right slot in the geospatial toolkit, however, AlphaEarth can shift timelines, expand analytical reach, and make broad‑area monitoring more accessible than ever before.

The Cost of Innovation: How the Ordnance Survey’s 1990s Financial Model Created Competition

When I arrived at Durham University in 1985 to begin my PhD research, I was given an office once occupied by David Rhind, a leading figure in geomatics. Professor Rhind passed away this month at 81, following a distinguished career in geomorphology, geomatics and cartography. Two of his most notable contributions were his work on the Chorley Committee’s 1987 report on the “Handling of Geographical Information” and his leadership of the Ordnance Survey (OS) as Director General from 1992 to 1998, a position to which I once aspired.

In the early 1990s, the UK Ordnance Survey transitioned from offering maps at cost to a commercially driven model aimed at reducing taxpayer dependence. Spearheaded by Rhind, the shift was intended to generate new revenue streams by charging commercial rates and to foster innovation in the private sector. The change took place under John Major’s continuation of Margaret Thatcher’s free-market Conservative government.

On the surface, the strategy seemed a logical response to the digital age, but its impact on the OS’s client relationships raised concerns. A prime example was the UK Automobile Association (AA), which had long relied on OS maps. As the OS raised prices, the AA, caught between increasing costs and the need to maintain affordable services, began developing its own mapping solutions. This shift, prompted by Rhind’s commercial model, mirrored a broader industry trend where rising prices forced organizations to explore alternatives.

The AA’s move away from OS data highlighted a flaw in the OS’s strategy: by prioritizing revenue, the OS alienated loyal clients and opened the door for competitors offering cheaper or more specialized services. This weakened the OS’s market dominance, contributed to the rise of private mapping services, and eroded its monopoly.

This shift also sparked debate about public ownership of data. Mapping data, funded by taxpayers, had once been made available at cost to ensure equitable access. Rhind’s commercialization, while financially successful, seemed to contradict this principle, favoring revenue over the broader public good.

In hindsight, the transition to a commercial model raised important questions about the long-term sustainability of the OS. While it aimed to modernize the service and ensure financial self-sufficiency, it fragmented the market, driving clients to develop in-house solutions and creating competition. The AA’s departure underscores the risks of prioritizing profit over accessibility.

Today, the OS operates a mixed model, offering free OpenData alongside premium products licensed according to usage. This model aims to balance public access with financial sustainability, generating revenue for ongoing data maintenance. However, the legacy of the commercialization strategy persists, and the question remains whether the OS can maintain its mission of serving the public good while ensuring its financial independence. The challenge is finding a balance that doesn’t drive clients away or erode public access.

It’s interesting to note that the U.S. Geological Survey (USGS) continues to distribute a significant amount of its data free to the public, including topographic maps, earthquake and water data, and Landsat imagery. The USGS does offer some cost-recovery and subscription-based datasets, but the vast majority of its holdings remain freely available; I do wonder, though, how long this financial model will remain in place under the second Trump administration.