Why Decentralized Social Media Is Gaining Ground

As I edit this post, I feel as though I am mansplaining a shift in technology and platforms that most people already recognize: people are getting fed up with the way big platforms like Meta, X, and Google try to maintain control of the narrative and our data.

What’s Driving the Shift?
Today, 5.42 billion people use social media globally, and the average user visits nearly seven platforms per month. The field is crowded and dominated by a handful of big players driving both attention capture and data exploitation.

Decentralized networks are winning attention amid growing distrust: a Pew Research survey found 78% of users worry about how traditional platforms use their data. These alternatives promise control: data ownership, customizable moderation, transparent algorithms, and monetization models that shift value back to creators.

Moreover, the market is on a steep growth path: valued at US$1.2 billion in 2023 and projected to grow 29.5% annually through 2033, decentralized social is carving out real economic ground.

Key Platforms Leading the Movement

Bluesky: Built on the AT Protocol, prioritizing algorithmic choice and data portability. Opened publicly in February 2024, it passed 10M registered users by October 2024, more than 25M by late 2024, and recently surpassed 30M. It also supports diverse niche front ends, like Flashes and PinkSea. Moderation remains a challenge amid rising bot activity.

Mastodon: Federated, ActivityPub-based microblogging. As of early 2025, estimates vary: around 9-15 million total users, with roughly 1 million monthly active accounts. Its decentralized model allows communities to govern locally, though Reddit discussions suggest engagement still feels low or "ghost-town-ish."

Lens Protocol: Web3-native, built on Polygon. Empowers creators to own their social graph and monetize content directly through tokenized mechanisms.

Farcaster: Built on Optimism; emphasizes identity portability and content control across different clients.

Poosting: A Brazilian alternative launched in 2025, offering a chronological feed, thematic communities, and low algorithmic interference. Reached 130,000 users within months and is valued at R$6 million.


Additional notable mentions: MeWe, which is transitioning to the Project Liberty-based DSNP protocol and could become the largest decentralized platform; and Odysee, for decentralized video hosting via LBRY, though moderation remains an issue.

Why Users Are Leaving Big Tech
Privacy & Surveillance Fatigue: Decentralized alternatives reduce data collection and manipulation.
Prosocial Media Momentum: Movements toward more empathetic and collaborative platforms are gaining traction, with decentralized systems playing a central role.
Market Shifts & Cracks in Big Tech: TikTok legal challenges prompted influencers to explore decentralized fediverse platforms, while acquisition talks like Frank McCourt’s “people’s bid” for TikTok push the conversation toward user-centric internet models.

Challenges Ahead
User Experience & Onboarding: Platforms like Mastodon remain intimidating for non-tech users.
Scalability & Technical Friction: Many platforms still struggle with smooth performance at scale.
Moderation Without Central Control: Community-based governance is evolving but risks inconsistent enforcement and harmful content.
Mainstream Adoption: Big platforms dominate user attention, making decentralized alternatives a niche, not yet mainstream.

What’s Next
Hybrid Models: Decentralization features are being integrated into mainstream platforms, like Threads joining the Fediverse, bridging familiarity with innovation. 
Creator-First Economies: Platforms onboard new monetization structures—subscriptions, tokens, tipping—allowing creators to retain 70–80% of the value, compared to the 5–15% they currently retain on centralized platforms.
Niche and Ethical Communities: Users will increasingly seek vertical or value-oriented communities (privacy, art, prosocial discourse) over mass platforms.
Market Potential: With a high projected growth rate, decentralized networks could become a major force, particularly if UX improves and moderation models mature. 

The takeaway: Decentralized social media has evolved from fringe idealism to a tangible alternative, driven by data privacy concerns, creator empowerment, and ethical innovation. Platforms like Bluesky and Mastodon are gaining traction but still face adoption and moderation challenges. The future lies in hybrid models, ethical governance, and creator-first economies that shift the balance of power away from centralized gatekeepers.

Results Over Bureaucracy: Transforming Federal Management and Workforce Planning

Canada’s federal government employs hundreds of thousands of people, yet far too often, success is measured by inputs rather than results. Hours worked, meetings attended, or forms completed dominate performance metrics, while citizens experience delays, inconsistent service, and bureaucratic frustration. Prime Minister Mark Carney has an opportunity to change this by embracing outcomes-based management and coupling it with a planned reduction of the federal workforce—a strategy that improves efficiency without undermining service delivery.

The case for outcomes-based management
Currently, federal management emphasizes process compliance over actual impact. Staff are assessed on whether they followed procedures, logged sufficient hours, or completed internal forms. While accountability is important, focusing on inputs rather than outputs fosters risk aversion, discourages initiative, and prioritizes process over public value.

Outcomes-based management flips this paradigm. Departments and employees are held accountable for tangible results: timeliness, accuracy, citizen satisfaction, and measurable program goals. Performance evaluation becomes tied to impact rather than paperwork. Managers are empowered to allocate resources strategically, encourage innovation, and remove obstacles that slow delivery. Employees gain clarity on expectations, flexibility in execution, and motivation to improve services.

This approach is widely recognized internationally as best practice in public administration. Governments that adopt outcomes-focused management report faster service delivery, higher citizen satisfaction, and better use of limited resources. It is a tool for effectiveness as much as efficiency.

Planned workforce reduction: 5% annually
Outcomes-based management alone does not shrink government, but it creates the environment to do so responsibly. With clearer accountability for results, the government can reduce headcount without impairing services. A planned 5% annual reduction over five years, achieved through retirements, attrition, and more selective hiring, offers a predictable, sustainable path to a smaller, more focused public service.

No mass layoffs are necessary. Instead, positions are left unfilled where feasible, and recruitment is limited to essential roles. Over five years, the workforce contracts by approximately 23%, freeing funds for high-priority programs while maintaining core services. At the end of the cycle, a full review assesses outcomes: delivery quality, service metrics, and costs. Adjustments can be made if reductions have inadvertently affected citizens’ experience.
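The roughly 23% figure is simply compounding at work; a quick sketch of the arithmetic, normalizing the starting workforce to 1.0:

```python
# Compound effect of a 5% annual headcount reduction sustained for five years
headcount = 1.0  # normalized starting workforce
for year in range(5):
    headcount *= 0.95  # each year retains 95% of the prior year's staff

reduction = 1 - headcount
print(f"Total reduction after five years: {reduction:.1%}")  # → 22.6%
```

Because each year's 5% cut applies to an already-smaller base, the cumulative reduction is slightly less than the naive 25%.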

Synergy with the other reforms
This plan works hand-in-hand with the other two reforms proposed: eliminating internal cost recovery and adopting a single pay scale with one bargaining agent. With fewer staff and a streamlined compensation system, management gains greater clarity and control. Removing internal billing and administrative overhead frees staff to focus on outcomes, while a unified pay scale ensures fair and consistent compensation as the workforce shrinks. Together, these reforms create a coherent, accountable, and modern public service.

Benefits for Canadians
Outcomes-based management and planned workforce reduction offer multiple benefits:
1. Efficiency gains: Staff focus on work that delivers measurable results rather than administrative juggling.
2. Cost savings: Attrition-based reductions lower salary and benefits expenditures without disruptive layoffs.
3. Transparency: Clear metrics demonstrate value to taxpayers, building public trust.
4. Resilience and innovation: Departments adapt faster, encouraging problem-solving and continuous improvement.

Political and administrative feasibility
Canada has successfully experimented with elements of outcomes-based management in programs such as the Treasury Board’s Results-Based Management Framework and departmental performance agreements. These initiatives demonstrate that the federal bureaucracy can shift focus from inputs to results if given clear mandates and strong leadership. Coupled with a predictable downsizing plan, the government can modernize staffing while maintaining accountability and service quality.

A smarter, results-driven public service
Prime Minister Carney has the opportunity to reshape Ottawa’s culture. Moving from input-focused bureaucracy to outcomes-based management, and pairing it with a responsible workforce reduction, creates a public service that delivers more for less. Citizens experience faster, more reliable services; employees understand expectations and have clarity in their roles; and the government maximizes value from every dollar spent.

Together with eliminating internal cost recovery and adopting a single pay scale, this reform completes a trio of policies that make the federal government smaller, smarter, and more accountable. Canadians deserve a public service focused not on paperwork, but on results that matter. This is the path to a modern, efficient, and effective Ottawa.

A Comparative Analysis of Global Space Technology Capabilities 

The space sector has changed dramatically in recent decades, with nations advancing human exploration, satellite technology, and commercial ventures beyond Earth. As more players enter this evolving arena, it is helpful to look at the capabilities of different countries to see how their strengths, challenges, and ambitions shape the future of space. This overview offers a comparative look at several leading spacefaring nations, highlighting their key achievements and ongoing projects.

United States: A Leader in Innovation and Commercialization
The United States remains a dominant force in space technology, driven by the synergy between governmental and private sector endeavors. NASA, the nation’s flagship space agency, has historically led human space exploration, most notably with the Apollo program that landed astronauts on the Moon. Today, NASA’s Artemis program aims to return humans to the lunar surface and eventually establish a sustainable lunar presence. Furthermore, NASA’s ongoing Mars missions, including the Perseverance rover and the upcoming sample return initiative, are paving the way for future human exploration of the Red Planet.

However, it is the rise of private companies like SpaceX and Blue Origin that has revolutionized U.S. space capabilities. SpaceX, with its reusable Falcon rockets and ambitious Starship program, has drastically reduced launch costs and increased mission cadence, while also contributing to global satellite broadband via the Starlink constellation. Blue Origin, although more focused on suborbital space tourism and future lunar exploration, is also playing a key role in shaping the future of space. The integration of private players into the space ecosystem has created a competitive environment that fosters innovation, with an eye on deep space exploration, asteroid mining, and even space tourism.

Despite its successes, the U.S. faces significant challenges around cost and its reliance on private entities for crewed space missions, a dependence NASA is working to balance through its own projects and partnerships. The balance between government-funded exploration and private sector innovation will define the future of U.S. space ambitions.

China: A Rising Space Power with Ambitious Goals
China has emerged as a major player in the space domain, with the China National Space Administration (CNSA) spearheading the country’s space ambitions. Unlike the United States, China’s space program is largely state-driven, with a clear, long-term vision focused on becoming a dominant spacefaring nation. One of China’s most notable achievements has been its successful lunar exploration programs. The Chang’e missions, including the first-ever soft landing on the far side of the Moon and the recent lunar sample return, demonstrate China’s growing expertise in deep space exploration.

China has also made significant strides in human spaceflight, with the establishment of the Tiangong space station, which serves as a platform for long-term orbital missions and scientific research. The country’s Mars exploration capabilities were proven with the Tianwen-1 mission, which included the successful deployment of the Zhurong rover on the Martian surface. These achievements are indicative of China’s ability to master complex space technologies and execute large-scale missions.

On the military front, China has developed advanced space surveillance systems and anti-satellite capabilities, which highlight the strategic importance of space in national defense. Looking forward, China is planning ambitious missions, including Mars sample return, the construction of a lunar base, and the exploration of asteroids. However, China’s space program is also hindered by its relative isolation from international collaboration due to geopolitical tensions, limiting its ability to share and exchange knowledge with other spacefaring nations.

Russia: A Storied Legacy with Modern Challenges
Russia, as the inheritor of the Soviet Union’s space legacy, remains an important player in global space technology. The Russian space agency, Roscosmos, is renowned for its expertise in human spaceflight, dating back to the launch of Sputnik, the first artificial satellite, and the first human spaceflight by Yuri Gagarin. Today, Russia continues to provide critical crewed spaceflight capabilities to the International Space Station (ISS) through its Soyuz program, which remains a workhorse for transporting astronauts to and from orbit.

Russia’s space program also emphasizes military applications, with advanced satellite systems for navigation, reconnaissance, and surveillance. Despite this, Russia faces several challenges, including aging infrastructure, a shrinking budget, and increasing competition from private companies and international partners. While the country remains a key participant in the ISS, it is increasingly at risk of being overshadowed by more technologically advanced nations.

Looking to the future, Russia has outlined plans for continued lunar exploration, although its Luna 25 mission ended in a crash on the lunar surface in 2023, and it continues to develop advanced space propulsion systems. However, for Russia to maintain its standing as a space power, it will need to modernize its space technologies and address the structural inefficiencies that have plagued its space industry in recent years.

European Union: Collaborative Strength and Scientific Prowess
The European Space Agency (ESA) represents a collaborative effort between multiple European nations, and this collaboration is one of its greatest strengths. The ESA has made significant contributions to global space efforts, particularly in satellite technology and space science. The Ariane family of rockets has been a reliable workhorse for launching satellites into orbit, while the Galileo satellite constellation is Europe’s answer to the U.S. Global Positioning System (GPS), providing high-precision navigation services to users around the world.

The ESA has also played a pivotal role in scientific exploration, collaborating on high-profile projects such as the James Webb Space Telescope and the Rosetta comet mission. Through these efforts, European scientists have contributed to major discoveries in space science, deepening our understanding of the cosmos.

Despite its many achievements, Europe faces challenges, particularly in human spaceflight. While the ESA has been an integral partner in the ISS program, it is still dependent on the United States and Russia for crewed missions. Future plans include greater involvement in the Artemis lunar program, advanced space telescopes, and participation in deep-space exploration, but Europe will need to further develop its own crewed space capabilities to fully compete on the global stage.

India: Cost-Effective Innovation and Expanding Capabilities
India, through its space agency ISRO, has made significant strides in space exploration, often achieving impressive feats with a fraction of the budget of other spacefaring nations. Its Mars Orbiter Mission (Mangalyaan) made India the first Asian nation to reach Mars orbit, and did so at remarkably low cost. Similarly, the Chandrayaan missions have contributed to our understanding of the Moon, with Chandrayaan-2’s orbiter continuing to provide valuable data.

ISRO’s cost-effective approach has also made it a key player in the commercial launch sector, with its Polar Satellite Launch Vehicle (PSLV) known for its reliability and affordability. India’s growing focus on space-based applications—such as satellite navigation, weather forecasting, and rural connectivity—demonstrates the country’s commitment to leveraging space technology for societal benefit.

Looking ahead, India has ambitious plans, including the Gaganyaan crewed mission, reusable rocket technologies, and deep-space exploration missions. However, the country still faces challenges in terms of budget constraints and technological limitations compared to global leaders. Despite these challenges, ISRO’s successes in low-cost, high-impact missions have made it a model for emerging space nations.

Japan: Precision Engineering and Collaborative Excellence
Japan’s space agency, JAXA, is known for its precision engineering and innovative approach to space exploration. One of Japan’s most notable achievements is its Hayabusa mission, which successfully returned samples from the asteroid Itokawa, and the subsequent Hayabusa2 mission, which collected samples from the asteroid Ryugu. These missions have placed Japan at the forefront of asteroid exploration, providing valuable insights into the origins of the solar system.

JAXA also plays an important role in international collaborations, contributing to the ISS and working on future lunar missions in partnership with NASA. Japan’s space technology is particularly focused on robotics, with the development of autonomous systems for space exploration and satellite servicing.

While Japan excels in scientific exploration and technological development, it faces challenges in scaling its space ambitions beyond its current focus on research and development. Japan’s private sector has not yet reached the scale of space commercialization seen in the United States, but the country’s ongoing advancements in space science and engineering position it as a key player in the global space arena.

Emerging Space Nations: Niche Players with Growing Influence
In addition to the major space powers, a growing number of emerging nations are making significant strides in space technology. The United Arab Emirates (UAE), for example, successfully launched its Mars mission, Hope, in 2020, marking a historic achievement for the Arab world. South Korea is also making progress with its lunar missions, while Israel’s Beresheet lander, though unsuccessful, demonstrated the country’s determination to establish a presence in space.

These emerging spacefaring nations are focusing on niche areas such as planetary exploration, small satellite development, and indigenous launch capabilities. While they face challenges such as limited funding and technological dependencies, their growing interest in space technology will likely contribute to the diversification of the global space landscape in the coming years.

A Global Space Race with Diverse Players
The global space race is no longer defined solely by the superpowers of the past; it is now a diverse and competitive landscape where nations of all sizes are making their mark. The United States, China, Russia, and Europe remain at the forefront of human exploration and satellite technology; India and Japan punch above their weight in cost-effective missions and precision engineering; and emerging players such as the UAE increasingly contribute to scientific discovery and space commercialization. As technological advancements continue and the boundaries of space exploration expand, the future of space will be shaped by the unique capabilities and ambitions of these diverse players.

From Dystopian Fiction to Political Reality: Britain’s Digital ID Proposal

As a teenager in the late 1970s, I watched a BBC drama that left a mark on me for life. The series was called 1990. It imagined a Britain in economic decline where civil liberties had been sacrificed to bureaucracy. Citizens carried Union cards: identity documents that decided whether they could work, travel, or even buy food. Lose the card and you became a “non-person.” Edward Woodward played the defiant journalist Jim Kyle, trying to expose the regime, while Barbara Kellerman embodied the cold efficiency of the state machine.

Back then it felt like dystopian fantasy, a warning not a forecast. Yet today, watching the UK government push forward with a mandatory digital ID scheme, I feel as if the fiction of my youth is edging into fact.

The plan sounds simple enough: a free digital credential stored on smartphones, initially required to prove the right to work. But let’s be honest, once the infrastructure exists, expansion is inevitable. Why stop at work checks? Why not use it for renting property, opening bank accounts, accessing healthcare, or even voting? Every new use will be presented as common sense. Before long, showing your digital ID could become as routine, and as coercive, as carrying the Union card in 1990.

Privacy is the first casualty. This credential will include biometric data and residency status, and it will be verified through state-certified providers. In theory it’s secure. In practice, Britain’s record on data protection is chequered, from NHS leaks to Home Office blunders. Biometric data isn’t like a password: you can’t change your face if it’s compromised. A single breach could haunt people for life.

Exclusion is the next casualty. Ministers claim alternatives will exist for those without smartphones, but experience tells us such alternatives are clunky and marginal. Millions in Britain don’t have passports, reliable internet, or the latest phone. Elderly people, the poor, disabled citizens: these groups risk being pushed further to the margins. In 1990, the state declared dissidents “non-people.” In 2025, exclusion could come from something as mundane as a failed app update.

The democratic deficit is just as troubling. Voters already rejected ID cards once, when Labour’s 2006 scheme collapsed under public resistance. For today’s government to revive the idea, in digital clothing, without wide public debate or strong parliamentary scrutiny, is a profound act of political amnesia. We were told only a few years ago there would be no national ID. Yet here it comes, rebranded and repackaged as “modernisation.”

And then there’s the problem of function creep. In 1990, the Union card didn’t begin as an instrument of oppression; it became one because officials found it too useful to resist. The same danger lurks today. A card designed for immigration control could end up regulating everyday life. It could be tied to financial services, travel, or even access to political spaces. Convenience is the Trojan horse of coercion.

The government argues this will tackle illegal working and make life easier for businesses. Perhaps it will. But at what cost? We will have built the very infrastructure that past generations fought to reject: a system where your ability to live, work and move depends on a state-issued credential. The show I watched as a teenager was meant to remind us what happens when people forget to guard their freedoms.

This isn’t just a technical fix. It’s a fundamental shift in the relationship between citizen and state. Once the power to define your identity sits in a centralised digital credential, you no longer own it; the government does. That should chill anyone who values freedom in Britain.

We need to pause, debate, and if necessary, reject this plan before the future we feared on screen becomes the present we inhabit.

Hosting Your Own AI: Why Everyday Users Should Consider Bringing AI Home

The rise of high-speed fibre internet has done more than just make Netflix faster and video calls clearer: it has opened the door for ordinary people to run powerful technologies from the comfort of their own homes. One of the most exciting of these possibilities is self-hosted artificial intelligence. While most people are used to accessing AI through big tech companies’ cloud platforms, the time has come to consider what it means to bring this capability in-house. For everyday users, the advantages come down to three things: security, personalization, and independence.

The first advantage is data security. Every time someone uses a cloud-based AI service, their words, files, or images travel across the internet to a company’s servers. That data may be stored, analyzed, or even used to improve the company’s products. For personal matters like health information, financial records, or private conversations, that can feel intrusive. Hosting an AI at home flips the equation. The data never leaves your own device, which means you, not a tech giant, are the one in control. It’s like the difference between storing your photos on your own hard drive versus uploading them to a social media site.

The second benefit is customization. The AI services offered online are built for the masses: general-purpose, standardized, and often limited in what they can do. By hosting your own AI, you can shape it around your life. A student could set it up to summarize their textbooks. A small business owner might feed it product information to answer customer questions quickly. A parent might even build a personal assistant trained on family recipes, schedules, or local activities. The point is that self-hosted AI can be tuned to match individual needs, rather than forcing everyone into a one-size-fits-all mold.

The third reason is independence. Relying on external services means depending on their availability, pricing, and rules. We’ve all experienced the frustration of an app changing overnight or a service suddenly charging for features that used to be free. A self-hosted AI is yours. It continues to run regardless of internet outages, company decisions, or international disputes. Just as personal computers gave households independence from corporate mainframes in the 1980s, self-hosted AI promises a similar shift today.

The good news is that ordinary users don’t need to be programmers or engineers to start experimenting. Open-source projects are making AI more accessible than ever. GPT4All offers a desktop app that works much like any other piece of software: you download it, run it, and interact with the AI through a simple interface. Ollama provides an easy way to install and switch between different AI models on your computer. Communities around these tools offer clear guides, friendly forums, and video tutorials that make the learning curve far less intimidating. For most people, running a basic AI system today is no harder than setting up a home printer or Wi-Fi router.
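For readers curious what "talking to a local model" actually looks like, here is a minimal sketch against Ollama's local HTTP API. The /api/generate endpoint on port 11434 is Ollama's default; the model name "llama3" is just an example of one you might have pulled, and the prompt is illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt, model="llama3"):
    """Build the JSON body Ollama expects for a one-shot (non-streaming) reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_ai(prompt, model="llama3"):
    """Send a prompt to the locally running model and return its reply text."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama instance with the model pulled first, e.g.:
#   ollama pull llama3
# print(ask_local_ai("Summarize my week's schedule in three bullet points."))
```

Notice that everything stays on localhost: the prompt and the reply never leave your machine, which is the privacy point in miniature.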

Of course, there are still limits. Running the largest and most advanced models may require high-end hardware, but for many day-to-day uses (writing, brainstorming, answering questions, or summarizing text), lighter models already perform impressively on standard laptops or desktop PCs. And like every other piece of technology, the tools are becoming easier and more user-friendly every year. What feels like a hobbyist’s project in 2025 could be as common as antivirus software or cloud storage by 2027.

Self-hosted AI isn’t just for tech enthusiasts. Thanks to fibre internet and the growth of user-friendly tools, it is becoming a real option for everyday households. By bringing AI home, users can protect their privacy, shape the technology around their own lives, and free themselves from the whims of big tech companies. Just as personal computing once shifted power from corporations to individuals, the same shift is now within reach for artificial intelligence.

Sharing as the Core of Influence in Knowledge-Driven Organizations

In contemporary organizational theory, the capacity to share knowledge efficiently is increasingly recognized not merely as a good practice, but as one of the central levers of influence, innovation, and competitive advantage. Influence in the workplace is no longer determined solely by formal authority or proximity to decision-makers; it hinges instead on who opens up their ideas, disseminates outcomes, and builds collective awareness. Knowledge sharing, properly conceived, is a social process that undergirds learning, creativity, and organizational agility.

Why Sharing Still Matters
Even with advances in digital collaboration tools, hybrid work environments, and more explicit knowledge management policies, many organizations continue to wrestle with information silos, “knowledge hoarding,” and weak visibility of what colleagues are doing. These behaviors impose hidden costs: duplication of work, failure to capitalize on existing insights, slow adoption of innovations, and organizational inertia.

Empirical studies bear this out: when the organizational climate is supportive and centralization and formalization are lower, knowledge sharing behavior (KSB) tends to increase. For example, a recent study of IT firms in Vietnam (n = 529) found that a positive organizational climate had a direct positive effect on KSB, while high degrees of centralization and formalization decreased knowledge-sharing intentions.

Moreover, knowledge sharing is strongly associated with improved performance outcomes. In technological companies in China, for instance, research shows that AI-augmented knowledge sharing, along with organizational learning and dynamic capabilities, positively affect job performance.  

Theoretical Foundations & Diffusion of Influence
A number of established frameworks help us understand both how knowledge spreads and why sharing can shift influence within organizations.
Diffusion of Innovations (Everett Rogers): This theory explains how new ideas are adopted across a social system over time, spreading from innovators to early adopters, the early majority, and so on. Key variables include communication channels, time, social systems, and the characteristics of the innovation itself.
Threshold Models & Critical Mass: Recent experiments suggest that when a certain proportion of individuals (often around 20-30%) adopts or shares an innovation, the whole system can tip into broader adoption. For example, one study found that social diffusion leading to a change in norms becomes much more probable once a committed minority exceeds roughly 25% of the population.
Organizational Climate & Intention/Behavior Models: Behavior intentions (e.g. willingness to share) are shaped by trust, perceived support, alignment of individual and organizational values, and perceived risk/benefit. These mediate whether knowledge is actually shared or hidden.  
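The critical-mass dynamic can be made concrete with a toy simulation in the style of Granovetter's threshold model. The 30% adoption threshold, the population size, and the committed-minority fractions below are illustrative assumptions, not empirical parameters:

```python
def cascade(thresholds, committed_fraction):
    """Final adoption fraction, given each agent's adoption threshold and a
    committed minority that adopts unconditionally from the start."""
    n = len(thresholds)
    # The committed minority occupies the first round(committed_fraction * n) slots.
    adopted = [i < round(committed_fraction * n) for i in range(n)]
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / n  # current adoption level visible to everyone
        for i, t in enumerate(thresholds):
            if not adopted[i] and t <= frac:
                adopted[i] = True  # agent's threshold is met; they adopt too
                changed = True
    return sum(adopted) / n

# Everyone else adopts once 30% of peers have.
population = [0.30] * 100
print(cascade(population, 0.25))  # → 0.25 (committed minority too small; stalls)
print(cascade(population, 0.30))  # → 1.0  (critical mass reached; full cascade)
```

A five-point difference in the committed minority separates a stalled idea from a full cascade, which is exactly the tipping behavior the studies above describe.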

Barriers & Enablers
Understanding why people don’t share is as important as understanding why they do.

Barriers include:
• Structural impediments, such as overly centralized decision frameworks, rigid hierarchy, and heavy formalization. These reduce the avenues for informal sharing and flatten the perceived payoff for going outside established channels.
• Cognitive or psychological obstacles, such as fear of criticism, loss of advantage (“knowledge as power”), lack of trust, or simply not knowing who might benefit from what one knows.
• Technological and process deficiencies: poor documentation practices, weak knowledge management systems, lack of standard archiving, hard-to-locate material, and so on. These make sharing costly in terms of effort, risk of misunderstanding, or duplication.

Enablers include:
• Cultivating a learning culture: where mistakes are not punished, where experimentation is supported, and where informal learning is valued. Studies in team climate show that the presence of an “organizational learning culture” correlates strongly with innovative work behavior.
• Leadership that is supportive of sharing: transformational, inclusive leadership, openness to new ideas even when they challenge orthodoxy. Leaders who make visible their support for sharing set norms.
• Recognition, incentive alignment, and reward systems that explicitly value sharing. When sharing contributes to promotions, performance evaluations, or peer recognition, people are more likely to invest effort in it.  

Influence through Sharing: A Refined Model
Putting this together, here is a refined model of how sharing translates into influence:
1. Visibility: Sharing makes one’s work visible across formal and informal networks. Visibility breeds recognition.
2. Peer Adoption & Critical Mass: Innovation often needs a threshold of peer adoption. Once enough people (often around 20-30%) accept or discuss an idea, it tends to propagate more broadly. Early informal sharing helps reach that threshold.
3. Legitimization & Institutionalization: When enough peers accept an idea, it begins to be noticed by formal leadership, which may then adopt it as part of official strategy or practice. What was once “radical” becomes “official.”
4. Influence & Reward: As an individual or team’s ideas get absorbed into the organizational narrative, their influence increases. They may be entrusted with leadership, provided more resources, or seen as agents of change.

Recent Considerations: Hybrid Work, Digital Tools, AI
Over the past few years, changes in how and where people work, plus the integration of AI into knowledge-sharing tools, add new dimensions:
• Remote and hybrid setups tend to magnify the problems of invisibility and isolation; informal corridor conversations or impromptu check-ins become less likely. Organizations must work harder to construct virtual equivalents (e.g. asynchronous documentation, digital forums, internal social networks).
• AI and knowledge-management platforms can help accelerate sharing, reduce friction (e.g. discovery of existing reports, automatic tagging, summarisation), but they also risk over-trust in automation or leaving behind tacit knowledge that is hard to codify.
• Given the increasing volume of information, selective sharing and curating become skills. Not every detail needs to be shared widely, but knowing what, when, and how to share is part of influence.

Implications for Practice
For individuals aiming to increase their influence via sharing:
• Embed documentation and archival processes into every project (e.g. phase reports, lessons learned).
• Use both formal and informal channels: internal blogs or newsletters, but also coffee chats, virtual social spaces.
• Be willing to experiment and share preliminary findings; feedback improves ideas and increases visibility.

For organizations:
• Build a culture that rewards sharing explicitly through performance systems.
• Reduce structural barriers like overly centralized control or onerous formalization.
• Provide tools and training to lower the effort of sharing; make knowledge easier to find and use.
• Encourage cross-team interactions, peer networks, communities of practice.

Final Word
Sharing is not just a morally good or nice thing to do; it is one of the most potent forms of influence in knowledge-based work. It transforms static assets into living processes, elevates visibility, enables innovation, and shapes organizational culture. As the world of work continues to evolve, those who master the art and science of sharing will increasingly become the architects of change.

References:
Here are key sources that discuss the concepts above. You can draw on these for citations or further reading.
1. Xu, J., et al. (2023). A theoretical review on the role of knowledge sharing and … [PMC]
2. Peters, L.D.K., et al. (2024). “‘The more we share, the more we have’? Analyses of identification with the company positively influencing knowledge-sharing behaviour…”
3. Greenhalgh, T., et al. (2004). “Diffusion of Innovations in Service Organizations.” Milbank Quarterly – literature review on spreading and sustaining innovations.
4. Ye, M., et al. (2021). “Collective patterns of social diffusion are shaped by committed minorities …” Nature Communications
5. Bui, T. T., Nguyen, L. P., Tran, A. P., Nguyen, H. H., & Tran, T. T. (2023). “Organizational Factors and Knowledge Sharing Behavior: Mediating Model of Knowledge Sharing Intention.”
6. Abbasi, S. G., et al. (2021). “Impact of Organizational and Individual Factors on Knowledge Sharing Behavior.”
7. He, M., et al. (2024). “Sharing or Hiding? Exploring the Influence of Social … Knowledge sharing & knowledge hiding mechanisms.”
8. Sudibjo, N., et al. (2021). “The effects of knowledge sharing and person–organization fit on teachers’ innovative work …”
9. Academia preprint: Cui, J., et al. (2025). “The Explore of Knowledge Management Dynamic Capabilities, AI-Driven Knowledge Sharing, Knowledge-Based Organizational Support, and Organizational Learning on Job Performance: Evidence from Chinese Technological Companies.”
10. Koivisto, K., & Taipalus, T. (2023). “Pitfalls in Effective Knowledge Management: Insights from an International Information Technology Organization.”  

The Double Standard: Blocking AI While Deploying AI

In an era when artificial intelligence threatens to displace traditional journalism, a glaring contradiction has emerged: news organizations that block AI crawlers from accessing their content are increasingly using AI to generate the very content they deny to AI. This move not only undermines the values of transparency and fairness, but also exposes a troubling hypocrisy in the media’s engagement with AI.

Fortifying the Gates Against AI
Many established news outlets have taken concrete steps to prevent AI from accessing their content. As of early 2024, over 88 percent of top news outlets, including The New York Times, The Washington Post, and The Guardian, were blocking AI data-collection bots such as OpenAI’s GPTBot via their robots.txt files. Echoing these moves, a Reuters Institute report found that nearly 80 percent of prominent U.S. news organizations blocked OpenAI’s crawlers by the end of 2023, while roughly 36 percent blocked Google’s AI crawler.

These restrictions are not limited to voluntary technical guidelines. Cloudflare has gone further, blocking known AI crawlers by default and offering publishers a “Pay Per Crawl” model, allowing access to their content only under specific licensing terms. The intent is clear: content creators want to retain control, demand compensation, and prevent unlicensed harvesting of their journalism.
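For reference, the robots.txt opt-outs described above typically take this form (GPTBot, Google-Extended, and CCBot are the published crawler tokens for OpenAI, Google’s AI training, and Common Crawl, respectively):

```text
# Blocks AI training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is purely advisory, which is exactly why network-level enforcement like Cloudflare’s default blocking has emerged alongside it.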

But Then They Use AI To Generate Their Own Content
While these publishers fortify their content against external AI exploitation, they increasingly turn to AI internally to produce articles, summaries, and other content. This shift has real consequences: jobs are being cut and AI-generated content is replacing human-created journalism.
• Reach plc, publisher of the Mirror, the Express, and other titles, recently announced a restructuring that places 600 jobs at risk, including 321 editorial positions, as it pivots toward AI-driven formats like video and live content.
• Business Insider CEO Barbara Peng confirmed that roughly 21 percent of the staff were laid off to offset declines in search traffic, while the company shifts resources toward AI-generated features such as automated audio briefings.
• CNET faced backlash after it published numerous AI-generated stories under staff bylines, some containing factual errors. The fallout led to corrections and renewed pushback from newsroom employees.

The Hypocrisy Unfolds
This dissonance, blocking AI while deploying it, lies at the heart of the hypocrisy. On one hand, publishers argue for content sovereignty: preventing AI from freely ingesting and repurposing their work. On the other hand, they quietly harness AI for their own ends, often reducing staffing under the pretense of innovation or cost-cutting.

This creates a scenario in which:
• AI is denied access to public content, while in-house AI is trusted with producing public-facing content.
• Human labor is dismissed in the name of progress, even as in-house AI freely taps the cultural and journalistic capital built over years.
• Control and compensation arguments are asserted to keep AI out, yet the same AI is deployed strategically to reshape newsroom economics.

This approach fails to reconcile the ethical tensions it embodies. If publishers truly value journalistic integrity, transparency, and compensation, then applying those principles selectively, accepting them only when convenient, is disingenuous. The news media’s simultaneous rejection and embrace of AI reflect a transactional, rather than principled, stance.

A Path Forward – or a Mirage?
Some publishers are demanding fair licensing models, seeking to monetize AI access rather than simply deny it. The emergence of frameworks like the Really Simple Licensing (RSL) standard allows websites to specify terms, such as royalties or pay-per-inference charges, in their robots.txt, aiming for a more equitable exchange between AI firms and content creators.

Still, that measured approach contrasts sharply with using AI to cut costs internally, a strategy that further alienates journalists and erodes trust in media institutions.

Integrity or Expedience?
The juxtaposition of content protection and AI deployment in newsrooms lays bare a cynical calculus: AI is off-limits when others use it, but eminently acceptable when it serves internal profit goals. This selective embrace erodes the moral foundation of journalistic institutions and raises urgent questions:
• Can publishers reconcile the need for revenue with the ethical imperatives of transparency and fairness?
• Will the rapid rise of AI content displace more journalists than it empowers?
• And ultimately, can media institutions craft coherent policies that honor both their creators and the audience’s right to trustworthy news?

Perhaps there is a path toward licensing frameworks and responsible AI use that aligns with journalistic values, but as long as the impulse to shift blame persists (“not us scraping, but us firing”), the hypocrisy remains undeniable.

AI and the Future of Professional Writing: A Reframing

For centuries, every major technological shift has sparked fears about the death of the crafts it touches. The printing press didn’t eliminate scribes; it transformed them. The rise of the internet and word processors didn’t end journalism; they redefined its forms. Now, artificial intelligence prompts the same familiar conversation: is AI killing professional writing, or is it once again reshaping it?

As a business consultant, I’ve treated digital tools, from CRMs to calendars, word processors to spreadsheets, not as existential threats but as extensions of my capabilities. AI fits into that lineage. It doesn’t render me obsolete. It offers capacity: specifically, the capacity to offload mechanical work and reclaim time for strategic, empathic, and creative labor.

The data shows this isn’t just a sentimental interpretation. Multiple studies document significant declines in demand for freelance writing roles. A Harvard Business Review–cited study that tracked 1.4 million freelance job listings found that, post-ChatGPT, demand for “automation-prone” jobs fell by 21%, with writing roles specifically dropping 30%. Another analysis of Upwork listings revealed a 33% drop in writing postings between late 2022 and early 2024, while a separate study observed that, shortly after ChatGPT’s debut, freelance job hires declined by nearly 5% and monthly earnings by over 5% among writers. These numbers are real. The shift has been painful for many in the profession.

Yet the picture isn’t uniform. Other data suggests that while routine or templated writing roles are indeed shrinking, strategic and creatively nuanced writing remains vibrant. Upwork reports that roles demanding human nuance, such as copywriting, ghostwriting, and marketing content, actually surged by 19–24% in mid-2023. Similarly, experts note that although basic web copy and boilerplate content are susceptible to automation, high-empathy, voice-driven writing continues to thrive.

My daily experience aligns with that trend. I don’t surrender to AI. I integrate it. I rely on it to break the blank page, sketch a structure, suggest keywords, or clarify phrasing. Yet I still craft, steer, and embed meaning, because that human judgment, that voice, is irreplaceable.

Many professionals are responding similarly. A qualitative study exploring how writers engage with AI identified four adaptive strategies, ranging from resisting to embracing AI tools, each aimed at preserving human identity, enhancing workflow, or reaffirming credibility. A 2025 survey of 301 professional writers across more than 25 languages highlighted both ethical concerns and a nuanced realignment of expectations around AI adoption.

A similar pattern is emerging in academia: AI is already assisting with readability, grammar, and accessibility, especially for non-native authors, but not at the expense of critical thinking or academic integrity. In fact, when carefully integrated, AI shows promise as an aid, not a replacement.

In this light, AI should not be viewed as the death of professional writing, but as a test of its boundaries: Where does machine-assisted work end and human insight begin? The profession isn’t collapsing, it’s clarifying its value. The roles that survive will not be those that can be automated, but those that can’t.

In that regard, we as writers, consultants, and professionals must decide: will we retreat into obsolescence or evolve into roles centered on empathy, strategy, and authentic voice? I choose the latter, not because it’s easier, but because it’s more necessary.

Sources
• Analysis of 1.4 million freelance job listings showing a 30% decline in demand for writing positions post-ChatGPT release
• Upwork data indicating a 33% decrease in writing job postings from late 2022 to early 2024
• Study of 92,547 freelance writers revealing a 5.2% drop in earnings and reduced job flow following ChatGPT’s launch
• Report showing growth in high-nuance writing roles (copywriting, ghostwriting, content creation) in Q3 2023
• Analysis noting decreased demand (20–50%) for basic writing and translation, while creative and high-empathy roles remain resilient
• Qualitative research on writing professionals’ adaptive strategies around generative AI
• Survey of professional writers on AI usage, adoption challenges, and ethical considerations
• Academic studies indicating that AI tools can enhance writing mechanics and accessibility if integrated thoughtfully

Strategic Pricing Adjustment to Accelerate User Growth and Revenue

Dear OpenAI Leadership,

I am writing to propose a strategic adjustment to ChatGPT’s subscription pricing that could substantially increase both user adoption and revenue. While ChatGPT has achieved remarkable success, the current $20/month subscription fee may be a barrier for many potential users. In contrast, a $9.95/month pricing model aligns with industry standards and could unlock significant growth.

Current Landscape

As of mid-2025, ChatGPT boasts:

  • 800 million weekly active users, with projections aiming for 1 billion by year-end. (source)
  • 20 million paid subscribers, generating approximately $500 million in monthly revenue. (source)

Despite this success, the vast majority of users remain on the free tier, indicating a substantial untapped market.

The Case for $9.95/Month

A $9.95/month subscription fee is a proven price point for digital services, offering a balance between affordability and perceived value. Services like Spotify, Netflix, and OnlyFans have thrived with similar pricing, demonstrating that users are willing to pay for enhanced features and experiences at this price point.

Projected Impact

If ChatGPT were to lower its subscription fee to $9.95/month, the following scenarios illustrate potential outcomes:

  • Scenario 1: 50% Conversion Rate
    50% of current weekly active users convert to paid subscriptions.
    400 million paying users × $9.95/month = $3.98 billion/month.
    Annual revenue: $47.76 billion.
  • Scenario 2: 25% Conversion Rate
    A 25% conversion rate yields 200 million paying users.
    200 million × $9.95/month = $1.99 billion/month.
    Annual revenue: $23.88 billion.

Even at the more conservative conversion rate, annual revenue would far exceed the current run rate of roughly $6 billion ($500 million/month), highlighting the significant financial upside.
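The scenario arithmetic is easy to sanity-check in a few lines of Python (the `scenario` helper is mine, not an OpenAI figure):

```python
def scenario(weekly_users, conversion_rate, price):
    """Return paying users, monthly revenue, and annual revenue."""
    paying = weekly_users * conversion_rate
    monthly = paying * price
    return paying, monthly, monthly * 12

USERS = 800_000_000  # weekly active users cited above
for rate in (0.50, 0.25):
    paying, monthly, annual = scenario(USERS, rate, 9.95)
    print(f"{rate:.0%} conversion: {paying / 1e6:.0f}M payers, "
          f"${monthly / 1e9:.2f}B/month, ${annual / 1e9:.2f}B/year")
```

Of course, converting even a quarter of free weekly users to paid would be an extraordinary outcome by industry standards; the point of the exercise is the sensitivity of revenue to price and conversion, not a forecast.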

Strategic Considerations

  • Expand the user base: Attract a broader audience, including students, professionals, and casual users.
  • Enhance user engagement: Increased adoption could lead to higher usage rates and data insights, further improving the product.
  • Strengthen market position: A more accessible price point could solidify ChatGPT’s dominance in the AI chatbot market, currently holding an 80.92% share. (source)

Conclusion

Adopting a $9.95/month subscription fee could be a transformative move for ChatGPT, driving substantial revenue growth and reinforcing its position as a leader in the AI space. I urge you to consider this strategic adjustment to unlock ChatGPT’s full potential.

Sincerely,
The Rowanwood Chronicles

#ChatGPT #PricingStrategy #SubscriptionModel #AIAdoption #DigitalEconomy #OpenAI #TechGrowth

When 10 Meters Isn’t Enough: Understanding AlphaEarth’s Limits in Operational Contexts

In the operational world, data is only as valuable as the decisions it enables, and as timely as the missions it supports. I’ve worked with geospatial intelligence in contexts where every meter mattered and every day lost could change the outcome. AlphaEarth Foundations is not the sensor that will tell you which vehicle just pulled into a compound or how a flood has shifted in the last 48 hours, but it may be the tool that tells you exactly where to point the sensors that can. That distinction is everything in operational geomatics.

With the public release of AlphaEarth Foundations, Google DeepMind has placed a new analytical tool into the hands of the global geospatial community. It is a compelling mid-tier dataset – broad in coverage, high in thematic accuracy, and computationally efficient. But in operational contexts, where missions hinge on timelines, revisit rates, and detail down to the meter, knowing exactly where AlphaEarth fits, and where it does not, is essential.

Operationally, AlphaEarth is best understood as a strategic reconnaissance layer. Its 10 m spatial resolution makes it ideal for detecting patterns and changes at the meso‑scale: agricultural zones, industrial developments, forest stands, large infrastructure footprints, and broad hydrological changes. It can rapidly scan an area of operations for emerging anomalies and guide where scarce high‑resolution collection assets should be deployed. In intelligence terms, it functions like a wide-area search radar, identifying sectors of interest, but not resolving the individual objects within them.

The strengths are clear. In broad-area environmental monitoring, AlphaEarth can reveal where deforestation is expanding most rapidly or where wetlands are shrinking. In agricultural intelligence, it can detect shifts in cultivation boundaries, large-scale irrigation projects, or conversion of rangeland to cropland. In infrastructure analysis, it can track new highway corridors, airport expansions, or urban sprawl. Because it operates from annual composites, these changes can be measured consistently year-over-year, providing reliable trend data for long-term planning and resource allocation.

In the humanitarian and disaster-response arena, AlphaEarth offers a quick way to establish pre‑event baselines. When a cyclone strikes, analysts can compare the latest annual composite to prior years to understand how the landscape has evolved, information that can guide relief planning and longer‑term resilience efforts. In climate-change adaptation, it can help identify landscapes under stress, informing where to target mitigation measures.

But operational users quickly run into resolution‑driven limitations. At 10 m GSD, AlphaEarth cannot identify individual vehicles, small boats, rooftop solar installations, or artisanal mining pits. Narrow features – rural roads, irrigation ditches, hedgerows – disappear into the generalised pixel. In urban ISR (urban Intelligence, Surveillance, and Reconnaissance), this makes it impossible to monitor fine‑scale changes like new rooftop construction, encroachment on vacant lots, or the addition of temporary structures. For these tasks, commercial very high resolution (VHR) satellites, crewed aerial imagery, or drones are mandatory.

Another constraint is temporal granularity. The public AlphaEarth dataset is annual. This works well for detecting multi‑year shifts in land cover but is too coarse for short-lived events or rapidly evolving situations. A military deployment lasting two months, a flash‑flood event, or seasonal agricultural practices will not be visible. For operational missions requiring weekly or daily updates, sensors like PlanetScope’s daily 3–5 m imagery or commercial tasking from Maxar’s WorldView fleet are essential.

There is also the mixed‑pixel effect, particularly problematic in heterogeneous environments. Each embedding is a statistical blend of everything inside that 100 m² tile. In a peri‑urban setting, a pixel might include rooftops, vegetation, and bare soil. The dominant surface type will bias the model’s classification, potentially misrepresenting reality in high‑entropy zones. This limits AlphaEarth’s utility for precise land‑use delineation in complex landscapes.
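A toy calculation makes the mixed-pixel problem concrete. The two-band "signatures" and area shares below are invented for illustration, not real spectra: the pixel's area-weighted blend can even land nearest a class that covers only a minority of the tile.

```python
# One 10 m x 10 m pixel (100 m^2) blending three surface types.
signatures = {  # hypothetical mean reflectance per class, two bands
    "roof":       [0.30, 0.25],
    "vegetation": [0.05, 0.60],
    "soil":       [0.25, 0.35],
}
fractions = {"roof": 0.5, "vegetation": 0.3, "soil": 0.2}  # area shares

# The sensor records the area-weighted average of everything in the pixel.
mixed = [sum(fractions[c] * signatures[c][b] for c in signatures)
         for b in range(2)]

def classify(pixel):
    """Nearest-signature (minimum squared distance) classification."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda c: d2(signatures[c], pixel))

print(mixed)            # blended signature resembles none of the inputs
print(classify(mixed))  # "soil", even though roofs cover half the pixel
```

Here the blend is classified as soil despite roofs dominating the tile, which is exactly the kind of misrepresentation high-entropy peri-urban zones produce.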

In operational geospatial workflows, AlphaEarth is therefore most effective as a triage tool. Analysts can ingest AlphaEarth embeddings into their GIS or mission‑planning system to highlight AOIs where significant year‑on‑year change is likely. These areas can then be queued for tasking with higher‑resolution, higher‑frequency assets. In resource-constrained environments, this can dramatically reduce unnecessary collection, storage, and analysis – focusing effort where it matters most.
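A minimal sketch of that triage step, assuming per-tile embeddings have already been exported for two annual composites (the tile IDs, toy 3-D vectors, and the 0.1 distance threshold are all hypothetical; real AlphaEarth embeddings are higher-dimensional):

```python
import math

def cosine_distance(a, b):
    """1 minus cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

# Hypothetical per-tile embeddings for two annual composites.
tiles_2023 = {"A1": [0.9, 0.1, 0.2], "A2": [0.2, 0.8, 0.3]}
tiles_2024 = {"A1": [0.9, 0.1, 0.2], "A2": [0.7, 0.2, 0.4]}

def flag_change(prev, curr, threshold=0.1):
    """Tile IDs whose year-over-year embedding shift exceeds threshold."""
    return [t for t in prev
            if t in curr and cosine_distance(prev[t], curr[t]) > threshold]

print(flag_change(tiles_2023, tiles_2024))  # ['A2'] gets queued for tasking
```

Only the flagged tiles are passed downstream for VHR or drone collection, which is the whole value proposition: cheap wide-area screening deciding where expensive assets go.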

A second valuable operational role is in baseline mapping. AlphaEarth can provide the reference layer against which other sources are compared. For instance, a national agriculture ministry might use AlphaEarth to maintain a rolling national crop‑type map, then overlay drone or VHR imagery for detailed inspections in priority regions. Intelligence analysts might use it to maintain a macro‑level picture of land‑cover change across an entire theatre, ensuring no sector is overlooked.

It’s important to stress that AlphaEarth is not a targeting tool in the military sense. It does not replace synthetic aperture radar for all-weather monitoring, nor does it substitute for daily revisit constellations in time-sensitive missions. It cannot replace the interpretive clarity of high‑resolution optical imagery for damage assessment, facility monitoring, or urban mapping. Its strength lies in scope, consistency, and analytical efficiency – not in tactical precision.

The most successful operational use cases will integrate AlphaEarth into a tiered collection strategy. At the top tier, high‑resolution sensors deliver tactical detail. At the mid‑tier, AlphaEarth covers the wide‑area search and pattern detection mission. At the base, raw satellite archives remain available for custom analyses when needed. This layered approach ensures that each sensor type is used where it is strongest, and AlphaEarth becomes the connective tissue between broad‑area awareness and fine‑scale intelligence.

Ultimately, AlphaEarth’s operational value comes down to how it’s positioned in the workflow. Used to guide, prioritize, and contextualize other intelligence sources, it can save time, reduce costs, and expand analytical reach. Used as a standalone decision tool in missions that demand high spatial or temporal resolution, it will disappoint. But as a mid‑tier, strategic reconnaissance layer, it offers an elegant solution to a long-standing operational challenge: how to maintain global awareness without drowning in raw data.

For geomatics professionals, especially those in the intelligence and commercial mapping sectors, AlphaEarth is less a silver bullet than a force multiplier. It can’t tell you everything, but it can tell you where to look, and in operational contexts, knowing where to look is often the difference between success and failure.