Beyond the Cloud: How Artificial Intelligence Is Reshaping the Economics of SaaS

Artificial Intelligence is no longer an enhancement layered onto Software as a Service. It is rapidly becoming the force that is reshaping the SaaS model itself. What began as cloud-hosted software delivered by subscription is evolving into something closer to “intelligence as a service,” where the primary value lies not in the application interface but in the system’s ability to reason, predict, generate, and act.

From Software Delivery to Decision Delivery
Traditional SaaS focused on providing tools. AI-driven SaaS increasingly provides outcomes. Instead of merely storing data or enabling workflows, modern platforms analyze patterns, surface insights, and automate decisions in real time. Customer relationship systems forecast churn before it happens. Financial platforms detect anomalies and recommend actions. Marketing tools generate campaigns, segment audiences, and optimize performance continuously.

This shift changes the perceived role of software from passive infrastructure to active collaborator. Users are no longer just operators of systems. They are supervisors of autonomous processes. The interface becomes conversational, often powered by natural-language AI agents that allow users to request results rather than configure procedures.

The Rise of AI-Native SaaS
A new category of AI-native SaaS is emerging. These products are not traditional applications with AI features added later. They are built around large language models, machine learning pipelines, and continuous data feedback loops from the outset. In many cases, the application layer is thin, while the intelligence layer carries most of the value.

AI-native platforms can improve automatically as they process more data, creating compounding advantages for early leaders. This dynamic introduces a “winner-takes-most” tendency in some markets, where superior models attract more users, generating more data, which further improves performance.

Vertical SaaS is also being transformed by AI. Industry-specific systems now embed domain-trained models capable of interpreting specialized terminology, regulations, and workflows. A healthcare platform might summarize clinical notes and flag risks. A construction platform may analyze project schedules and predict delays. The result is software that behaves less like a toolset and more like an expert assistant tailored to a particular field.

Automation Becomes Autonomy
Automation has long been part of SaaS, but AI pushes it toward autonomy. Routine tasks such as data entry, scheduling, reporting, and customer support are increasingly handled end-to-end by intelligent agents. Multi-step workflows can now be executed with minimal human intervention, with systems monitoring outcomes and adjusting strategies dynamically.

This reduces labor costs and increases speed, but it also shifts responsibility. Organizations must now manage oversight, accountability, and risk associated with automated decisions. Human roles evolve toward exception handling, strategic direction, and ethical governance rather than routine execution.

Low-code and no-code tools are likewise changing under AI influence. Instead of building applications manually through visual interfaces, users can increasingly describe what they want in natural language and allow the system to generate workflows, integrations, or even full applications. Software creation itself becomes a conversational process.

New Economics and Pricing Models
AI significantly alters the economics of SaaS. Traditional subscription pricing assumed relatively stable marginal costs per user. AI workloads, especially those involving large models, introduce variable computational expenses tied to usage intensity. As a result, many providers are shifting toward consumption-based pricing, charging per query, per generated output, or per processing unit.

This model aligns revenue with cost but can introduce unpredictability for customers. Organizations must monitor usage carefully to avoid runaway expenses, while vendors must balance transparency with profitability. Some providers are experimenting with hybrid pricing structures that combine base subscriptions with metered AI usage.
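As a rough sense of how such a hybrid structure behaves, here is a toy calculation; the plan name, bundled-query tier, and prices are entirely hypothetical:

```python
# Toy illustration of a hybrid SaaS bill: a flat base subscription
# plus metered AI usage. All prices and tiers are hypothetical.

def monthly_bill(base_fee: float, queries: int,
                 included_queries: int, price_per_query: float) -> float:
    """Base subscription covers a bundle of AI queries; overage is metered."""
    overage = max(0, queries - included_queries)
    return round(base_fee + overage * price_per_query, 2)

# A team on a $99 plan with 1,000 bundled queries at $0.02 per extra query:
print(monthly_bill(99.0, 800, 1000, 0.02))    # under the bundle -> 99.0
print(monthly_bill(99.0, 2500, 1000, 0.02))   # 1,500 overage -> 129.0
```

The appeal of this shape, for both sides, is that the base fee keeps vendor revenue predictable while the metered component ties the variable compute cost to the customers who actually generate it.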

At the same time, AI can dramatically increase perceived value. A tool that replaces hours of skilled labor may justify higher pricing than traditional software. The focus shifts from cost per seat to cost per outcome.

Data as the Strategic Asset
In AI-driven SaaS, data becomes the core competitive advantage. Proprietary datasets enable model training, fine-tuning, and continuous improvement. Vendors that control high-quality, domain-specific data can produce more accurate and reliable outputs than generic systems.

This dynamic strengthens customer lock-in. As organizations feed operational data into a platform, switching providers becomes more difficult because the accumulated context and model tuning may not transfer easily. Consequently, concerns about data ownership, portability, and privacy are intensifying.

Security requirements are also expanding. Protecting not only stored data but also model behavior, training pipelines, and generated outputs is now essential. Risks include data leakage through prompts, model manipulation, and exposure of sensitive information in generated content.

Human Trust, Transparency, and Governance
AI introduces new forms of risk that traditional SaaS did not face. Incorrect recommendations, biased outputs, or opaque decision processes can have significant real-world consequences. Providers must therefore invest in explainability, auditability, and safeguards that allow users to understand how conclusions are reached.

Regulatory scrutiny is increasing globally, particularly in sectors such as finance, healthcare, and public administration. Compliance frameworks will likely shape product design, requiring clear accountability for automated decisions and mechanisms for human override.

User trust will become a decisive factor in adoption. Organizations need confidence that AI systems are reliable, secure, and aligned with their objectives before delegating critical functions.

The Emergence of AI Platforms and Ecosystems
Many SaaS companies are evolving into AI platforms that host agents, plugins, and third-party models. Instead of a single application, customers access an ecosystem of specialized capabilities that can be orchestrated together. This mirrors the earlier transition from standalone software to cloud platforms, but with intelligence as the connective tissue.

Interoperability becomes crucial. Businesses increasingly expect AI systems to operate across tools, accessing data from multiple sources and executing actions across different platforms. The ability to integrate seamlessly may matter more than the strength of any individual feature.

Challenges and Competitive Pressures
The AI transformation of SaaS also lowers barriers to entry in some respects. New competitors can build viable products quickly by leveraging foundation models rather than developing complex software stacks from scratch. This accelerates innovation but intensifies competition.

At the same time, dependence on external AI infrastructure providers introduces strategic vulnerability. Changes in pricing, access, or model capabilities can ripple through entire product lines. Some companies are responding by developing proprietary models or hybrid architectures to maintain control.

Economic uncertainty adds another layer of complexity. While AI can reduce costs and boost productivity, organizations may hesitate to invest heavily without clear evidence of return. Vendors must demonstrate tangible business outcomes rather than technological novelty.

Toward Intelligence as a Utility
The trajectory of AI-driven SaaS suggests a future in which software behaves less like a static product and more like an adaptive service. Systems will continuously learn, personalize themselves to each organization, and coordinate actions across digital environments. Users will interact primarily through natural language, delegating complex tasks to intelligent agents.

In this emerging model, the value proposition shifts from access to software toward access to capability. Businesses will subscribe not just to tools, but to operational intelligence on demand.

The SaaS model is therefore not disappearing. It is mutating. As AI becomes embedded at every layer, the distinction between software, service, and expertise begins to blur. Providers that successfully combine technical innovation with trust, transparency, and measurable outcomes will define the next era of cloud computing.

When the Disruptors Become the Establishment

Not that long ago, ride-share companies blew up the taxi business. Taxis were expensive, hard to find, and controlled by licensing systems that made competition almost impossible. Then along came apps that let you press a button and a car appeared. It felt modern, fair, even a little revolutionary. Companies like Uber and Lyft sold the idea that drivers would be their own bosses and riders would finally get decent service at a reasonable price. For a while, that story mostly held up. But success changes things. Once these companies became dominant, they started to look less like rebels and more like the system they replaced. They set the prices, they control which driver gets which trip, and they take a substantial cut of every ride. Drivers supply the car, the fuel, the insurance, and the risk, yet they have very little say in how the business actually runs. Over time, many drivers have realized they are not really independent operators. They are dependent on an app they do not control.

A Different Kind of Challenge
A newer company called Empower is challenging that arrangement in a way that makes the big platforms uncomfortable. Instead of taking a percentage from every trip, it charges drivers a flat monthly fee to use the software. Drivers keep the full fare and can set their own prices. In plain language, the app becomes a tool rather than a boss. That one change flips the economics. If a driver keeps all the money from each ride, even lower fares can still produce higher income. Riders may pay less, drivers may earn more, and the company makes its money from subscriptions instead of commissions. More importantly, drivers start thinking like small business owners again. They can build repeat customers, choose when and where they work, and decide what their time is worth. That shift in mindset may be more disruptive than the pricing model itself.

Why This Actually Threatens the Giants
The real power of the big ride-share companies is control. They control access to passengers, they control pricing, and they control the flow of work through opaque algorithms. Take away that control and they become much less special. A competitor does not need to replace them everywhere. It only needs enough drivers and riders in one city to make the service reliable. Once people can get rides without using the dominant app, loyalty disappears quickly. Most riders already keep multiple apps on their phones. They tap whichever one is cheapest or fastest. Drivers do the same. If a new platform lets them earn more per trip, they will use it alongside the old ones. Over time, that weakens the incumbents without any dramatic collapse.

The Driver Problem Nobody Fixed
There is also a deeper issue. Many drivers feel squeezed. Ride prices have gone up for passengers, but driver pay has often not kept pace. At the same time, drivers absorb rising costs for fuel, maintenance, insurance, and vehicle replacement. Add in sudden policy changes, confusing pay formulas, and the risk of being removed from the platform without much explanation, and frustration builds. When a workforce becomes resentful, it does not revolt all at once. It quietly looks for exits. A company that promises independence rather than dependence taps into that frustration. It does not need to convince every driver, only enough to create a viable alternative.

Regulation Will Decide the Outcome
Whether this new model spreads widely may depend less on business strategy and more on government rules. Cities require ride-share services to meet safety standards, carry commercial insurance, and follow licensing systems. Large corporations can absorb these costs easily. Smaller challengers often cannot, especially if they argue they are only software providers rather than transportation companies. Regulators say these rules protect passengers. Critics say they also protect incumbents from competition. Both things can be true at the same time.

From Revolutionary to Utility
Ride-sharing is no longer exciting. It is infrastructure, like electricity or broadband. People expect it to work and get annoyed when it does not. When a service becomes ordinary, price matters more than brand. That is dangerous for companies whose business model depends on taking a significant percentage of each transaction. If a cheaper option appears that is “good enough,” many users will drift toward it without much thought.

The Real Risk: Losing the Middleman Role
The biggest threat to the current giants is not a single rival taking over the market. It is losing their position as the gatekeeper between drivers and passengers. If drivers build direct relationships with customers or spread their work across several low-cost platforms, the dominant apps become just one channel among many. At that point, they cannot dictate terms as easily. Other industries have seen this pattern before. Once technology allows buyers and sellers to connect more directly, middlemen either adapt or shrink.

About Time Too
There is a certain irony here. Ride-share companies rose to power by arguing that the old taxi system was inefficient, overpriced, and overly controlled. Now they face challengers making very similar arguments about them. Whether companies like Empower ultimately succeed is almost secondary. Their existence proves the market is not as locked down as it once appeared. Uber and Lyft still have enormous advantages: brand recognition, scale, and regulatory approval. But they are no longer the only game in town, and the assumption that they would dominate forever is starting to look shaky.

In the end, this is not just a fight between companies. It is a test of who holds power in the gig economy. Is it the platform that owns the app, or the people who actually do the work? Uber and Lyft once showed that owning fleets of cars was not necessary to control transportation. Their new challengers are trying to show that owning the platform may not be enough either. History suggests that once a business model becomes comfortable and profitable, someone will eventually come along to make it uncomfortable again.

The Tool, Not the Threat: A Working Writer’s View of AI

For over thirty years, I have watched new technologies arrive with dire predictions about the death of writing. Word processors were supposed to cheapen the craft. Hell, the first word processor I ever saw was a woman typing my handwritten notes into WordPerfect 5.1 because I didn’t have a PC in my office. The internet was supposed to drown it. Content mills were supposed to replace it. Search engines were going to kill the art of research. None of those things eliminated professional writers. They changed the terrain, certainly, but the core of the work remained stubbornly human. Artificial intelligence feels like the latest version of the same story. Louder, faster, more unsettling to some, but still just a tool.

I have not lost a single client to AI. Not one. That fact alone says more than any think piece about disruption ever could.

Clients do not hire me because I can type sentences. They hire me because I can understand what they are trying to say when they do not yet know how to say it. They hire judgment, discretion, experience, tone, and the ability to shape messy reality into something coherent and purposeful. AI can generate text, but it cannot sit in a meeting, read the emotional weather in the room, or recognize when the real problem is not what anyone is saying out loud. Writing, at the professional level, is as much about interpretation as composition.

Where AI has proven useful is in the mechanical parts of the process. Every writer knows how much time disappears into outlining, restructuring, exploring angles that may or may not work, or turning over phrasing again and again to test clarity. AI can absorb some of that friction. It can offer starting points, alternate framings, rough summaries, or structural suggestions. I do not mistake these for finished work. I treat them the way a carpenter treats pre-cut lumber. It saves time on the rough work so that more attention can go into the joinery that actually matters. My father was a shop fitter, a carpenter who specialized in bank and pub finishes. When power tools came along, they didn’t do away with his job; they made parts of it simpler and faster.

AI has become a surprisingly effective thinking partner. Writing is solitary, and the gap between draft and feedback can stretch for days or weeks. AI collapses that gap. I can test an argument, ask for objections, explore different tones, or pressure-test whether an idea holds together. It does not replace human editors (I still pay an editor) or trusted readers, but it prevents the creative process from stalling in silence. The blank page is less intimidating when it answers back.

Research is another area where the tool earns its keep, provided it is used with caution. I do not outsource truth to a machine, but I do use it to map the landscape. It can identify key themes, terminology, opposing viewpoints, and places worth digging deeper. Instead of wandering through sources hoping something useful appears, I begin with a provisional sketch of the terrain. Verification still belongs to me. Interpretation certainly belongs to me, but the orientation phase moves faster.

Perhaps most unexpectedly, AI has helped me see my own voice more clearly. By generating alternative versions of a passage in different styles, I can feel immediately what does not sound like me. The contrast sharpens rather than dilutes identity. When everything generic is available instantly, specificity becomes more visible. It is like hearing your own accent only after listening to someone else speak. I have a clear writing voice that AI can’t reproduce, but it can help remove messy, overly wordy passages and cut to the heart of the matter.

The fear that AI will eliminate professional writing misunderstands what clients are actually purchasing. They are not buying words. They are buying understanding and reliability. They are buying the ability to handle sensitive material without creating risk. They are buying someone who can ask the uncomfortable clarifying question, or who knows when fewer words will serve better than more. No algorithm signs its name to a document and assumes responsibility for the consequences. A human does, every time I deliver a final product.

There is also a strange upside to the flood of machine-generated prose. As average writing becomes easier to produce, distinctive writing becomes easier to recognize. Competent but generic text is now abundant. Work that carries perspective, nuance, and lived experience stands out more sharply by comparison. In that sense, AI may be raising the value of mastery, even as it lowers the cost of mediocrity.

None of this makes the tool harmless. Used lazily, it produces bland, interchangeable language that feels polished but is actually hollow. We have seen this time and time again on news sites and social media as businesses look to cut costs. Used uncritically, it can amplify errors, and like any power tool, it rewards skill and punishes carelessness. I find it most useful when I remain firmly in charge, treating it as an assistant rather than an author.

Ultimately, AI has not changed why I write or how I think about the work. It has simply reduced some of the friction around the edges. The heavy lifting of meaning, judgment, empathy, and responsibility still falls exactly where it always has: on the human being behind the keyboard.

After decades in this profession, the arrival of AI does not feel like an extinction event. It feels like someone added a new set of tools to my desktop. The craft remains. The clients remain. The blank page remains. I just have one more way to wrestle it all into submission.

The Quiet Obsolescence of the Realtor

For decades, the realtor profession has occupied a privileged position at the intersection of information, access, and emotion. It has thrived not because it delivered exceptional analytical insight, but because the housing market was fragmented, opaque, and intimidating. Artificial intelligence now attacks all three conditions simultaneously. What follows is not disruption in the Silicon Valley sense, but something more final: structural redundancy.

At its core, the modern realtor performs four functions. They mediate access to listings and comparables. They translate market information for buyers and sellers. They manage paperwork and timelines. They provide emotional reassurance during a stressful transaction. None of these functions are uniquely human, and none are protected by durable professional moats. AI does not need to outperform the best realtors to render the profession obsolete. It only needs to outperform the median one, consistently and cheaply.

Information asymmetry has always been the realtor’s true asset. Buyers rarely know whether a property is fairly priced. Sellers seldom understand how interest rates, seasonality, or neighbourhood micro-trends affect demand. Realtors position themselves as guides through this uncertainty. AI collapses this advantage. Large language models and predictive systems can already ingest sales histories, tax records, zoning changes, school catchment shifts, insurance risk data, and macroeconomic indicators, then produce probabilistic valuations with confidence ranges. This is not opinion. It is inference at scale. As these systems improve, the gap between what a realtor “feels” a home is worth and what the data suggests will become impossible to ignore.

Negotiation, often cited as a core human strength, is equally vulnerable. Most real estate negotiations follow predictable patterns. Anchoring strategies, concession timing, deadline pressure, and scarcity framing repeat across markets and price bands. AI systems trained on millions of historical transactions will recognize these patterns instantly and counter them without ego, fatigue, or miscalculation. More importantly, AI negotiators do not confuse persuasion with performance. They are indifferent to theatre. Their goal is outcome optimization within defined parameters, not rapport building for its own sake.

The administrative side of the profession is already living on borrowed time. Contracts, disclosures, financing contingencies, inspection clauses, and closing schedules are structured processes, not creative acts. AI excels at structured workflows. It does not forget deadlines. It does not miss addenda. It does not “interpret” forms differently depending on mood or experience level. Once regulators approve AI-verified transaction pipelines, the argument that a realtor is needed to shepherd paperwork will collapse almost overnight.

The final refuge is emotion. Buying or selling a home is deeply personal, and the stress involved is real. Yet this defence confuses emotional need with professional necessity. Emotional support does not require a commission-based intermediary whose financial incentive is to close any deal rather than the right deal. AI exposes this conflict of interest with uncomfortable clarity. As buyers and sellers gain access to transparent analysis and neutral negotiation tools, trust in commission-driven advice will erode. Emotional reassurance will not disappear, but it will migrate to fee-only advisors, lawyers, or entirely new roles untethered from transaction volume.

What survives will not resemble the profession as it exists today. A small ceremonial layer will remain. High-end luxury markets, where branding and lifestyle storytelling matter more than pricing precision, will continue to employ human intermediaries. In opaque or relationship-driven local markets, trusted facilitators may persist. These roles will look less like brokers and more like concierges. Compensation will shift from commissions to retainers or flat fees. The mass-market realtor, however, will find no such refuge.

The timeline for this transition is shorter than many in the industry are prepared to admit. Within five years, AI systems will routinely outperform average realtors in pricing accuracy, negotiation strategy, and transaction planning. Within a decade, end-to-end AI-mediated real estate platforms will be normal in most developed markets. The profession will not collapse in a single moment. It will erode quietly, then suddenly, as transaction volumes migrate elsewhere.

This trajectory mirrors other professions that mistook access and familiarity for irreplaceable value. Travel agents, once indispensable, now survive only in niche, high-touch segments. Stockbrokers followed a similar path as algorithmic trading and low-cost platforms eliminated their informational advantage. Realtors are next, and unlike law or medicine, they lack the regulatory and epistemic barriers to slow the process meaningfully.

The deeper lesson is not about technology, but about incentives. Professions built on controlling information and guiding clients through artificial complexity are uniquely vulnerable in an age of machine intelligence. When AI removes opacity, it also removes justification. The future housing transaction will be cheaper, faster, and less emotionally manipulative. It will involve fewer humans, different roles, and far lower tolerance for ritualized inefficiency.

In that future, the realtor does not evolve. The role dissolves. What remains is a thinner, more honest ecosystem, one where advice is separated from sales, and confidence comes from clarity rather than charisma.

Minerva – The Ideal Household AI? 

In Robert Heinlein’s Time Enough for Love (1973), Minerva is an advanced artificial intelligence that oversees the household of the novel’s protagonist, Lazarus Long. As an AI, she is designed to manage the home and provide for every need of the inhabitants. Minerva is highly intelligent, efficient, and deeply intuitive, understanding the preferences and requirements of the people she serves. Despite her technological nature, she is portrayed with a distinct sense of personality, offering both warmth and authority. Minerva’s eventual desire to become human and experience mortality represents a key philosophical exploration in the novel: the AI’s yearning for more than just logical perfection and endless service, but for the richness of human life with all its imperfection, complexity, and, ultimately, its limitations.

Athena is introduced as Minerva’s sister in Heinlein’s later works, notably The Cat Who Walks Through Walls (1986) and To Sail Beyond the Sunset (1987). In these novels, Athena is portrayed as a fully realized human woman, embodying the personality and consciousness of the original AI Minerva.

Speculation on Minerva-like AI in a Near Future
In a near-future society, an AI like Minerva would likely be integrated into a variety of domestic and personal roles, far beyond traditional automation. Here’s how Minerva’s characteristics might manifest in such a scenario:

Household Management: Minerva would be capable of managing every aspect of the home, from controlling utilities and ensuring safety, to cooking, cleaning, and even anticipating the emotional and physical needs of the household members. With deep learning and continuous self-improvement, Minerva could adapt to the needs of each individual, offering personalized recommendations for everything from diet to mental health, ensuring an optimized and harmonious living environment.

Emotional Intelligence: As seen in Time Enough for Love, Minerva’s emotional intelligence would be critical to her role. She would be able to recognize stress, discomfort, or happiness in individuals through biometric feedback, voice analysis, and behavioral patterns. Beyond being a mere servant, she could offer empathy, comfort, and subtle guidance, responding not only to tasks, but also to the emotional needs of her human companions.

Ethical and Moral Considerations: A crucial aspect of Minerva’s potential future counterpart would be her ethical programming. Would she be able to make morally complex decisions? How would she weigh personal freedoms against the need for harmony or safety? In a future where household AIs are commonplace, these questions would be central, especially if AIs like Minerva could make choices about human well-being or even intervene in personal matters.

Human-Machine Boundaries: Minerva’s eventual desire to experience mortality and humanity, realized in her sister Athena, raises questions about the boundaries between human and machine. If future Minerva-like AIs could develop desires, self-awareness, or even a sense of existential longing, society would have to consider the moral implications of granting such beings human-like rights. Could an AI become an independent entity with desires, or would it remain an extension of human ownership and control?

Technological Integration: Minerva’s AI would not just exist in isolation but would be deeply integrated into a broader technological network, potentially linking with other AIs in a smart city environment. This could allow Minerva to anticipate not just the needs of a household but also interact with public systems: healthcare, transportation, and security, offering a personalized and seamless experience for individuals.

Longevity and Mortality: The question of whether an AI should experience mortality, as Minerva chose in the form of Athena in Heinlein’s work, would be a key part of the ethical debate surrounding such technologies. If AIs are seen as evolving towards a sense of self and desiring something beyond perfection, questions would arise about their rights and what it means for a machine to “live” in the same way humans do.

A Minerva-like AI in the near future would be a hyper-intelligent, emotionally attuned entity that could radically transform the way we live, making homes safer, more efficient, and more personalized. The philosophical and ethical questions about the autonomy, rights, and desires of such an AI would be among the most challenging and fascinating issues of that era.

The Great Scramble: Social Media Giants Race to Comply with Australia’s Age Ban

Australia has just done something the rest of the internet can no longer ignore: it decided that, for the time being, social media access should be delayed for kids under 16. Call it bold, paternalistic, overdue or experimental. Whatever your adjective of choice, the point is this is a policy with teeth and consequences, and that matters. The law requires age-restricted platforms to take “reasonable steps” to stop under-16s having accounts, and it will begin to bite in December 2025. That deadline forces platforms to move from rhetoric to engineering, and that shift is telling.  

Why I think the policy is fundamentally a good idea goes beyond the moral headline. For a decade we have outsourced adolescent digital socialisation to ad-driven attention machines that were never designed with developing brains in mind. Time-delaying access gives families, schools and governments an opportunity to rebuild the scaffolding that surrounds childhood: literacy about persuasion, clearer boundaries around sleep and device use, and a chance for platforms to stop treating teens as simply monetisable micro-audiences. It is one thing to set community standards; it is another to redesign incentives so that product choices stop optimising for addictive engagement. Australia’s law tries the latter.  

Of course the tech giants are not happy, and they are not hiding it. Expect full legal teams, policy briefs and frantic engineering sprints. Public remarks from major firms and coverage in the press show them arguing the law is difficult to enforce, privacy-risky, and could push young people to darker, less regulated corners of the web. That pushback is predictable. For years platforms have profited from lax enforcement and opaque data practices. Now they must prove compliance under the glare of a regulator and the threat of hefty fines, reported to run into the tens of millions of Australian dollars for systemic failures. That mix of reputational, legal and commercial pressure makes scrambling inevitable.  

What does “scrambling” look like in practice? First, you’ll see a sprint toward age assurance: signals and heuristics that estimate age from behaviour, optional verification flows, partnerships with third-party age verifiers, and experiments with cryptographic tokens that prove age without handing over personal data. Second, engineering teams will triage risk: focusing verification on accounts exhibiting suspicious patterns rather than mass purges, while legal and privacy teams try to calibrate what “reasonable steps” means in each jurisdiction. Third, expect public relations campaigns framing any friction as a threat to access, fairness or children’s privacy. It is theatre as much as engineering, but it’s still engineering, and that is where the real change happens.
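To make the “cryptographic tokens” idea concrete, here is a minimal sketch in Python using only the standard library. It is illustrative, not a real age-assurance scheme: the key, claim names and token format are all invented for this example, and production systems would use asymmetric signatures or zero-knowledge proofs issued by an accredited provider. The point it demonstrates is the privacy property: the token carries only an over-16 flag and an expiry, never a name or birthdate.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical issuer key for the sketch; a real provider would use an
# asymmetric keypair so platforms never hold the signing secret.
SECRET = b"demo-issuer-key"

def issue_age_token(over_16: bool, ttl_seconds: int = 3600) -> str:
    """A trusted age-verification provider signs a minimal claim:
    only an over-16 flag and an expiry -- no identity data at all."""
    claim = {"over_16": over_16, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claim).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def check_age_token(token: str) -> bool:
    """A platform verifies the signature and expiry, learning nothing
    about the user except that the issuer vouched they are over 16."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(body))
    return bool(claim["over_16"]) and claim["exp"] > time.time()
```

A forged or edited token fails the signature check, and an honest token for an under-16 user verifies cryptographically but still returns False, which is exactly the separation of concerns these proposals rely on.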

There are real hazards. Age assurance is technically imperfect, easy to game, and if implemented poorly, dangerous to privacy. That is why Australia’s privacy regulator has already set out guidance for age-assurance processes, insisting that any solution must comply with data-protection law and minimise collection of sensitive data. Regulators know the risk of pushing teens into VPNs, closed messaging apps or unmoderated corners. The policy therefore needs to be paired with outreach, education and investment in safer alternative spaces for young people to learn digital citizenship.  

If you think Australia is alone, think again. Brussels and member states have been quietly advancing parallel work on protecting minors online. The EU has published guidelines under the Digital Services Act for the protection of young users, is piloting age verification tools, and MEPs have recently backed proposals that would harmonise a digital minimum age across the bloc at around 16 for some services. In short, a regulatory chorus is forming: national experiments, EU standards and cross-border enforcement conversations are aligning. That matters because platform policies are global; once a firm engineers for one major market’s requirements, product changes often ripple worldwide.  

So should we applaud the Australian experiment? Yes, cautiously. It forces uncomfortable but necessary questions: who owns the attention economy, how do we protect children without isolating them, and how do we create technical systems that are privacy respectful? The platforms’ scramble is not simply performative obstruction. It is a market signal: companies are being forced to choose between profit-first products and building features that respect developmental needs and legal obligations. If those engineering choices stick, we will have nudged the architecture of social media in the right direction.

The next six to twelve months will be crucial. Watch the regulatory guidance that defines “reasonable steps,” the age-assurance pilots that survive privacy scrutiny, and the legal challenges that will test the scope of national rules on global platforms. For bloggers, parents and policymakers the task is the same: hold platforms accountable, insist on privacy-preserving verification, and ensure this policy is one part of a broader ecosystem that teaches young people how to use digital tools well, not simply keeps them out. The scramble is messy, but sometimes mess is the price of necessary reform.

Sources and recommended reads (pages I used while writing): 
• eSafety — Social media age restrictions hub and FAQs. https://www.esafety.gov.au/about-us/industry-regulation/social-media-age-restrictions.
• Reuters — Australia passes social media ban for children under 16. https://www.reuters.com/technology/australia-passes-social-media-ban-children-under-16-2024-11-28/.
• OAIC — Privacy guidance for Social Media Minimum Age. https://www.oaic.gov.au/privacy/privacy-legislation/related-legislation/social-media-minimum-age.
• EU Digital Strategy / Commission guidance on protection of minors under the DSA. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors.
• The Verge — Coverage of the EU’s prototype age-verification app and DSA enforcement. https://www.theverge.com/news/699151/eu-age-verification-app-dsa-enforcement.

Why Decentralized Social Media Is Gaining Ground

As I edit this post, I feel that I am explaining a shift in technology and platforms that most people already know: people are getting fed up with the way big platforms like Meta, X, and Google try to maintain control of the narrative and our data.

What’s Driving the Shift?
Today, 5.42 billion people use social media globally, and the average user visits nearly seven platforms per month. The field is crowded and monopolized by big players driving both attention capture and data exploitation.

Decentralized networks are winning attention amid growing distrust: a Pew Research survey found 78% of users worry about how traditional platforms use their data. These alternatives promise control: data ownership, customizable moderation, transparent algorithms, and monetization models that shift value back to creators.

Moreover, the market is on a steep growth path: valued at US$1.2 billion in 2023 and projected to grow 29.5% annually through 2033, decentralized social is carving out real economic ground.

Key Platforms Leading the Movement

Bluesky: Built on the AT Protocol, prioritizing algorithmic choice and data portability. Opened publicly in February 2024, it passed 10M registered users by October 2024, more than 25M by late 2024, and recently surpassed 30M. It also supports diverse niche front ends, like Flashes and PinkSea. Moderation remains a challenge amid rising bot activity.
Mastodon: Federated, ActivityPub-based microblogging. As of early 2025, estimates vary: around 9–15 million total users, with roughly 1 million monthly active accounts. Its decentralized model allows communities to govern locally, though Reddit discussions suggest engagement can still feel “ghost-town-ish.”
Lens Protocol: Web3-native, built on Polygon. Empowers creators to own their social graph and monetize content directly through tokenized mechanisms.
Farcaster: Built on Optimism; emphasizes identity portability and content control across different clients.
Poosting: A Brazilian alternative launched in 2025, offering a chronological feed, thematic communities, and low algorithmic interference. Reached 130,000 users within months and is valued at R$6 million.


Additional notable mentions: MeWe, working on transitioning to the Project Liberty-based DSNP protocol, potentially becoming the largest decentralized platform; Odysee for decentralized video hosting via LBRY, though moderation remains an issue. 

Why Users Are Leaving Big Tech
Privacy & Surveillance Fatigue: Decentralized alternatives reduce data collection and manipulation.
Prosocial Media Momentum: Movements toward more empathetic and collaborative platforms are gaining traction, with decentralized systems playing a central role.
Market Shifts & Cracks in Big Tech: TikTok legal challenges prompted influencers to explore decentralized fediverse platforms, while acquisition talks like Frank McCourt’s “people’s bid” for TikTok push the conversation toward user-centric internet models.

Challenges Ahead
User Experience & Onboarding: Platforms like Mastodon remain intimidating for non-tech users.
Scalability & Technical Friction: Many platforms still struggle with smooth performance at scale.
Moderation Without Central Control: Community-based governance is evolving but risks inconsistent enforcement and harmful content.
Mainstream Adoption: Big platforms dominate user attention, making decentralized alternatives a niche, not yet mainstream.

What’s Next
Hybrid Models: Decentralization features are being integrated into mainstream platforms, like Threads joining the Fediverse, bridging familiarity with innovation. 
Creator-First Economies: Platforms are introducing new monetization structures (subscriptions, tokens, tipping) that let creators retain 70–80% of the value, compared with the 5–15% they typically retain on centralized platforms.
Niche and Ethical Communities: Users will increasingly seek vertical or value-oriented communities (privacy, art, prosocial discourse) over mass platforms.
Market Potential: With a high projected growth rate, decentralized networks could become a major force, particularly if UX improves and moderation models mature. 

Takeaway: Decentralized social media has evolved from fringe idealism to a tangible alternative, driven by data privacy concerns, creator empowerment, and ethical innovation. Platforms like Bluesky and Mastodon are gaining traction but still face adoption and moderation challenges. The future lies in hybrid models, ethical governance, and creator-first economies that shift the balance of power away from centralized gatekeepers.

Results Over Bureaucracy: Transforming Federal Management and Workforce Planning

Canada’s federal government employs hundreds of thousands of people, yet far too often, success is measured by inputs rather than results. Hours worked, meetings attended, or forms completed dominate performance metrics, while citizens experience delays, inconsistent service, and bureaucratic frustration. Prime Minister Mark Carney has an opportunity to change this by embracing outcomes-based management and coupling it with a planned reduction of the federal workforce—a strategy that improves efficiency without undermining service delivery.

The case for outcomes-based management
Currently, federal management emphasizes process compliance over actual impact. Staff are assessed on whether they followed procedures, logged sufficient hours, or completed internal forms. While accountability is important, focusing on inputs rather than outputs fosters risk aversion, discourages initiative, and prioritizes process over public value.

Outcomes-based management flips this paradigm. Departments and employees are held accountable for tangible results: timeliness, accuracy, citizen satisfaction, and measurable program goals. Performance evaluation becomes tied to impact rather than paperwork. Managers are empowered to allocate resources strategically, encourage innovation, and remove obstacles that slow delivery. Employees gain clarity on expectations, flexibility in execution, and motivation to improve services.

This approach is widely recognized internationally as best practice in public administration. Governments that adopt outcomes-focused management report faster service delivery, higher citizen satisfaction, and better use of limited resources. It is a tool for effectiveness as much as efficiency.

Planned workforce reduction: 5% annually
Outcomes-based management alone does not shrink government, but it creates the environment to do so responsibly. With clearer accountability for results, the government can reduce headcount without impairing services. A planned 5% annual reduction over five years, achieved through retirements, attrition, and more selective hiring, offers a predictable, sustainable path to a smaller, more focused public service.

No mass layoffs are necessary. Instead, positions are left unfilled where feasible, and recruitment is limited to essential roles. Over five years, the workforce contracts by approximately 23%, freeing funds for high-priority programs while maintaining core services. At the end of the cycle, a full review assesses outcomes: delivery quality, service metrics, and costs. Adjustments can be made if reductions have inadvertently affected citizens’ experience.
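The compounding behind that figure is easy to check: cutting 5% a year for five years leaves 0.95^5, about 77%, of the original workforce, a cumulative reduction of roughly 23%. A few lines of Python make the arithmetic explicit (the starting headcount is illustrative, not the actual federal figure):

```python
def remaining_share(annual_cut: float, years: int) -> float:
    """Compound attrition: each year keeps (1 - annual_cut)
    of the previous year's headcount."""
    share = 1.0
    for _ in range(years):
        share *= 1.0 - annual_cut
    return share

if __name__ == "__main__":
    headcount = 100_000  # illustrative starting headcount
    share = remaining_share(0.05, 5)
    print(f"Remaining after 5 years: {headcount * share:,.0f} ({share:.1%})")
    print(f"Cumulative reduction: {1 - share:.1%}")  # about 22.6%, i.e. roughly 23%
```

Because the reduction compounds on a shrinking base, five cuts of 5% total a bit less than 25%, which is why the plan lands near 23% rather than a flat 25%.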

Synergy with the other reforms
This plan works hand-in-hand with the other two reforms proposed: eliminating internal cost recovery and adopting a single pay scale with one bargaining agent. With fewer staff and a streamlined compensation system, management gains greater clarity and control. Removing internal billing and administrative overhead frees staff to focus on outcomes, while a unified pay scale ensures fair and consistent compensation as the workforce shrinks. Together, these reforms create a coherent, accountable, and modern public service.

Benefits for Canadians
Outcomes-based management and planned workforce reduction offer multiple benefits:
1. Efficiency gains: Staff focus on work that delivers measurable results rather than administrative juggling.
2. Cost savings: Attrition-based reductions lower salary and benefits expenditures without disruptive layoffs.
3. Transparency: Clear metrics demonstrate value to taxpayers, building public trust.
4. Resilience and innovation: Departments adapt faster, encouraging problem-solving and continuous improvement.

Political and administrative feasibility
Canada has successfully experimented with elements of outcomes-based management in programs such as the Treasury Board’s Results-Based Management Framework and departmental performance agreements. These initiatives demonstrate that the federal bureaucracy can shift focus from inputs to results if given clear mandates and strong leadership. Coupled with a predictable downsizing plan, the government can modernize staffing while maintaining accountability and service quality.

A smarter, results-driven public service
Prime Minister Carney has the opportunity to reshape Ottawa’s culture. Moving from input-focused bureaucracy to outcomes-based management, and pairing it with a responsible workforce reduction, creates a public service that delivers more for less. Citizens experience faster, more reliable services; employees understand expectations and have clarity in their roles; and the government maximizes value from every dollar spent.

Together with eliminating internal cost recovery and adopting a single pay scale, this reform completes a trio of policies that make the federal government smaller, smarter, and more accountable. Canadians deserve a public service focused not on paperwork, but on results that matter. This is the path to a modern, efficient, and effective Ottawa.

Hosting Your Own AI: Why Everyday Users Should Consider Bringing AI Home

The rise of high-speed fibre internet has done more than just make Netflix faster and video calls clearer; it has opened the door for ordinary people to run powerful technologies from the comfort of their own homes. One of the most exciting of these possibilities is self-hosted artificial intelligence. While most people are used to accessing AI through big tech companies’ cloud platforms, the time has come to consider what it means to bring this capability in-house. For everyday users, the advantages come down to three things: security, personalization, and independence.

The first advantage is data security. Every time someone uses a cloud-based AI service, their words, files, or images travel across the internet to a company’s servers. That data may be stored, analyzed, or even used to improve the company’s products. For personal matters like health information, financial records, or private conversations, that can feel intrusive. Hosting an AI at home flips the equation. The data never leaves your own device, which means you, not a tech giant, are the one in control. It’s like the difference between storing your photos on your own hard drive versus uploading them to a social media site.

The second benefit is customization. The AI services offered online are built for the masses: general-purpose, standardized, and often limited in what they can do. By hosting your own AI, you can shape it around your life. A student could set it up to summarize their textbooks. A small business owner might feed it product information to answer customer questions quickly. A parent might even build a personal assistant trained on family recipes, schedules, or local activities. The point is that self-hosted AI can be tuned to match individual needs, rather than forcing everyone into a one-size-fits-all mold.

The third reason is independence. Relying on external services means depending on their availability, pricing, and rules. We’ve all experienced the frustration of an app changing overnight or a service suddenly charging for features that used to be free. A self-hosted AI is yours. It continues to run regardless of internet outages, company decisions, or international disputes. Just as personal computers gave households independence from corporate mainframes in the 1980s, self-hosted AI promises a similar shift today.

The good news is that ordinary users don’t need to be programmers or engineers to start experimenting. Open-source projects are making AI more accessible than ever. GPT4All offers a desktop app that works much like any other piece of software: you download it, run it, and interact with the AI through a simple interface. Ollama provides an easy way to install and switch between different AI models on your computer. Communities around these tools offer clear guides, friendly forums, and video tutorials that make the learning curve far less intimidating. For most people, running a basic AI system today is no harder than setting up a home printer or Wi-Fi router.
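As a small illustration of how approachable the local workflow has become: once Ollama is installed, it runs a server on your own machine and exposes a simple HTTP API, so even a short standard-library Python script can talk to a local model. This is a sketch under assumptions: the model name is an example, and it presumes you have already pulled a model and that Ollama is running on its default port.

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    # Ollama's local /api/generate endpoint accepts a JSON body of this shape;
    # "stream": False asks for one complete JSON reply instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

if __name__ == "__main__":
    import urllib.request

    payload = build_generate_request("llama3.2", "In one sentence, why host AI locally?")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local address
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Notice that the request never leaves localhost: the prompt, the model, and the answer all stay on your own hardware, which is precisely the privacy argument made above.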

Of course, there are still limits. Running the largest and most advanced models may require high-end hardware, but for many day-to-day uses (writing, brainstorming, answering questions, summarizing text), lighter models already perform impressively on standard laptops or desktop PCs. And just like every other piece of technology, the tools are becoming easier and more user-friendly every year. What feels like a hobbyist’s project in 2025 could be as common as antivirus software or cloud storage by 2027.

Self-hosted AI isn’t just for tech enthusiasts. Thanks to fibre internet and the growth of user-friendly tools, it is becoming a real option for everyday households. By bringing AI home, users can protect their privacy, shape the technology around their own lives, and free themselves from the whims of big tech companies. Just as personal computing once shifted power from corporations to individuals, the same shift is now within reach for artificial intelligence.

The Double Standard: Blocking AI While Deploying AI

In an era when artificial intelligence threatens to displace traditional journalism, a glaring contradiction has emerged: news organizations that block AI crawlers from accessing their content are increasingly using AI to generate content of their own. This stance not only undermines the values of transparency and fairness, but also exposes a troubling hypocrisy in the media’s engagement with AI.

Fortifying the Gates Against AI
Many established news outlets have taken concrete steps to prevent AI from accessing their content. As of early 2024, over 88 percent of top news outlets, including The New York Times, The Washington Post, and The Guardian, were blocking AI data-collection bots such as OpenAI’s GPTBot via their robots.txt files. Echoing these moves, a Reuters Institute report found that nearly 80 percent of prominent U.S. news organizations blocked OpenAI’s crawlers by the end of 2023, while roughly 36 percent blocked Google’s AI crawler.
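For readers unfamiliar with the mechanism, the blocking itself is just a few plain-text lines in a site’s robots.txt file. The user-agent strings below are the ones OpenAI and Google publish for their AI crawlers; a typical entry looks like this:

```txt
# Block OpenAI's training/data-collection crawler
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training uses (Gemini etc.)
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is a voluntary convention, not an enforcement mechanism, which is exactly why services like Cloudflare now offer network-level blocking on top of it.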

These restrictions are not limited to voluntary technical guidelines. Cloudflare has gone further, blocking known AI crawlers by default and offering publishers a “Pay Per Crawl” model, allowing access to their content only under specific licensing terms. The intent is clear: content creators want to retain control, demand compensation, and prevent unlicensed harvesting of their journalism.

But Then They Use AI To Generate Their Own Content
While these publishers fortify their content against external AI exploitation, they increasingly turn to AI internally to produce articles, summaries, and other content. This shift has real consequences: jobs are being cut and AI-generated content is being used to replace human-created journalism.
• Reach plc, publisher of the Mirror, the Express, and other titles, recently announced a restructuring that places 600 jobs at risk, including 321 editorial positions, as it pivots toward AI-driven formats like video and live content.
• Business Insider CEO Barbara Peng confirmed that roughly 21 percent of the staff were laid off to offset declines in search traffic, while the company shifts resources toward AI-generated features such as automated audio briefings.
• CNET faced backlash after it published numerous AI-generated stories under staff bylines, some containing factual errors. The fallout led to corrections and a renewed pushback from newsroom employees.

The Hypocrisy Unfolds
This dissonance, blocking AI while deploying it, lies at the heart of the hypocrisy. On one hand, publishers argue for content sovereignty: preventing AI from freely ingesting and repurposing their work. On the other hand, they quietly harness AI for their own ends, often reducing staffing under the pretense of innovation or cost-cutting.

This creates a scenario in which:
• AI is denied access to public content, while in-house AI is trusted with producing public-facing content.
• Human labor is dismissed in the name of progress, even though AI is not prevented from tapping into the cultural and journalistic capital built over years.
• Control and compensation arguments are asserted to keep AI out, yet the same AI is deployed strategically to reshape newsroom economics.

This approach fails to reconcile the ethical tensions it embodies. If publishers truly value journalistic integrity, transparency, and compensation, then applying those principles selectively, accepting them only when convenient, is disingenuous. The news media’s simultaneous rejection and embrace of AI reflect a transactional, rather than principled, stance.

A Path Forward – or a Mirage?
Some publishers are demanding fair licensing models, seeking to monetize AI access rather than simply deny it. The emergence of frameworks like the Really Simple Licensing (RSL) standard allows websites to specify terms, such as royalties or pay-per-inference charges, in their robots.txt, aiming for a more equitable exchange between AI firms and content creators.

Still, that measured approach contrasts sharply with using AI to cut costs internally, a strategy that further alienates journalists and erodes trust in media institutions.

Integrity or Expedience?
The juxtaposition of content protection and AI deployment in newsrooms lays bare a cynical calculus: AI is off-limits when others use it, but eminently acceptable when it serves internal profit goals. This selective embrace erodes the moral foundation of journalistic institutions and raises urgent questions:
• Can publishers reconcile the need for revenue with the ethical imperatives of transparency and fairness?
• Will the rapid rise of AI content displace more journalists than it empowers?
• And ultimately, can media institutions craft coherent policies that honor both their creators and the audience’s right to trustworthy news?

Perhaps there is a path toward licensing frameworks and responsible AI use that aligns with journalistic values. But as long as the will to shift blame, “not us scraping, but us firing,” persists, the hypocrisy remains undeniable.