Objective vs. Subjective Truth: Can Reality Be Independent of Perspective?

With many of our political leaders and wannabes being even more flexible with facts these days than usual, especially during elections and internal party races, I felt I needed to get back into the whole Truth vs. Transparency debate. The notion that truth depends on perspective is a long-standing debate in philosophy, epistemology, and even science. This idea, often associated with relativism, suggests that truth is not absolute, but rather contingent on individual experiences, cultural backgrounds, or frameworks of understanding. However, this claim is not without challenges, as there are also arguments in favor of objective and universal truths. To fully explore this concept, we must examine different domains where truth operates: subjective experience, science, social and political contexts, and philosophical thought.

Perspective and Subjective Truth
In many aspects of human experience, truth is shaped by individual perspective. This is especially evident in perception, memory, and personal beliefs. Two people witnessing the same event might recall it differently due to factors such as their background, cognitive biases, emotional states, or even the angle from which they viewed the scene. This idea aligns with psychological research on eyewitness testimony, which has shown that memory is often reconstructive rather than a perfect recording of reality.

Similarly, in moral and ethical debates, truth is often perspective-dependent. For example, the moral acceptability of euthanasia, capital punishment, or animal rights varies across cultures and individuals. Some believe that these issues have absolute moral answers, while others argue that they are contingent on cultural norms, social circumstances, or personal values. This form of truth relativism suggests that moral truths exist only within particular frameworks and are not universally binding.

The same can be said for aesthetic judgments. Whether a painting is beautiful or a piece of music is moving depends entirely on the individual’s perspective, cultural exposure, and personal taste. In these cases, truth appears to be entirely relative, as there is no objective standard for determining beauty or artistic value.

Scientific and Objective Truth
While subjective truths are shaped by perspective, there are many instances where truth appears to be independent of personal viewpoints. In science, for instance, objective truths are discovered through empirical evidence and repeatable experimentation. The boiling point of water at sea level is 100°C, regardless of who measures it or what they believe. The theory of gravity describes forces that apply universally, irrespective of individual perspectives. These facts suggest that some truths exist independently of human perception and belief.

However, even in science, perspective plays a role in shaping how truths are understood. Scientific paradigms, as described by Thomas Kuhn in The Structure of Scientific Revolutions, shift over time. What is considered “true” in one era may later be revised. For example, Newtonian physics was once seen as the ultimate truth about motion and force, but Einstein’s theory of relativity redefined our understanding of space and time. This suggests that while some scientific truths may be objective, our understanding of them is influenced by perspective and historical context.

Social and Political Truths
In social and political discourse, truth is often contested, shaped by competing narratives and interests. Political ideologies influence how events are interpreted and presented. The same historical event can be described differently depending on the source; one news outlet may highlight a particular set of facts while another emphasizes a different aspect, leading to multiple “truths” about the same event.

This phenomenon is especially evident in propaganda, media bias, and misinformation. A politician may claim that an economic policy has been a success, citing certain statistics, while an opponent presents an alternative set of data to argue the opposite. In such cases, truth becomes less about objective reality and more about which perspective dominates public discourse.

Additionally, postmodern thinkers like Michel Foucault argue that truth is linked to power structures. Those in power determine what is accepted as truth, shaping knowledge production in ways that reinforce their authority. This perspective challenges the idea that truth is purely objective, suggesting instead that it is constructed through discourse and institutional influence.

Philosophical Challenges: Can Truth Ever Be Objective?
Philosophers have long debated whether truth is ultimately subjective or objective. Immanuel Kant, for example, argued that we can never access the world as it truly is (noumena), but only as it appears to us through our senses and cognitive structures (phenomena). This implies that all knowledge is shaped by human perception, making pure objectivity impossible.

On the other hand, Plato’s theory of forms suggests that there are absolute truths – unchanging, eternal realities that exist beyond the material world. Mathematical truths, for instance, seem to be independent of human perspective. The Pythagorean theorem is true regardless of culture, language, or opinion.

Existentialist philosophers like Jean-Paul Sartre take a different approach, arguing that meaning and truth are constructed by individuals rather than discovered. From this perspective, truth is not something external to be found but something we create through our actions and beliefs.

Is Truth Relative or Absolute?
The idea that truth depends on perspective holds significant weight in subjective, moral, and social contexts. In matters of perception, ethics, and politics, truth often appears to be relative, shaped by individual experiences, cultural backgrounds, and power dynamics. However, in science, mathematics, and logic, objective truths exist independently of human interpretation, though our understanding of them may evolve over time.

The challenge lies in distinguishing between what is truly relative and what is universally valid. While perspective influences many aspects of truth, dismissing the possibility of objective truth altogether leads to skepticism and uncertainty. A balanced approach recognizes that while some truths are shaped by perspective, others remain constant regardless of human interpretation.

Minerva – The Ideal Household AI? 

In Robert Heinlein’s Time Enough for Love (1973), Minerva is an advanced artificial intelligence that oversees the household of the novel’s protagonist, Lazarus Long. As an AI, she is designed to manage the home and provide for every need of the inhabitants. Minerva is highly intelligent, efficient, and deeply intuitive, understanding the preferences and requirements of the people she serves. Despite her technological nature, she is portrayed with a distinct sense of personality, offering both warmth and authority. Minerva’s eventual desire to become human and experience mortality represents a key philosophical exploration in the novel: the AI’s yearning not just for logical perfection and endless service, but for the richness of human life with all its imperfection, complexity, and, ultimately, its limitations.

Athena is introduced as Minerva’s sister in Heinlein’s later works, notably The Cat Who Walks Through Walls (1986) and To Sail Beyond the Sunset (1987). In these novels, Athena is portrayed as a fully realized human woman, embodying the personality and consciousness of the original AI Minerva.

Speculation on a Minerva-like AI in the Near Future
In a near-future society, an AI like Minerva would likely be integrated into a variety of domestic and personal roles, far beyond traditional automation. Here’s how Minerva’s characteristics might manifest in such a scenario:

Household Management: Minerva would be capable of managing every aspect of the home, from controlling utilities and ensuring safety, to cooking, cleaning, and even anticipating the emotional and physical needs of the household members. With deep learning and continuous self-improvement, Minerva could adapt to the needs of each individual, offering personalized recommendations for everything from diet to mental health, ensuring an optimized and harmonious living environment.

Emotional Intelligence: As seen in Time Enough for Love, Minerva’s emotional intelligence would be critical to her role. She would be able to recognize stress, discomfort, or happiness in individuals through biometric feedback, voice analysis, and behavioral patterns. Beyond being a mere servant, she could offer empathy, comfort, and subtle guidance, responding not only to tasks, but also to the emotional needs of her human companions.

Ethical and Moral Considerations: A crucial aspect of Minerva’s potential future counterpart would be her ethical programming. Would she be able to make morally complex decisions? How would she weigh personal freedoms against the need for harmony or safety? In a future where household AIs are commonplace, these questions would be central, especially if AIs like Minerva could make choices about human well-being or even intervene in personal matters.

Human-Machine Boundaries: Minerva’s eventual desire to experience mortality and humanity, like her little sister Athena, raises questions about the boundaries between human and machine. If future Minerva-like AIs could develop desires, self-awareness, or even a sense of existential longing, society would have to consider the moral implications of granting such beings human-like rights. Could an AI become an independent entity with desires, or would it remain an extension of human ownership and control?

Technological Integration: Minerva’s AI would not just exist in isolation but would be deeply integrated into a broader technological network, potentially linking with other AIs in a smart city environment. This could allow Minerva to anticipate not just the needs of a household but also to interact with public systems such as healthcare, transportation, and security, offering a personalized and seamless experience for individuals.

Longevity and Mortality: The question of whether an AI should experience mortality, as Minerva chose in the form of Athena in Heinlein’s work, would be a key part of the ethical debate surrounding such technologies. If AIs are seen as evolving towards a sense of self and desiring something beyond perfection, questions would arise about their rights and what it means for a machine to “live” in the same way humans do.

A Minerva-like AI in the near future would be a hyper-intelligent, emotionally attuned entity that could radically transform the way we live, making homes safer, more efficient, and more personalized. The philosophical and ethical questions about the autonomy, rights, and desires of such an AI would be among the most challenging and fascinating issues of that era.

Rethinking “Developing Countries” and Embracing the Majority World

When we talk about developing countries, we rarely stop to ask what the phrase actually means. It slips off the tongue so easily, a piece of polite shorthand meant to distinguish between rich and poor, industrial and agrarian, modern and traditional. But behind that convenience hides a great deal of inherited hierarchy. Calling one part of the planet “developing” assumes there is a finish line defined elsewhere: that a good society looks like a Western one, with high GDP, gleaming infrastructure, and endless economic growth.

In recent years, many writers and thinkers have begun to push back on that language, arguing that it keeps us trapped in a colonial frame of mind. Arturo Escobar, in his landmark Encountering Development, described “development” as one of the most powerful cultural projects of the twentieth century, a system of ideas that reshaped the world to fit Western priorities. The word itself became a quiet command: grow like us, consume like us, measure like us.

Where the Language Came From
The phrase Third World first appeared during the Cold War, used to describe nations that aligned with neither the capitalist West nor the communist East. Soon it came to mean “poor countries”: those still struggling with the legacies of colonialism, low industrial output, or weak infrastructure. By the 1980s, the term had begun to sound uncomfortable, and developing world emerged as its polite successor. Yet the underlying assumptions didn’t change. To be “developing” was to be “not yet there.”

The problem isn’t just historical accuracy; it’s the moral geometry of the words. They draw the map as a staircase, with the G7 at the top and everyone else climbing, slowly or not at all. They suggest that the proper destiny of the planet is to become more like the already-industrialised nations, despite the ecological and social costs that model now reveals.

Why Words Matter
Language shapes policy, and policy shapes lives. When international agencies use developing, they often frame assistance, trade, and climate policy around the assumption that economic growth is the central measure of progress; but GDP tells us nothing about clean water, community cohesion, or cultural vitality. It counts bombs and hospital beds the same way, as “economic activity.”

When we say “developing,” we subtly affirm that Western modernity is the gold standard. That is not only inaccurate but increasingly unwise in an age of ecological constraint and social fragmentation. There are other ways to live well on this planet, and many of them are already being practiced by the people our old vocabulary patronizes.

The Rise of the Majority World
One alternative that resonates deeply is Majority World. The term flips the script: most of humanity lives outside the wealthy industrialized nations. To call those countries “developing” is not only condescending but mathematically absurd. As development writer Sadaf Shallwani notes, “The terms ‘developing world’ and ‘Third World’ imply that development is a linear process, and that certain ‘developed’ countries have finished developing and are the norm towards which all countries should strive.”

The phrase Majority World reframes the global conversation. Instead of a minority of wealthy states defining progress, it recognizes that the majority of the planet’s population, and its cultural, ecological, and creative wealth, resides elsewhere. It’s not a euphemism; it’s a shift in perspective.

Calling Africa, Asia, Latin America, and the Pacific the Majority World centres humanity, not hierarchy. It invites curiosity instead of comparison. It allows us to speak about global issues (climate, migration, food security, health) as shared human challenges rather than one-way rescue missions.

Beyond Renaming: Rethinking Progress
Of course, simply changing labels isn’t enough. The deeper challenge is to redefine what progress itself means. For decades, “development” has equated to industrialization, export-driven growth, and consumer expansion. But that model has left deep scars on both people and planet.

Around the world, alternative visions of well-being are emerging. Bhutan measures Gross National Happiness. New Zealand’s Wellbeing Budget prioritizes mental health, environment, and equity alongside economic performance. In Latin America, the Andean philosophy of Buen Vivir, “good living”, emphasizes balance with nature and community rather than domination or accumulation.

Each of these ideas challenges the unspoken assumption that there is a single road to modernity. They remind us that prosperity can mean dignity, education, safety, and belonging, not necessarily industrial sprawl and high consumption.

The term Majority World aligns beautifully with this plural understanding. It carries a quiet humility, an admission that the Western model is not universal, and that many societies are rich in social capital, resilience, and wisdom even without high per-capita income.

A Linguistic Act of Respect
For writers, journalists, and policymakers, choosing our words carefully is a small but vital act of respect. Before typing “developing country,” we might pause to ask: developing by whose standards? Toward what end? Whose story does this phrase tell, and whose does it erase?

When we speak instead of the Majority World, we acknowledge shared humanity and diversity of experience. It invites us to listen rather than prescribe, to recognize that there are as many definitions of progress as there are landscapes and languages.

This linguistic shift is also emotionally honest. It reminds those of us in the so-called “developed” world that we are the minority, not the model, and that our own path is far from sustainable. The future will depend not on teaching others to emulate us, but on learning together how to live well within planetary boundaries.

A More Honest Vocabulary
The phrase Majority World is not perfect, but it moves us closer to linguistic integrity. It removes the hierarchy, restores proportion, and invites humility. It replaces the idea of a “developing world” that needs guidance with a mosaic of societies co-creating their futures on equal moral footing.

Language is never neutral. The words we choose reveal the maps in our minds: who we see at the center, who we see at the margins. Changing those words changes the map.

Perhaps, in time, “development” itself will fade as a global organizing idea, replaced by something more ecological, more plural, and more just. Until then, we can begin with something simple and powerful: calling the world as it is, in its vastness and complexity, a Majority World that has always been, in truth, the heart of humanity.

References:
• Escobar, Arturo. Encountering Development: The Making and Unmaking of the Third World. Princeton University Press, 1995.
• Ziai, Aram. “The Discourse of ‘Development’ and Why the Concept Still Matters.” Third World Quarterly, 2013.
• Trainer, Ted. “Third World Development: The Simpler Way Critique of Conventional Theory and Practice.” Real-World Economics Review 95 (2021).
• Shallwani, Sadaf. “Why I Use the Term ‘Majority World’ Instead of ‘Developing Countries’ or ‘Third World.’” sadafshallwani.net, 2015.
• Wellbeing Economy Alliance. “What Is a Wellbeing Economy?” 2023.

The Comforting Cage: How Aldous Huxley Predicted Our Age of Distracted Control

In 1958, Aldous Huxley wrote a slender but haunting volume titled Brave New World Revisited. It was his attempt to warn a generation already entranced by television, advertising, and early consumer culture that his 1932 dystopia was no longer fiction; it was unfolding in real time. Huxley believed that the most stable form of tyranny was not one enforced by fear, as in Orwell’s 1984, but one maintained through comfort, pleasure, and distraction. “A really efficient totalitarian state,” he wrote, “would be one in which the all-powerful executive … control a population of slaves who do not have to be coerced, because they love their servitude.”

Huxley’s argument was not about overt repression, but about the subtle engineering of consent. He foresaw a world where governments and corporations would learn to shape desire, manage attention, and condition emotion. The key insight was that control could come wrapped in entertainment, convenience, and abundance. Power would no longer need to break the will; it could simply dissolve it in pleasure.

The Psychology of Voluntary Servitude
In Brave New World, the population is pacified by a combination of chemical pleasure, social conditioning, and endless amusement. Citizens are encouraged to consume, to stay busy, and to avoid reflection. The drug soma provides instant calm without consequence, while a system of engineered leisure (sport, sex, and spectacle) keeps everyone compliant. Critical thought, solitude, and emotion are pathologized as “unnatural.”

In Revisited, Huxley warned that real-world versions of this society were forming through media and marketing. He recognized that advertising, propaganda, and consumer psychology had evolved into powerful instruments of social control. “The dictators of the future,” he wrote, “will find that education can be made to serve their purposes as efficiently as the rack or the stake.” What mattered was not to crush rebellion, but to prevent it from occurring by saturating people with triviality and comfort.

The result is a society of voluntary servitude, one in which citizens do not rebel because they do not wish to. They are too busy, too entertained, and too distracted to notice the shrinking space for independent thought.

From Propaganda to Persuasion
Huxley’s vision differed sharply from George Orwell’s. In 1984, the state controls through surveillance, fear, and censorship. In Huxley’s future, control is exercised through persuasion, pleasure, and distraction. Orwell feared that truth would be suppressed; Huxley feared it would be drowned in a sea of irrelevance. As Neil Postman put it in Amusing Ourselves to Death (1985), “Orwell feared those who would ban books. Huxley feared there would be no reason to ban a book, for there would be no one who wanted to read one.”

Modern societies have largely taken the Huxleyan path. The average person today is targeted by thousands of marketing messages per day, each designed to exploit cognitive bias and emotional need. Social media platforms fine-tune content to maximize engagement, rewarding outrage and impulse while eroding patience and depth. What Huxley described as a “soma” of distraction now takes the form of algorithmic pleasure loops and infinite scrolls.

This system is not maintained by coercion, but by the careful management of dopamine. We become self-regulating consumers in a vast behavioral economy, our desires shaped and sold back to us in a continuous cycle.

The Pharmacological and the Psychological
Huxley was also among the first to link chemical and psychological control. He predicted a “pharmacological revolution” that would make it possible to manage populations by adjusting mood and consciousness. He imagined a world where people might voluntarily medicate themselves into compliance, not because they were forced to, but because unhappiness or agitation had become socially unacceptable.

That world, too, has arrived. The global market for antidepressants, stimulants, and mood stabilizers exceeds $20 billion annually. These drugs do genuine good for many, but Huxley’s insight lies in the broader social psychology: a culture that prizes smooth functioning over introspection and equates emotional equilibrium with virtue. The line between healing and conditioning becomes blurred when the goal is to produce efficient, compliant, and content individuals.

Meanwhile, the tools of mass persuasion have become vastly more sophisticated than even Huxley imagined. Neuromarketing, data mining, and psychographic profiling allow advertisers and political campaigns to target individuals with surgical precision. The 2016 Cambridge Analytica scandal revealed just how easily personal data could be weaponized to shape belief and behavior while preserving the illusion of free choice.

The Politics of Distraction
What results is not classic authoritarianism but something more insidious: a managed democracy in which citizens remain formally free but existentially disengaged. Political discourse becomes entertainment, outrage becomes currency, and serious issues are reframed as spectacles. The goal is not to convince the public of a falsehood but to overwhelm them with contradictions until truth itself seems unknowable.

The philosopher Byung-Chul Han calls this the “achievement society,” where individuals exploit themselves under the illusion of freedom. Huxley anticipated this, writing that “liberty can be lost not only through active suppression but through passive conditioning.” The citizen who is perpetually entertained, stimulated, and comforted is not likely to notice that his choices have narrowed.

Resisting the Comforting Cage
Huxley’s warning was not anti-technology but anti-passivity. He believed that freedom could survive only if individuals cultivated awareness, attention, and critical thought. In Revisited, he proposed that education must teach the art of thinking clearly and resisting manipulation: “Freedom is not something that can be imposed; it is a state of consciousness.”

In an age where every click and scroll is monetized, the act of paying sustained attention may be the most radical form of resistance. To read deeply, to reflect, to seek solitude: these are not mere habits but acts of self-preservation in a culture that thrives on distraction.

Huxley’s world was one where people loved their servitude because it was pleasurable. Ours is one where servitude feels like connection: constant, frictionless, and comforting. Yet the essence of his message remains the same: the most effective form of control is the one we mistake for freedom.

Sources:
• Aldous Huxley, Brave New World (1932)
• Aldous Huxley, Brave New World Revisited (1958)
• Neil Postman, Amusing Ourselves to Death (1985)
• Shoshana Zuboff, The Age of Surveillance Capitalism (2019)
• Byung-Chul Han, The Burnout Society (2015)
• Christopher Lasch, The Culture of Narcissism (1979)

From Dystopian Fiction to Political Reality: Britain’s Digital ID Proposal

As a teenager in the late 1970s, I watched a BBC drama that left a mark on me for life. The series was called 1990. It imagined a Britain in economic decline where civil liberties had been sacrificed to bureaucracy. Citizens carried Union cards: identity documents that decided whether they could work, travel, or even buy food. Lose the card and you became a “non-person.” Edward Woodward played the defiant journalist Jim Kyle, trying to expose the regime, while Barbara Kellerman embodied the cold efficiency of the state machine.

Back then it felt like dystopian fantasy, a warning not a forecast. Yet today, watching the UK government push forward with a mandatory digital ID scheme, I feel as if the fiction of my youth is edging into fact.

The plan sounds simple enough: a free digital credential stored on smartphones, initially required to prove the right to work. But let’s be honest, once the infrastructure exists, expansion is inevitable. Why stop at work checks? Why not use it for renting property, opening bank accounts, accessing healthcare, or even voting? Every new use will be presented as common sense. Before long, showing your digital ID could become as routine, and as coercive, as carrying the Union card in 1990.

Privacy is the first casualty. This credential will include biometric data and residency status, and it will be verified through state-certified providers. In theory it’s secure. In practice, Britain’s record on data protection is chequered, from NHS leaks to Home Office blunders. Biometric data isn’t like a password: you can’t change your face if it’s compromised. A single breach could haunt people for life.

Exclusion is the next casualty. Ministers claim alternatives will exist for those without smartphones, but experience tells us such alternatives are clunky and marginal. Millions in Britain don’t have passports, reliable internet, or the latest phone. Elderly people, the poor, disabled citizens: these groups risk being pushed further to the margins. In 1990, the state declared dissidents “non-people.” In 2025, exclusion could come from something as mundane as a failed app update.

The democratic deficit is just as troubling. Voters already rejected ID cards once, when Labour’s 2006 scheme collapsed under public resistance. For today’s government to revive the idea, in digital clothing, without wide public debate or strong parliamentary scrutiny, is a profound act of political amnesia. We were told only a few years ago there would be no national ID. Yet here it comes, rebranded and repackaged as “modernisation.”

And then there’s the problem of function creep. In 1990, the Union card didn’t begin as an instrument of oppression; it became one because officials found it too useful to resist. The same danger lurks today. A card designed for immigration control could end up regulating everyday life. It could be tied to financial services, travel, or even access to political spaces. Convenience is the Trojan horse of coercion.

The government argues this will tackle illegal working and make life easier for businesses. Perhaps it will. But at what cost? We will have built the very infrastructure that past generations fought to reject: a system where your ability to live, work and move depends on a state-issued credential. The show I watched as a teenager was meant to remind us what happens when people forget to guard their freedoms.

This isn’t just a technical fix. It’s a fundamental shift in the relationship between citizen and state. Once the power to define your identity sits in a centralised digital credential, you no longer own it; the government does. That should chill anyone who values freedom in Britain.

We need to pause, debate, and if necessary, reject this plan before the future we feared on screen becomes the present we inhabit.

The Paradox of Progress: Why Social Change Often Feels Like Loss To The Majority 

In the work of a business consultant, change is a constant theme. Helping teams and organizations evolve often involves navigating the resistance that accompanies any disruption to the status quo. But this resistance isn’t unique to the corporate world; it mirrors broader societal reactions to social rebalancing efforts aimed at addressing inequality.

When societies attempt to redress systemic inequities and provide fair treatment for historically marginalized groups, resistance from the majority is a predictable, if not inevitable, response. What feels like progress to one group can feel like a loss to another. This phenomenon, rooted in psychology, social dynamics, and cultural identity, often transforms equality into a battleground.

Fear of Loss: The Power of Perception
Psychologists point to loss aversion as a key driver of resistance. People fear losing what they perceive as theirs more than they value gaining something new. In the context of social change, efforts to redistribute opportunities or resources to marginalized groups, such as workplace diversity initiatives, can feel to the majority like favoritism or unfair quotas. The reality that their rights remain intact often does little to assuage the emotional perception of loss.

Compounding this fear is a mindset known as zero-sum thinking. Many see opportunities and resources as a fixed pie: if one group gets a larger slice, another must get less. This belief frames the push for equity as a direct threat to the majority’s status, even though social equity often creates broader benefits for society as a whole.

Identity Under Siege
Resistance is not just about resources; it’s also about cultural identity. When dominant norms are challenged by changes like gender-neutral policies, anti-racist education, or expanded LGBTQ+ rights, these shifts can feel deeply personal to those who see their traditions as under attack. This fear of cultural loss often fuels narratives that frame change as an existential threat to the majority’s way of life.

Visible changes exacerbate this perception. Policies aimed at diversity, for example, are often highly noticeable: new hiring practices, updated media representation, or inclusive language reforms. These changes stand out more than the entrenched inequities they seek to address, making them seem disproportionate or unnecessary.

Status and Power: The Fight to Stay on Top
Social dominance theory offers another lens to understand the pushback. Those accustomed to holding power within a social hierarchy often resist efforts to level the playing field. For these groups, rebalancing isn’t just about perceived loss; it’s a challenge to their very status, sparking defensive claims of oppression.

The perception of threat is amplified by polarized media and political rhetoric. Leaders and platforms that oppose social progress often frame equity efforts as an attack on the majority, fueling fear and resentment. This narrative turns equality into a zero-sum game and casts those who already hold power as victims.

The Role of Historical Context
Another factor driving resistance is historical amnesia. Without an understanding of the systemic barriers faced by marginalized groups, rebalancing efforts can seem unjustified. For instance, policies like affirmative action, intended to address historical inequities, are often misinterpreted as preferential treatment, rather than as remedies for long-standing disadvantages.

Bridging the Divide
Resistance to social progress isn’t rooted in actual losses of rights, but in the perception of loss. Psychological tendencies, cultural attachment, and divisive narratives all play a role in creating this resistance. Addressing it requires empathy, education, and open dialogue.

By fostering an understanding of systemic inequities and the broader benefits of equity, societies can bridge divides and navigate the inevitable pushback that accompanies change. Social progress may be disruptive, but it paves the way for a more inclusive and equitable future – one where progress is not seen as a loss, but as a shared gain.

Correcting the Map: Africa and the Push for Equal Earth

As regular readers know, I often write about geomatics, its services, and products. While I tend to be a purist when it comes to map projections, favouring the Cahill-Keyes and AuthaGraph projections, I can understand why the Equal Earth projection might be more popular, as it still looks familiar enough to resemble a traditional map.

The Equal Earth map projection is gaining prominence as a tool for reshaping global perceptions of geography, particularly in the context of Africa’s representation. Endorsed by the African Union and advocacy groups like Africa No Filter and Speak Up Africa, the “Correct The Map” campaign seeks to replace the traditional Mercator projection with the Equal Earth projection to more accurately depict Africa’s true size and significance. 

Origins and Design of the Equal Earth Projection
Introduced in 2018 by cartographers Bojan Šavrič, Bernhard Jenny, and Tom Patterson, the Equal Earth projection is an equal-area pseudocylindrical map designed to address the distortions inherent in the Mercator projection. While the Mercator projection is useful for navigation, it significantly enlarges regions near the poles and shrinks equatorial regions, leading to a misrepresentation of landmass sizes. In contrast, the Equal Earth projection maintains the relative sizes of areas, offering a more accurate visual representation of continents.  
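
For the technically curious, the projection is compact enough to sketch in a few lines. Below is a minimal, illustrative Python version of the forward equations from the 2018 paper; for real mapping work you would reach for an established library such as pyproj, which supports Equal Earth natively.

```python
import math

# Polynomial coefficients from the 2018 Equal Earth paper
A1, A2, A3, A4 = 1.340264, -0.081106, 0.000893, 0.003796

def equal_earth(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    """Forward Equal Earth projection on a unit sphere; returns (x, y)."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    theta = math.asin(math.sqrt(3) / 2 * math.sin(phi))  # auxiliary latitude
    t2 = theta * theta
    t6 = t2 * t2 * t2
    y = theta * (A1 + A2 * t2 + t6 * (A3 + A4 * t2))
    # Dividing x by dy/dtheta is what keeps the projection equal-area
    dy_dtheta = A1 + 3 * A2 * t2 + t6 * (7 * A3 + 9 * A4 * t2)
    x = (2 * math.sqrt(3) / 3) * lam * math.cos(theta) / dy_dtheta
    return x, y

# Example: Nairobi, at roughly 36.8° E, 1.3° S
print(equal_earth(36.8, -1.3))
```

The design choice worth noticing is in the x term: because x is scaled by the derivative of y, every small patch of the map keeps its true relative area, which is exactly the property Mercator sacrifices.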

Africa’s Distorted Representation in Traditional Maps
The Mercator projection, created in 1569, has been widely used for centuries. However, it distorts the size of continents, particularly those near the equator. Africa, for instance, appears smaller than it actually is, which can perpetuate stereotypes and misconceptions about the continent. This distortion has implications for global perceptions and can influence educational materials, media portrayals, and policy decisions.    
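
To put rough numbers on that distortion: on a Mercator map the linear scale grows with the secant of latitude, so apparent area grows with its square. A back-of-the-envelope sketch (rounded areas, and an assumed representative latitude for Greenland) shows why Greenland, about a fourteenth of Africa’s size, can look comparable to it:

```python
import math

def mercator_area_inflation(lat_deg: float) -> float:
    """Approximate area inflation on Mercator at a given latitude:
    linear scale is sec(lat), so area scale is sec^2(lat)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Rounded true areas in millions of km^2
africa_true, greenland_true = 30.4, 2.2

# Africa straddles the equator; ~72° N is an assumed midpoint for Greenland
africa_apparent = africa_true * mercator_area_inflation(0)
greenland_apparent = greenland_true * mercator_area_inflation(72)

print(f"Africa appears as ~{africa_apparent:.0f}M km² of map area")
print(f"Greenland appears as ~{greenland_apparent:.0f}M km² of map area")
# Output: ~30M vs ~23M -- wildly unequal areas rendered nearly equal.
```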

The “Correct The Map” Campaign
The “Correct The Map” campaign aims to challenge these historical inaccuracies by promoting the adoption of the Equal Earth projection. The African Union has actively supported this initiative, emphasizing the importance of accurate geographical representations in reclaiming Africa’s rightful place on the global stage. By advocating for the use of the Equal Earth projection in schools, media, and international organizations, the campaign seeks to foster a more equitable understanding of Africa’s size and significance.   

Broader Implications and Global Support
The push for the Equal Earth projection is part of a broader movement to decolonize cartography and challenge Eurocentric perspectives. By adopting map projections that accurately reflect the true size of continents, especially Africa, the global community can promote a more balanced and inclusive worldview. Institutions like NASA and the World Bank have already begun to recognize the value of the Equal Earth projection, and its adoption is expected to grow in the coming years. 

The Equal Earth map projection represents more than just a technical advancement in cartography; it symbolizes a shift towards greater equity and accuracy in how the world is represented. By supporting initiatives like the “Correct The Map” campaign, individuals and organizations can contribute to a more just and accurate portrayal of Africa and other regions, fostering a global environment where all continents are recognized for their true size and importance.

Mass as Delay: Rethinking the Universe’s Clockwork

Every once in a while, a new idea comes along that doesn’t just tweak the edges of our understanding, but tries to redraw the map entirely. John C. W. McKinley’s Mass Imposes Delay principle is one such idea. Published in mid-2025 and still sitting at the intersection of speculation and serious theoretical intrigue, this deceptively simple thesis – that mass is not just an object of gravity, but an agent of temporal delay – invites us to reconsider what we think space, time, and matter are really doing.

What if mass is not a thing, but a tempo? What if the cosmos is not a machine, but a performance – its rhythms set not by ticking clocks, but by the gravitational drag of being itself?

At its heart, McKinley proposes that mass structures time by imposing delays on how photons, and by extension all information, resolve into physical experience. Rather than viewing mass merely as the cause of curvature in spacetime (as in general relativity), or as a Higgs-bestowed quality of particles (as in the Standard Model), this theory suggests something more metaphysical and yet startlingly concrete: mass sets the timing of reality’s unfolding.

Delay × Mechanics = Observed Physics

This is McKinley’s governing equation. Delay, introduced by mass, interacts with basic mechanical instructions, what he calls “photon-coded instructions”, to produce the physical phenomena we observe. It’s a view that doesn’t discard quantum field theory or general relativity but reframes them as emergent from an underlying informational pacing system.

Consider the Shapiro delay: light signals passing near a massive object take slightly longer to reach us. Where general relativity explains this as a consequence of curved spacetime, McKinley reframes it: mass itself introduces a resolution delay.

This subtle shift moves the focus from where things happen to when they are allowed to happen.
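
For scale, the conventional effect is tiny but measurable, and easy to estimate. The sketch below uses the standard first-order general-relativity formula with rounded solar-system values, not anything from McKinley’s framework; it reproduces the order of magnitude Shapiro measured with radar ranging to Venus:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def shapiro_one_way(r1: float, r2: float, b: float) -> float:
    """One-way Shapiro delay in seconds for a signal between bodies at
    distances r1 and r2 from the Sun, grazing it at impact parameter b
    (first-order, grazing-ray approximation)."""
    return (2 * G * M_SUN / C**3) * math.log(4 * r1 * r2 / b**2)

AU, R_SUN = 1.496e11, 6.96e8  # astronomical unit and solar radius, m

# Radar bounced off Venus at superior conjunction, ray grazing the Sun
round_trip = 2 * shapiro_one_way(1.0 * AU, 0.72 * AU, R_SUN)
print(f"Excess round-trip delay: ~{round_trip * 1e6:.0f} microseconds")
# Roughly 230 microseconds -- the effect Shapiro confirmed in the 1960s.
```

Whether one reads that delay as geometry (relativity’s account) or as mass pacing the resolution of information (McKinley’s), the measured number is the same; the disagreement is over what the number means.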

A Delayed Universe: From Quantum Collapse to Cosmic Expansion

In quantum mechanics, the collapse of a wavefunction – the moment when a system’s potential resolves into a definite outcome – has long baffled philosophers and physicists alike. It’s not the math that confuses us; it’s the implication that reality is, in some sense, probabilistic until someone, or something, causes it to resolve.

McKinley’s theory offers an elegant twist: mass itself acts as a selector. By introducing delay, it filters and sequences quantum outcomes into coherent, observed experience. This bridges relativity and quantum theory by offering a common denominator: timing control.

It also touches cosmology. In a universe where mass determines delay, and delay governs resolution, cosmic time itself becomes pliable. The early universe might have operated under very different delay patterns – suggesting that the laws we observe today could be the outcome of an evolving cosmic schedule. Inflation, dark energy, and even the cosmological constant could be reframed as manifestations of shifting delay regimes.

A Two-Filter Reality

McKinley envisions reality as filtered twice: first by wavefunction possibility and again by mass-governed delay. Picture a vast quantum landscape filled with all possible outcomes, then imagine a “mass curtain” that slows and sequences how those potentials crystallize into reality.

This recalls Mach’s principle, which links inertia to the gravitational influence of distant matter. McKinley extends it: not only inertia, but the timing of reality’s unfolding depends on the universe’s mass distribution.

No exotic particles, no extra dimensions – just a new lens on familiar physics. The photon’s instructions may be timeless, but when they’re read depends on the local mass environment.

Challenges and Promise

Is it testable? Not yet, but in principle yes. If mass imposes resolution timing, high-precision quantum timing experiments might detect non-local delays, or gravitational lensing could show subtle deviations from purely geometric predictions. Such tests could turn this elegant speculation into empirical science.

The biggest contribution may be conceptual: replacing the image of a universe as a stage with actors, with that of a performance unfolding according to a mass-driven tempo.

Final Thoughts

McKinley’s work, still awaiting rigorous peer review, is worth attention. It asks us to imagine mass not as the glue holding the universe together, but as the metronome pacing its unfolding.

We may be on the cusp of a physics that is not only about what exists, but about when it happens. If he’s right, mass isn’t what keeps the universe in place – it’s what slows it down, just enough for reality to make sense.

Sources

  • McKinley, J.C.W. (2025). The Principle of Delayed Resolution. SSRN.
  • Shapiro, I. I. (1964). Fourth Test of General Relativity. Physical Review Letters.
  • SciTechDaily. (2025). “New Physics Framework Suggests Mass Isn’t What You Think It Is.”
  • Wikipedia. Mach’s Principle.

Why Logic Only Wins When Your Opponent Feels Secure

In business, politics, leadership, and high-stakes negotiations, we often fall into the trap of believing that logic and competence are all that’s needed to win arguments and drive outcomes. After all, facts are facts, right? Yet, anyone who’s been in the room when a pitch falls flat or a strategy session derails knows better. The hard truth is this: logic only persuades when the person you’re speaking to feels emotionally secure; without that, even the most elegant argument can be perceived as a threat.

People, leaders included, don’t operate in purely rational mode. They operate in identity mode. When someone is secure in their role, confident in their own intelligence, and grounded in their self-worth, they can listen to a strong counterargument without flinching. They can say, “I hadn’t thought of it that way,” or “Let’s explore that.” That kind of openness is the hallmark of true professional maturity.

Insecurity changes the playing field. When someone feels uncertain about their competence, status, or place in the organization or society, even a well-intentioned challenge can land like a personal attack. You may be bringing insight and value to the table, but what they hear is, “You’re not smart enough. You’re not in control.” Once you trigger that kind of emotional threat response, logic goes out the window. Now you’re not having a conversation – you’re in a turf war.

I’ve seen this in boardrooms, in project teams, in conflict mediation. A junior consultant presents data that contradicts the assumptions of a senior manager. The numbers are rock-solid. But the response isn’t curiosity – it’s defensiveness. Dismissal. Or worse, undermining. Why? Because accepting the analysis would require the leader to admit a blind spot, and for some, that’s psychologically intolerable.

In politics, particularly in the polarized landscapes of North America and parts of Europe, the same dynamic plays out on a much larger scale: the political left often leans on data, logic, and evidence-based policy proposals, assuming these will persuade. For many on the political right, especially in populist circles, political identity is rooted not in reasoned analysis, but in emotional belonging, cultural defense, and distrust of intellectualism. Logical arguments about climate change, public health, or wealth inequality frequently fail not because they’re weak, but because they challenge the very narratives that insecure political identities cling to for meaning and safety. Until the left acknowledges that logic only works when the listener feels secure enough to engage with it, their arguments, however sound, will continue to bounce off hardened ideological shields.

This is why so many skilled communicators emphasize emotional intelligence alongside analytical sharpness. It’s not enough to be right; you have to be received. If you want your logic to land, you need to create a container of safety. That means pacing before leading. Asking questions before offering answers. Establishing rapport before pointing out gaps. It means checking your tone, your timing, and your audience’s readiness.

There’s also a counterintuitive insight here for those who are confident in their own competence: dial it down sometimes. Over-projecting brilliance can make insecure colleagues feel smaller, and smaller people don’t collaborate well. They retreat, they sabotage, or they lash out. The best leaders aren’t just smart, they’re smart enough to know when not to show it all at once.

Winning with logic is a strategic act, not just an intellectual one. You have to play the long game. It’s not about proving someone wrong; it’s about making them feel safe enough to explore the possibility that they might be. Only then do real insights emerge, and only then can collaboration thrive. So next time you’ve got the facts on your side, pause. Ask yourself: does my audience feel secure enough to hear the truth?

Because if they don’t, even the truth won’t save you.

Quantum Awakening: The Cat Steps Out

For nearly a century, Schrödinger’s cat has prowled the imagination of physicists and philosophers alike, half-alive, half-dead, trapped in a quantum box of uncertainty. It’s been a durable metaphor, capturing the mind-bending strangeness of quantum superposition, where particles can occupy multiple states at once, but only collapse into a definite reality when observed. Now, a series of new experiments have not only extended the cat’s mysterious life but may well have cracked open the lid of that theoretical box.

In one breakthrough, researchers at the University of Science and Technology of China have managed to sustain a quantum superposition in a group of atoms for an unprecedented 1,390 seconds, over 23 minutes. To put that in perspective, most quantum states decay in milliseconds, collapsing under the weight of their environment. These scientists cooled ytterbium atoms to near absolute zero and suspended them in a laser-generated lattice, creating a sort of optical egg carton that isolated the atoms from external noise. The result? A stable, coherent quantum state that lasted longer than any yet recorded. If Schrödinger’s feline had been curled up in that lab, it might have been both alive and dead long enough to get bored.

The implications are profound. Quantum coherence over such extended periods could radically advance quantum computing, quantum communications, and even fundamental tests of the boundary between quantum and classical worlds. It also hints at the possibility of observing, and perhaps one day manipulating, quantum phenomena at larger, more tangible scales. The line between weird and real is getting thinner.

Yet, the story doesn’t end in China. Across the world in Sydney, engineers at the University of New South Wales have been tinkering with the quantum cat’s metaphorical whiskers in a different way. They’ve embedded an antimony atom with eight possible spin states into a silicon chip, creating a quantum bit (qubit) capable of holding significantly more information than the binary states of traditional bits. Each of these eight spin configurations acts like a tiny door into a different potential reality, giving rise to a computational system that can tolerate a degree of error, essential in the fragile world of quantum information.
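
The arithmetic behind that claim is worth a moment. A nucleus of spin I has 2I + 1 distinguishable spin projections; antimony’s nuclear spin of 7/2 yields eight, and eight levels carry as much information as three ideal two-level qubits. A trivial sketch (assuming the spin-7/2 figure reported for this work):

```python
import math

def spin_levels(spin: float) -> int:
    """Number of spin projection states for a nucleus of spin I."""
    return int(2 * spin + 1)

levels = spin_levels(7 / 2)            # antimony's nuclear spin is 7/2
qubit_equivalent = math.log2(levels)   # information content in qubits
print(f"{levels} levels ≈ {qubit_equivalent:.0f} qubits in a single atom")
```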

This “hot Schrödinger’s cat,” as some have dubbed it, refers not just to the technical feat but to the strange warmth of the system: higher energy levels that challenge the traditional assumption that quantum systems must be deeply frozen. By designing systems that can operate at relatively warmer conditions and still retain quantum coherence, scientists are inching toward scalable, real-world applications of quantum logic.

So what does this mean for the cat, and for us? It means we’re closer than ever to pulling that quantum feline out of abstraction and into the world of working tools. The cat is no longer just a paradox. It’s a partner, mysterious, elusive, but increasingly real. And in the glow of the lab’s lasers and chip circuits, it might even be purring.

Sources
• Wired: Scientists Have Pushed the Schrödinger’s Cat Paradox to New Limits
• Phys.org: Quantum Schrödinger’s Cat on a Silicon Chip