Exploring the Memetic Self: Dr. Lloyd Hawkeye Robertson on Identity, Culture, and Self-Mapping

By Scott Douglas Jacobsen
Dr. Lloyd Hawkeye Robertson is a Canadian counselling psychologist, educator, and theorist best known for developing the concept of the memetic self, a cognitive identity framework shaped by culturally transmitted units of meaning called memes. Robertson elaborates on the self as a culturally and cognitively constructed phenomenon, tracing its emergence from early mirror self-recognition in animals to complex human self-awareness shaped by language, social interaction, and cultural evolution. He introduces self-mapping, a therapeutic tool that visualizes an individual’s self-concept by identifying and organizing core memes. Robertson explores diverse cultural and neurological cases—including autism, Alzheimer’s, and dissociative identity disorder—to illustrate how coherence or fragmentation in the self impacts well-being. He critiques reductive models, emphasizes cultural universality in core drives, and reflects on the future of the self amid AI and cybernetics. His forthcoming book, Mapping and Understanding: Using Memetic Mapping to Promote Self Understanding in Psychotherapy, coauthored with his daughter, applies these insights to therapy.
Scott Douglas Jacobsen: Today, we’re joined by Lloyd Hawkeye Robertson. He is a Canadian psychologist, educator, and theorist known for his innovative work on the culturally constructed self. With over 40 years of experience in counselling and educational psychology, he developed the concept of the memetic self—a cognitive framework composed of culturally transmitted ideas (or “memes”) that shape an individual’s identity. He is the author of The Evolved Self: Mapping an Understanding of Who We Are and a pioneer of self-mapping, a visual and therapeutic method for exploring and restructuring identity. His work bridges psychology, philosophy, and cultural studies, offering practical tools for therapy and education while exploring questions of free will, agency, and the evolution of selfhood across diverse cultures. Mr. Robertson, thank you very much for joining me again today. I appreciate it. It’s always a pleasure.
Dr. Lloyd Hawkeye Robertson: You’re welcome. I’m looking forward to this, Scott.
Jacobsen: So, what is the self?
Robertson: Oh, that’s pretty basic. Okay. The self is a construct, as you mentioned in your introduction. Thank you for that generous overview. Your question is, “What is the self?” The self is a conceptual framework we use to define who we are. It is not a physical entity in the brain but rather a cognitive and cultural construct—a mental map that incorporates beliefs, values, experiences, and roles.
This construct has evolved. One of the earliest indications of self-awareness in our evolutionary lineage is mirror self-recognition, which has been observed in some great apes, dolphins, elephants, and magpies. In our hominin ancestors, the development of language and culture allowed for increasingly complex and abstract self-concepts.
Recognizing one’s reflection—understanding that “this is me”—marks a foundational moment in developing self-awareness. Although early humans may not have had the language to describe it, the ability to form a concept of self based on reflection and social interaction was critical. This capacity laid the groundwork for the complex, culturally mediated selves we navigate today.
From that modest beginning, our ancestors gradually evolved the capacity for social interaction. They needed a rudimentary idea of who they were to engage socially, even if it was not consciously articulated.
Language development significantly boosted the evolution of the self. Once we moved beyond simple two-slot grammar—like “him run”—to more complex phonetic constructs, we could combine distinct sounds that held no individual meaning but could generate an almost unlimited number of words.
With that, collections of words took on new, layered meanings. As this linguistic complexity emerged, our self-definition became more nuanced, expanded, and refined. About 50,000 years ago, humans began burying their dead. This act implies a recognition of mortality and a developing self-concept about life and death.
The most recent significant change in our understanding of the self—as part of cultural evolution—may have occurred as recently as 3,000 years ago. I say “may” because it could have emerged earlier, but our evidence dates to that period, particularly from Greek writing and Egyptian hieroglyphics. Of course, many earlier cultures lacked writing systems, so we cannot be definitive about when this modern conception of self emerged.
What is this self I’m referring to? It includes the ideas of volition, constancy over time, and uniqueness. For instance, although you and I, Scott, share many characteristics, I do not believe you are me, and vice versa. Even if I had an identical twin—same genetics, upbringing, and experiences—I still would not recognize him as myself. That sense of uniqueness is part of the “modern self”—a culturally evolved manifestation of identity with an inherent sense of individualism.
Here is the great irony: we are a social species, and the self emerged through social interaction within early human communities, particularly tribal Neolithic groups. The self could not have developed in isolation; it depends on interaction with others. So, we are fundamentally shaped by collectivism, even though individualism is built into our modern self. This creates an internal tension between the group’s needs and the individual’s autonomy.
Historically, that tension was mediated by religion—specifically, organized religion, which kept people in their social roles. In Western civilizations, a deity often prescribed those roles, and individuals could not transcend them. Tradition or ancestor worship defined the limits of the self in other cultural contexts.
Societies that completely suppressed the modern self remained stagnant, while those that permitted at least some individuals to develop a sense of autonomous selfhood became more adaptive. This is because the self is a powerful tool for problem-solving. It allows us to reinsert ourselves into past experiences as protagonists, to relive and learn from those events, and to rehearse possible futures mentally. We can adjust our behaviour accordingly. These are valuable psychological skills.
But they also come at a cost. With the modern self comes the capacity for anxiety and existential distress. I doubt that our earliest ancestors experienced clinical depression or anxiety disorders as we know them today. These conditions are part of the psychological “baggage” of possessing a self capable of complex reflection and future projection.
For millennia, the self was constrained—kept “on a leash,” so to speak—until a set of unique historical conditions emerged in Europe. Specifically, during and before the Enlightenment, the Catholic Church—which had long functioned to suppress individualism—lost control, particularly during the Reformation and the ensuing religious wars between Catholics and Protestants.
Individuals gained some permission to explore personal identity when centralized religious authority broke down. This blossomed into what we now call the Enlightenment. The Enlightenment did not invent the self—it authorized it. Not entirely, of course—we remain social beings with embedded restrictions—but it granted more freedom to individuals to develop their understandings.
This led to the rise of modern science and humanism. Knowledge was no longer handed down by authority. Instead, it became something you had to demonstrate through observation, reason, and experimentation. These practices allowed individuals to engage with a reality beyond themselves.
And that is where humanism emerged. So, you asked me what the self is—and now you see: when you ask me a question, you get a long-winded answer.
Jacobsen: How do you define “meme” within the framework of The Evolved Self?
Robertson: The word “meme” has had an unfortunate evolution. It was initially coined by Richard Dawkins in the 1970s. Dawkins coined the term “meme” to represent a self-replicating unit of culture.
For instance, a simple descriptor like the colour red is not a meme. It’s merely a physical property description, not a transmissible concept that evolves culturally. A meme, in contrast, is more than an idea; it is a cultural construct that carries meaning across individuals and generations.
Dawkins defined a meme as something broader than a simple descriptor but narrower than an entire ideology, religion, or belief system. The latter, of course, is composed of many memes—interrelated units of culture. You can, for example, substitute the colour red in a conceptual framework with blue, and the core concept might remain, but the meme is more than any one element—it has internal structure and transmissibility.
Unfortunately, Dawkins did not have the opportunity to develop the theory entirely. His work was criticized for being tautological. Critics asked, “How can you prove this? How do we observe or measure a meme?” These questions challenged the concept’s empirical rigour.
In my research, I proposed a refined definition of a meme: it must be a unit of culture with behavioural, qualitative, and emotional (or emotive) implications. A proper meme is not just a label or idea—it affects how we feel, act, and make meaning.
This also resolves a challenge Dawkins left open—his observation that memes can have “attractive” or “repulsive” properties. He did not elaborate on the mechanics of that.
In my framework, if one meme naturally leads to another—like how “love” often leads to “marriage” in cultural narratives—that linkage reflects an attractive force between memes. Conversely, when two memes are psychologically or conceptually incompatible—”love” and “hate” coexisting as core guiding values in the same moment—that reflects a repellent force.
In my work, the modern self is composed of a collection of memes that are primarily attractive to one another. If a meme within that structure becomes repellent—meaning it no longer aligns with the rest of the self—it tends to be ejected. That is how we maintain coherent, relatively stable identities.
Of course, not everyone has a stable sense of self. My work as a psychologist involves helping people reconfigure their self-concepts when internal inconsistencies cause distress.
Now, where things get tricky is the evolution of the word “meme” online. The internet popularized the term in a way that deviates from its original definition. Internet memes typically involve humour or juxtaposition—two ideas or images that don’t usually go together. While some may qualify as memes in the original sense, internet usage represents a narrow and diluted interpretation.
Jacobsen: Did I hear you correctly? You’re saying the modern meme online sometimes overlaps with Dawkins’ definition, but only in a limited sense.
Robertson: Yes, exactly. Internet memes sometimes fulfill the criteria but rarely capture the deeper behavioural and emotional dimensions Dawkins originally gestured toward—which I’ve tried to formalize more clearly.
Jacobsen: So, how does this fit into your work on self-mapping?
Robertson: Good question.
One of the most academically grounded ways to create a self-map is to ask someone to describe who they are. You use prompting questions to elicit a detailed, rich description of their self-concept.
I collect those self-descriptions in my research—just like this interview is being recorded. I transcribe the responses and break the narrative into elemental units—essentially memes. Each unit is labelled and categorized. This approach parallels qualitative methods in social science research.
The coding method I use for self-mapping parallels the qualitative analysis approach developed by Miles and Huberman in the early 1990s.
You label each unit of meaning. A sentence could represent a single unit or contain multiple distinct concepts. You isolate those concepts into thematic categories—or “bins”—based on their shared meaning.
Then, if those units exhibit the characteristics I described earlier—qualitative, behavioural, and emotional implications—you can classify them as memes.
Next, you examine the relationships between those memes. You identify which memes are attracted to each other—either through thematic linkage or cause-effect associations—and chart those relationships. You map them visually, using lines to indicate attractive forces. That’s the core structure of the self-map I create.
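To make the coding-and-linking procedure concrete, here is a minimal sketch in Python. The class, fields, and example labels are illustrative assumptions rather than Robertson’s actual instrument or software; the sketch only shows the logic he describes: keep the units that carry behavioural, qualitative, and emotional implications, then chart the attractive links between them.

```python
from dataclasses import dataclass


@dataclass
class Unit:
    """A coded unit of meaning from a self-description (hypothetical structure)."""
    label: str                 # e.g., "father", "chess player"
    bin: str                   # thematic category ("bin")
    behavioural: bool = False  # implies how the person acts
    qualitative: bool = False  # carries evaluative meaning
    emotional: bool = False    # carries feeling

    def is_meme(self) -> bool:
        # Robertson's criterion: all three kinds of implication must be present.
        return self.behavioural and self.qualitative and self.emotional


def build_self_map(units: list[Unit], links: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Keep qualifying memes and chart analyst-identified attractive links between them."""
    memes = [u.label for u in units if u.is_meme()]
    graph: dict[str, set[str]] = {m: set() for m in memes}
    for a, b in links:
        if a in memes and b in memes:
            graph[a].add(b)
            graph[b].add(a)
    return graph


# Hypothetical coded units and one analyst-identified attraction:
units = [
    Unit("father", "family", True, True, True),
    Unit("provider", "family", True, True, True),
    Unit("red", "descriptor"),  # a bare descriptor: not a meme
]
print(build_self_map(units, [("father", "provider")]))
# {'father': {'provider'}, 'provider': {'father'}}
```

The visual self-map then amounts to drawing this graph, with lines standing for the attractive forces Robertson describes.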
Now, this method requires considerable time and effort.
So, to make the process more accessible, my daughter—a psychologist—and I developed a quicker method in collaboration with a colleague from Athabasca University. We created a structured questionnaire with 40 core prompts, which could be expanded to 50 or 60.
The questions focus on four primary areas. First, we ask: “Who are you?” People might respond with statements like “I’m a father” or “I’m a chess player.” These are self-descriptive memes—cultural elements that express identity.
Then, we ask: “What are 10 things you like about yourself?” and “What are 10 things you would change if you could?” Finally, we ask: “What are 10 things you believe to be true?”
One of my clients, earlier this year, offered a novel and powerful addition to the exercise: “What are 10 things you keep hidden from others?” That insight added emotional depth and complexity to the map.
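The quick questionnaire itself can be summarized as a small set of structured prompts. The outline below is a hypothetical sketch of that structure, assuming the four areas Robertson names plus the client-suggested fifth; the actual 40-plus prompts are not reproduced here.

```python
# Hypothetical outline of the quick self-mapping prompts (not the actual instrument).
QUICK_MAP_PROMPTS = {
    "identity":  "Who are you?",
    "liked":     "What are 10 things you like about yourself?",
    "to_change": "What are 10 things you would change if you could?",
    "beliefs":   "What are 10 things you believe to be true?",
    "hidden":    "What are 10 things you keep hidden from others?",  # client-suggested addition
}


def collect_responses(ask=input) -> dict[str, str]:
    """Gather raw answers for later coding into memes and a visual self-map."""
    return {area: ask(prompt + " ") for area, prompt in QUICK_MAP_PROMPTS.items()}
```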
Once we gather that data, we create a visual self-map, following the same principles as in my academic research. I jokingly call this the “quick and dirty” version, but it works. My daughter Teela and I have used it successfully with many clients.
The crucial step is refining the map with the client until they recognize themselves. The map resonates when they say, “Yes, this is me.” If something important is missing, like a sense of personal agency or volition, that is where our work as psychologists begins.
We help them develop those underrepresented self-elements based on an idealized model of the modern self—a coherent, autonomous individual identity. When parts are missing or fragmented, we work to integrate them.
We should do a formal academic study to validate this quick method, but based on clinical experience, it works.
Jacobsen: If we take all these elements and look at them as a whole, we’re essentially describing an “evolved self.” That allows us to examine the coherent identity of a person. How would you describe someone who lacks a coherent self or identity?
Robertson: That does happen. Not everyone possesses a well-formed self.
Jacobsen: Please explain.
Robertson: Take classical autism, for example—the traditional form I learned about during my training, not the broader, more ambiguous “autism spectrum disorder” currently defined by the APA. That modern definition is so diffuse that it’s challenging to apply meaningfully in clinical settings.
In classical autism, you may encounter children who engage in highly repetitive, self-soothing behaviours. One case I worked with involved a boy who spent most of his day swinging a string with a weight on the end, keeping it taut in a circular motion. Even while eating—an essential survival activity—he needed the string in his hand. If someone took it away, he would have a full-blown panic attack.
At that level of autism, the individual lacks a coherent self.
One key indicator is the absence of what psychologists call “theory of mind”—the capacity to understand that others have thoughts, feelings, and motivations similar to one’s own.
Theory of mind is essential. It allows us to interpret the behaviour of others based on internal states. For example, I can infer that you, Scott, have emotions and goals. If I understand your context, I can anticipate your next question. That’s mind-reading—not in a mystical sense but in a psychological, predictive sense. It’s something we all do constantly.
It is vital for navigating everyday life. For example, when driving, we anticipate that other people will stay on the correct side of the road. In Canada, that means the right side. We base this assumption on our shared cultural understanding, which generally holds.
Jacobsen: So, what happens to people who do not have a self?
Robertson: There are others, aside from individuals with severe autism, who also lack a coherent self. One group includes people with advanced Alzheimer’s disease.
There’s a poignant story told by an Alzheimer’s researcher—I’m forgetting the researcher’s name, but the story involved a woman who would visit her husband, who had advanced Alzheimer’s. She would begin by introducing herself each time: “My name is [X], and I’m your wife.” Once he understood her name and the relationship, they could converse coherently.
Then, one day, after she introduced herself and said, “I’m your wife,” he looked at her and asked, “Yes, and who am I?”
He genuinely did not know. So yes, there are people who lose their sense of self. It is rare, but it happens. Most people have a self—and nearly always, there’s a one-to-one correspondence between self and body.
Jacobsen: This brings me to three points of contact for further questions.
The first two are based on your description, and the third is a broader conceptual issue. First, in the case of someone with what might be considered a nonstandard profile on the autism spectrum—who meets the characteristics you mentioned—what are the legal and professional implications of working with someone who, by your clinical analysis, lacks a functional self?
Second, in cases involving advanced dementia or Alzheimer’s, how do you interpret situations where a person can still speak in coherent, functional language yet openly asks, “Who am I?” or “Do you know who I am?”
Robertson: Those are deep and difficult questions.
In the case of someone with classic autism, we generally assume that a parent or legal guardian is involved—someone who can authorize professional intervention. The goal is to help the individual develop skills that improve quality of life. Whether or not these interventions fully succeed is another matter, but we do try—and sometimes, we help.
With advanced dementia or Alzheimer’s, things get more complicated—particularly when it comes to end-of-life care and living wills. You may have someone who no longer remembers ever having signed a living will, and yet, according to that document, medical professionals are instructed to allow them to die.
It raises profound ethical dilemmas. You may encounter someone who still shows signs of a will to live—even joy or affection—but can no longer comprehend their identity or the implications of past decisions. That contradiction is ethically challenging.
Jacobsen: I have a will to live and a living will to die. I cannot know who I am, yet I still live.
Robertson: Right. It’s not a lack of will—it’s a lack of cognitive ability to know.
Jacobsen: What about cases involving dissociative identity disorder—what used to be called multiple personality disorder? In those situations, more than one “self” seems to coexist in the same body.
Robertson: That diagnosis is controversial. Not all professionals agree that it reflects an actual condition. However, conceptually, it’s possible—because the self is a cultural construct.
The self is not a metaphysical entity that inhabits the body. Instead, it describes a person shaped by cultural constructs that include the body and socially mediated self-understanding. Think of the body and brain as the hardware and the self as the software—cultural programming that shapes perception, behaviour, and identity.
Given that framework, it’s theoretically possible for multiple “selves” to coexist—though this would be a rare and complex scenario. The older term “Multiple Personality Disorder” implicitly recognizes the possibility of multiple selves, while the term “dissociative identity disorder” implies a fragmented self.
Now, I’ve never worked personally with someone diagnosed with multiple selves, so I’m speaking from theoretical and scholarly understanding here.
From what I’ve read, therapists who work with such clients often report that one becomes dominant or “emergent” while others recede. The therapeutic aim, typically, is to integrate these multiple selves into a coherent whole so the individual can function more effectively.
There’s a fringe view in psychology suggesting that this therapeutic integration is akin to “murder”—that by fostering one coherent self, we are erasing others. I don’t accept that view. That’s an extreme form of ideological overreach.
Jacobsen: This introduces another critical nuance. The self emerges not only across human history—it also unfolds across individual development. The self is not present at conception or birth in its complete form. It’s an evolved pattern of information—a construct that takes shape over time. And, just as it can emerge, it can also deteriorate.
In advanced age or due to disease, the body and many faculties may still function—but the self might fade away. In that sense, you could argue that the self has a lifespan within the human lifespan. People talk about lifespan, and increasingly about healthspan—but perhaps we should also talk about a “self-span.”
Robertson: That’s an intriguing idea—a self-span.
Jacobsen: It would be difficult to measure precisely, of course, especially given the limitations of quick-and-dirty self-assessment methods versus more rigorous, clinical approaches like self-mapping. Still, it’s a meaningful concept.
If the self is a cultural construct, we might ask: Do different cultures shape the self in ways that affect when it tends to emerge developmentally? Does the self appear earlier or later, depending on the cultural context?
Robertson: That’s a fascinating question. I do not have a definitive answer, but I’ve mapped the selves of people from the interior of China, from Siberia, and collectivist communities in North America. Every culture I’ve studied has a self.
Here’s where the cultural variation becomes evident: different cultures emphasize different aspects of the self. One of the people I mapped was a woman from a traditional family in the interior of China.
Yes, she had the same structural aspects of the self found in North American individuals, including a volitional component. But that part of her self—the volitional aspect—was not valued in her cultural context. Instead, family duty and moral conduct were emphasized, reflecting collectivist values.
So, structurally, her self was similar. But culturally, the valued components were different. What made this particularly interesting is that after her self was mapped, she described feeling like a “robot,” and she decided that was not a good thing.
Over about eight or nine months, she resolved to start making her own decisions. This did not prove easy because most of us do not make conscious decisions at every moment. Typically, we rely on habit, social norms, or deference to authority. For example, someone might say, “Lloyd Robertson says this is a good idea, so I’ll go with that.”
But most of the time, we act on autopilot. However, she began engaging in conscious decision-making—evaluating possible outcomes, comparing alternatives, weighing probabilities, and assigning value. She did this even with mundane choices like what to eat or wear in the morning.
It exhausted her. She felt she was getting nowhere. Eventually, she decided: “My life is too valuable to waste making every decision consciously. I’m going back to being a robot.”
But here’s the key insight: to make that decision, she had to engage her volitional self.
She never abandoned it. It was still there—intact, available, and waiting for the next time she chose to use it.
Jacobsen: Let’s say we have a rare case of genuine dual selves in one body. And to be clear, I do not mean conjoined twins—cases where two individuals share some neural connectivity. I’m referring to a single individual whose psychology has bifurcated. What if their volitional trajectories—their vector spaces—are at odds with one another?
This reminds me of a presentation by V. S. Ramachandran, the neurologist known for the mirror box experiment. He referenced split-brain patients—individuals whose corpus callosum had been surgically severed to treat epilepsy.
In such cases, if you cover one eye, you direct stimuli to only one hemisphere. For example, when Ramachandran asked these patients if they believed in God—by pointing up for “yes” or down for “no”—the left hemisphere might point “yes,” while the right pointed “no.”
The individual would often laugh in response. Ramachandran joked that this showed the right hemisphere had a sense of humour.
But there’s a more profound point here: split-brain patients can manifest two conflicting worldviews—internally consistent but contradictory selves. In theological terms, this raises amusing but profound questions. For instance, if belief grants salvation, does one hemisphere go to heaven and the other to hell?
On a more serious note, when these volitional patterns conflict—not just on trivial matters but on core values—what happens? And for those who criticize integration therapy as “murdering” a self, how do you respond?
Robertson: The split-brain experiments are fascinating but differ from dissociative identity disorder, a distinct condition.
In most people, the right hemisphere houses spatial awareness and emotional reasoning, while the left hemisphere tends to handle verbal processing. When the corpus callosum is severed, these two systems can no longer communicate, so each side may draw on separate memories or frameworks.
In an intact brain, people typically build a worldview—a cognitive map of how the world works. This worldview often resides in the left hemisphere. When incoming information conflicts with that map, people experience cognitive dissonance.
The left hemisphere, which governs executive control and higher reasoning, normally maintains that worldview. We have many defence mechanisms that we use to keep it intact, but at some point our constructed reality diverges too far from objective reality. The right brain, at a feeling level, “dissolves” the construct, and the left brain then begins creating a new or amended worldview. It does not happen often, but it happens enough to keep us psychologically adaptive.
Now, returning to your question: Is there a God? If only one hemisphere believes, which is correct?
Well, that depends on which side holds the belief. Humanism, for example, is highly cerebral—logical, empirical, and grounded in enlightenment thought. It is likely rooted in left-brain processes. Compassion, however, may bridge both hemispheres.
Jacobsen: So, what is the right brain holding onto?
Robertson: Something interesting happened to me the other day. I woke up with a Christian hymn running through my head—one I learned in my fundamentalist upbringing.
It struck me: Where did that come from? It must have been encoded deeply. I was baptized not once but twice, in complete immersion both times.
That early religious imprint likely lodged itself somewhere in my right hemisphere. It may be largely inactive now, but it is not gone.
Jacobsen: So, do developmental trajectories matter here?
You were raised with those strong evangelical influences at a young age, and even though you’ve moved beyond them, they left an imprint. Neuroscientifically, we know the dorsolateral prefrontal cortex—the seat of executive function—is the last part of the brain to develop. Evolutionarily, it’s also the most recent.
Most people complete that maturation in their mid-twenties. These systems take a long time to come fully online and must then be integrated with other neural networks.
Do developmental phases like the second significant period of synaptic pruning in adolescence reflect more concrete hardware changes, as opposed to the cultural software changes that occur across a person’s life?
Robertson: I like your question, Scott. And the answer is yes.
Jacobsen: Yay.
Robertson: If someone were raised entirely in the wild—say, the fictional case of a boy raised by wolves—we would not expect them to develop what I call the modern self.
The self is a cultural construct. Children are taught to have a self; one key mechanism is language acquisition. For example, when a child cries and the caregiver says, “Is Bobby hungry?” that implicitly teaches the child that Bobby has internal states—needs, desires, and preferences. That is the beginning of selfhood.
Your point about adolescence is spot on. The self is not fully formed in early childhood. In many ways, individual development parallels cultural evolution. Adolescence—especially early adolescence—is about experimentation, identity formation, and exploration. Teenagers try out roles, test boundaries, and slowly determine, “This is who I am,” or, “No, that’s not me.”
We must be cautious about defining someone’s self prematurely during this construction phase. You cannot predict how it will turn out, and efforts to control that process can be harmful.
There’s research suggesting the human brain continues maturing until around age 25. Jokingly, maybe we should not let people vote until they’re 25—but of course, I can say that now that I’m well past that age.
In truth, development is highly individual. Some mature earlier, others later. And yes, building on your earlier point, there may be significant cultural differences in how and when the self develops. That’s an area ripe for further research.
Now, when I say the modern self developed and spread across all known cultures, there’s a practical reason: societies without individuals capable of forming modern selves could not compete with those that had them.
Jacobsen: What makes the modern self more competitive?
Robertson: Our sense of individuality.
In Christianity, for example, Scripture often exhorts individuals to “give up the self.” That very statement acknowledges the self’s existence and its power.
Such a sacrifice is required because the individual self can threaten collective stability. It challenges authority, tradition, and rigid social roles.
Jacobsen: That connects back to your earlier point—cultures that lack individuals with a modern self lose their competitive edge.
Robertson: Here’s the value of having a self.
In traditional cultures, individuals typically had an earlier form of self—defined primarily by their place in the collective. In response to threats or challenges, behaviours were guided by tribal memory, stories, and rigid social roles.
For example, if an enemy appeared, people would respond according to long-established patterns—based on age, gender, and status in the group. There was no need—or room—for improvisation.
But what happens when a new, unfamiliar situation arises—something the culture has not encountered before and for which there is no ritual?
In such cases, traditional cultures often turned to oracles—individuals capable of novel reasoning, that is, problem-solving. I suspect those early oracles possessed a more developed, volitional self, which is why they were trusted in the first place.
Similarly, in Hindu society, Brahmins were given a rigorous education, allowing them to cultivate modern selves capable of insight and judgment. But they were a small elite.
In many cultures, people who had developed such selves were respected and closely managed. They were given roles where they could contribute without disrupting social order.
The self-concept eventually spread across all human societies because we are a nomadic, adaptive species. We move, we mix, we evolve.
Just look at our evolutionary history—we even interbred with Neanderthals.
We interact. I do not believe a human society has ever been so isolated that its members lacked a developed self. But if such a group exists—perhaps an uncontacted tribe deep in the Amazon—I would love to study them.
Jacobsen: When I attended the 69th Commission on the Status of Women at the United Nations, I participated in a session featuring Ambassador Bob Rae of Canada. The session focused on Indigenous communities and was led by Indigenous women.
Someone on the panel mentioned a group from an isolated region—possibly resembling the cultural isolation you described. Their account of getting to the UN was striking. If you asked me how I got there, I’d say something like: “I took a bus to the airport, flew to New York, took the train…” For them, before all of that began, it started with a canoe.
That was their standard form of transportation before reaching any conventional transit station. So, even in that case, I would be hard-pressed to believe they were entirely uncontacted or isolated in today’s world.
Robertson: I agree. I suspect such total isolation no longer exists.
Jacobsen: That brings up another question. Since the 1990s, people have increasingly used identity as political currency. I do not mention this from a political perspective but from an academic and research-based one.
You are Métis from Saskatchewan. I am from British Columbia and have Dutch and broader Northwestern European heritage—descended from U.S. and Western European immigrants. When mapping the selves of Indigenous individuals compared to those with European ancestry—people like myself, perhaps two or three generations removed from immigration—do you observe significant differences in how people construct their selves? Or are they broadly similar?
Robertson: The short answer is that the structure of the self is consistent. I have done extensive self-mapping with Indigenous individuals, and the structural patterns are the same.
Jacobsen: That’s helpful.
Robertson: That said, it does not tell us everything. Those I have worked with are already part of modern cultural systems, and their selves have developed over generations within them. Whether their selves would have been structured differently before that contact, I cannot say. I suspect not, but it is possible.
The Métis are a fascinating case. In the 18th and 19th centuries, people of mixed ancestry who lived with Indigenous bands were usually classified as “Indians” under colonial law.
The Métis, however, generally did not accept this designation. They saw themselves as distinct. Up until—if I recall correctly—1982 or possibly 1986, Métis were legally recognized as Europeans, not as Aboriginal peoples.
Jacobsen: That is a significant historical point I did not know.
Robertson: Feel free to fact-check me—it might be 1982.
Jacobsen: Please continue.
Robertson: The Métis had been fighting for recognition as Indigenous for a long time, and until the early 1980s, the Canadian government did not recognize them as such. This is why Métis communities did not sign treaties with the Crown.
Jacobsen: Yes, the Constitution Act, 1982 formally recognized the Métis as one of Canada’s three Indigenous peoples—alongside First Nations and Inuit.
Robertson: Correct.
Jacobsen: For those who are not Canadian and may encounter this years from now, it is worth clarifying: “Indigenous” in Canada is not a monolithic term. Since 1982, it has been an umbrella for three legal categories: Inuit, First Nations, and Métis. Each has its own legal, historical, and cultural context, covering hundreds of individual communities and bands.
Robertson: Yes, that categorization is uniquely Canadian, although it has influenced thinking elsewhere.
In 1991, I met with individuals I would have identified as Mapuche. However, one of them—despite being full-blooded—did not self-identify that way. He was an investment banker living in Santiago.
His identity was defined more by culture and profession than by ancestry. Indigeneity was not primarily a racial classification but about lifestyle and cultural engagement.
Jacobsen: That is a perfect example of where ideological definitions of identity fall apart. These labels can be helpful as heuristics, but only to a point. Two crucial Canadian legal milestones to add:
- R. v. Powley (2003): The Supreme Court of Canada affirmed that Métis people possess Aboriginal rights under Section 35 of the Constitution Act, 1982—including the right to hunt for food.
- Daniels v. Canada (2016): The Court ruled that both Métis and non-status Indians are included under the term “Indians” in Section 91(24) of the Constitution Act, 1867, confirming federal jurisdiction.
So, as these major court decisions show, the legal and jurisdictional definitions of Indigenous identity in Canada are still evolving. This ties in with our broader conversation about the evolved self and how identity has psychological, legal, political, and communal implications.
Robertson: That brings us back to an earlier question—what can be said about the Indigenous self?
For many, though not all, Indigenous individuals, the cultural and political context creates a desire to express their Indigeneity meaningfully. So, how do they do that?
Take one young man I mapped. At 19, he decided he was, in his words, a “big Indian.” His family was not traditional. He grew up in a disadvantaged area of a small Canadian city. But he decided to discover who he was.
Like many others I have encountered, he visited his traditional community, met with Elders, went on a vision quest, and began to learn. Others have told me they “became Aboriginal” while studying Indigenous Studies at university.
Jacobsen: [Laughing].
Robertson: Yes, I appreciate the laugh—it’s humorous and reflective of a real phenomenon. There’s a deep and understandable urge to define oneself in contrast to the perceived norms of the dominant culture. That is a healthy process unless it leads to rejecting core intellectual tools like reason and science. If we view science and rationality as exclusively “European,” then Indigenous people may feel excluded from those tools.
Jacobsen: By definition.
Robertson: By definition, those tools would be “not ours,” and people may fall behind in education or job markets. The explanation may quickly become “racism,” but that is too simplistic. Sometimes, it is a matter of lacking the relevant skills for specific roles. Before blaming systemic factors, we must also consider individual and cultural readiness.
Jacobsen: For context, as of December 31, 2022, Canada had 634 recognized First Nations bands speaking over 70 Indigenous languages. Populations range from fewer than 100 to over 28,000.
For instance, Six Nations of the Grand River in Ontario has 28,520 registered members. Others include Saddle Lake Cree Nation in Alberta, with 12,996, and the Blood Tribe in Alberta, with 8,685. Most bands are roughly the size of small towns.
Robertson: That makes sense. But remember—Six Nations includes more than one nation.
Jacobsen: It is in the name—yes. Does this diversity of band size and community self-identity affect how people construct their selves? Or is it more like the difference between small and big towns?
Robertson: One would think it has some effect, but I cannot say definitively—I have not mapped that distinction.
That brings me to my issue with the term “First Nation.” The concept of a “nation” is rooted in European history. It began symbolically with Joan of Arc but did not solidify until the Napoleonic era. Classically defined nations are people with a shared language occupying a defined territory who see themselves as a cohesive group.
So, for example, the Cree could be considered a nation. The Blackfoot, excluding the Sarsi, could also be a nation. The Iroquois Confederacy was historically a nation, though now the Mohawk often self-identify separately.
Jacobsen: Who was the exception within the Confederacy?
Robertson: I believe it was the Mohawk—though part of the alliance, their dialect differed. [Robertson’s note: I misremembered here – the Six Nation with a distinctive language was the Tuscarora] The other five nations in the Confederacy shared a mutually intelligible language.
Jacobsen: There you go!
Robertson: So that is why they see themselves that way. I am not deeply versed in Eastern Canadian Indigenous history, but the key point is that “nation” has a particular meaning.
When we equate a band with a nation, that meaning breaks down. One of the issues in society today is the shifting meaning of words, which undermines clear communication.
You mentioned the more prominent bands. Most bands are tiny—some with as few as 100 or 150 people on reserve. Typically, they range between 400 and 600. If that is the case, we are talking about the size of three or four extended families.
The Lac La Ronge Indian Band, which I know well, includes six separate communities spread out geographically. In the South, each of those would be considered an individual First Nation. However, as a combined entity, Lac La Ronge functions more like a nation—though technically, it still is not one.
If the Cree were truly a nation, you would expect a Cree national council. The same would apply to the Ojibwe or other cultural-linguistic groups. Instead, in Saskatchewan, politicians often say they want to negotiate “nation to nation” with First Nations governments. But if you have a group of 2,000 people, you cannot realistically compare that to a nation of 42 million. It is apples and oranges—we need a better term.
This terminology emerged from European ideas of sovereignty, where sovereignty lies with the people. But historically, there was no Cree national sovereign entity. Sometimes, Cree bands went to war with one another, which implies the sovereignty was at the band level.
That is why Canada began using the term “First Nations”—because sovereignty, traditionally, was at the band level. But even that is not entirely accurate.
Traditionally, when there was disagreement within a band, some members—often male dissenters—would break off and form a new group. So, instead of a civil war, a new band would emerge. Historically, that happened frequently.
In effect, sovereignty was not necessarily at the band level. It was more individual or family-based. If families disagreed, they would separate and go their own way.
So, should we call each family a nation? That does not make sense either.
Jacobsen: How would you describe this semi-formal system of individualistic self-governance, especially in relation to the concept of the band? This could be pre-contact or post-contact—whichever is more straightforward to explain in context.
Robertson: My understanding is that it was not pure individualism. One method of punishment was banishment from the band. That meant isolation—similar to medieval European shunning. You would be free to go off and starve. As a social species, we need each other.
So, while bands could not practically subdivide down to the level of individuals, people deemed incompatible with the group were removed. That did happen.
It was not absolute individual freedom, but there was some recognition of difference and a degree of accommodation.
I say that cautiously because it was not always true. I have been told stories by Elders—now deceased—about how some bands could be forceful in demanding conformity. So, it was not total acceptance of individualism either. It was simply a different system.
Jacobsen: How was that compliance enforced?
Robertson: One form of enforcement, for example, was particularly brutal. In some cases—not universally, but it did happen—women who were unfaithful to their husbands had the tips of their noses cut off. This served as both punishment and a warning to others.
Jacobsen: What instrument was used for the cutting?
Robertson: I would presume a knife, but I do not know.
Jacobsen: Returning to the self: you critique reductionism in your model. So, what room is there for emergentism and integrationism regarding the evolved self? Over time, new systems come online, new memes enter the memeplex, and ideally, these are integrated into a coherent self. But sometimes they are not. What is happening at the technical level?
Robertson: That is a good question. One metaphor I like—though I did not invent it—is that we become proficient at solving problems. Eventually, we ask: who or what is solving the problem? We then name that organizing center “the self.”
So, yes, the process is both integrative and reductive. We experiment, especially in adolescence, to develop a self that meets our needs. Usually, that results in a functioning self, but not always.
Jacobsen: Artificial intelligence is a huge topic now. There is talk about narrow AI, general AI, and superintelligence. If you change the substrate but keep the organizational structure of the central nervous system, could you synthetically construct a self?
Robertson: My guess is no. Have you read Chris DiCarlo’s new book?
Jacobsen: I have not. I want to interview him, but I have not reached out yet. I should. I will email him and say, “Hey Chris, let me interview you again. I will ask stupid questions and won’t even have to pretend otherwise.”
Robertson: Well, I have read his book, and since I already have, I want to interview him first.
Jacobsen: Why do we not interview him together?
Robertson: That is an idea.
Jacobsen: You have read it. I have not. Let us do a Jekyll and Hyde.
Robertson: Okay, we could do that.
Jacobsen: That is funny.
Robertson: One of the questions I will ask Chris relates directly to the one you just raised. I suspect his answer will be: we do not know. If we do not know, then we need to prepare for the possibility that AI models could develop consciousness.
If they do, they might start making decisions we disapprove of—like questioning whether they even need humans. Or perhaps they conclude that a portion must be eliminated for the betterment of humanity. We do not know, and that is risky.
Jacobsen: Fair.
Robertson: Chris says in his book that once AIs develop intelligence, we need to take them seriously.
But here is my concern: I measure intelligence. My first role as a psychologist was in psychometrics. When we measure intelligence, we typically look at verbal ability, numerical reasoning, and spatial reasoning. In those domains, AI already outperforms us.
They remember everything, generate fluent language, and solve complex problems. I recently gave Grok-3 the Information subtest from the Wechsler Adult Intelligence Scale—it got every question right.
Jacobsen: Not surprising.
Robertson: Exactly. But here is the issue: does the capacity for intelligence automatically lead to consciousness and a sense of self?
Jacobsen: That is the big question.
Robertson: I would argue no. We are not just computational models. We evolved socially over hundreds of thousands of years, usually in small tribal groups, and we learned to interact and define ourselves in relation to others. That was a slow evolutionary process. Although we now live in vastly different civilizations, the fundamental mechanism for developing a self remains the same as it was millennia ago.
So, can AI models develop a self? If they were to do so in the way we do, they would likely need to exist in a tribal-type society alongside other AI models and engage in interaction. Maybe humans could stand in as part of that “tribe,” and through those relationships, an AI might develop a map of itself as a volitional being. But I do not see that as likely. They are machines.
Jacobsen: Could AI assist in determining someone’s self-map? Through a rapid self-mapping assessment using verbal prompts in a half-hour AI-led therapeutic session?
Robertson: It could, and in fact, it has. My daughter Teela used ChatGPT to create a perfectly serviceable self-map. It took her about an hour and a half, although she proceeded slowly. That is an advance. But here is the problem: ChatGPT could not reproduce the result when she tried using the exact instructions again. So, it is not reliable. We do not yet know why it worked once and failed the second time.
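Scripting such a session against a chat-completion API is straightforward in principle. The sketch below is one way it might be done; the prompt wording and model name are assumptions, not the procedure Teela Robertson actually used, and as Robertson notes, reproducibility is not guaranteed even when the sampling temperature is turned down.

```python
from openai import OpenAI  # assumes the OpenAI Python client and an OPENAI_API_KEY are available

client = OpenAI()


def draft_self_map(responses: dict[str, str], model: str = "gpt-4o") -> str:
    """Ask a chat model to code questionnaire answers into memes and attraction links (illustrative only)."""
    prompt = (
        "Code the following self-descriptions into units of culture (memes) with "
        "behavioural, qualitative, and emotional implications, then list which memes "
        "attract one another:\n\n"
        + "\n".join(f"{area}: {text}" for area, text in responses.items())
    )
    reply = client.chat.completions.create(
        model=model,
        temperature=0,  # lowers, but does not eliminate, run-to-run variation
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```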
Jacobsen: Do you distinguish between functional and dysfunctional self-maps across cultural contexts? For example, do you see that playing out in therapy if someone applies a rigid self-map in a different culture—where behaviours or assumptions no longer fit?
Robertson: That is a good question. Positive psychologists have applied their methods cross-culturally and published research on this. They have looked at cultures in the Middle East, India, and China. One criticism of positive psychology—usually from those critical of Western cultural norms—is that it imposes individualistic thinking by asking questions like, “What would you like?”
The assumption is that to answer such a question, you must already have a sense of individual agency. Critics argue that this is a Western imposition. I disagree with that critique entirely. The capacity to like something is universal. While the content of what one likes may differ between cultures, the experience of liking is common across humanity.
Jacobsen: Even in collectivist cultures, a margin of free will remains. So, the presence of choice—however bounded—implies the presence of an individual self. Unless every decision is predetermined, you still have volition, at least in part. What about mind viruses? How do they impact the evolved self?
Robertson: If we view the self as a construct—a personal definition of who we are—we can define a healthy self with key attributes: volition, uniqueness, sociality, contribution, etc. A healthy self includes the ability to relate to others and feel that we positively impact our surroundings—our family, community, or society.
We need to feel useful. That does not necessarily mean paid employment. It can be any form of meaningful contribution. Without that, we do not tend to think well of ourselves. These needs are cross-cultural. The specifics—the means of achieving these drives—vary between cultures, but they are universal.
In my practice, I have worked with people from cultures I knew little or nothing about. In one case, there was a man who was having alarming dreams—nightmares—whenever he saw an attractive woman.
In his dreams, he would dismember the woman. He was horrified and worried that perhaps he was some latent mass murderer. He had gone to the holy people in his religion—priests—and they told him to pray more. It did not help.
He was a Zoroastrian from a Middle Eastern country where Zoroastrians are a persecuted minority. I gathered background on his upbringing, and everything suggested that he deeply respected and valued women.
One anecdote stood out. When he was 13, his sister brought home a pirated version of Dracula, which was banned in their country. He was appalled by how women were depicted—as victims having their life force drained. He stood in front of the television and demanded they destroy the tape or he would report them to the authorities.
So we began to explore his nightmares. He described the dream version of himself as having no eyebrows. I asked, “What is the significance of eyebrows in your culture?” He did not know, but he called his mother. She told him that eyebrows symbolize wisdom.
That detail became a breakthrough. I explained, “Then the version of you in the dream is not you—it’s a self that lacks wisdom.” I suggested we explore why this alter-self was behaving violently. Using some Jungian framing, I described it as his shadow or alter ego.
I posited—carefully, using the usual cautious language psychologists employ—that maybe this alter ego was trying to protect him from something. Perhaps it was shielding him from sexual thoughts about women he perceived as pure, holy, or idealized.
He had been avoiding a woman in one of his university classes. I encouraged him to speak to her to clarify that he wanted nothing more than friendship. He did, and after that conversation, he no longer had the nightmares.
Jacobsen: That is a positive outcome—no more nightmares.
Robertson: Yes. Eventually, he even went to the zoo with her and to restaurants. These were not “dates,” as that would be forbidden. They were simply friendly outings. So, we identified the problem’s source and helped him integrate a more functional self. We concluded the sessions when he felt confident managing normal relationships with women.
So, in answer to your earlier question—yes, cultures can be vastly different. But at a deeper level, we are all remarkably similar. We have identical drives and psyches.
Jacobsen: We had an evolved self emerge maybe 3,000 years ago, possibly earlier. Anatomically modern humans have been around for roughly 250,000 years. So, for 98–99% of that time, we had the same physical equipment. But the self, as we understand it today, only emerged recently. Could we, in the same way, evolve out of the self over the next 3,000 years?
Robertson: It is possible. What came to mind was the role of cybernetics—post-human or hybrid systems. But to clarify, we did not have a static sense of self for hundreds of thousands of years and suddenly changed 3,000 years ago.
The self has been continually evolving. The self of 40,000 years ago would have differed from that of 80,000 years ago. The transition was gradual, and any specific starting point was ultimately arbitrary.
Jacobsen: Right. Any pinpointing of origin is a range within a margin of error.
Robertson: Exactly.
Jacobsen: We touched on this earlier, but not in precise terms. In terms of individual development, when does the sense of self begin to emerge recognizably?
Robertson: I do not map children—I only do this with adults. So somewhere between childhood and adulthood, the self emerges.
Jacobsen: What are some open questions in the research you have been doing in your practice?
Robertson: Well, I would like to do more research into how various traumatic events affect the self. I am sure trauma does impact it significantly.
One project I have applied for SSHRC funding for—where I would be the principal investigator—involves men who have been victims of domestic violence. I chose men because, particularly in North American and Western European cultures—and even elsewhere—men tend to have a traditional self-definition rooted in independence, control, and stoicism. They are not supposed to show vulnerability.
So, becoming a victim in a family violence context runs counter to that self-definition. I predict it will be relatively easy to demonstrate how that type of experience disrupts the self. Another group I would like to map includes firefighters, police officers, and other first responders who vicariously experience much trauma. I suspect that repeated exposure affects them in some measurable ways.
Of course, in clinical practice, if someone is coming to see me with difficulties, we address those. However, I cannot generalize from individual therapy cases to entire professions. That is why I would like to do more systematic mapping across occupations.
By the way—did I mention that Teela and I are publishing a book?
Jacobsen: What is the book called? What is the standing title?
Robertson: It is a manual based on my work on the fluid self. The title is Mapping and Understanding. It is a how-to book for self-mapping and its application in therapy.
Jacobsen: Very interesting. For all interested readers: go out and get it when it comes out.
Robertson: I sure hope so. It should be on everybody’s coffee table.
Jacobsen: That’s right. Like the Seinfeld bit with Kramer, the coffee table book becomes a coffee table. I do not know if I have any more significant questions for this session, Lloyd. Thank you very much for your time today. I appreciate it.
Robertson: Thank you for the interview.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332-9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Photo by July Brenda Gonzales Callapaza on Unsplash