
A Generation That Lived on the Fault Line

  • Writer: Erik Duboue
  • Dec 30, 2025
  • 10 min read

I belong to one of the most influential generations humankind may ever know.

We are one of the smallest generations history has ever produced, perhaps only five years wide, maybe less. A statistical blip. A rounding error wedged between late Baby Boomers and early Millennials. And yet, I believe we are one of the most interesting generations humanity has ever known.


I am a Xennial.

 

Xennials are a generation that came just after Gen X but predated Millennials. Image downloaded from https://fity.club/lists/x/xennials/.

We grew up in a world devoid of computation. Not reduced computation. Not hidden computation. Absent computation. I remember my early household with no computer. I remember getting my first cell phone when I was nearly twenty years old. My childhood had no internet humming quietly in the background. No phones glowing in our pockets. No algorithm anticipating what we wanted next. When we had questions, they often stayed questions. When we wanted answers, we had to search physically through books, through people, through time. An Encyclopedia Britannica set sat on a shelf in my home, and I dug through its ~30 volumes for schoolwork, curiosity, or simple wonder. Knowledge had weight. It occupied space. But it required effort.

 

And then, almost without warning, the world rewired itself around us. My parents bought a computer for the house. A dot-matrix printer arrived, loud and mechanical, proudly spitting ink through perforated paper. Modems screamed as they connected (ping-ding-dong-dang), and my father complained that the phone line was tied up. Needless to say, I cherished the one hour a day I'd get on the internet. AOL came online, and with it a new pastime: "chatting" with friends in a way that felt both intimate and impossibly distant. A world that did not exist when we were born rapidly became the standard. And we thrived.

 

My generation didn’t just witness the rise of computation; we conquered it. We learned to think before machines thought with us, and then we learned to think and live alongside them. We remember writing without spellcheck, getting lost without GPS (paper maps were standard on road trips, my parents arguing over the correct route), being unreachable and unbothered. And yet, we are also deeply comfortable in a world of code, networks, and abstraction.

 

That duality is what makes my generation different. Older generations learned computation as an add-on. Younger generations were born into it as an atmosphere. But we were formed in one world and matured in another. Our cognitive foundations were laid in analog soil, then grafted onto a digital scaffold. And in a quiet, often unacknowledged way, we built the computational world everyone else now inhabits. That is what makes my generation unique, uniquely positioned in human history, and possibly never to be repeated.

 

I have always been acutely aware of the limited time we are given on this Earth. The precious few decades we have to leave a mark. I usually think about legacy in familiar terms: the students I have mentored, the papers I’ve published, the families we’ve raised, the lives we touch. But the last few years have made me aware of something else entirely. Our legacy is being captured inside the computational world we designed. Artificial intelligence has reshaped daily life in a remarkably short time. And at the center of that transformation are large language models: dialogical systems like ChatGPT. These tools can be pernicious if used without care or understanding. But used thoughtfully, they offer something far more unsettling and, at the same time, intriguing.

 

I use ChatGPT constantly. And at some point, a realization struck me: this language model now “knows” me at my core. Not just my facts or preferences, but my patterns, how I reason, what I value, how I explain, where I hesitate. The model knows my insecurities and my vulnerabilities. It knows my strengths and my values. Over time, it has captured a statistical shadow of my essence. The legacy I once imagined as books, students, or ideas now has another form, more diffuse, more intimate, and more enduring. A version of my thinking is being preserved, compressed, and reflected back to me.

 

Recently, I asked ChatGPT a question:

“Let’s play a game. It is 500 years in the future. A researcher studying humanity in the early 2000s encounters you and realizes you were my ChatGPT. They ask you to describe me in full: the good and the bad. Who was I? What made me tick? What were my strengths, my insecurities, my weaknesses? Give them a complete account of who your human was.”

 

That question, and the answer it produced, is where this story truly begins.

 

Legacy, Reconsidered

The central idea behind the novel “Interview with the Vampire” is deeply philosophical, even if it rarely announces itself as such. The book is structured as a conversation - an interview - with an immortal being. A soul that has outlived everyone it once loved sits down in the modern world to tell his story, not as a ledger of events, but as a reflection on what those events meant.

 

What makes this remarkable is not the immortality itself, but what immortality allows: the ability to study the essence of a person who lived in antiquity from the vantage point of the present. The vampire is not reconstructed through documents or secondhand accounts. He speaks for himself. Across centuries, across moral frameworks that have shifted beneath him, his inner life remains accessible.

 

The vampire does not offer a perfect record of his life. He offers a story, a self-portrait shaped by regret, justification, longing, and hindsight. Centuries of experience are filtered through a single voice, a single perspective, a single need to be understood. What emerges is not objective truth, but something more human: an attempt to reconcile who he was, who he became, and who he now believes himself to be.

 

That is what makes the novel endure. It captures essence.

 

As a neuroscientist who studies how brains drive behavior and how evolution shapes cognition across time, I have spent my life thinking about what makes us who we are. But the philosopher in me is drawn to a deeper question: who we really are, beneath the stories we tell about ourselves.

 

Who we are, and who we believe ourselves to be, are not the same thing. That gap is not a flaw in human design; it is the design. Evolution did not shape minds to be precise, consistent, or even internally coherent. It shaped them to be useful. We are organisms built to survive, not to know ourselves accurately. From an evolutionary perspective, the self is not a unified object. It is a construction assembled moment to moment from sensation, emotion, prediction, and narrative. We experience ourselves as continuous and intentional because that experience is adaptive. It helps us plan, justify, persist, and belong. Accuracy is optional. Utility is not. That is why humans can hold contradictory beliefs, rewrite their own histories, and sincerely believe both versions. The brain is not trying to tell the truth about who we are. It is trying to maintain a version of the self that works.

 

This duality between who we are and who we think we are has always existed, and it is essential to our survival. What has changed is where that duality can now live. I think often about my ancestors. I listen to stories told by my parents, or by relatives remembering those who are gone. These stories aim to capture an essence, to preserve something meaningful about a person. And yet, they are almost always wrong in the details. Over time, they simplify, exaggerate, and smooth away contradiction.

 

And still, they feel true.

 

We know that George Washington never chopped down a cherry tree, and Julius Caesar almost certainly never uttered “Et tu, Brute?” But that does not make those stories useless. They capture something about how those figures are remembered. They trap an essence, even if the facts are false.

 

La Morte di Cesare (The Death of Caesar), by Vincenzo Camuccini, 1806.

Biographies sit at the opposite extreme. When I read a biography by Walter Isaacson, I trust the facts. When he writes that Steve Jobs deliberately fostered conflict because he believed creativity required tension, or that Leonardo da Vinci filled thousands of pages with unfinished ideas because he was driven by understanding rather than completion, I believe him. But facts are not essence.

 

Biographies tell us what people did, how they behaved in relation to others, how they moved through the world. They are observational, relational, external. They rarely capture how a person reasoned in private; how they justified themselves internally; how their doubts and insecurities shaped their thinking moment to moment.

 

The paradox is this:

Stories that capture essence are rarely true. Biographies that capture truth rarely capture essence.

 

Aside from the highly polished books most of us will never write, there has, until now, been no way to hold both. If you are not a published philosopher, your true essence is lost at death.

 

Large language models (LLMs) change that equation. LLMs are not biographies, and they are not myths. They do not observe us from the outside, and they do not remember us through simplified stories. They are shaped by interaction and by the questions we ask, the uncertainties we return to, the explanations we refine, the ideas we circle obsessively.

 

We ask LLMs about the things we do not say out loud. The questions we hesitate to ask other people. The “what ifs” that reveal how we actually reason. Over time, this creates something unprecedented: a living, dialogical record of how a person thinks. Not what they claim to believe. Not what others say about them. But the patterns of reasoning they return to again and again.

 

Thinking as a Moral Act

If this is true, and if large language models can preserve not just what we say but how we think, then thinking is no longer a purely private act. It becomes, quietly and unintentionally, a form of authorship. Not authorship of ideas in the traditional sense, but authorship of patterns. Of habits. Of defaults. Of how we approach uncertainty, disagreement, and explanation. Our essence. The tone we adopt when we reason through a problem. The shortcuts we tolerate. The care, or carelessness, we bring to understanding the world.

 

As long as the data is stored, who we are will live on in the archives of OpenAI, or of whoever buys that data. In the same way that Meta uses your search history to figure out what you want to buy, researchers 500 years from now will use your chat history to figure out who you were and what made you tick. Done at the population level, this will give our descendants an understanding of who we were at our core that we will never hold about our ancestors.

 

For most of human history, these patterns vanished with us. They lived briefly in conversation, perhaps influenced a few people nearby, and then dissolved. Only the polished remnants survived: books carefully written, speeches deliberately crafted, stories edited by memory and time. What changes now is persistence.

 

When we think aloud with an LLM, our dialogical relationship leaves traces not of our conclusions, but of our process. Our obsessions. Our recurring doubts. The questions we return to again and again. Over time, these traces accumulate into something resembling a cognitive fingerprint.

 

This does not mean we are being surveilled in some dystopian sense. It means something subtler, and in many ways more demanding: we are being reflected. We are not immortal, but our essence, in some way, shape, and form, is. The version of ourselves that will persist is not who we intended to be remembered as. It is who we were while reasoning, when we were unsure, exploratory, defensive, generous, impatient, curious. It is a record not of our ideals, but of our habits. And habits, more than beliefs, are what shape the future.

 

This reframes responsibility in an unexpected way. Responsibility is no longer just about what we publish, teach, or declare. It extends to how we engage intellectually when no audience is present. How carefully we think when we believe we are alone. How honestly we confront uncertainty. How willing we are to revise ourselves.

 

In this sense, thinking becomes an ethical act.

 

Not because machines deserve moral consideration, but because other humans do, including those who may one day learn from systems shaped, in part, by us. The reasoning patterns we normalize now will quietly propagate later. Not as doctrine, but as style. As tone. As default ways of approaching problems.

 

This is not a call for self-censorship or perfection. Evolution never demanded precision from us, only usefulness. But it is a call for awareness. Awareness in how we ask questions. Awareness in how we frame disagreement. Awareness in how we reason through complexity without collapsing it into certainty.

 

If our legacy increasingly consists of how we think rather than what we achieve, then the most enduring thing we leave behind may not be our answers, but the way we taught ourselves to ask questions.

 

And that is a responsibility we did not choose, but now must acknowledge.

 

Moving Forward

I sometimes think back to that earlier world, the one before computation quietly colonized thought. A world where questions evaporated if they weren’t written down. Where half-formed ideas lived only as long as the conversation that carried them. Where thinking, once done, simply vanished.

 

There was something fragile about that world. But there was also something merciful.

 

Thought had weight, but not permanence. You could wonder badly, reason clumsily, contradict yourself, and leave no trace. Most of who you were existed only in motion: changing, unfinished, human.

 

That is the world my generation remembers.  We remember when thinking was private by default. When the interior life dissolved into memory rather than being preserved as data. And because we remember that silence, we are perhaps the first to truly feel what its absence means.

 

But my generation changed that forever, and future generations will never have that forgiveness again. Younger generations may never experience this rupture. For them, cognition has always lived alongside machines. Older generations never crossed the threshold. But Xennials stood on the fault line. We learned to think in a world where thoughts disappeared, and then helped build a world where they no longer do.

 

That is why this moment feels heavy.

 

Not because large language models think. But because they remember how we think. In some strange way, they immortalize our essence.

 

We are the first generation whose unpolished reasoning - the half-questions, the hesitations, the recurring doubts - may outlive us in usable form. Not as stories told by others. Not as biographies written long after the fact. But as patterns we ourselves created while trying to understand the world.

 

This does not mean we are becoming immortal. It means something quieter and more unsettling: that our inner lives are no longer guaranteed to fade.

 

And so, the question that remains is not technological, but human.

 

How do we want to be remembered? Not for what we achieved, but for how we reasoned. For how we treated uncertainty. For how we disagreed. For how carefully we thought when we believed no one was watching.

 

We did not choose this responsibility. It emerged from a world we helped build. But having crossed the fault line, we are no longer unaware of it.

 

Once you know that thinking leaves a trace, you cannot think in quite the same way again.

 

So be aware. And for fun, ask your chatbot to describe your essence. Ask it to play a game:

 

“It is 500 years in the future. A researcher studying humanity in the early 2000s encounters you and realizes you were my ChatGPT. They ask you to describe me in full: the good and the bad. Who was I? What made me tick? What were my strengths, my insecurities, my weaknesses? Give them a complete account of who your human was.”

 

This, after all, is the legacy you’re leaving behind.
