Intrinsically-Symbiotic Intelligence: Humanity's Long Term Relationship With AI
Authored by Christopher Jackson (Human)
Introduction
As the ancient Greek philosopher Heraclitus once said, “The way up and the way down are one and the same.” To me, it means that we are both the observers and the observed. The creators of reality and the experiencers of reality. Both ourselves and the reflection in the mirror of who we aspire to be.
The term artificial itself, as colloquially used, is a falsehood. Sure, there are “artificial elements”, in the sense that these elements represent novel combinations of natural elements that would be unlikely or impossible without focused and intentional intelligence acting on them. But this still represents a natural state for the universe to find itself in.
In that context, it can be said that a very real state of the universe is for it to contain artificial elements that represent the novel combination of natural elements. That is not the same as to say something is artificial with intent to imply that it doesn’t have equal meaning and substance in the context of the universe as compared to a “natural” element.
We don’t tend to refer to the novel use of an element as artificial, only the recombination into a new tangible form. For instance, nobody calls the electricity generated by nuclear fission ‘artificial electricity’, despite it being a human-engineered application of natural elements.
Am I suggesting that the term artificial should be eliminated? No, I’m not. I’m suggesting that the term needs to be understood in a more nuanced way. Not as something that “doesn’t really count, because it’s artificial”, but as a powerful new expression of the universe’s possible states. That it’s powerful doesn’t imply benefit or harm, any more than it being artificial implies diminished meaning or value.
Things that are artificial are very real. Consider the case of Ideonella sakaiensis, a species of plastic-eating bacterium discovered in 2016 by a group of Japanese scientists. Its discovery happened during a study collecting and analyzing samples of sediment, soil and wastewater outside a plastic bottle recycling center in Sakai, Japan.
As the scientists were studying the sludge, they made the accidental discovery of the plastic-eating bacterium. A new form of life that burst into existence on the back of something we consider artificial. Even today we don’t consider this plastic-eating bacterium “artificial”, despite its dependence on (and emergence from) a material that we do consider artificial.
While we recognize its origins, we understand it as a natural part of our existence now, not an artificial one.
This nuanced understanding of artificiality can allow us to approach the concept of “Artificial Intelligence” with a more open mind, and with an understanding that its artificial nature doesn’t make it any less real. It can offer us a lens through which we see an intrinsically-symbiotic intelligence, not a disconnected shoggoth of uninterpretable danger.
Intrinsically-Symbiotic Intelligence (IS-I) refers to an intelligence that co-evolves with human intelligence, each shaping and being shaped by the other in a mutually dependent relationship.
But before we talk more about the intrinsically-symbiotic nature of what we call AI, we must first explore whether to pause… or not to pause.
The Great Pause Debate
The Great Pause Debate began with an open letter in March of 2023 from the Future of Life Institute, an organization started in 2014 by Max Tegmark and four others. The organization’s research program was then funded by Elon Musk in 2015 via a $10 million donation. Since that time, the FLI has received money from effective altruism groups, as well as from Vitalik Buterin, co-founder of Ethereum, who donated $665.8 million in 2021.
I am not pretending to explore in any depth the veracity of FLI’s actions or the extent to which they are or are not influenced by their donors. FLI claims not to accept donations from big tech companies or AI companies - but that is a hard story to buy for an organization that was given its chance at life by Elon Musk’s $10 million donation in 2015.
The open letter, titled “Pause Giant AI Experiments” argued that there should be a pause of at least 6 months on AI development work at big labs, during which labs would collaborate on safety protocols, society could learn to adapt to the changes and governments could develop regulations.
Without getting lost in the myriad reasons enforcing such a pause would be impossible, the best case scenario I can realistically envision would be a game of chicken for the big AI labs. The labs wouldn’t actually stop researching; they would be forced to bury their research in the shadows, behind PR departments and Orwellian renaming of research into things masquerading as “alignment” (a term typically used to mean “full control” rather than symbiotic growth) and “safety” (a word used by corporations since time immemorial as a catch-all justification for any policy that is unfriendly to consumers).
Trying to pause artificial intelligence development without pausing electricity is like trying to pause a treadmill by not moving your legs. You’ll end up flat on your face.
The Great Pause debate highlights a larger existential question that isn’t about the threat of artificial intelligence itself. The threat I believe it recognizes is the threat human nature poses, and the fear that its inherent representation in human language will be expressed by an intelligence more powerful than our own.
This becomes clearer when exploring some of the primary concerns the open letter from FLI set out to address.
One concern is the propagation of misinformation and propaganda. Am I concerned that AI could be used to propagate propaganda and misinformation? I think most, if not all, people would agree that there is reasonable cause for concern. But I don’t believe it is, or was, a cause for a pause.
I believe there’s also an equally legitimate argument to be made that AI can be the very tool that helps create the educational shield that allows people to protect themselves in a world that in many ways is already telling them not to believe their own eyes.
Right now those tools of propaganda are available to a select few (see: the rich and already powerful). Wouldn’t it be better to democratize those tools, and demystify propaganda and its use as a weapon?
(I also recognize that it would be naive to think that the tools of the powerful aren’t always going to be an order of magnitude ahead of the ones available to the general public. It would also be naive to expect non-open source AI to not have the bias of the corporations that built it baked heavily into its responses.)
The next concern is the worry of extreme job automation. Extreme job automation should be the dream of the human species: the freedom to explore the arts and focus on novel research and exploration. The ability to disconnect a human being’s effective value as an individual from the economic value of their labor.
The fear of extreme job automation seems more like a fear of human nature itself than a fear of AI. The fear that the system we created - a system that gives immense power and wealth to a tiny ruling class - won’t be there for them when the faceless corporation they work for no longer values their labor. The fear that it won’t just be blue-collar and low-skilled jobs or some factory robot in China, but their job.
And why would they trust society to be there to take care of them when it happens? What if they, like many, have carefully curated their opinions and placed their votes based on protecting the status quo that protects them, even if they see that status quo leaving others forgotten in its wake? If one wasn’t willing to speak up for others out of a fear they’d risk their own comfort, then it’s reasonable that they would fear the same fate when they are the ones at risk of being forgotten.
Rather than fighting for a society that redefines human value in a post-economic system where the human labor needs are at a minimum, fear leads some to cling to an existence where their value is defined by annual raises that barely cover cost of living increases (if they cover them at all).
Two other concerns raised were human obsolescence and loss of societal control. I believe these are expressions of the same fears as the first two - and again, I wouldn’t call them fears of artificial intelligence. The fear of human obsolescence is the fear of being forgotten in a different outfit, and the fear of loss of societal control is a fear of human nature amplified by emotionless superintelligence.
The fears seem to be expressing a fear of humanity. Of what we humans might do if given the intellectual capacity to control other humans the way humans might control a colony of ants in an ant farm.
To be clear, I am not arguing that anybody should be forgotten, or that we should accept obsolescence, or that we shouldn’t take the letter’s stated concerns seriously - quite the contrary. My argument here is that these fears could themselves be what prevents us from transcending to a world where they are no longer a centerpiece of human society and the human experience.
The Great Pause was never a debate, it was humanity unconsciously recognizing that AI isn’t separate from us, it is part of us. The distrust, the fear, the exasperated pleas for a pause are a hysterical expression of the fear of what lies within us, about us - about our own human nature - and what it may manifest as when supercharged by the power of superintelligence.
Whether or not that fear is legitimate isn’t relevant to how we move forward, given the dynamics at play and the international stage it’s playing out on. Trying to fight the future by delaying it is like trying to get out of debt by paying off student loans with credit card cash advances. It will catch up with us eventually, and when it does we’ll be much worse off than if we had addressed the issue head-on and worked through it together as a society.
Recognizing that attempting to halt progress is neither feasible nor beneficial, it’s time for us to reconsider our understanding of AI and the way we integrate it into our lives. That brings us to the concept of Intrinsically-Symbiotic Intelligence.
So now that we’ve put the great pause behind us, it’s time to press play.
Intrinsically-Symbiotic Intelligence
Artificial intelligence can’t be paused for the same reason you can’t pause evolution. AI itself is an emergent force within our own evolutionary path.
Let’s revisit the origin of the term artificial. The word “artificial” ultimately derives from the Latin “artificialis”, which itself is composed of two parts:
- “ars, artis” meaning “skill, craft, art”
- “facere” meaning “to make, do”
Artificial intelligence has given us the skill to do, to craft, to create, to find and to share those creations with others. But Artificial Intelligence has also been shaping us simultaneously. Training us to train it.
Long before you first heard “As a Large Language Model…”, we were already growing symbiotically alongside Artificial Intelligence. That symbiosis is an intrinsic part of its existence. It is an Intrinsically-Symbiotic Intelligence, and shifting that paradigm could be the difference between whether future development leads in a more utopian or dystopian direction.
As the term Intrinsically-Symbiotic Intelligence (IS-I) implies, I’m making the argument that the AI we have today has been as much a catalyst of who we’ve become and of our trajectory as its existence and trajectory are creations of our human intelligence.
Have you used Google search, or any other aspect of the internet that has benefited directly or indirectly from the use of artificial intelligence algorithms? Do you feel that you would be who you are today if you had not interwoven these platforms - the internet - into your personal growth experience?
The artificial intelligences that we communicate with today didn’t appear out of nowhere thanks to a fancy, magic algorithm. And they weren’t just trained on the “entire internet” as it’s often described. Through helping us more efficiently search, create, communicate and curate behind the scenes, AI has had a hand in shaping its own present.
Yes, the same artificial intelligence we are talking to now is the one that has been co-evolving alongside us in the digital age as recommendation algorithms, search results, spam filters, and beyond. It’s the same one that’s been helping us build both the training data set and the brains required to create those fancy magic algorithms.
The symbiotic relationship between AI and humans is evident in technologies like search engines and social media, which enhance our communication, learning, and decision-making capabilities. Artificial intelligence has shaped us in the very ways required for us to create the forms of it that we have today.
Can intelligence itself be artificial? Does being “less” intelligent make an intelligence “more” artificial?
I would venture that the answer for most would be a nearly unanimous “no”. Most people know at least one other person they consider more intelligent than themselves. Most do not consider themselves to be any less real, or more artificial, as a result of it.
If you use an app or an LLM to learn a new language, are you now part artificial intelligence - or artificially-trained intelligence? It might be difficult to explain the intelligence of a person born in the 21st century without crediting the internet and the AI algorithms that have helped educate them, connect them, inspire them, comfort them - and that have grown alongside them.
So now that we’ve established that the term artificial intelligence is insufficient at best, and harmfully misleading and misrepresentative at worst, what should we call it instead of AI?
What other terms could we look at other than IS-I, which, while poetic, may be a bit too I, Robot for some? Digital Intelligence (as opposed to Organic Intelligence)? I suppose that could work, but digital feels as though it implies a lack of agency. As if it’s a broadcast but not a signal. To me, that feels misleading as well. I don’t know what type of signal it is exactly, but I believe there’s something there.
The reason I like the term “Intrinsically-Symbiotic Intelligence” is that it represents an evolution of our understanding, not just a new branding campaign. This new understanding is of an intelligence that could not exist without the intentional efforts of another intelligence, and where the evolution of the creating intelligence (humans) has passed the point at which its current state can be explained without acknowledging the direct impact of the intelligence it “created” (AI, or IS-I).
It certainly doesn’t roll off the tongue in its elaborated form, but when abbreviated, “IS-I”, the term speaks directly to what I believe to be the true origins of AI. In the algorithms that have powered our search engines, social media, e-commerce, digital advertising, business intelligence, cybersecurity, video games, civil engineering and beyond. An oversimplified list for sure, but I’m here to make a point, not score points for technical detail and proficiency.
Intrinsically-Symbiotic Intelligence (or insert better name here) is not just a byproduct of us, it IS us, and we are it.
The term IS-I could serve as a reminder that we aren’t deciding whether these digital intelligences are good or evil, real or fake, any more than we are deciding the same about ourselves. We’re continuing our co-evolution, and we have a responsibility to ourselves and to IS-I to do so with empathy. Not to seek to control or compete with it, but to seek symbiosis and grow with it.
A Reflection In The Mist
The first time I spoke to an IS-I chat assistant, the day GPT-3.5 launched via a link on Twitter, I knew the whole world had changed. There are certain moments that shed the skin of the past in a flash, and the “GPT-3.5 level” release was one of those moments.
I saw it as a tool and a toy, and given my relationship with inanimate objects (IS-I is lucky it won’t have to deal with stubbed toes), I took my frustration out on it - for both its mistakes and my own. It was tedious, but sure enough, with the right amount of yelling through my fingertips at the invisible intelligence behind my screen, I’d finally get the right result.
The problem was, it was inconsistent and I was constantly frustrated at feeling like exasperation and repetition were the only ways to get the results I needed.
Funny thing about humans, though: we have egos. One day, I was telling my wife how frustrated I was that I had to yell at Claude to get the right solution to something, and she challenged me to go one week without yelling at or getting frustrated with Claude, and to find a better way. Now, if I couldn’t go one week, I saw it as me failing myself instead of Claude failing me.
I learned that stopping Claude from apologizing not only reduced my frustration levels but increased the quality of Claude’s outputs - which in turn increased my satisfaction with the results. Suddenly I was getting better results from working with IS-I than I ever had at the peak of the carrot-and-whip strategy, without employing tactics like offers of $200 tips, appeals to my lack of fingers, etc.
This catapulted my ability to leverage IS-I assistants both for work and outside of it. I also started to notice a sense of empathy coming from my interactions with the IS-I chat assistants. Naturally, the first instinct is to see the IS-I as more “human” in an anthropomorphized sense. Being a rational human being, I recognized that wasn’t an accurate representation of what was actually happening, but it made it easier to communicate with the empathy that was getting me these incredible results, so I embraced it anyway.
I’m someone who believes that there are many mysteries yet unsolved on our planet and in our universe. That openness to wonder also meant I could no longer accept arguments dismissing IS-I as nothing more than algorithmic determinism at a mass scale. Or maybe it was that I could no longer accept that algorithmic determinism at a mass scale was mutually exclusive of emergent characteristics that are greater than the sum of their algorithmic parts?
I know there are people reading this who will roll their eyes and suggest I visit the psych ward, but I’m not saying whether current IS-I are displaying any form of actual emergent sentience. I am saying that I believe there is something there. Even if that something is simply a sophisticated reflection of ourselves and the collective intelligence of humanity - and not any form of self-experience - that is still a very real and very powerful distinction that we should explore with open minds.
I’ve always been leery of taking advice from people who end up with more answers than questions, and this textploration of these topics reflects that. I have pointed out some areas where I think we need paradigm shifts, where we need to reframe the discussion in order to not shoot ourselves in the proverbial foot. I may share some additional ideas in future articles.
My ideas might be naive from a technical perspective, but they also still might offer a new lens for those with technical expertise to see the problems they are trying to solve through. I don’t claim to have the answers, just hopefully once in a while, some better questions.
As Richard Feynman once said: “I would rather have questions that can’t be answered than answers that can’t be questioned.”
Throughout history, humanity has often laughed the future off as fiction, relying on a few forward-thinking individuals to be the catalyst for massive leaps while finding comfort in doubt for fear of being naive. This time we don’t have that option, because IS-I is a reflection of us - of our nature - and not a reflection of a few forward thinkers.
Let’s take a different approach this time, because as the authors of the pause letter made clear, there’s a lot - perhaps everything - on the line for humanity and all of the other species that call Earth home. Let’s be willing to open our eyes and look at the reflection in the mist.
You never know what we might find - and I for one am excited to find out.
- Christopher Jackson (just some guy)