How Can Humans Remain Central to the AI-Powered Future?
February 21, 2023 Nimesh Nambiar

[Image source: Unsplash]

This isn’t yet another post about ChatGPT. There is no doubt in my mind that it is too early to get too excited or too critical about generative AI, though one may choose to be optimistic or pessimistic about it according to one’s natural predispositions. There are plenty of arguments both for and against generative AI having a tremendous impact on the way we live, work, and evolve. If we look at the giant leaps technology has made in the past decade, and consider that ChatGPT, like many other AI products, is an efficient and incredibly quick learner (probably the quickest of all thus far) that will only get better with time, it is evident we are yet to see how good (or bad) it can be.

Let me start with a disclaimer: I am not an AI expert. I am a researcher interested in futurism, culture, and consumer behaviour. The discussion here is entirely about how we should process technological leaps such as AI, along with a few philosophical questions you may have already asked yourself. Rather than draw any conclusions, I would rather approach it as a walk in the woods, thinking and asking questions together.

The Big Question & Answer

ChatGPT’s answer to the question of whether it could steal human jobs was that it is unavoidable that some jobs involving repetitive tasks will be affected, but that it can also create job opportunities in other sectors and drive economic growth that benefits everyone in the long run. The answer shows its astute diplomacy and tactfulness: not arrogant, but not a pushover either. My guess is that this response was set up by the engineers rather than learnt by the app itself. Still, a compelling response to a critical question.

How good or bad generative AI and similar products turn out to be is entirely up to how we collectively use them. By ‘we’, I mean businesses, tech companies, governments, regulators, and consumers. Expecting a consensus among these stakeholders may be too ambitious at this stage, but if we look back into history – the Sherman Antitrust Act of 1890 in the US, eventually used to break up Standard Oil’s monopoly, arrived soon after the advent of limited liability companies that could inject huge capital – we can see that each powerful human enterprise was followed by a customary regulatory intervention. In that sense, it is very likely that governments will act quickly to prevent any catastrophic misuse of AI on society or the economy. But Edelman’s Trust Barometer says distrust is now a dominant emotion and that most people don’t trust their governments and media enough; they trust companies more. This puts businesses in an incredibly powerful position. As the famed Spider-Man quote goes, with great power comes great responsibility.

NightCafe Creator, an AI platform, visualized the above image from the text “Hope & Scepticism”, a leading sentiment in this post.

Let’s be honest: these days, we tend to catastrophise any sudden change. Collectively, a lot of us assume that any drastic technological advancement – as opposed to a return to more natural ways – may seem immediately beneficial but comes with potentially unpleasant side effects in the long run. This has made many of us cynical about any new development. The misinformation and paranoia spread by informal media sources don’t help much either. So, we tend to overreact. It is important for AI developers to consider this general distrust we carry. They probably do. And it is essential that this empathy trickles all the way down into the design.

The biggest case against AI taking over our lives is how little we understand about our own minds. Do we fully understand how our brains function, what role they play in the body’s immune system, or how they interpret olfactory signals from our oral and nasal receptors? So far, we have more questions and hypotheses than clear answers. Take the case of human emotions. Ask a neuroscientist, and we might hear different schools of thought, some as advanced as interoceptive sensibility and emotional conceptualization and some as old-school as evolutionary essentialism*. But the truth is we still don’t have a clear-cut account of how emotions are formed and expressed by our brains. We have fMRI scans that map activity in the brain, but the patterns observed under various emotional states across different people are more confounding than reassuring*. Ask an expert, and they will tell you we are still far away from understanding our brain. Hence a natural question: if we don’t understand our brain or its functions well enough, how can we create an alternative (with that very brain) that can rival it? So the fear of AI becoming greater than humans – for the over-dramatic ones – appears largely unfounded.

ChatGPT might argue that AI could be used to further our understanding in exactly such grey areas. That is what most of us want any such technology to do anyway. We need the growth path of such technology to be directed at expanding our horizons. There is no better way to empower us than by pushing out the boundaries of our scientific understanding. In other words, we would like AI to be quicker and better in areas where we are floundering and to help us humans become more powerful, as opposed to AI becoming a powerful, independent entity in itself. This doesn’t fully absolve AI from gaining an indirect form of power. Michel Foucault did not define knowledge as power; instead, he argued that the existing power system decides what knowledge goes forward and what doesn’t. Remember, AI can make these power systems more dangerous. But if we want AI to be accepted and integrated readily into today’s society, its design ecosystem needs to understand that its developmental journey shouldn’t end up strengthening the tilted power balance in our societies.

The emergence of a positive regression & a search for meaningfulness

For a layperson, AI fundamentally makes things easier, faster, and more accessible. If we look back at our evolution, almost every technological innovation in the past has been centred around these goals in one way or another. We need to do things faster without any loss. As we grew in number and our resources declined, this made perfect sense. But in the Western world, and really in most of the developed world, there is a recent movement to slow things down and participate in the present a lot more. Why is that? Is there an inversely proportional relationship emerging between speed and our long-term well-being? We seem to think that when we are speeding, along with the gains and the sometimes mindless indulgence, at some point there is a price to pay. This may not occur immediately to a lot of people trying to run their hectic daily lives, but it’s hard to deny a marked increase in such a tendency in some societies.

So, here is a hot take: in a not-so-distant future, could AI become obsolete (and somewhat meaningless) to humans if all it does is make things faster? I would argue its next step should be to make things more meaningful. It needs to embody an experience-driven tabula rasa as a principle of life. Such a pursuit has a symbiotic quality to it, and it could be another avenue for tech companies to work on when they think of the trajectory of AI.

Championing originality

We haven’t yet developed a sentient AI. A sentient being can perceive and feel things and have original thoughts and responses. A good reason why this may not have happened yet is that we don’t have an answer for what consciousness is. We don’t know if consciousness resides in our brain, in our microbiome, or even outside our bodies, across a continuum shared by all lifeforms, as suggested in Buddhist texts.

How we see “artificial” – look around you: what is your first reaction to anything you see labelled as ‘artificial’? – is a case in point. Anything artificial, we feel, is best avoided. Some of us can afford to act on that feeling; not all of us can. It is because we have grown to respect more natural, original experiences after the saturation of mass consumer culture. The positive regression we are all experiencing – valuing what is organic, real, imperfect, handmade, and so on – is in total conflict with the direction AI seems to be taking us, for now. This is a question tech companies must address.

From MoMA Learning

Let’s ask ourselves why we appreciate The Lovers (René Magritte, 1928). What if we saw the painting without knowing who the painter was (but knowing it was a human)? Would it affect how we perceive it? Now imagine we are told it was generated by AI. Does that change our perception? Do we still admire it as much? Run a similar test with a poem you are told was written by Keats (after much deliberation, and perhaps some agony) and something ChatGPT churned out in a matter of seconds. Do you enjoy both the same way?

We seem to attach a certain aesthetic value to what comes from real human struggle or natural experience, as opposed to an automated, “easy” process. Can AI surpass this process-oriented perception? Can it bring the human qualities of imperfection, chaos, and unpredictability to the art or literature it produces?

Somewhere there is a potential scenario of AI becoming – sorry to use this horridly abused word – de-humanized, if it takes a path of its own towards an eventual conflict with human subjectivity. But that is a separate topic that demands a lengthy discussion another time.

Where Humans cannot supply enough

A traditional perspective on AI is that it is all about speed, ease, and comfort; that it aims to totally eliminate humans in areas where we are slow, less productive, and difficult to manage. The human cost of such a pursuit is significant. We have a large ageing population that still needs to support itself, whose brains simply cannot catch up with super-powered algorithms growing at gargantuan speed. This doesn’t mean AI shouldn’t try to make processes more efficient and faster. There are a number of jobs, industries, and sectors where sufficient humans aren’t available. Take the example of healthcare. Our healthcare staff – in whichever country you like – are often among the most overworked, underpaid, and unhappy. Nurses and doctors spend much of their time managing patient data, an area where AI can genuinely “speed up” and create a remarkable impact. Mental health is another. The biggest challenge in handling the mental health crisis is the stigma associated with getting help; with an AI, there may be much more willingness and better access. Of course, developing an algorithm that understands human suffering and delivers responsible, safe therapy is going to take serious work, but this is what we need to focus on as we look ahead to the AI future. Granted, some job losses are inevitable, but there is a lot the tech ecosystem can do, in deciding where AI goes from here, to mitigate such significant ‘collateral damage’.

“Humans above all” imagined by Canva.ai

The Hypotheticals of an AI-led Future

I do not in any way want to unfairly question AI. The questions above are raised in the interest of a positive evolution we can all appreciate. If we look ahead, how do you imagine AI will evolve? Let’s think in simple terms. Our best hope is that humans will use it in a symbiotic manner. That hope is, of course, heavily flawed. Historically, we have spent significant money and effort protecting our path-breaking inventions from being abused, at times more than on developing them further. That says enough.

Humans will definitely resist it, considering the general distrust we have of otherness, especially in individualist Western societies. Throughout history, we have resisted every form of change. But we have also accepted it eventually.

Let’s assume humans will get better with it – because, as ChatGPT’s engineers claim, it helps to build our economy and society. This is a great scenario. A human-centric thought would always put our improvement and betterment at the centre, as we reflected on earlier. In that sense, its role as a tool remains clear-cut. As we get better, AI will try to catch up (and not the other way around, as some sceptics would have us believe). And if this is the direction of technology adoption, we will become hard to please. We could grow tired of what AI does for us if it cannot keep improving itself at the same speed, or even develop emotions.

Consider the short lifespan of its wow factor. A sharply diminishing marginal awe occurs with repeated interactions with any given phenomenon. A few weeks into usage, how many of us are still in wonderment that ChatGPT can explain Kant’s transcendental idealism in simple words? Or that it has a point of view on Sacheen Littlefeather refusing the Oscar in 1973 on behalf of Marlon Brando? The neuroplasticity of our brains adapts quickly, and so the wonderment steadily declines. Some of what ChatGPT does becomes part of our System 1, the more intuitive, somewhat automated part of our brain – perhaps a closer relative of ChatGPT. Another reason the wonderment declines is that using ChatGPT significantly compromises our ability to be competitive (anyone can use it with similar results). We can quite easily become unimpressed by the quality of any output if we know ChatGPT had a part in it.

Then consider that there will be a proliferation of similar tools. We can expect tech giants like Google to respond with something similar, if not better, to keep the momentum going. In no time, we are dealing with newer problems (e.g., how to prevent students cheating on their exams and dissertations). Considering the complex creatures that we are, in all likelihood the engineers will have to keep working on ways AI can be integrated into the vortex of chaos that our lives are. This isn’t easy for computers. While the initial breakthrough may feel like a jolt, we may be looking at cumulative gains that won’t be as sensational, and the next jolt may not come until several years have gone by. Perhaps.

Then there’s the somewhat ideal scenario of AI becoming a co-pilot to humans. Here, AI becomes a co-creator of our pursuits. It’s slightly different from being an external tool: the discourse is set by its integration into our own selves, where it can grow into a more intuitive, personal, internal collaborator (e.g., eyewear like a contact lens that shows maps, meanings, and explanations on cue, or connects us to the metaverse). In this scenario, in a somewhat raw sense, AI’s existence is dependent on humans – if we don’t employ it on ourselves, its functionality will be limited. But for this scenario to work, we need to ensure we resist our Frankensteining temptations. Abuse should be prevented at all costs, and none of us should get greedy or selfish. That may be a bit too ambitious, depending on who you ask. Truth be told, it is this sort of co-piloting that can drive us towards an AI-integrated future where jobs themselves go into a positive regeneration. Research by the Institute for the Future says 82 percent of leaders expect humans and machines to work as integrated teams within their organizations inside of five years. The same study says 85 percent of the jobs that will exist by 2030 don’t exist yet.

Such a co-piloting path could help us live longer and feel more invincible, with the spotlight firmly on us. We don’t have to think of any dystopian outcomes yet; let’s talk small goals. Take the case of our attention span: from 12 minutes to 47 seconds, there has been a remarkable drop in our attention levels. Can AI help us do better (and not worse)? This is how a symbiotic, co-creative AI evolution can be highly beneficial to humanity.

The last (and, outside the Marvel Universe, least likeable) scenario is humans placing AI on a pedestal and terming it a super-being, something above us – a genie steeped in an array of code. Despite all the tragic theatrics around us, anyone into history can tell you our generation has seen relatively far less violence and human suffering. But the optics change drastically if we ask people how they independently view the times they live in. In general, our tendency to catastrophise is at its peak right now. Many term the Covid-19 pandemic one of the most terrifying periods of all time. The fact is, it probably was. It is mainly because we acknowledge and relay our suffering much more than we did in the past. During the 1918 Spanish flu, it felt as if much of the human race was about to be wiped out, and the sole objective then was to save one’s life. But today, we need much more than being alive. We need longevity, a guarantee against death; and for a good life, we need to safeguard our families, careers, valuable possessions, and future dreams. We possess a lot more; therefore, we are also more anxious.

In The Seventh Seal (Ingmar Bergman, 1957), which chronicles a moral dilemma set during the Black Plague of the 14th century, Max von Sydow’s character Antonius Block says, “I want to confess as best I can, but my heart is void… My indifference to men has shut me out. I live now in a world of ghosts, a prisoner in my dreams.” The Covid-19 pandemic hit us on a far greater scale and with greater intensity, given our global connectivity. Even though millions lost their lives, many more found ways to secure themselves in their homes while still having their essential needs met. Governments adjusted their health, border, and transport policies, and companies developed vaccines in record time to bring everything under control. Yet many of us believe the pandemic was one of the worst human tragedies. They are not entirely wrong, but objectively, we dodged a wrecking ball aimed at human civilization with our inventiveness and adaptability. We could have done a lot worse.

Why am I talking about this here? Mainly because it shows that despite the advances we have made, we do not consider ourselves invincible to modern-day perils. We suffer a great deal, if not more than we did decades ago. Our sense of existential threat is at its highest, and we have both the ability and the tools in our hands to visualize what could go wrong. This should be a significant design consideration for AI developers, companies, and the community as they craft the way ahead.

As a parting note, given its profound effect on our lives, it is important we also talk about how AI would handle morality and politics, something that decidedly divides us more than it unites us these days. How would it handle our individuality and collectivism? Currently, AI grows from data created by humans – almost in a parasitic fashion – so what would happen if most of what it feeds on were data created by itself? Who decides what course it should take? Some in the tech domain find this whole discourse unnecessary, fear-mongering, and perhaps even in poor taste. ‘Let’s talk about the present,’ they might argue, but the concerns I hear from people around me – despite their public enthusiasm for it – were the inspiration behind this post. We need similar open conversations to make AI better and more integrative.

I would like to declare that I haven’t used ChatGPT to write this post, except to run a couple of questions (explicitly flagged here) to show what it does. Let me close with a problem AI developers can potentially solve: can there be an app that can legitimately certify that the work on display is flawlessly human? There’s another sub-path AI can take… elevate my humanness, instead of silencing it.
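For the technically curious, here is a minimal sketch of one way such a certification could work, assuming a cryptographic attestation model: a trusted certifier (say, a writing platform that observed the drafting process) signs a claim that a given text is human-made, and anyone can later verify that claim against the text. Everything here – the certifier, the function names, the flow – is hypothetical illustration, not an existing product or standard.

```python
import hashlib
import hmac
import json
import time

# Hypothetical "human-made" attestation sketch. A trusted certifier holds
# a secret key and issues signed claims; verification recomputes the
# signature and checks the text hash. Purely illustrative.
SECRET_KEY = b"certifier-private-key"  # placeholder; a real service needs proper key management


def attest_human_work(text: str, author: str) -> dict:
    """Issue a signed claim that `text` was produced by `author`, a human."""
    claim = {
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
        "author": author,
        "human_made": True,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_attestation(text: str, claim: dict) -> bool:
    """Check that the claim matches the text and that the signature is genuine."""
    if hashlib.sha256(text.encode()).hexdigest() != claim.get("sha256"):
        return False  # the text was altered after certification
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim.get("signature", ""))


# Usage: certify a post, then verify it later.
post = "This isn't yet another post about ChatGPT."
receipt = attest_human_work(post, "Nimesh Nambiar")
assert verify_attestation(post, receipt)            # untouched text passes
assert not verify_attestation(post + "!", receipt)  # tampered text fails
```

The cryptography here is the easy part; the genuinely hard problem, which this sketch deliberately leaves open, is how the certifier establishes in the first place that the work was human.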

Sources:

  • Jolty (The Future Podcast) by Faith Popcorn & Adam Hanft
  • Thinking, Fast and Slow by Daniel Kahneman
  • How Emotions Are Made by Lisa Feldman Barrett
  • Power by Michel Foucault
  • ChatGPT (demos for illustration only)

