Artificial Intelligence vs Authentic Intelligence: The moral battle of the future
We have been reading many stories about Artificial Intelligence (AI), especially since the emergence of ChatGPT, a generative AI tool with an uncanny, human-like ability to consume and create content on the fly.
They say only those who embrace AI will thrive in the future, and that it will replace most of the jobs humans do today. Others, like Elon Musk, warn that this technology has the potential to go rogue and needs to be regulated before it's too late.
Whatever the verdict on that, and whether people choose to use these AI tools or not, one thing is for sure:
AI is going to impact all our lives at a very fundamental human level, and there’s no escaping that.
To illustrate my point, I would like to share three mini stories that I encountered recently.
These are simple stories, not about the wonderful things AI can do for us, but about how we react to it as humans, leading up to the battle we will all eventually find ourselves in.
Once you are done reading this post, I would love to hear your views too. Do let me know in the comments.
Let’s start with the stories…
The Three Stories
Story 1: The WhatsApp debate
In one of the WhatsApp groups that I am part of, a debate broke out between two people. Let’s call them person A and person B.
At one point in the exchange, person A had to prove a particular point, and he laid out his thought process in a well-articulated message that all but ended the argument; his analysis of the topic was thorough.
Person B's blunt response to that message: "I think you used ChatGPT to come up with your answer."
End of story.
Story 2: The project plan
At a friend's workplace, a key project was being planned. His team members were given a week to come up with ideas for it.
One of his junior team members came back within a day with a detailed plan that looked like it had a week's worth of effort behind it. This team member hadn't demonstrated such abilities in the past, so it came as a surprise to everyone.
My friend's instinctive reaction after seeing the presentation: "These ideas are all copied from ChatGPT; no original thought went into them."
End of story.
Story 3: The Twitter personality
There's a person I've been following on Twitter for a long time. Recently his tweets started sounding very different, as if someone else were writing them.
The tweets now have a depth and insight that was lacking earlier. It is good to see the change, but it also feels odd because the switch happened overnight.
My gut feeling: "This person is using ChatGPT to craft his tweets."
End of story.
Well, those aren't really the endings of those stories; they are the start of a new pattern of human behavior. Let's delve further.
Moral Science Time
Now let’s consider each of these stories as moral science lessons.
With whatever information I have presented to you, tell me: if you were to pick sides, whose side would you pick in each of the stories?
It boils down to what you think is right or wrong.
- Is the act of outsourcing your thinking to ChatGPT morally wrong, or is this an acceptable way of life now?
- And if it is an acceptable way of life, what does that say about the authenticity of the person using such tools?
- And before jumping to any of the above conclusions, do we even know if those people used ChatGPT?
- Are those who assumed that the other person used ChatGPT at fault, then?
Take a moment to think about it.
If we observe all the stories carefully, they present a complex moral dilemma.
In each story, someone made a remarkable improvement in the way they articulated their ideas. Whether that was with or without the aid of ChatGPT, we can't say for sure, but what it ended up creating is doubt in the minds of the people around them.
There is now a question mark over the AUTHENTICITY of those people, despite the quality of the ideas they presented.
Is this a win or a loss for them?
To answer that, let's delve deeper into the ideas of authority and authenticity.
Authentic Intelligence
Authenticity is typically an outcome of our trust in someone’s genuine abilities.
It is usually built through prolonged exposure to, and assessment of, what we know about that person: a track record of sorts, like the Twitter personality I have been following closely for years.
When it's someone new, we go by their qualifications and positions, because those have been earned through hard work. Their authority in their respective fields is a passport to gaining authenticity; consider doctors, lawyers, and other professionals or brands.
There's a context through which we assess them. But what happens when this context goes missing? Let's look at the three stories again.
Story 1:
I know that person A has a history of doing thorough research and presenting his thoughts in a logical manner. My gut instinct says it was a genuine message that came from the recesses of his mind, not a simple copy-paste job.
But person B didn't have that context. He hadn't engaged enough with person A in the past to make a sound judgement, and he immediately jumped to a conclusion that undermined person A's thinking capabilities.
It also planted doubt in the minds of everyone else in the group, since they did not have similar context about person A. It was a direct undermining of person A's authenticity.
Story 2:
I cannot say with 100% certainty that the team member used ChatGPT to come up with the presentation, but my friend is certain it is a copy-paste job, since he has full context on his team member's abilities.
It's a dilemma of sorts for him: he got the output he wanted, but he isn't comfortable with the way it was achieved. In his mind, he is dealing with an artificial person.
Now, what if this were a new boss who lacked that context? Would that put the team member at an advantage over others? Does that make the team member an authentic person?
Story 3:
I am certain that this Twitter personality has been using ChatGPT to craft his tweets. I say that with confidence because I have years of context from this person's earlier tweets.
But what about someone who is new to the platform and seeing those tweets for the first time? Isn't he or she now experiencing an artificial personality? Come to think of it, there's nothing wrong with him using ChatGPT to craft better tweets; it's commendable that he's trying to improve the quality of the output and the experience for his followers.
But the key question remains: is that the person's authentic self, or an artificial persona that has now become his primary identity?
What do we value going forward?
It’s a muddled-up equation.
There is no way we can avoid using these AI tools, because if we don't, someone else will. The question of authenticity will be the last thing on most people's minds as long as they keep getting the output they desire.
The foundations of our society, built on the notion of authority earned through education, hard work, and experience, are going to be hijacked by these AI tools and by those who learn to use them well.
It is like people misusing filters on Instagram to project a fake image of themselves.
We’ll now also have brains and ideas with filters that fake intelligence.
Funnily enough, the biggest concerns around AI so far have been about the emergence of fake images and videos. Images from tools like Midjourney are now so good that it is hard to tell whether an image is real or not.
We can now generate fake scenarios that place real and imaginary people in almost-real settings, like Mahatma Gandhi taking a selfie with other Indian freedom fighters, and you'll be left wondering whether it is in fact real.
It happened recently with the Pope sporting a cool jacket, then with pictures of Donald Trump being arrested by the police, and just yesterday with a fake image of a bomb blast at the Pentagon that was picked up by mainstream media as real news and sent the stock market into a spiral. All of it fake, yet extremely convincing, and with implications for the real world.
While the world focuses on those images and their potential for mischief and altering the truth, the real danger lies in the eventual, inevitable flood of artificial people all around us, especially on social media.
Going forward, authority and authenticity will be under constant attack.
To make matters worse, Blue Ticks on Twitter are now available for a price, even though the stated intent of that move was to make it difficult for bots to thrive. Meta has followed suit and is now charging for its blue tick too.
And with work-from-home becoming the norm everywhere, we'll have a flood of artificial knowledge workers to deal with, where anyone can do anyone else's job with the aid of AI.
Eventually, AI will end up doing those jobs too, without the need for humans.
- What are the jobs and people that we will value going forward?
- What’s the value of authentic human intelligence going to be?
- Who do we become eventually?
- And who do we think our children will become?
These are questions mankind will be forced to ponder.
The Resistance
There will be resistance. Have no doubt about that.
It will appear in many forms: government regulations, data protection rules, human rights, technology cartels and cozy clubs that will try to protect the old ways, and so on.
It’s not going to be easy. It wasn’t easy in the three stories I shared either.
I am sure you are undecided about which side of the moral line you belong to in those stories. Do we even know how to spot that line anymore? I am still confused.
It is a moral battle we will inevitably find ourselves in: to identify and value AUTHENTIC Intelligence in a world soon to be dominated by ARTIFICIAL Intelligence.
There’s no point in being a human otherwise.
And this is just the beginning. Good luck to all of us.
Till then,
Cheers!!!
Kartik Dayanand Boddapati
PS: I created the banner image for this post using AI; everything else is a creation of my restless mind.