Is AI a Threat to What is Human? But What Is ‘Human’ Anyway?

An AI-generated image of a library.
Marisa Marini/Pixabay
‘What’s particularly concerning is that our chances of detecting AI are getting slimmer. There’s already more content generated by AI than by humans on the internet…This means that we will have to truly define for ourselves: what is ‘human’ anyway? This may well be the most important question of our time.’

At this year’s Brain Bar, it seems like artificial intelligence (AI) is on everyone’s minds—with quite a few panels dedicated to the topic, in English and Hungarian.

There’s suspense in the air. Will AI take over jobs? Who’ll be obsolete and who gets to live another day? Will it destroy us, or will it help humankind evolve?

One speaker, Ludovic Peran (product manager for responsible & human-centred AI at Google), gave the audience a simple breakdown of the technology and some of the challenges.

A concerned artist asked him:

‘Will AI take over art?’

Ludovic responded that his team at Google is working on ways that these tools can work with artists instead of replacing them: ‘In the end it can be a very powerful tool for creativity if used responsibly.’

Generally, he addressed the room’s fear of AI by suggesting that such tools need to be thoroughly tested before they are released to the public. Tech companies, in other words, need to be more responsible from the beginning.

However, is it perhaps too late? AI images and videos are already spread all over the web and social media, and it’s getting impossible to tell the difference. Voice cloning software is available to anyone, and the more you’re willing to pay, the better the quality you can get.

Ludovic said that Google is currently working on watermarking the metadata of AI-generated images. They are making some progress, as are other companies. But, of course, there’s no shortage of images and videos already out there. And, as with most technology, some resourceful people always find ways around such protections.

Still, it should help reduce some of the volume. When it comes to text, however, Ludovic admitted that developing technology to detect whether it was written by generative AI is ‘a bit more difficult.’ This will likely please students who have been increasingly relying on ChatGPT for their papers, and it may mean that, going forward, teachers will have to set assignments that only a human can complete.

Igor Tulchinsky, investor, author, and founder of WorldQuant—who was also a speaker at the event—is an optimist when it comes to both AI and humans. He said that humans have something that machines don’t: history. ‘AI has breadth and scale, but no soul,’ he insisted.

He also believes that AI is not a viable threat to humanity…until it solves the problem of getting unplugged. Then again, aren’t long-lasting batteries and solar charging probably in the cards someday?

We think of intelligence as sacred, he said, but ChatGPT already uses intelligence to predict the next word in a sequence. He asked:

If intelligence becomes a commodity and everyone has access to it…what would that world look like?

He suggested that what will be important are the skills that help us use AI better: being able to ask AI the right questions so that it can do the things it’s good at. There’s no need to spend all your time composing an email, he insisted, but even if you use AI to write it, you’ll need to add that human touch. ‘We need to focus on the key 10% (that humans are needed for) and automate the other 90%.’

Will AI put people out of work? Most definitely. Tulchinsky said that currently there are three times as many lawyers as there are positions. With AI, there will eventually be six times as many as needed, given that much of their work can be sped up or replaced with AI tools.

Now that may be good news for those of you who hate lawyers, but no one is invincible. Or at least, hardly anyone.

But, insisted Tulchinsky, we shouldn’t be scared of AI, because with all kinds of technology the good uses outweigh the bad. ‘Focus on creating good things and the number of good things will outnumber the bad things,’ he said.

Speaking of good things outnumbering the bad…perhaps there’s some hope.

Sheehan Quirke, better known to followers as The Cultural Tutor on X (formerly known as Twitter), was also in attendance. What’s interesting is that he created his account just over 500 days ago, and it has already reached 1.5 million followers. Some might call him an ‘influencer’, but most such people don’t tend to post about the ancient Greeks, philosophy, classical architecture, and old paintings. Yet, here we are.

Quirke created the account largely for himself, and he set high standards. ‘When you share with others as you’re learning, you learn faster,’ he said. ‘I write about it from a place of passion, and hopefully that comes through.’ His most viral post, in the early days, was about the danger of minimalist design and the death of detail. It came out of frustration with how the world isn’t what it used to be. Everything is the same now: colourless, functional. ‘The world is literally becoming more greyscale,’ he said while wearing a grey suit.

Given the popularity of an account like his, perhaps there’s indeed a chance of some good things rising to the top when it comes to AI.

And it can’t replace everything that humans do either.

As Emmanuel Maggiori, AI veteran and author of ‘Smart Until It’s Dumb’, noted, there’s a big difference between merely writing and knowing what to say and finding an angle. An AI can write for him, but it can’t figure out what to say.

Of course, AI also doesn’t have its own voice. It borrows from many others and can attempt to imitate, but it does not have its own. It might fool some people, but many can innately detect that lack of connection.

Maggiori was also quite concerned about the use of AI in self-driving vehicles, the military, and other contexts that can have fatal consequences. He fears our overconfidence in AI.

Petya Balogh, software developer (creator of NNG, navigation software used in many vehicles), entrepreneur, and angel investor, argued against Maggiori’s more sceptical stance and had a more positive outlook on AI. He thought that resolving any of the concerns Maggiori brought up was just a matter of time. ‘Things often have to get worse before they get better,’ he insisted. He does not anticipate a war between machines and humans. ‘We will surrender willingly because they are better,’ he said. He’s far more concerned about natural stupidity than artificial intelligence; what scares him more is stupid humans using powerful tools.

But what’s particularly concerning is that our chances of detecting AI are getting slimmer. There’s already more content generated by AI than by humans on the internet. As tools like ChatGPT are updated, much of their training material will be AI-generated and very little will be written by humans, further blurring the line between reality and the artificial sort.

This means that we will have to truly define for ourselves: what is ‘human’ anyway? This may well be the most important question of our time.

