Professor Esma Aïmeur discusses the double-edged sword of GenAI ahead of the World AI Cannes Festival


Most GenAI conversations in the media revolve around corporations: which companies are hiring, acquiring, or going out of business because of artificial intelligence.

If the story talks about individuals at all, it’s typically in the context of layoffs or in-demand new skills like prompt engineering. But Esma Aïmeur, a professor at the computer science department of Université de Montréal (Canada), focuses her academic body of work on how humans interact with and are impacted by AI. After studying in her native Algeria and getting a Ph.D. in Artificial Intelligence from Université de Paris 6, Pr. Aïmeur came to Canada and has researched AI—and its ensuing impact on data privacy and cybersecurity—for over 20 years.

Speaking with BetaKit ahead of her talk at the World AI Cannes Festival (WAICF), Pr. Aïmeur shared more about the explosive power of GenAI and why she calls it a double-edged sword. 

A dream became a reality

What we call regular technology today—for instance, smartphone applications or semi-autonomous vehicles—would appear indistinguishable from magic in the eyes of previous generations. And you don’t even need to go that far back to find that perspective: in a 2004 paper written in French on the future of AI, Pr. Aïmeur cited a 2003 article about artificial intelligence by Tim Menzies, then a professor at West Virginia University.

“To me, it’s like a double-edged sword that requires careful management and supervision.”

After explaining his belief that AI’s goal of “emulating general human intelligence” will “bind” future generations of AI researchers, he ended with what seemed like a near-impossibility at the time: “Nevertheless, I still dream of the day when my word processor writes articles like this one while I go to the beach.”

Around 20 years later, his dream came true—ChatGPT can easily produce thousands of articles while Professor Menzies sits on the beach. 

“The dream became reality,” said Pr. Aïmeur.

But this new reality also came with challenges—in particular, the collection of private information such as bank and medical records without consent, and the failure to keep that data safe or the decision to sell it. Even without malicious actors, of which Pr. Aïmeur is quick to point out there are many, sensitive information can leak unintentionally or through a simple error. This focus on the individual vis-à-vis technology is the foundation of Pr. Aïmeur’s body of work. Since the early 2000s, her research has focused on artificial intelligence for data privacy protection for both individuals and organizations, and later on security awareness and the social engineering cyberattacks that lead to identity theft, bias, discrimination, and re-identification. Since 2015, her work has focused on misinformation, disinformation, and fake news powered by AI techniques.

“It’s not only computer and cyber security for me, it’s something more than that,” said Pr. Aïmeur. “There are some psychological factors that we should take into account. It’s a multifaceted problem and we need to address this—studying behavioural analysis is important because it involves understanding both how individuals create risks to organizations and how to mitigate those risks.”

The double-edged sword of GenAI

For nearly two decades, AI research was incremental. Pr. Aïmeur says that generative AI rapidly changed things and is creating innovations at a breakneck pace, particularly in cybersecurity. This rapid acceleration is also the topic of Pr. Aïmeur’s WAICF talk—that GenAI empowers new “sophisticated AI attacks,” such as when tools like ChatGPT or deepfake technology enable devastating social engineering hacks.

“Let’s say, for example, that you receive a phone call and you think that it’s the CEO of your organization—and it’s not because the hacker has cloned the voice of your CEO asking you to do something urgently,” she noted.

That said, the professor’s goal is not to scare people; she also firmly believes that GenAI is one of humanity’s strongest tools to protect people online. For example, large language models can now easily scan millions of records to identify if any sensitive information is disclosed. New GenAI tools can even search images to see if there’s any risk of sensitive information showing in the picture. Moreover, pattern recognition capabilities assist in detecting fraudulent activities, especially in financial transactions.

“In my WAICF talk, I will focus mainly on generative AI because it brings new challenges to cybersecurity, and it also provides new tools for defending against threats,” said Pr. Aïmeur. “So to me, it’s like a double-edged sword that requires careful management and supervision.”

Learning when it’s all moving so fast

Beyond speaking, Pr. Aïmeur said she’s enthusiastic about WAICF for its potential to bring together people from different disciplines across academia, startups, big business, and politics. She is also a judge for the Festival’s AI awards, where a panel chooses finalists from over 30 submissions based on three criteria: creativity, sustainability, and inclusion. In the end, the judges will name both a competition winner and a “coup de coeur,” an award for the entry that won them over with its potential.

“I’m really excited about [WAICF], due to the opportunity to connect with some of the brightest minds in AI,” said Pr. Aïmeur. “I would like to meet people that I haven’t seen for a long time, but I want to see new people; for me, the event is like a melting pot of ideas and innovations. And I think I’m particularly interested in learning about the latest AI strategies that are being developed around the globe.”

