The Pacific Regional and National Security Conference (PRNSC) this week dedicated a session to Artificial Intelligence (AI), bringing to the forefront the technology's transformative potential and the need for careful governance.
Speakers delved into AI’s vast opportunities alongside its inherent risks, particularly for the Pacific Islands and beyond.
Speakers emphasised the need for sustainable, culturally sensitive, and inclusive approaches.
This sentiment was echoed by Co-Director of the University of Melbourne’s Centre for AI and Digital Ethics, Professor Jeannie Paterson, who described AI as a ‘powerful tool’ whose impact is shaped by the intentions and actions of its users.
“The challenge of AI at this point in time is that we don’t understand it that well, and it operates fast and at scale. These factors escalate the security challenges because harms can occur quickly and widely, while responses may be poorly targeted or too slow,” Professor Paterson noted.
The rapid advancement and scalability of AI technologies were discussed as double-edged swords that could enable significant progress while posing governance and response challenges.
Founder of SOLE FinTech, Semi Tukana, shared his insights from a long career in software design and development.
He highlighted AI’s role in enhancing productivity, especially in streamlining software development processes.
“Coming from a 42-year background in software design and software development, we are continuously searching for productivity tools. For us, artificial intelligence is an excellent tool for productivity. As I will be explaining further, we are now using AI to help us develop our software more efficiently,” Tukana explained.
However, the discussion also brought attention to the significant energy consumption associated with AI, particularly with large language models (LLMs).
“In line with the dominant theme of this session is the energy question. It’s incredibly energy intensive to build data centres and LLMs, which are the foundation of modern AI applications.
There is a growing concern about the energy use in the continuous expansion of AI,” Professor Paterson added.
There was a consensus on the importance of developing more sustainable AI practices, potentially by adopting smaller, less energy-intensive models.
The need for robust governance frameworks was a recurring theme. Tukana emphasised that leaders and decision-makers had to be protected from the hype surrounding new technologies.
“With new technologies, our leaders need to be protected. I’ve noticed that with the advent of blockchain about 10 years ago, leaders were quick to embrace it without fully understanding it. We need to ensure that leaders and decision-makers are protected from the hype. AI should be seen as a productivity and creativity tool, but we must be cautious of its malicious uses,” Tukana cautioned.
Professor Paterson discussed the Organisation for Economic Co-operation and Development’s (OECD) high-level guidelines on AI, advocating for principles such as human dignity and oversight but stressing the need for these to be adapted to local contexts.
“Currently, every country is managing AI governance in its own way, leading to diverse approaches in enabling productivity and protecting against AI threats. The OECD provides high-level principles focused on human dignity, sustainability, and having a human in the loop for decision-making. However, these are just frameworks and must be tailored to fit local conditions,” she explained.
The pervasive threat of deepfakes and misinformation emerged as a crucial topic. “These AI-generated fabrications significantly amplify risks, exacerbating an already expanding ‘truth deficit.’”
Additionally, discussions addressed the cultural and ethical implications of AI, such as biases in AI-generated content and concerns about cultural misappropriation.
The long-term impact of AI on skills and the workforce was another area of concern.
Prof Paterson said the potential decline in critical thinking and essential skills due to increased automation highlights the need for interdisciplinary teams and critical engagement with technology.
Panelists stressed the importance of educating communities, especially in the Pacific, through practical demonstrations and direct engagements to effectively convey AI’s capabilities and risks.
From a national security perspective, panelists advocated for regional cooperation and the development of local expertise to combat AI-enabled scams and other security threats. They emphasised that AI governance must respect human rights and cultural values.
“While the Pacific Islands can benefit from global models and frameworks, these must be tailored to their unique needs and contexts. There is immense potential for AI to transform societies, but this also brings significant challenges that require careful management.
“Effective AI governance must balance innovation with regulation and respect for human rights and cultural values.
“While there’s no international obligation dictating how countries should manage or use AI, this space requires regional cooperation to ensure that standards and principles are interoperational.”
Professor Paterson concluded by reiterating the importance of contextualising international frameworks like those of the OECD to local needs: “The OECD provides a starting point, but countries need to refine and operationalise these principles according to their specific values and circumstances.”