Fuelled by the advent of ChatGPT in the latter part of 2022, the intrigue and potential of artificial intelligence (AI) have taken the global stage by storm, showcasing the myriad of advantages it could usher into society. Nevertheless, for AI's possibilities to be fully harnessed, it necessitates development that is both safe and responsible, particularly in an era marked by rapid advancements and the yet-to-be-fully-understood potential risks associated with it.
Like any nascent technology, apprehensions regarding its implications on security are inevitable. This set of guidelines is crafted to assist managers, board members, and senior executives, particularly those without a technical background, in grasping both the risks and rewards associated with the deployment of AI tools.
Managers are not expected to possess deep technical expertise but should have a sufficient understanding of AI's potential risks to engage in meaningful conversations with their technical teams.
Artificial intelligence can be defined as any computer system capable of executing tasks that typically require human intelligence, such as visual perception, generating text, recognising speech, or translating languages.
A significant breakthrough in AI has been observed in the domain of generative AI, which includes tools capable of creating diverse forms of content like text, images, and videos, as well as multimodal content that combines more than one type. While most generative AI tools are tailored for specific tasks, for instance, ChatGPT enables users to interact as if they are conversing with a chatbot, and DALL-E can generate digital images from textual descriptions.
Future iterations of these models are anticipated to generate content applicable to a broader array of contexts, with both OpenAI and Google reporting successes with their GPT-4 and Gemini models across various benchmarks. Despite these advancements, the concept of artificial general intelligence—a futuristic scenario where an autonomous system surpasses human intellect—remains a divisive subject with no unanimous agreement on its feasibility.
AI primarily operates on machine learning (ML) principles, which involve computer systems identifying patterns in data or solving problems autonomously without explicit human programming. This approach allows a system to 'learn' from data with minimal human intervention.
For instance, large language models (LLMs) are a subset of generative AI that can produce text mimicking human-created content. These models are 'trained' on vast amounts of text data from the internet, including websites and open-source materials like scientific publications and social media content. The sheer volume of data used in training means not all content can be meticulously filtered, potentially including controversial or incorrect information in the model.
The launch of ChatGPT has spurred the integration of AI across various products and services, igniting interest in AI's potential applications across a broad spectrum of users.
We at Security in Depth champion the maximisation of AI's potential benefits. However, it emphasises the importance of secure and responsible development, deployment, and operation of AI technologies, considering cybersecurity as a fundamental prerequisite for the safety, resilience, privacy, fairness, effectiveness, and reliability of AI systems.
AI systems, especially in their rapid development phase, are vulnerable to unique security threats that complement conventional cybersecurity challenges. Ensuring security is integral, not only during the developmental phase but throughout the lifecycle of an AI system.
It's imperative for those in charge of AI system design and usage, including senior managers, to stay informed about the latest developments. To this end, the Security in Depth Research team has issued AI guidelines to aid data scientists, developers, decision-makers, and risk owners in creating AI products that are secure, reliable, and respect user privacy.
Addressing the cybersecurity risks in using AI, particularly generative AI and LLMs, is crucial as their effectiveness is contingent on the quality of training data. These technologies have their flaws, such as susceptibility to 'AI hallucination,' bias, gullibility, capability to generate harmful content, and vulnerability to 'prompt injection attacks' and 'data poisoning.'
Leaders play a crucial role in ensuring AI's secure development by fostering a 'secure by design' culture, where security is a core aspect of all AI projects from the outset. Understanding the potential organisational impacts of compromised AI system integrity, availability, or confidentiality is crucial. This understanding extends to data security concerns specific to AI, ensuring legal compliance and adherence to best practices in data management.
Ultimately, the responsibility for utilising AI safely should not rest solely on the end-users; developers and system designers should take proactive steps to secure their AI products. The NCSC, along with international partners, has developed guidelines for secure AI system development, providing a framework for organisations to ensure the secure integration of AI technologies into their operations. These guidelines, coupled with the Security in Depth's Principles for the security of machine learning, offer structured advice to organisations on navigating the risks associated with ML deployment and use, highlighting key considerations for senior decision-makers and executives.
Copyright © 2024 Security in Depth - All Rights Reserved.
This website uses cookies. By continuing to use this site, you accept our use of cookies.