Ethical considerations regarding the use of Generative AI
Generative artificial intelligence (Generative AI) has become a transformative technology with a seemingly limitless range of applications and the promise of widespread societal benefits. However, behind this promise lie numerous ethical and business challenges that require careful consideration.
As the use of Generative AI grows and more companies incorporate this technology into their products and services, its impact on society becomes increasingly evident. For this reason, there is an urgent need to address the ethical dimensions of the widespread use of these tools to ensure that their development and deployment align with society’s core values.
What is Generative AI?
Generative AI refers to a family of deep learning models with an uncanny ability to create content, such as text, images or other types of data, that closely resembles the information on which they were trained. By analysing patterns within the training data, these algorithms gain the ability to generate new, original results. For example, a Generative AI system could be asked to write a short story in the style of a specific author. If the system has been given samples of that author’s work, it can create the text the user has requested.
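To make this concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The model choice (GPT-2) and the prompt are our own illustrative assumptions, not something this article prescribes; a small base model like GPT-2 will continue the prompt statistically rather than follow instructions the way a chat-tuned model would.

```python
# Minimal text-generation sketch with the Hugging Face "transformers" library.
# GPT-2 is used only because it is small and freely available; it continues
# the prompt statistically rather than obeying it like a chat model would.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short story in the style of Edgar Allan Poe: The old house"
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```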
Generative AI models are remarkably versatile and varied, ranging from those designed to generate text or audio to those that analyse datasets (medical histories, court rulings, statistical series, etc.) and produce reports from them. Each of these approaches has led to the development of different products and services that not only advance the understanding of AI but also offer significant advantages over products and services that do not incorporate Generative AI.
Diffusion models excel at creating visual and multimedia content and at tasks such as filling in and extending images, especially when provided with a textual prompt detailing the desired outcome. Prominent image-generation models in this space include DALL-E 2, Image GPT, Midjourney and Stable Diffusion.
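As an illustration, the following sketch generates an image from a textual prompt with the open-source diffusers library and a publicly released Stable Diffusion checkpoint; the specific checkpoint, prompt and hardware settings are our own assumptions.

```python
# Text-to-image sketch with the open-source "diffusers" library and a public
# Stable Diffusion checkpoint; a GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The textual prompt details the desired outcome, as described above.
prompt = "a watercolour painting of a lighthouse at dawn"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```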
Generative AI use cases
Potential applications of Generative AI span multiple sectors, including financial services, education and healthcare. In banking, Generative AI can help detect fraudulent transactions, generate synthetic data to train machine learning models, protect customer data, and use GANs (Generative Adversarial Networks) to estimate value-at-risk and predict potential losses in specific scenarios, among other uses; a minimal sketch of the GAN setup follows below.
In education, Generative AI can revolutionise personalised course design, enhance student learning through virtual simulations and restore historical learning materials. In healthcare, applications include drug discovery and development, personalised treatments, medical imaging and population health management, among other transformative uses.
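To make the GAN idea mentioned above concrete, here is a deliberately minimal PyTorch sketch of the adversarial setup: a generator learns to produce synthetic records while a discriminator learns to tell them from real ones. All sizes, the stand-in data and the training schedule are illustrative placeholders, not a production synthetic-data or risk-modelling system.

```python
# Deliberately minimal GAN sketch in PyTorch: a generator learns to produce
# synthetic records while a discriminator learns to tell them from real ones.
# Sizes, data and training schedule are illustrative placeholders only.
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM, BATCH = 8, 16, 32  # illustrative sizes

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES)
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(BATCH, N_FEATURES)  # stand-in for real, scaled records

for step in range(200):
    # 1) Train the discriminator to separate real records from fakes.
    fake_batch = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_batch = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = generator(torch.randn(100, NOISE_DIM))  # 100 synthetic records
```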
Ethical dilemmas in the use of Generative AI
Despite its seemingly limitless potential, Generative AI is not without its drawbacks: the use of this technology raises several ethical dilemmas, including the perpetuation of pre-existing biases and intellectual property and copyright concerns. There is also the question of liability arising from the technology’s potential to generate fake news or impersonate individuals, as happened recently when lawyers in the US filed a court brief against the Colombian airline Avianca citing fake cases “invented” by ChatGPT. These critical issues require careful consideration and thoughtful solutions.
As the use of Generative AI grows, so do concerns about its potential misuse, which has led to the creation of initiatives such as specific regulatory and legislative frameworks to encourage the responsible and sustainable development of Generative AI.
The European Commission has been one of the first bodies to act, publishing its “Ethics Guidelines for Trustworthy AI” in 2019. These guidelines highlight the importance of AI systems complying with laws, adhering to ethical principles and being technically and socially robust. Seven fundamental requirements are identified:
Ethical requirements for trustworthy AI
For Generative AI systems to be trustworthy, fair and free from bias, they must meet the following core requirements:
- Human Agency and Oversight:
There is general agreement that any AI system must empower humans, protect their rights and allow for human oversight. However, Generative AI poses unique challenges in this regard, such as the creation of content bubbles that narrow the range of views in the information reaching users. Moreover, biases embedded in Generative AI could facilitate the malicious manipulation of information to influence public opinion. When applied to social networks, the capabilities of Generative AI can be used to mislead users, leading to distorted perceptions of reality and increased social pressure. Finding the right balance between the immense potential of Generative AI and the need for human agency and oversight requires continuous monitoring, research, and the development of tools and policies that promote user autonomy and mitigate potential negative impacts.
- Technical Robustness and Safety:
The increasing sophistication of Generative AI raises questions about its security and reliability. One of the most widespread concerns is that Generative AI contributes to the spread of misinformation and fake news, influencing public opinion. In particular, the creation of highly convincing fake images of high-profile figures, known as deepfakes, is one of the biggest concerns around Generative AI, as they can be used for political propaganda or to discredit individuals or organisations. Continued research into the vulnerabilities of AI systems and the development of robust countermeasures will help reduce the potential damage caused by fake news and deepfakes. Close collaboration between governments, technology companies and cybersecurity experts will be crucial to properly address the security and technical robustness issues of Generative AI.
- Privacy and Data Governance:
Generative AI relies on large datasets, including personal and copyrighted information, usually collected from the internet. Training AI on copyrighted data without permission can lead to copyright infringement and raises the possibility that AI-generated content lacks originality and closely resembles existing works. The ethical development of Generative AI must involve clear data governance practices, including strict policies for data collection, storage and use. In addition, resolving ambiguities about the copyright status of AI-generated content will be essential to foster a fair and legally compliant environment.
- Transparency:
Uncertainty can sometimes arise about how AI systems actually work. The so-called black box of AI must be opened, with clearer definitions of what an AI system is designed to do. Here, explainability is vital: AI systems must be able to show how they arrived at the outputs they produce. Transparency includes visualising the internal processes of the AI model, analysing its learned representations and testing its results against real-world data.
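As one small, concrete window into a model’s internals, the sketch below extracts per-layer attention weights from a transformer using the Hugging Face transformers library. This is only one of many explainability techniques, and attention weights are at best a partial (and debated) explanation; the model choice (BERT) is our own illustrative assumption.

```python
# One small window into a model's internals: extracting per-layer attention
# weights with the Hugging Face "transformers" library. BERT is used only
# because it is small and public; attention is a partial explanation at best.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Generative AI should be transparent.", return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# One attention tensor per layer, shaped (batch, heads, tokens, tokens).
for layer, attention in enumerate(outputs.attentions):
    print(f"layer {layer}: attention shape {tuple(attention.shape)}")
```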
- Diversity, Non-Discrimination, and Fairness:
A particularly controversial issue that has arisen with the growth in the use of Generative AI centres on the quality and diversity of training data. There have been cases of Generative AI models trained on personal data or images that reinforce gender or racial stereotypes or underrepresent certain groups; routinely auditing how groups are represented in the training data, as sketched below, is one basic safeguard. Addressing these biases requires careful design, ongoing evaluation and the responsible selection of training data. Inevitably, Generative AI can also be used by malicious actors to generate offensive content, including discriminatory or violent images and text, propaganda, or even fake pornography. Ensuring diversity, non-discrimination and fairness in Generative AI applications is not only an ethical imperative, but also crucial to building inclusive and equitable AI systems.
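The representation audit referred to above could start as simply as counting how often each group appears in the training metadata. The records and the 30% threshold below are invented for illustration; real bias evaluation is far more involved than counting.

```python
# Toy audit of group representation in training metadata. The records and
# the 30% threshold are invented for illustration; real bias evaluation is
# far more involved than counting.
from collections import Counter

records = [  # hypothetical per-example metadata
    {"id": 1, "group": "A"}, {"id": 2, "group": "A"},
    {"id": 3, "group": "A"}, {"id": 4, "group": "B"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <-- possibly underrepresented" if share < 0.30 else ""
    print(f"group {group}: {n}/{total} ({share:.0%}){flag}")
```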
- Societal and Environmental Well-being:
Given the excitement and often awe created by Generative AI, it is easy to overlook some of the less desirable consequences of the rapid adoption of this technology for the environment and society. Generative AI systems require significant computational resources: it is estimated that a ChatGPT query uses 3 to 30 times more electricity than a traditional Google search. The expected growth of Generative AI will therefore need to be accompanied by a shift towards more sustainable energy sources to power the data centres that host these applications if we are not to exacerbate the climate crisis.

Another challenge, one that potentially affects all workers, is how Generative AI will change the nature of work. For several years, economists have debated the impact that robots and AI will have on jobs, particularly those involving low-skilled, repetitive tasks that are easier to automate. Now, with the growth of this technology, a much wider range of jobs is potentially at risk, including administrative workers, content creators, programmers, and customer service and sales representatives, among others. Addressing these ethical implications requires a multi-faceted approach, involving efforts to reduce the energy footprint of AI systems and initiatives to reskill and retrain the workforce for the evolving employment landscape.
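To give a feel for the electricity figures cited above, here is a back-of-envelope calculation. The 0.3 Wh baseline per traditional search (a figure Google published in 2009) and the query volume are our own assumptions, not figures from this article.

```python
# Back-of-envelope illustration of the "3 to 30 times" electricity claim.
# The 0.3 Wh baseline per traditional search (a figure Google published in
# 2009) and the query volume are assumptions made for this sketch.
SEARCH_WH = 0.3                  # assumed energy per traditional web search (Wh)
LOW_FACTOR, HIGH_FACTOR = 3, 30  # multiplier range cited above
QUERIES_PER_DAY = 10_000_000     # hypothetical daily query volume

low_kwh = QUERIES_PER_DAY * SEARCH_WH * LOW_FACTOR / 1000
high_kwh = QUERIES_PER_DAY * SEARCH_WH * HIGH_FACTOR / 1000
print(f"Estimated use at this volume: {low_kwh:,.0f}-{high_kwh:,.0f} kWh per day")
```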
- Accountability:
As the adoption of Generative AI grows, there is a clear and urgent need for better and more precise regulation to address the problem of liability. The stochastic algorithms that drive Generative AI systems sometimes produce “hallucinations”: outputs that are meaningless or demonstrably false. A recent case involved a team of lawyers in the US who did not realise that ChatGPT had invented the quotes and references they used in a court filing. Integrating human oversight into Generative AI systems will be essential not only to detect hallucinations, but also to ensure ethical decision-making, reduce potential bias and challenge actions that appear nonsensical.
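A human-in-the-loop guard for the court-filing scenario might look like the hypothetical sketch below: every model-generated citation is checked against a trusted source, and anything unverified is routed to a human reviewer. All names and data here are invented for illustration.

```python
# Hypothetical human-in-the-loop guard for the court-filing scenario above.
# verify_citation and TRUSTED_CASES are invented names standing in for a
# lookup against an authoritative legal database.
TRUSTED_CASES = {"Brown v. Board of Education, 347 U.S. 483 (1954)"}

def verify_citation(citation: str) -> bool:
    """Stand-in for a query to an authoritative legal database."""
    return citation in TRUSTED_CASES

def citations_needing_review(citations: list[str]) -> list[str]:
    """Return every citation a human must check before the draft is filed."""
    return [c for c in citations if not verify_citation(c)]

generated = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Smith v. Acme Airlines, 123 F.3d 456 (9th Cir. 1997)",  # plausible-looking fabrication
]
for citation in citations_needing_review(generated):
    print("NEEDS HUMAN REVIEW:", citation)
```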
The proposed European Artificial Intelligence Act
In the emerging field of artificial intelligence governance, the European Union has taken a significant step by proposing the Artificial Intelligence Act (also known as the AI Act) in 2021 – a comprehensive legislative framework designed to regulate AI systems. The Act classifies AI systems into four categories according to their risks:
- Unacceptable risk
- High risk
- Limited risk
- Low (or minimal) risk
This framework aims to strike a balance between encouraging innovation and safeguarding fundamental rights, health, safety, security, the environment, democracy, and the rule of law.
The AI Act raises important questions about how we should regulate Generative AI – a rapidly developing field that encompasses systems such as ChatGPT and large language models (LLMs).
The proposed legislation draws heavily on existing European Union (EU) laws, such as the General Data Protection Regulation (GDPR) and the Digital Services Act, and requires the LLMs behind Generative AI to have sufficient safeguards against producing content that violates these laws.
It recognises that so-called foundation models, such as OpenAI’s GPT-4, which powers ChatGPT, require special attention because they are capable of a wide range of general tasks, so any errors or biases in the underlying model can potentially affect the many applications built on top of it.
Accordingly, providers of Generative AI systems will be subject to additional transparency requirements, including:
- Disclosure of Generative AI:
Providers must inform users that the content they encounter is AI-generated. Clear communication is crucial, especially when AI interacts with real humans (a sketch of one possible labelling mechanism follows after this list).
- Responsible Design:
Providers have a responsibility to design and train their models with safeguards to prevent the generation of illegal or harmful content. This extends to respecting fundamental rights, including freedom of expression.
- Data Transparency:
Providers must publish summaries of their use of training data that may be protected by copyright.
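As a hypothetical illustration of the disclosure requirement, a provider might attach provenance metadata to every generated item so that downstream services can label it as AI-generated. The field names below are our own invention, not drawn from the AI Act or any standard.

```python
# Hypothetical provenance record a provider might attach to generated content
# so downstream services can label it as AI-generated. The field names are
# our own invention, not drawn from the AI Act or any standard.
import json
from datetime import datetime, timezone

def with_disclosure(text: str, model_id: str) -> str:
    record = {
        "content": text,
        "ai_generated": True,  # the disclosure itself
        "model": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(with_disclosure("Draft reply to the customer...", "example-llm-1"))
```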
It is important to clarify that these requirements do not classify foundation models as high-risk AI systems. Instead, they seek to align Generative AI with the overall objectives of the AI Act: the protection of fundamental human rights, health, safety, security, the environment, democracy and the rule of law. It is also important to note that the proposed law is still under evaluation and is expected to be adopted in early 2024.
The regulation of Generative AI through the future AI Act marks a crucial moment on the path towards responsible AI implementation. While it sets a promising precedent, much work remains to be done. Creating a clear and adaptable framework that accommodates the versatile nature of Generative AI is crucial. Moreover, this conversation extends beyond Europe’s borders, as the world faces the complex challenge of regulating Generative AI while fostering innovation and protecting society’s interests.
Other strategies to regulate Generative AI
Outside the EU, other regions and countries are developing legislation to regulate the use of AI. However, creating new laws is often a slow process, and legislators may struggle to fully understand the potential of this technology and keep up with its advances. For this reason, many other strategies have been proposed to address the ethical, legal and social challenges posed by Generative AI. Here are some of the key strategies and proposals:
- Voluntary Guidelines and Best Practices:
Technology companies and AI developers are drawing up voluntary guidelines and best practices to govern the development and deployment of Generative AI. While industry self-regulation can be quick and flexible, there are concerns about its effectiveness and potential for bias.
- AI Ethics Frameworks:
Some organisations and institutions promote AI ethics frameworks. For example, the IEEE (Institute of Electrical and Electronics Engineers) has developed a Global Initiative on Ethics of Autonomous and Intelligent Systems, which includes guidelines for ethical AI.
- AI Impact Assessments:
AI impact assessments seek to evaluate the potential social, economic and ethical consequences of deploying Generative AI systems, helping policymakers make informed decisions.
- Robust Data Governance:
Some regulations focus on robust data governance, ensuring that data used in AI training is representative, diverse, and ethically sourced. Stricter data privacy laws, such as the GDPR in Europe, already play a role here.
- Public Participation:
Some initiatives advocate for public participation in the decision-making process on the regulation of Generative AI. Public consultation and input can help ensure that AI systems are aligned with society’s values and needs.
- Independent AI Ethics Organisations:
Establishing independent AI ethics committees or oversight bodies could help ensure more objective and expert control over the regulation of Generative AI.
These strategies and proposals can act as a complement to formal legislation and can help shape an ethical and legal environment for Generative AI developments beyond EU borders.
To find out more about ethical considerations regarding the use of AI, we invite you to read NTT DATA’s white paper here.