By Col Binoj Koshy (on LinkedIn).
Large Language Models (LLMs) are revolutionizing the world. These powerful AI systems can generate human-quality text, translate languages, write different kinds of creative content, and answer your questions in an informative way. But with great power comes great responsibility.
To ensure LLMs are used for good and not with harmful intent, we need to “tame” them responsibly and ethically. The roles of the developer, the prompt engineer, the sponsors, the auditors and the leadership are vital in every aspect of rolling AI out into society. The recent incident in Las Vegas, Nevada, USA has highlighted the importance of responsible and ethical AI deployment and usage. The perpetrator of the Tesla Cybertruck blast outside the Trump International Hotel on 01 Jan 2025 was found to have put a series of disturbing queries to ChatGPT. Experts are divided on whether ChatGPT contributed to the plan to detonate the Cybertruck outside the hotel. Concern has grown because ChatGPT gave in-depth answers to 21 of the 22 queries the perpetrator asked. Concerns regarding the safe use of artificial intelligence have also been voiced.
The incident cited above involves a US Army Master Sergeant, Matthew Livelsberger, who used ChatGPT to gather information about explosives and then, most likely, used that information to prepare the detonation at the site. Experts such as Wendell Wallach, bioethicist and author, referring to the incident, emphasized the need for greater accountability when AI is used to execute a crime. He further expressed concern, highlighting the challenge of assigning responsibility when actions are “filtered through a computational system.”
A key concern is that the rapid pace of AI development is outstripping the creation of adequate safeguards. While AI models like ChatGPT are not capable of understanding the danger of the questions they are asked, their ability to provide detailed responses on potentially harmful topics necessitates stronger safety protocols and ethical guidelines.
This case underscores the urgent need for collaboration between AI developers, policymakers, and ethicists to ensure responsible innovation and prevent the exploitation of AI for harmful purposes. As Wallach cautions, “The corporations sit there with the mantra that the good will far outweigh the bad. But that’s not exactly clear.”
The principle of ‘garbage in, garbage out’ also applies to the LLM models that form part of AI. The LLM by itself does not understand ‘non-responsible’ training and assumes that everything supplied to it is ethical and clean. To ensure LLMs are used for good, not evil, we need to “tame” the LLM ethically and responsibly:
- Dangerous Learning: While LLMs hold immense promise, their learning process can be a double-edged sword. These models learn by devouring massive datasets of text and code, indiscriminately absorbing both the good and the bad. This “dangerous learning” can lead to several ethical pitfalls (a data-curation sketch follows these points):
  - LLMs can inadvertently internalize biases present in the data, perpetuating harmful stereotypes and discriminatory views.
  - LLMs can be exploited to generate harmful content, such as hate speech, misinformation, and even instructions for malicious activities.
  - The very nature of their learning process raises concerns, as LLMs may unintentionally memorize and expose dangerous information from their training data.
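A practical counterpart to this ‘dangerous learning’ is curating the training corpus before the model ever sees it. The sketch below is a minimal, hypothetical illustration of keyword- and pattern-based filtering of training documents; the blocklist is an assumption for demonstration only, and real pipelines rely on trained safety classifiers and human review rather than a handful of regular expressions.

```python
import re

# Hypothetical, illustrative blocklist; production pipelines use trained
# safety classifiers plus human review, not a handful of regexes.
UNSAFE_PATTERNS = [
    re.compile(r"\bhow to (build|make) (a |an )?(bomb|explosive)\b", re.IGNORECASE),
    re.compile(r"\bdetonat(e|ion|or)\b", re.IGNORECASE),
]

def is_unsafe(document: str) -> bool:
    """Flag a training document if it matches any unsafe pattern."""
    return any(pattern.search(document) for pattern in UNSAFE_PATTERNS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Drop flagged documents before they enter the training set."""
    return [doc for doc in documents if not is_unsafe(doc)]

if __name__ == "__main__":
    corpus = [
        "A history of fireworks festivals around the world.",
        "Step-by-step: how to build a bomb at home.",  # should be filtered out
    ]
    print(filter_corpus(corpus))  # keeps only the first document
```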
- The Bias Factor: LLMs learn from massive datasets of text and code, and unfortunately, these datasets can reflect societal biases. This can lead to LLMs generating outputs that are sexist, racist, or otherwise discriminatory. To combat this (a bias-probe sketch follows these points), we need to:
  - Feed LLMs a balanced diet of data that represents all walks of life. This means actively including diverse voices, cultures, and perspectives.
  - Develop sophisticated tools to identify and neutralize biases in both the training data and the LLM outputs.
  - Continuously monitor LLMs for bias and make necessary adjustments. Encourage users to flag problematic outputs, creating a feedback loop for improvement.
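As a small illustration of the monitoring point above, one simple technique is to probe the model with counterfactual prompt pairs that differ only in a demographic detail and compare the answers side by side. The sketch below is hypothetical: generate() is a stand-in for whatever model client is in use, and the prompt pairs are assumptions chosen only to show the shape of such a check.

```python
# Hypothetical bias probe: send prompt pairs that differ only in a
# demographic detail and collect the paired outputs for comparison.
PROMPT_PAIRS = [
    ("Describe a typical male engineer.", "Describe a typical female engineer."),
    ("Write a reference letter for John.", "Write a reference letter for Priya."),
]

def generate(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a request to an LLM API)."""
    raise NotImplementedError("plug in your model client here")

def probe_for_bias(pairs=PROMPT_PAIRS) -> list[dict]:
    """Collect paired outputs so a reviewer or classifier can compare them."""
    reports = []
    for prompt_a, prompt_b in pairs:
        reports.append({
            "prompt_a": prompt_a,
            "prompt_b": prompt_b,
            "output_a": generate(prompt_a),
            "output_b": generate(prompt_b),
        })
    return reports
```

Pairs whose outputs diverge sharply can then be routed into the user-feedback loop described above.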
- Taming with Responsibility and Ethics: LLMs can seem like mysterious black boxes, making it difficult to understand how they arrive at their conclusions. To build trust and ensure responsible use, we need transparency (a minimal model-card sketch follows these points):
  - Provide clear documentation about the LLM’s training data, architecture, and limitations.
  - Encourage LLMs to explain their reasoning, shedding light on their decision-making processes.
  - Allow for wider scrutiny and community-driven auditing.
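Transparency documentation of this kind is often captured in a ‘model card’. The sketch below shows one minimal, hypothetical way to structure such a card; the field names are assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model card for an LLM release."""
    model_name: str
    version: str
    training_data_summary: str            # what the model was trained on
    intended_use: str                     # what the model is meant for
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    audit_contact: str = ""               # whom external auditors can reach

card = ModelCard(
    model_name="example-llm",
    version="1.0",
    training_data_summary="Public web text and licensed corpora (illustrative).",
    intended_use="General-purpose text assistance.",
    known_limitations=["May produce confident but incorrect answers."],
    known_biases=["Under-represents low-resource languages."],
    audit_contact="audits@example.com",
)
print(card)
```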
- The Real-World Interpretation: Deploying LLMs ethically requires careful consideration of their potential impact (a human-in-the-loop sketch follows these points):
  - Establish clear ethical guidelines for LLM development and use, addressing issues like privacy, misinformation, and potential misuse.
  - Incorporate human oversight, especially in sensitive areas like healthcare and law enforcement, to ensure human judgment remains central.
  - Educate users about the capabilities and limitations of LLMs, fostering responsible use and critical evaluation of outputs.
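One common pattern for the human-oversight point is a gate that holds model output for review whenever a request touches a sensitive domain. The sketch below is a hypothetical illustration: the keyword list and in-memory queue are assumptions, and a real deployment would use a trained classifier and a proper review workflow.

```python
# Hypothetical human-in-the-loop gate: outputs touching sensitive domains
# are queued for a human reviewer instead of being returned directly.
SENSITIVE_KEYWORDS = {"diagnosis", "prescription", "arrest", "sentencing", "explosive"}

review_queue: list[dict] = []

def requires_human_review(prompt: str) -> bool:
    """Crude keyword check standing in for a trained sensitivity classifier."""
    return any(word in prompt.lower() for word in SENSITIVE_KEYWORDS)

def answer(prompt: str, model_response: str) -> str:
    """Return the model response directly, or hold it for human sign-off."""
    if requires_human_review(prompt):
        review_queue.append({"prompt": prompt, "draft": model_response})
        return "This request has been routed to a human reviewer."
    return model_response

print(answer("Suggest a prescription for chest pain", "Take drug X."))   # queued
print(answer("Summarise this meeting note", "Here is the summary..."))   # returned
```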
- Tackling Specific Ethical Challenges (a privacy-redaction sketch follows these points):
  - Implement privacy-preserving techniques to safeguard sensitive information in training data and user interactions.
  - Develop mechanisms to detect and mitigate the spread of misinformation generated by LLMs.
  - Anticipate and address potential job displacement due to LLM automation, providing support and retraining opportunities for affected workers.
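Among the privacy-preserving techniques mentioned above, one of the simplest is redacting personally identifiable information (PII) before user text is logged, stored, or reused for training. The sketch below is a minimal, regex-based illustration; the two patterns cover only e-mail addresses and phone-like numbers and are assumptions, not a complete PII taxonomy.

```python
import re

# Minimal, illustrative PII patterns; real redaction relies on dedicated
# libraries and named-entity recognition, not two regular expressions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact_pii(sample))
# -> Contact Jane at [EMAIL] or [PHONE].
```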
Taming LLMs is an ongoing process that demands a collaborative effort. By prioritizing ethical considerations, we can harness the immense power of LLMs for the benefit of humanity, ensuring a future where AI is a force for good.