I'm sure we all know about ChatGPT by now. In recent news, Vanderbilt University came under fire for using ChatGPT to respond to a traumatic incident. ChatGPT has taken over the world, but what are its implications?
ChatGPT is a large language model created by OpenAI that uses deep learning to generate human-like responses to natural language queries. That ability to understand and produce human-like text has made it an indispensable tool for businesses, individuals, and organizations looking to improve communication and streamline workflows.
Its capabilities rest on its vast training data, which includes large amounts of text from varied sources, allowing it to understand and respond across a wide range of topics and contexts. Many have seen this innovation as a shortcut through assignments and tasks, one that seems extremely helpful. But can it backfire?
Chatbots are computer programs designed to simulate human conversations, and ChatGPT is one of the most powerful tools for creating them. With its advanced natural language processing (NLP) capabilities, ChatGPT can understand and respond to a wide variety of customer inquiries and provide personalized solutions. ChatGPT's abilities are dependent on the data it has been trained on.
If the data is biased, incomplete, or inaccurate, ChatGPT's responses may also be biased, incomplete, or inaccurate. While ChatGPT has been trained on vast amounts of text data, it may still have limitations in its understanding of certain topics or contexts.
The MSU shooting was a devastating incident. On February 13, 2023, a mass shooting occurred in two buildings on the campus of Michigan State University in East Lansing, leaving three students dead and five others injured. In response to the horrific event, Vanderbilt, a Tennessee-based university, sent an email to its school community on February 17, 2023.
According to the Vanderbilt Hustler, a student newspaper, the email read: "The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments." An alarming line appeared at the bottom of the email, in smaller print: "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023." The next day, in response to complaints from students about the use of AI to draft a letter amid a tragic event, the assistant dean of Peabody sent an apologetic follow-up email. Speaking to the Vanderbilt Hustler, Nicole Joseph, one of the original letter's three signatories, described the use of ChatGPT as "bad judgement."
Although only one shooting had occurred, Vanderbilt's letter referred to "recent Michigan shootings." Camilla Benbow, dean of Peabody College, expressed her sadness over the injuries and deaths at Michigan State, which she knew had a tremendous impact on the community: "I am also deeply troubled that communication from my administration so missed the crucial need for personal connection and empathy during a time of tragedy." Students told Rachael Perrotta, editor-in-chief of the Vanderbilt student newspaper, that they were "outraged about this situation and confused as to what prompted administrators to turn to ChatGPT to write their message about the Michigan State shooting."
It is concerning that AI has taken over to the point where we no longer take the time to write a heartfelt response to death and tragedy, but instead hand the task to a computer programme. ChatGPT is useful, no doubt; AI is the product of technological development and innovation, but let's not take it for granted. With its ability to process large amounts of data quickly, AI has revolutionized industries such as healthcare, finance, and transportation. However, despite its many benefits, AI is not immune to controversy.
When misused, AI can lead to biased outcomes, unintended harm, and ethical concerns. It can strip away the personalisation and human touch that text written by people possesses, as in the Vanderbilt case. ChatGPT is a computer programme: it does not possess emotion and cannot convey it in a genuine way.
OpenAI has acknowledged as much, explaining on its website that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." It is important that we carefully consider the potential risks and benefits of AI and take steps to ensure that it is used ethically and responsibly. This includes developing standards for AI transparency and accountability, investing in research to identify and address potential biases, and engaging in public dialogue about the ethical implications of AI. By doing so, we can ensure that AI is used in a way that benefits society as a whole.