by Burak Senel
If you have Internet access, I’m willing to bet this is not the first time you’ve heard about ChatGPT. It’s all over Twitter and LinkedIn, and, as students, educators, and researchers increasingly adopt the tool, it’s in classrooms and scientific journals.
A large language model trained to chat in a way humans are used to, ChatGPT offers immense potential for students, educators, and researchers. All three groups need to learn how to make the most of this tool, respecting the foundational principles we have collectively crafted in both education and research, while shedding practices that may no longer serve us in this new era of knowledge transfer and dissemination.
To that end, let’s first learn about the candle to understand the light bulb.
"Technology is a useful servant but a dangerous master." —Christian Lous Lange, Norwegian historian and political scientist
Humans have been obsessed with machines doing things for us since at least as far back as Ancient Greece. The main motivation behind this obsession has been our desire to expend as little effort as possible for as much work as possible. Starting with the domestication of plants and animals, we have exploited the natural world: horses for transportation and trees for heating are two examples. With the harnessing of other phenomena such as magnetism and electricity, we gained the ability to exploit the artificial, that is, electronic devices such as the computer.
Machines either amplified what we already could do (microscopes and telescopes helped us see the tiny or the far away) or gave us the ability to do what we couldn’t (planes helped us fly). We also wanted machines to do what we could do, and maybe more. As far back as the 18th century, people tried to create devices that could produce human sounds, and later, with the invention of computers, tried to emulate human dialogue. One example is ELIZA, a bot that manipulated the language a human typed in, twisting the statement into a question, swapping words such as “I” with “you,” and throwing it back at the human, which gave it the illusion of understanding human speech.
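ELIZA’s trick can be sketched in a few lines of Python. This is a toy illustration, not Weizenbaum’s actual program; the word list and the question phrasing here are invented for the example:

```python
# A minimal ELIZA-style sketch (illustrative only, not the original program):
# swap first- and second-person words, then reflect the statement
# back at the human as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are",
}

def reflect(statement: str) -> str:
    # Lowercase, drop trailing punctuation, and swap pronouns word by word.
    words = statement.lower().strip(".!?").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am sad about my job."))
# → Why do you say you are sad about your job?
```

A handful of substitution rules like these, applied with no model of meaning at all, is enough to produce the illusion of a listener.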
However, until maybe a couple of years ago, most of us didn’t think computers could achieve cognitive tasks, let alone use natural language in a way almost indistinguishable from the way we use it. Computers, after all, were very good at executing tasks that could be expressed in formal language, a set of unambiguous instructions, whereas the language we use is natural language, polysemous and often ambiguous. The more nuanced and semantically flexible nature of natural languages meant that machines couldn’t process them. That is, until natural language processing matured enough.
Attempts to make computers intelligent date back to the 1950s, with neural networks, each neuron a mathematical function that can talk to other neurons; but we didn’t have the computational power or data necessary to make full use of this new approach, so it didn’t gain traction. Not until computers became more powerful and data abundant (particularly in the 2000s) did people start to realize that neural networks could achieve complex tasks. Having computers analyze data, find patterns, and generate data similar to the data they had seen was finally possible. Since language is full of such patterns (once impossible to capture fully with formal, rule-based systems), new machine learning methods such as the transformer model, coupled with super-powerful computers, led to language models that could produce and understand natural language.
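That idea of a neuron as a mathematical function that talks to other neurons can be shown concretely. This is a simplified sketch; the weights, bias, and sigmoid activation are illustrative choices, not any particular historical network:

```python
import math

# One artificial "neuron": a weighted sum of its inputs,
# passed through a nonlinearity (here, the sigmoid function).
# Its output can in turn become an input to other neurons.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # squashes the result into (0, 1)

# Example: two inputs, illustrative weights and bias.
activation = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(activation)
```

Modern language models chain billions of such functions together; the leap from this sketch to ChatGPT is one of scale and architecture, not of kind.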
As big organizations such as Google and Stanford University built artificially intelligent systems for their own purposes, a company was born that wanted to open up such systems to “benefit all of humanity.” That company was, and still is, OpenAI. In 2018, OpenAI published the paper introducing the Generative Pre-trained Transformer (GPT), a large language model that could generate text and was pre-trained on a very large corpus, built on the transformer model Google scientists describe in their revolutionary paper “Attention Is All You Need.” This pre-trained model was then trained further to chat like a human, through rewards and penalties in a fashion that many social scientists would find familiar. And just like that, ChatGPT entered the chat.
And soon ChatGPT exploded in popularity. This popularity was partly thanks to its omnipresence enabled by the Internet and an intuitive interface that looked like any instant messaging app. For me, the biggest factor in ChatGPT’s popularity was that something about this tool spoke to our desire to create a human-like machine.
As ChatGPT gained traction in various domains, its applications in research and education became increasingly evident. Educators, researchers, and students have started to explore its potential for transforming traditional practices. Educators used it to enhance their teaching. Researchers put a group of ChatGPT instances with different personalities together and watched the way they interacted. One of my students used it to create a YouTube video about ChatGPT helping a student write the perfect essay, and won an award for it.
One of the most promising applications of ChatGPT in research is its ability to simulate complex social phenomena, allowing researchers to study human behavior in controlled settings. By generating realistic scenarios, ChatGPT enables the exploration of social learning, observational learning, and imitation, much like the classic experiments of Solomon Asch and Albert Bandura.
As computational social science gains momentum, ChatGPT facilitates the analysis of social systems, helping researchers test hypotheses and uncover new social patterns. In this sense, its impact is analogous to that of statistical software packages like SPSS and R, which transformed the way researchers analyze and interpret data. ChatGPT can help explain data to researchers within the web app, or write R code to analyze it.
ChatGPT’s potential is not limited to research; it also holds the promise of reshaping education. By leveraging AI-driven personalization, educators can create customized learning experiences tailored to each student’s unique needs and strengths. This can lead to improved educational outcomes and reduced achievement gaps.
Moreover, AI-generated scenarios enable students to develop critical thinking and problem-solving skills by exposing them to diverse perspectives and complex situations. It is as easy as asking ChatGPT to provide two different perspectives on a topic. For instance, in my multimodal rhetoric class, I asked ChatGPT to suggest debate topics about AI and provide two opposing viewpoints for each. My students then voted on one topic and each chose a side to debate.
The responsible use of ChatGPT in research and education requires addressing potential ethical concerns. Researchers must ensure that AI-generated scenarios do not inadvertently perpetuate stereotypes or reinforce harmful beliefs. Data privacy and informed consent must also be considered when using AI-generated content in research and education.
One major concern about ChatGPT's use in research and education is that it can come between student and teacher, or between researcher and the literature, potentially leading to misinterpretations or misunderstandings. Additionally, its widespread use may erode uniqueness and personality in writing, as the model tends to generate content in a homogenized manner.
We will discuss similar concerns and address criticism, such as that from Dr. Chomsky, Dr. Roberts, and Dr. Watumull, in the upcoming posts.
For now, let us remember the simple fact that we have long taught and conducted research with less powerful tools. Let us also remember that the telescope that will let us see some of the first galaxies in our universe began with a telescope that could barely make out the mountains on our Moon. As we integrate conversational agents such as ChatGPT into our current practices, these challenges should not deter us from harnessing their potential.
By embracing this technology and addressing its ethical implications, we can usher in a new era of knowledge production, personalized education, and AI-enhanced human-directed science.
Next >> Module 2: ChatGPT: A Faux Scientist or a Promising Tool for Science?