ADVANCING CONVERSATIONAL AI: BEST PRACTICES IN PROMPT ENGINEERING FOR ENHANCED CHATBOT PERFORMANCE
Keywords:
Artificial Intelligence, Large Language Models (LLMs), ChatGPT
Abstract
Technologies related to artificial intelligence are advancing rapidly. This evolution brings new possibilities and challenges, one of which is the growing use of large language models (LLMs) in human-AI interaction. Prompt engineering, the process of formulating well-structured instructions to elicit the desired information or responses from an LLM, is one method for optimizing goal-directed interaction with LLM-based AI systems. However, there is little research on how AI literacy shapes the prompting behavior of non-specialists, or on how non-specialists perceive LLM-based AI systems through prompt engineering. This is an especially important consideration when assessing what LLMs mean for universities. Here, we examine this problem, present a skill-based approach to prompt engineering, and investigate how students' level of AI literacy affects their ability to engineer prompts. We also report qualitative data on students' natural behavior toward LLM-based AI systems. The results show that prompt engineering is an essential skill for the goal-directed use of generative AI, because it predicts the quality of LLM output. The findings further indicate that specific domains of AI literacy can facilitate prompt engineering and more individualized LLM adjustments in the classroom. To create a hybrid intelligent society in which students can use generative AI technologies such as ChatGPT effectively, we therefore propose that current curricula incorporate AI instructional content.
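To make the notion of a "well-structured instruction" concrete, the following is a minimal sketch of how such a prompt might be assembled and sent to an LLM. It assumes the openai Python client (version 1.x) with an API key set in the environment; the model name, role description, and constraints are illustrative choices, not part of the study's method.

```python
# Minimal sketch of a structured prompt, assuming the openai Python client (>= 1.0)
# is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# A well-structured prompt makes the role, task, constraints, and expected
# output format explicit rather than leaving them implicit.
structured_prompt = (
    "Role: You are a tutor for first-year statistics students.\n"
    "Task: Explain the difference between correlation and causation.\n"
    "Constraints: Use at most 120 words and one everyday example.\n"
    "Output format: Two short paragraphs, no bullet points."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer only within the constraints given by the user."},
        {"role": "user", "content": structured_prompt},
    ],
)

print(response.choices[0].message.content)
```

In a classroom setting, the same task could also be issued as a single unstructured sentence; comparing the two outputs is one simple way to illustrate why prompt structure predicts output quality.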
License
Copyright (c) 2024 Sudeesh Goriparthi (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.