ChatGPT is not wrong, we are asking the wrong questions – What is prompt engineering and how can it help you get the right information from artificial intelligence?

Source: eKapija Wednesday, 24.07.2024. 14:11
Illustration (Photo: Pixabay.com/Gerd Altmann)
Has it ever happened to you that you don’t get an adequate answer when communicating with ChatGPT? Maybe you tried formulating the question from different angles, changed the wording or added details in order to get the desired answer. If so, you were in fact engaging in prompt engineering, even if you weren’t aware of it. Prompt engineering, a term whose importance will only grow in the years ahead, is the process of designing and optimizing queries sent to AI, especially large language models, in order to get precise, relevant and useful answers. The idea is to formulate the query in a way that lets the model clearly understand what it is being asked and provide a quality response, without additional training or modification of the model itself. In essence, prompt engineering helps you get from AI exactly what you need.

– Prompt engineering is the process of optimizing a query until the desired answer is obtained from the AI model. In the context of artificial intelligence, prompt engineering is important because it improves the performance of large language models without the need to modify the model itself – Tihomir Opacic, an AI consultant and software engineer with over 25 years of experience, explains for eKapija. His company Orange Hill Development organizes AI consultations and training for traditional companies and software development teams, and, among other things, also provides education in the field of prompt engineering.

The key components of a good prompt, according to our interviewee, include clear instructions, context, assigning a specific role to the model, a formatting request, the tone and an example answer.
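As an illustration, the components listed above can be assembled into a single prompt. The sketch below is a minimal, hypothetical example of ours (the template, function name and sample values are not from the interview):

```python
# Minimal sketch of assembling a prompt from the components listed above:
# role, context, clear instructions, output format, tone, and an example answer.
# The template and all sample values are illustrative, not a prescribed format.

def build_prompt(role, context, instructions, output_format, tone, example):
    """Combine the components of a good prompt into one text block."""
    return "\n\n".join([
        f"Role: You are {role}.",
        f"Context: {context}",
        f"Instructions: {instructions}",
        f"Output format: {output_format}",
        f"Tone: {tone}",
        f"Example answer: {example}",
    ])

prompt = build_prompt(
    role="an experienced financial analyst",
    context="A small exporter wants to expand to the EU market.",
    instructions="List the three biggest regulatory hurdles and one mitigation for each.",
    output_format="A numbered list, one sentence per item.",
    tone="Concise and professional.",
    example="1. CE marking - hire a certified local consultant.",
)
print(prompt)
```

Each labeled section gives the model one of the components, so the request leaves little room for misinterpretation.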

– Components such as clear instructions and context are essential to getting quality answers – adds Tihomir, who is the founder and owner, or co-owner, of two companies, Orange Hill Development and Viking Code. He is also the strategy director at CDT Hub and the technical director at the Dutch company Coding Chiefs.

As he points out, large language models have limited knowledge, especially when it comes to recent events.

– Techniques such as Retrieval Augmented Generation (RAG) and the implementation of AI agents can help models access current data and give precise answers – says Tihomir.

These techniques enable the models to use external data sources as context, which improves the accuracy and relevance of the answers.
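The core idea of RAG can be sketched in a few lines: retrieve the documents most relevant to the user's question from an external source, then paste them into the prompt as context. The keyword-overlap retriever below is a deliberately simplified stand-in for the embedding-based vector search used in production systems; all function names and sample documents are hypothetical:

```python
# Toy Retrieval Augmented Generation (RAG) sketch: score external documents
# by keyword overlap with the question, then build a prompt that includes
# the best match as context. Real systems use embeddings and vector search.

documents = [
    "The 2024 annual report shows revenue grew 12% year over year.",
    "Our office relocated to Belgrade in March 2023.",
    "The support team answers tickets within 24 hours on business days.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question, docs):
    """Prepend the retrieved documents to the question as context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("How fast does the support team answer tickets?", documents))
```

Because the retrieved text is injected into the prompt at query time, the model can answer from data it was never trained on.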

– Prompt engineering is important for all professions that use large language models intensively – explains Tihomir, adding that those who require high precision, such as programmers, marketers, bankers and technical support staff, benefit the most from it.

Tihomir Opacic (Photo: Nikola Mihaljević)


What are we doing wrong when creating prompts?

The most frequent mistakes people make when talking to large language models are insufficiently clear prompts (a lack of clear instructions) and providing too little data (a lack of context).

– People then fall into the trap of pointing out to the model that it has made a mistake, at which the model most often apologizes and then provides another unsatisfactory answer, formulated in a slightly different way. The majority of the model’s unsatisfactory answers can be resolved by providing a wider context and clearer instructions – says Opacic.
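In other words, instead of telling the model it was wrong, rewrite the prompt itself with more context and a clearer instruction. A hypothetical before/after pair of ours illustrates the difference:

```python
# Illustration of the advice above: rather than replying "that's wrong",
# rewrite the prompt with a wider context and a clearer instruction.
# Both prompts are invented examples.

vague_prompt = "Write me an ad."

improved_prompt = (
    "Context: We sell handmade wooden toys for children aged 3-6, "
    "mostly to parents buying birthday gifts.\n"
    "Instruction: Write a 40-word Facebook ad that highlights safety "
    "and durability, ending with a call to visit our web shop."
)

print(improved_prompt)
```

The second prompt supplies the context (product, audience) and a concrete instruction (length, channel, emphasis), so the model no longer has to guess.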


More complicated problems, he explains, require more advanced prompt engineering techniques, such as few-shot prompting, chain-of-thought prompting and tree-of-thought prompting.
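Two of these techniques are easy to illustrate. In few-shot prompting, the prompt includes a few worked examples before the actual query; in chain-of-thought prompting, the model is explicitly asked to reason step by step before answering. The prompts below are our own minimal, hypothetical examples:

```python
# Few-shot prompting: show the model a few input/output examples
# so it infers the task and format before answering the real query.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Delivery was fast and the staff were kind.' -> positive\n"
    "Review: 'The product broke after two days.' -> negative\n"
    "Review: 'I would happily order again.' ->"
)

# Chain-of-thought prompting: ask the model to reason step by step
# before giving its final answer, which helps on multi-step problems.
chain_of_thought_prompt = (
    "A shop sells pencils at 30 dinars each. Marko buys 4 pencils and "
    "pays with a 200-dinar note. How much change does he get?\n"
    "Think through the problem step by step, then state the final answer."
)

print(few_shot_prompt)
print(chain_of_thought_prompt)
```

The few-shot examples teach the model the expected output format, while the step-by-step instruction encourages it to expose intermediate reasoning rather than jump to an answer.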

Future of prompt engineering

Tihomir predicts that prompt engineering will become even more important in the future.

– In the past nine months, judging by the answers of new versions of large language models such as OpenAI’s GPT-4o and Anthropic’s Claude 3.5, we can see that prompt engineering techniques are built into the base models themselves to further improve the quality of their answers. It is certainly still important for us, as users of these systems, to know prompt engineering techniques well, so that we can more easily recognize the situations in which, with some effort, we can get much better answers – he says.

In the future, the AI consultant believes, we can expect the model itself to steer us, through conversation, toward some of the effective prompt engineering techniques, without us, as users, even being aware of it, all in order to improve the quality of the answers. Computer scientists worldwide, he adds, are still publishing research that uncovers new prompt engineering techniques, often drawing inspiration from techniques originally developed to improve human productivity.

– When we look at conversations with large language models, it is ultimately very useful to draw a parallel with conversations between people. Whether we delegate a work task to a human or to a model, if we don’t provide enough quality information and instructions, the task will not be done well – our interviewee concludes.

I. Zikic

