With OpenAI’s ChatGPT and rivals releasing generative AI tools to help with the creation of text, computer code and graphics, businesses of all sizes are experimenting with ways to remove the daily grind from everyday tasks.
But in fields such as marketing, HR, customer service, law, computer programming and data analysis, how can individual companies be sure that they can apply Large Language Models to their own unique use cases and to their competitive advantage?
This is where prompt engineering comes in – the new in-demand job role which may hold the key to unlocking the full potential of AI systems and help firms get the most out of what LLMs have to offer.
These so-called ‘AI whisperers’ are responsible for interpreting AI behaviour, identifying the strengths and limitations of AI systems and directing their development to achieve specific objectives.
Unlike traditional coders, prompt engineers program in prose, sending commands written in plain text to the AI systems – which has sparked online debates over whether it’s best to have a background in software engineering or English.
TechInformed spoke with Albert Phelps, a prompt engineer at Accenture, to discuss the nuts and bolts of his current role.
Why are companies like Accenture employing prompt engineers?
Prompt engineering is just one of the many kinds of job roles emerging from the increased accessibility of artificial intelligence technologies. Recent advances in language-based AI mark a new inflection point, and an opportunity for businesses to reinvent how work is done. With as much as 40% of all working hours set to be impacted by Large Language Models (LLMs) like GPT-4, according to Accenture research, it’s time now to identify and build ways of using these tools quickly and productively.
What is prompt engineering exactly?
Prompt engineering is a hybrid of critical thinking and programming in natural language. It involves closely working with LLMs to ensure they produce better results, writing instructions, and giving examples – the ‘prompts’ – to allow LLMs to effectively generalise to new tasks.
The role is all about taking business challenges and turning them into a ‘text-in, text-out’ problem for the LLM to solve. Prompt engineers aim to distil the meaning of the task into a prompt for the model that gives consistent, high-quality outputs.
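As a minimal sketch of that ‘text-in, text-out’ framing, the hypothetical Python helper below assembles an instruction and a couple of worked examples into a single few-shot prompt for a customer-email routing task. The task, labels and examples are illustrative assumptions, not taken from Accenture’s work.

```python
# Hypothetical few-shot prompt builder: turns a business task
# (routing customer emails) into one 'text-in, text-out' prompt.

def build_prompt(task_description, examples, query):
    """Assemble instructions plus worked examples into one prompt string."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify each customer email as COMPLAINT, QUERY, or PRAISE.",
    [
        ("My order arrived broken and nobody has replied.", "COMPLAINT"),
        ("What time do you open on Saturdays?", "QUERY"),
    ],
    "The support team sorted my refund in minutes - thank you!",
)
print(prompt)
```

The resulting string would be sent to an LLM, which completes the final `Output:` line – the worked examples are what let the model generalise to the new input.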

Prompt engineer Albert Phelps
What other considerations do you need to apply?
You need to be efficient. Tokens in prompt engineering refer to chunks of text – whole words or parts of words – and the charging model of closed-source LLMs is based on how many tokens an organisation inputs. Couple this with the fact that there is an overall limit on how much you can submit to the model for it to process, and being efficient in the queries that you give becomes important.
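The arithmetic behind that efficiency can be sketched as a rough budget check. The four-characters-per-token heuristic, the price and the context limit below are illustrative assumptions only – real tokenisers split text differently and real prices vary by model.

```python
# Rough, illustrative token-budget check (heuristic, not a real tokeniser).

def estimate_tokens(text, chars_per_token=4):
    """Crude estimate: ceiling of characters / assumed chars per token."""
    return -(-len(text) // chars_per_token)  # ceiling division

def fits_budget(prompt, max_tokens, price_per_1k=0.03):
    """Return (estimated tokens, estimated cost, whether it fits the limit)."""
    tokens = estimate_tokens(prompt)
    cost = tokens / 1000 * price_per_1k  # assumed price per 1,000 tokens
    return tokens, cost, tokens <= max_tokens

tokens, cost, ok = fits_budget(
    "Summarise this contract clause in one sentence.", max_tokens=8000
)
print(tokens, round(cost, 5), ok)
```

A trimmed prompt lowers both the per-call cost and the risk of hitting the context limit, which is why prompt engineers iterate towards the shortest wording that still gives consistent outputs.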
Software engineer or English graduate?
The requirement for proficiency with sentence structures and the meaning of words means that prompt engineers often come from history, philosophy or English language educational backgrounds. At the same time, the role has parallels with software engineering in the sense that it is about frequent iteration and optimisation after initially proving something works.
I come from a financial services background, where I started out on the front line of retail banking. This involved complaint handling, compliance and quality-checking, all of which involved paying close attention to phrasing and wording.
Then I moved more into risk-based compliance: looking at what regulations around data privacy and cybersecurity say, and helping technical teams comply with them. After coming across GPT-3 around the start of 2021, I started experimenting with it, and my excitement about what it could potentially do really started from there.
How will businesses know if they need a prompt engineer or not?
Firms can use prompt engineers to advise on use cases and how the technologies could fit into a given business, given their familiarity with getting the best out of them. The methods they develop to prompt AI often inform tool and tutorial libraries to allow others to then realise the value of these technologies more quickly.
Can ‘AI whisperers’ play a role in ensuring systems are ethical?
The power of LLMs presents immense opportunity, but it is also important to take the risks seriously, from inaccurate outputs to negative feedback loops and other biases. Humans are a critical element of the process in training LLMs, within both the design and testing phases, to give appropriate validation and ensure tasks are being completed not only effectively, but safely and responsibly.
Alongside technology investments, companies need to make sure they have the appropriate safeguards in place with an effective governance structure, and detailed compliance procedures. Prompting responsibly is key.