So you wanna be a Prompt Engineer? Here are the most requested skills in Prompt Engineering job listings (and how to actually learn them)

Welcome to the hottest, weirdest job on the internet. Prompt engineering is the art of whispering sweet nothings into the void and hoping a trillion-parameter robot spits out something useful. It’s part writing, part engineering, part magic. Job listings for prompt engineers are multiplying faster than ChatGPT can hallucinate an answer—and guess what? Nobody knows what this job really is yet.

So I did the legwork: I analyzed a heap of prompt engineering job descriptions to find out what companies are actually asking for. Below are the 11 most requested skills, along with what they mean, how to learn them, and what your day might look like once you're knee-deep in promptland.

1. Prompt design, testing, and iteration

What it is: Writing and refining prompts so that LLMs do what you want. Think: getting a genie to follow instructions, but the genie is very literal and slightly feral.

Why it matters: This is the core of the role. If the prompt sucks, the output sucks.

What you need to know: Few-shot vs zero-shot, prompt chaining, temperature and max tokens, how to debug LLM weirdness. (Check out: “Examples of advanced AI prompting techniques”)

How to learn it: DeepLearning.AI Prompt Engineering Course, OpenAI Playground, building your own prompt graveyard.

Example task: You’re tasked with writing prompts for an AI assistant that gives medical advice, but you need it to consistently include "This is not medical advice." You spend an hour trying to get it to stop recommending leeches.

Sample Prompt: "You are a careful, friendly assistant who provides information only. Always include a reminder that you're not a doctor."
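
Few-shot prompting, mentioned above, is mostly careful string assembly. Here's a minimal sketch of building a few-shot prompt from worked examples; the instruction, example Q&A pair, and `build_few_shot_prompt` helper are all illustrative, not from any real product.

```python
# A sketch of assembling a few-shot prompt: instruction first, then worked
# examples, then the new query. All content here is invented for illustration.

def build_few_shot_prompt(instruction, examples, query):
    """Combine an instruction, example exchanges, and a new query into one prompt."""
    parts = [instruction, ""]
    for user_msg, ideal_reply in examples:
        parts.append(f"User: {user_msg}")
        parts.append(f"Assistant: {ideal_reply}")
        parts.append("")
    parts.append(f"User: {query}")
    parts.append("Assistant:")  # leave the last turn open for the model
    return "\n".join(parts)

examples = [
    ("Does turmeric cure infections?",
     "Some studies explore turmeric's properties, but this is not medical "
     "advice -- please consult a doctor."),
]
prompt = build_few_shot_prompt(
    "You are a careful, friendly assistant who provides information only.",
    examples,
    "What should I do about a sore throat?",
)
print(prompt)
```

The examples teach the model the disclaimer pattern by demonstration, which is often more reliable than instructions alone.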

2. LLM fundamentals & capabilities

What it is: Knowing what these models can and can’t do (spoiler: they’re smart, but not psychic).

Why it matters: Understanding model strengths and limitations helps you set realistic expectations—and fix things when they break.

What you need to know: Context windows, tokenization, temperature, hallucinations, model bias.

How to learn it: Hugging Face docs, OpenAI’s technical blog, YouTube explainers, reading actual model cards.

Example task: Someone asks why the model keeps outputting Latin text when asked about legal clauses. You realize it’s hallucinating because of vague prompt structure. Fun!

Sample Prompt: "Summarize the following contract in plain English. Avoid using legal jargon or Latin phrases."
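
Context windows are a hard limit, so you'll often need to sanity-check prompt length before sending anything. This sketch uses the rough rule of thumb that English text averages about 4 characters per token; real tokenizers (like tiktoken for OpenAI models) split text differently, so treat these numbers as estimates only.

```python
# A rough sketch of checking a prompt against a context-window budget.
# The ~4-characters-per-token heuristic is only an approximation; use a
# real tokenizer for production accounting.

def estimate_tokens(text):
    """Crude token estimate: roughly one token per 4 characters of English."""
    return max(1, len(text) // 4)

def fits_context(prompt, max_context_tokens=8192, reserved_for_reply=1024):
    """Leave headroom inside the window for the model's reply."""
    return estimate_tokens(prompt) <= max_context_tokens - reserved_for_reply

print(estimate_tokens("Summarize the following contract in plain English."))
```

Reserving tokens for the reply matters: a prompt that technically fits can still starve the model of room to answer.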

3. Collaboration with product, engineering, & business teams

What it is: Translating biz-speak into prompt strategy, and vice versa.

Why it matters: You’re the bridge between "what we want" and "how the machine hears it."

What you need to know: Spec writing, stakeholder management, asking great questions.

How to learn it: Soft skills courses, project-based teamwork, shadowing product managers.

Example task: A product manager wants a chatbot that "understands customer intent." You’re like, "Cool, define 'understand.'"

Sample Prompt: "Based on this chat log, identify the customer’s primary intent in one sentence."

4. Data analysis & model output evaluation

What it is: Reviewing outputs to determine what’s working, what’s broken, and what’s just weird.

Why it matters: Prompt iteration only works when you can evaluate the results.

What you need to know: Quality metrics (fluency, coherence, relevance), bias spotting, A/B testing.

How to learn it: Build your own output logs, set up prompt comparisons, track variations in spreadsheets.

Example task: You’re told your prompt isn’t working. You dig into 50 responses and realize 23 of them started with “As an AI language model…”

Sample Prompt: "Answer in a natural, human tone. Do not mention being an AI model."
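
Spotting a failure pattern like that usually starts with a simple scan over logged outputs. Here's a tiny sketch of measuring how often responses trip a known bad opener; the sample responses are made up for illustration.

```python
# A sketch of scanning logged model outputs for a known failure pattern,
# like the "As an AI language model..." opener. Sample data is invented.

responses = [
    "As an AI language model, I cannot browse the web.",
    "Sure! Here's a summary of the contract.",
    "As an AI language model, I don't have opinions.",
]

BAD_OPENER = "As an AI language model"
failures = [r for r in responses if r.startswith(BAD_OPENER)]
failure_rate = len(failures) / len(responses)
print(f"{len(failures)}/{len(responses)} responses failed ({failure_rate:.0%})")
```

Run the same scan on outputs from two prompt variants and you have a crude but honest A/B test.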

5. Python + APIs (optional but powerful)

What it is: Writing code to automate prompt workflows, integrate APIs, and do cool nerd things.

Why it matters: You’ll be 10x more useful if you can prototype your ideas.

What you need to know: Python basics, API requests, JSON, LangChain or Gradio.

How to learn it: Codecademy, freeCodeCamp, OpenAI API docs.

Example task: You build a bot that chains multiple prompts to turn long emails into tweet threads. (And yes, you make it say “too long; didn’t read” in the first line.)

Sample Prompt: "Convert this email into a tweet thread. Include a catchy first tweet and add emojis where relevant."
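
Prompt chaining like the email-to-thread bot is mostly glue code: the output of one call feeds the next. In this sketch, `call_model` is a stand-in for a real LLM API call (e.g. via the OpenAI client), stubbed out so the chaining and chunking logic is visible on its own.

```python
# A sketch of a two-step chain: summarize an email, then split the summary
# into tweet-sized chunks. call_model is stubbed; swap in a real API call.

def call_model(prompt):
    # Stand-in for an LLM API call; echoes the input text, truncated.
    return prompt.split("\n", 1)[-1][:500]

def email_to_thread(email_text, tweet_len=280):
    summary = call_model(f"Summarize this email in a casual tone:\n{email_text}")
    # Naive fixed-width chunking; a second prompt could split more gracefully.
    return [summary[i:i + tweet_len] for i in range(0, len(summary), tweet_len)]

thread = email_to_thread("Subject: Q3 plans\nWe are moving the launch to October.")
print(thread)
```

Keeping each step a plain function makes it easy to swap prompts, log intermediate outputs, and test the chain without burning API calls.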

6. Documentation & knowledge sharing

What it is: Writing down what works and why, so others don’t suffer like you did.

Why it matters: Prompts are fragile. Documentation is your future self’s best friend.

What you need to know: Internal wikis, markdown, Notion, version control.

How to learn it: Practice writing documentation for your test prompts. Teach others what you’ve built.

Example task: You write a 3-page guide titled "How to Stop the AI from Recommending Astrology During Scientific Queries."

Sample Prompt: "Only reference peer-reviewed scientific literature. Do not include astrology or pseudoscience."

7. Bias mitigation & safety awareness

What it is: Preventing the AI from being a problematic weirdo.

Why it matters: Nobody wants to deploy a model that ends up on the news for the wrong reasons.

What you need to know: Bias types, red-teaming, ethical frameworks.

How to learn it: Read work by AI ethicists, explore case studies of LLM fails.

Example task: The model starts generating gendered job descriptions. You rewrite the prompt to balance representation.

Sample Prompt: "Generate a job ad that is inclusive, uses gender-neutral language, and avoids bias."

8. Industry or use case expertise

What it is: Tailoring prompts to a specific field—like finance, customer support, or medicine.

Why it matters: A prompt that works for a marketing team will not fly in a courtroom.

What you need to know: Key terms, tone, format expectations in the industry.

How to learn it: Talk to domain experts, read documents from that field, build use-case-specific prompt sets.

Example task: You’re writing prompts to generate quarterly earnings summaries. You have to make sure it doesn’t confuse revenue and profit (again).

Sample Prompt: "Write a Q2 financial report summary. Focus on revenue, net income, and YoY changes. Keep it professional."

9. Tooling: Vector stores, knowledge graphs, fine-tuning, A/B testing

What it is: Enhancing prompts with retrieval, context injection, or testing frameworks.

Why it matters: Prompts alone are rarely enough at scale.

What you need to know: Pinecone, ChromaDB, LangChain, Haystack, fine-tuning basics.

How to learn it: Hugging Face tutorials, LangChain docs, GitHub projects.

Example task: You build a mini-RAG system to help customer service agents find answers from internal docs. It kind of rules.

Sample Prompt: "Using the following context, generate a response to the customer question. Keep it helpful and polite."
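
The core RAG move is simple: retrieve the most relevant snippet, then inject it into the prompt as context. This toy sketch scores snippets by keyword overlap with the question; real systems use embeddings and a vector store (Pinecone, ChromaDB) instead, and the doc snippets here are invented.

```python
# A toy retrieval-augmented prompting sketch: pick the internal doc snippet
# sharing the most words with the question, then build a context-injected
# prompt. Real systems replace keyword overlap with embedding similarity.
import re

DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Password resets are handled via the account settings page.",
]

def retrieve(question, docs):
    """Return the snippet with the largest word overlap with the question."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    return max(docs, key=lambda d: len(q_words & set(re.findall(r"[a-z]+", d.lower()))))

def build_rag_prompt(question, docs):
    context = retrieve(question, docs)
    return ("Using the following context, generate a response to the customer "
            f"question. Keep it helpful and polite.\n\nContext: {context}\n\n"
            f"Question: {question}")

print(build_rag_prompt("How long do refunds take?", DOCS))
```

Swapping the retriever for a vector-store lookup changes nothing downstream: the prompt template stays the same.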

10. Staying current & experimenting like a mad scientist

What it is: Trying weird things, learning fast, and not being afraid to break stuff.

Why it matters: The field is evolving faster than your coffee gets cold.

What you need to know: Model releases, prompting trends, community best practices.

How to learn it: Subscribe to AI newsletters (Rundown AI, TLDR AI), join Discords, follow prompt engineers on X.

Example task: You spend the afternoon trying to make GPT-4 write poetry in the voice of a pirate chef. No regrets.

Sample Prompt: "Write a haiku from the perspective of a pirate who loves baking sourdough bread."

11. Training LLMs (aka giving the robots their brains)

What it is: Fine-tuning or pre-training models with custom datasets to adjust behavior, tone, or domain knowledge.

Why it matters: Sometimes prompts aren’t enough—you need the model to learn differently.

What you need to know: Dataset curation, labeling, model architecture basics, fine-tuning pipelines.

How to learn it: Hugging Face's fine-tuning course, OpenAI fine-tuning docs, tutorials on LoRA and parameter-efficient tuning.

Example task: You’re asked to fine-tune a model on your company’s proprietary data so it stops confusing your onboarding process with LinkedIn’s.

Sample Prompt: Before training: "What are the steps for onboarding a new client?" → Response is generic. After training: The response is detailed and tailored to your company’s exact steps. Nice.
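
Most of fine-tuning is dataset prep. Here's a sketch of formatting training examples as JSONL in the chat-style format OpenAI's fine-tuning API accepts; the onboarding Q&A pair and system message are invented placeholders for your company's real data.

```python
# A sketch of building a JSONL fine-tuning file in chat format: one JSON
# object per line, each holding a system/user/assistant exchange.
# The Q&A content below is a made-up placeholder.
import json

pairs = [
    ("What are the steps for onboarding a new client?",
     "1) Send the welcome packet. 2) Schedule the kickoff call. "
     "3) Provision accounts."),
]

lines = []
for question, answer in pairs:
    record = {"messages": [
        {"role": "system", "content": "You are our internal onboarding assistant."},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]}
    lines.append(json.dumps(record))

training_jsonl = "\n".join(lines)
print(training_jsonl)
```

A few hundred clean examples in this shape typically beat thousands of noisy ones, which is why dataset curation tops the skill list above.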

Final thoughts: So, is prompt engineering here to stay?

Look, prompt engineering is weird. It’s new. It’s a little unhinged. But it’s also very real, very in demand, and very likely to evolve into something bigger. Whether it becomes a standard part of every product team or a niche craft for LLM whisperers, one thing’s clear: it’s not going away.

If you’re the kind of person who loves figuring out how to talk to machines like they’re snarky coworkers—welcome. There’s a prompt waiting for you.

And hey, if nothing else, now you can say your job involves literally talking to robots.

Lisa Kilker

I explore the ever-evolving world of AI with a mix of curiosity, creativity, and a touch of caffeine. Whether it’s breaking down complex AI concepts, diving into chatbot tech, or just geeking out over the latest advancements, I’m here to help make AI fun, approachable, and actually useful.

https://www.linkedin.com/in/lisakilker/