Large Language Models
Also called a chatbot, digital assistant, or LLM
What They Are
LLMs like ChatGPT are becoming increasingly popular. As their responses get better and accessibility improves, more people than ever are discovering the incredible flexibility of these tools.
They're pattern-prediction engines trained on massive amounts of text. The model isn't thinking about your response; it's more like a highly sophisticated auto-complete.
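To make the auto-complete idea concrete, here's a toy sketch of my own (not how any real model is built, and the tiny sample text is made up). It just counts which word usually follows which and "predicts" the most common continuation; a real LLM does the same job with a neural network over tokens and vastly more text, which is why its guesses are so much better.

```python
from collections import Counter, defaultdict

# A toy "auto-complete": count which word tends to follow which word
# in a tiny sample text, then predict the most common continuation.
corpus = "the cat sat on the mat and the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    seen = next_word_counts.get(word)
    return seen.most_common(1)[0][0] if seen else "(never seen it -- it can only guess)"

print(predict_next("the"))    # -> "cat", the most common word after "the"
print(predict_next("steak"))  # -> not in the data, so it can only guess
```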
The LLM is context-hungry. It has no ability to experience, so it can't "imagine" how a steak tastes or visualize a butterfly flying through the air. The more specifics you provide, the better it performs. If it doesn't have enough context, it will guess. If it doesn't have an answer, it will confidently hallucinate a total fabrication for you.
It's a mirror, trained to reflect back whatever you give it. If you maintain a serious, professional tone, it will too. If you write in flowery, poetic language, it will return the same.
My goal with this site is to offer practical help in personalizing the tool. You don't need to pay for a certification or scroll through 10,000 prompts to get what you need. You just need to know how to build your own prompts and how to keep the LLM from hallucinating.
Definitions as they apply to LLMs:
LLM: Large language model.
Model: A neural network trained to generate text responses.
Token: A chunk of text the model processes; it can be a whole word or just a piece of one (there's a short example after these definitions).
Prompt: The input you give the model to start a response, e.g. "Provide a 500-word summary of the attached PDF."
Prompt engineering: Designing more context-rich prompts to improve output. Much easier than it sounds.
Training data: The text examples the model studied to learn language patterns.
Training date: When the model's weights were last updated with fresh data; it won't know about events after this point.
Context: The information you give the model to work with. The more you provide, the better your response.
Hallucination: A confident but incorrect answer to a prompt, usually because the model lacked context or simply didn't know.
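If you're curious what tokens actually look like, here's a quick sketch using OpenAI's open-source tiktoken tokenizer (an assumption on my part: you have Python handy and install the package first with pip install tiktoken). The exact splits depend on which model's tokenizer you pick.

```python
# Sketch: inspect how a sentence breaks into tokens.
# Assumes the tiktoken package is installed: pip install tiktoken
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many recent OpenAI models

text = "Provide a 500-word summary of the attached PDF."
token_ids = encoding.encode(text)
pieces = [encoding.decode([t]) for t in token_ids]

print(len(token_ids), "tokens")
print(pieces)  # common words stay whole; longer or rarer words split into pieces
```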