
We’ve used the Hugging Face pipeline to understand emotion and summarize text. Now, let’s use it to find answers.
“Question-Answering” models are trained to read a piece of text (the “context”) and find the answer to a question within that text.
Step 1: Installation
pip install transformers
# You also need the 'torch' (PyTorch) backend
pip install torch
Step 2: The Code
We just need to provide the pipeline with a context and a question.
from transformers import pipeline
# 1. Load the question-answering pipeline
# This downloads a model (like 'distilbert-base-cased-distilled-squad')
qa_pipeline = pipeline("question-answering")
# 2. Provide the context (the knowledge)
context = """
Python is an interpreted, high-level and general-purpose programming language.
Python's design philosophy emphasizes code readability with its notable use of
significant indentation. It was created by Guido van Rossum and first
released in 1991.
"""
# 3. Ask a question
question = "Who created Python?"
# 4. Get the answer!
result = qa_pipeline(question=question, context=context)
# 5. Print the result
print(result)
Step 3: The Result
The output is a dictionary showing the answer it found, its confidence score, and where in the text it found it.
{
'score': 0.998,
'start': 188,
'end': 204,
'answer': 'Guido van Rossum'
}
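In practice you usually only need one or two fields from this dictionary. A minimal sketch using the example output above (the 0.5 confidence threshold is an arbitrary choice, not something the library prescribes):

```python
# Example output dict, copied from the pipeline run above.
result = {'score': 0.998, 'start': 188, 'end': 204,
          'answer': 'Guido van Rossum'}

# 'start' and 'end' are character offsets into the context string;
# 'answer' is the extracted span itself. Often you only want the
# answer text, perhaps gated on the confidence score.
if result['score'] > 0.5:
    print(result['answer'])  # Guido van Rossum
else:
    print("Model is not confident enough to answer.")
```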
You’ve just built the “reader” half of a RAG (Retrieval-Augmented Generation) system—given a document, it can extract answers. Add a retrieval step, such as your web scraper pulling down a Wikipedia page, and you can answer questions about any page on the web!
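The scraping half can be sketched with nothing but the standard library’s html.parser—the TextExtractor class and the inline HTML snippet below are hypothetical stand-ins for a real scraped page:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for each run of text between tags.
        self.parts.append(data)

    def get_text(self):
        return " ".join(p.strip() for p in self.parts if p.strip())

# Stand-in for a scraped Wikipedia page.
html = "<html><body><p>Python was created by Guido van Rossum.</p></body></html>"

parser = TextExtractor()
parser.feed(html)
context = parser.get_text()
print(context)  # Python was created by Guido van Rossum.
# This plain-text 'context' is what you would pass to
# qa_pipeline(question=..., context=context).
```

For real pages a dedicated scraper (e.g. BeautifulSoup) handles messy HTML more robustly, but the idea is the same: strip the markup, keep the text, feed it to the pipeline as context.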