Session 81 · Week 21 · Handout W
What AI actually is — concepts and honest assessment
Name

Date
Fill in the definition column from the session. Then complete the honest assessment and reflection at the bottom. No hype in either direction — accurate expectations are more useful than excitement or fear.
Core concepts — fill in the definitions
Term What it actually means Why it matters for running locally
Token
Parameter
Training
Inference
Context window
Quantization (Q4)
Training cutoff
Models are good at
· Explaining concepts
· Summarizing text
·
·
·
Models are NOT good at
· Verifying current facts
· Precise arithmetic
·
·
·
Honest assessment
Why does running a model locally (on your own machine) matter, specifically for you and for privacy?
A language model is described as "a mathematical function that predicts the next token." What does this explain about why models sometimes give confident but wrong answers?
Complete: "Before this session I thought AI was ___. Now I understand it is ___."
Sessions 82–84 · Week 21 · Handout X
Ollama — installation, models, and API reference
Name

Date
CLI commands
ollama pull <model>        download a model
ollama run <model>         interactive chat
ollama list                list installed models
ollama rm <model>          remove a model
ollama --version           show version
systemctl status ollama    service status
Model recommendations by RAM
Free RAM    Recommended model    Size
4 GB        gemma2:2b            ~1.6 GB
6 GB        llama3.2:3b          ~2.0 GB
8 GB        llama3.2:3b          ~2.0 GB
12+ GB      gemma2:9b            ~5.4 GB
Check available RAM: free -h
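The same check can be scripted; a minimal sketch that pulls out just the "available" column (assumes the GNU free from procps, as on most Linux distributions):

```shell
# Print available memory, as reported by the "available" column of free
free -h | awk '/^Mem:/ {print "Available RAM: " $7}'
```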
API: /api/generate
curl http://localhost:11434/api/generate \
  -d '{
    "model": "llama3.2:3b",
    "prompt": "your prompt here",
    "stream": false
  }'
Extract the response text by piping the output to jq -r '.response'
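Shell quoting breaks easily once the prompt itself contains quotes or newlines; one way to build the JSON body safely is jq -n --arg (the prompt text here is just an example):

```shell
# Build the request body with jq so quotes and newlines in the prompt
# cannot break the JSON
payload=$(jq -n \
  --arg model "llama3.2:3b" \
  --arg prompt 'What does "set -e" do in bash?' \
  '{model: $model, prompt: $prompt, stream: false}')

# Only send the request if Ollama is actually reachable
if curl -s http://localhost:11434 >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "$payload" | jq -r '.response'
fi
```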
API: /api/chat (multi-turn)
curl http://localhost:11434/api/chat \
  -d '{
    "model": "llama3.2:3b",
    "stream": false,
    "messages": [
      {"role":"system","content":"..."},
      {"role":"user","content":"..."}
    ]
  }'
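Because the API is stateless, a multi-turn client has to resend the whole conversation each time. One way to accumulate the messages array is with jq; add_msg here is a hypothetical helper, not part of Ollama:

```shell
history='[]'

# add_msg ROLE CONTENT -- append one message to the JSON history array
add_msg() {
  history=$(jq -n --argjson h "$history" --arg r "$1" --arg c "$2" \
    '$h + [{role: $r, content: $c}]')
}

add_msg system "You are terse."
add_msg user "What port does Ollama listen on?"

# The accumulated history is what you send as "messages" in /api/chat
echo "$history" | jq length   # prints 2
```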
Exercises
1. Install Ollama. What version is installed? Which model did you pull and how large is it on disk?
Version
Model pulled
Disk size
2. Ask the model something it should struggle with — a recent event or large calculation. What happened?
Prompt used
What happened
3. Send a prompt via the API with curl + jq. What fields does the full JSON response contain?
Command used
JSON fields
4. Test the stateless API: ask a question, then ask a follow-up WITHOUT including the first exchange. Does the model answer correctly?
First question
Follow-up without context
Result + why
5. Monitor the system with htop while the model runs. What CPU and RAM usage do you observe during inference?
RAM used
CPU usage
Tokens/sec
Session 85 · Week 22 · Handout Y
Script that talks to Ollama — template and build guide
Name

Date
The template below is the foundation for any script that uses the Ollama API. Fill in the blanks, then use it to build your chosen script. All three script options use this same query_model function.
The query_model template — fill in the blanks
~/scripts/query-template.sh — fill in the blanks in blue
#!/bin/bash
set -euo pipefail
 
# Model to use — override with: OLLAMA_MODEL=gemma2:2b ./script.sh
MODEL="${OLLAMA_MODEL:-llama3.2:3b}"
API="http://localhost:11434/api/generate"
 
query_model() {
    local prompt="$1"
    local system="${2:-}"
 
    local payload
    payload=$(jq -n \
        --arg model "$MODEL" \
        --arg prompt "$prompt" \
        --arg system "$system" \
        '{model:$model,prompt:$prompt,system:$system,stream:false}')
 
    curl -s "$API" -d "$payload" | jq -r '.response'
}
 
# Check Ollama is running
if ! curl -s http://localhost:11434 >/dev/null 2>&1; then
    echo "Error: Ollama is not running. Start it with: systemctl start ollama" >&2
    exit 1
fi
 
# Your code below — use query_model "prompt" "system_prompt"
Choose your script and complete the checklist when done
Option A — explain.sh — reads a file and explains what it does
Takes a file as $1. Reads its contents. Sends to the model with system prompt "You are a concise Linux expert."
Handles missing file argument (usage message + exit 1)
Handles file that does not exist
Limits input to first 100 lines (head -100) to avoid context overflow
Correctly quotes file contents for jq using --arg
Tested on: a bash script, a config file, a log file
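The first three checklist items can be sketched as a small input-reading helper (read_input is a hypothetical name; the real script would pass its output to query_model):

```shell
# read_input FILE -- validate the argument, then emit at most 100 lines
read_input() {
  local file="${1:-}"
  if [ -z "$file" ]; then
    echo "Usage: explain.sh <file>" >&2
    return 1
  fi
  if [ ! -f "$file" ]; then
    echo "Error: file '$file' does not exist" >&2
    return 1
  fi
  head -100 "$file"   # cap input so a huge file cannot overflow the context
}
```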
Option B — error-explain.sh — explains errors from a log file
Reads last N lines of a log file. Sends to model asking for plain-language explanation of errors.
Accepts log file as argument or reads from stdin
Defaults to last 50 lines; accepts -n flag for custom count
System prompt instructs model to focus on errors specifically
Tested on /var/log/syslog and /var/log/auth.log
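The argument handling from this checklist can be sketched with getopts (log_tail is a hypothetical helper name; the real script would send its output to the model):

```shell
# log_tail [-n COUNT] [FILE] -- last COUNT lines (default 50) of FILE or stdin
log_tail() {
  local lines=50 opt OPTIND=1
  while getopts "n:" opt; do
    case "$opt" in
      n) lines="$OPTARG" ;;
      *) return 1 ;;
    esac
  done
  shift $((OPTIND - 1))
  tail -n "$lines" "${1:-/dev/stdin}"
}
```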
Option C — commit-message.sh — generates git commit messages
Runs git diff --staged. Sends the diff to the model. Asks for a commit message in "type: description" format.
Checks that it is inside a git repository (git rev-parse)
Handles the case where nothing is staged (no diff output)
System prompt specifies commit message format
Tested on a real staged change
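The two guard checks from this checklist can be sketched like this (staged_diff is a hypothetical helper; the real script would pipe its output into query_model):

```shell
# staged_diff -- print the staged diff, or fail with a clear message
staged_diff() {
  if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "Error: not inside a git repository" >&2
    return 1
  fi
  local diff
  diff=$(git diff --staged)
  if [ -z "$diff" ]; then
    echo "Nothing is staged; run git add first" >&2
    return 1
  fi
  printf '%s\n' "$diff"
}
```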
Final verification — all scripts
set -euo pipefail is present
Ollama running check is at the top
jq -n --arg used for JSON construction (safe quoting)
Script is in ~/scripts, executable, tested with bash -x
query_model function is in ~/.bashrc for interactive use
Sessions 87 + 96 · Week 23/24 · Handout Z
Final project design + Phase 6 vocabulary checklist
Name

Date
Complete the design document in session 87 before starting to build. The checklist is completed in session 96.
Problem statement
what problem does this solve?
What it does
user does X → tool does Y → output is Z
Course skills used
list phases and sessions
Phase 1 (sessions ___):
Phase 2 (sessions ___):
Phase 3 (sessions ___):
Phase 4 (sessions ___):
Phase 5 (sessions ___):
Phase 6 (sessions ___):
Success criteria
3–5 specific, testable criteria
Build plan
one goal per sprint session
Sprint 1
Sprint 2
Sprint 3
Refinement
Phase 6 — AI, Ollama, and final project vocabulary
I can explain this
I need to review
LLM
token
parameter
training vs inference
context window
quantization (Q4)
training cutoff
Ollama
ollama pull / run
port 11434
/api/generate
/api/chat
system prompt
stream: false
jq -n --arg
keep_alive parameter
cold vs warm start
stateless API
Final course reflection (Session 96)
1. What concept from this entire course changed how you think about computers?
2. Describe your final project in one sentence. Then: what was the hardest technical problem you solved?
3. Complete: "One thing from this course I will apply outside of computing is ___."
4. What is the first thing you will learn or build after this course ends?