Linux seminar · Kubuntu LTS · Ages 15–18

Phase 5 — Teacher Manual

Solving real problems

Sessions
20
Weeks
16 – 20
Format
4 days × 1 hour
Machine
Student-built from Phase 4

Phase goal

By the end of these five weeks, students can automate tasks with bash scripts, connect to remote machines over SSH, understand and control running processes, read and diagnose from system logs, manage archives, and schedule recurring tasks with cron. Everything in this phase runs on the machine they built in Phase 4.

Phase 5 is where the course stops being about learning Linux and starts being about using Linux. The skills are real, the problems are real, and the solutions have value outside the classroom. A student who finishes Phase 5 can administer their own machine, automate repetitive tasks, and debug problems they have never seen before.

One thing to establish at the start of Phase 5: students should transfer their dotfiles from Phase 3 to their new Phase 4 machine, either via USB or by recreating them from their ~/dotfiles backup. This is the portability test from session 48 made real. Reference it explicitly: they prepared for exactly this.

Sessions

Week 16
Bash scripting depth
"Scripts that handle failure are more useful than scripts that assume success."
Session 61
Bash scripting — control flow depth
5 min review
20 min concept
30 min exercise
5 min close

Phase 3 covered the scripting basics. Phase 5 builds on them with the patterns professionals actually use: case statements, arrays, string manipulation, arithmetic, and the discipline of writing scripts that handle failure. Students bring their Phase 3 scripts to this machine and extend them.

Ask students to open their Phase 3 synthesis script on the new machine. If they transferred their dotfiles, it should be in ~/scripts/. If not, recreate it from memory or from the ~/dotfiles backup on the old machine via USB. This reconnects Phase 3 work to Phase 5 context — the script they wrote is the starting point, not a new exercise.

Case statements. A cleaner alternative to long if/elif chains when testing one variable against multiple values:

case "$variable" in
  "value1") commands ;;
  "value2") commands ;;
  "value3"|"value4") commands ;; # multiple matches
  *) default_commands ;;         # catch-all
esac

Case statements are the correct tool for menu scripts, option parsing, and file extension detection.

Arrays. Bash supports indexed arrays: fruits=("apple" "banana" "cherry"). Access with ${fruits[0]}. Length: ${#fruits[@]}. Iterate: for item in "${fruits[@]}"; do echo "$item"; done. Arrays are essential when processing lists of items — filenames, usernames, server addresses.
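The array operations above, collected into a runnable sketch (the fruit names are placeholder data):

```shell
# Indexed array: create, access, measure, append, iterate
fruits=("apple" "banana" "cherry")

echo "${fruits[0]}"      # first element
echo "${#fruits[@]}"     # number of elements

fruits+=("dragon fruit") # append one element

# Quoting "${fruits[@]}" keeps elements containing spaces intact
for item in "${fruits[@]}"; do
  echo "$item"
done
```

The quoting matters: unquoted ${fruits[@]} would split "dragon fruit" into two words.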

String manipulation. Without external tools: ${var#prefix} removes shortest prefix match. ${var##prefix} removes longest. ${var%suffix} removes shortest suffix. ${var%%suffix} removes longest. ${var/old/new} replaces first match. ${var//old/new} replaces all. Practical use: ${filename%.txt} strips the .txt extension. ${path##*/} extracts the filename from a path (like basename).
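These expansions in a minimal runnable sketch (the path and strings are placeholders):

```shell
path="/home/user/docs/notes.txt"

file="${path##*/}"   # longest */ match stripped from the front (like basename)
dir="${path%/*}"     # shortest /* match stripped from the end (like dirname)
name="${file%.txt}"  # strip the .txt suffix

echo "$file"   # notes.txt
echo "$dir"    # /home/user/docs
echo "$name"   # notes

msg="one two one"
echo "${msg/one/1}"    # replace first match:  1 two one
echo "${msg//one/1}"   # replace all matches:  1 two 1
```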

Arithmetic. $((expression)) performs integer arithmetic: echo $((3 + 4)). ((count++)) increments a variable. $((size / 1024 / 1024)) converts bytes to MB. Bash arithmetic is integers only — for floating point, use bc or awk.
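A short sketch of the arithmetic forms (the byte count is a placeholder value; awk stands in for bc, which may not be installed):

```shell
echo $((3 + 4))       # 7

size=3145728                   # placeholder byte count
echo $((size / 1024 / 1024))   # 3 (MB; integer division truncates)

count=5
((count++))                    # increment in place
echo "$count"                  # 6

# Bash arithmetic is integers only; awk handles floating point
awk 'BEGIN { printf "%.2f\n", 7 / 2 }'   # 3.50
```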

  1. Rewrite the menu script from session 51 using a case statement instead of if/elif. Compare the two versions — which is more readable?
  2. Write a script that takes a list of filenames as arguments ($@), stores them in an array, and reports: total count, largest file by size, and any files that do not exist.
  3. Write a function strip_extension() that takes a filename and returns the name without its extension. Use string manipulation (no external commands). Test with: notes.txt → notes, archive.tar.gz → archive.tar, README → README.
  4. Write a script that converts bytes to human-readable size (KB, MB, GB) using only bash arithmetic. Input: number of bytes. Output: appropriate unit. Test with: 500, 51200, 1048576, 2147483648.
case statement · esac · bash array · ${#array[@]} · string manipulation · ${var%} · $(( arithmetic )) · integer-only arithmetic
Teacher note

Task 3 — strip_extension — has an edge case that matters: files with no extension (README) should return unchanged. Files with multiple dots (archive.tar.gz) — should the function strip .gz or .tar.gz? Both are valid design choices, but the student must make one deliberately and handle it consistently. Students who discover the edge case independently are thinking like engineers. Students who produce a function that breaks on README or archive.tar.gz have not fully tested their work — show them why testing edge cases matters before moving on.

Session 62
Bash scripting — real automation
5 min problem
15 min concept
35 min exercise
5 min close

Script arguments with getopts. For scripts that need multiple options (like real commands), getopts parses flags properly:

while getopts "n:v" opt; do
  case $opt in
    n) name="$OPTARG" ;;  # -n takes a value
    v) verbose=1 ;;       # -v is a flag
    \?) echo "Usage: $0 [-n name] [-v]"; exit 1 ;;
  esac
done

This is how real command-line tools are written. Students who learn getopts write scripts that behave like proper Unix commands.

Here documents (heredoc). Write multi-line text directly in a script:

cat << EOF
This is line one
This is line two
Variables work: $USER
EOF

Useful for generating configuration files, sending emails, or writing multi-line output cleanly.

Process substitution. diff <(command1) <(command2) compares the output of two commands as if they were files. while read line; do ...; done < <(command) (note the space between the two <) processes a command's output line by line. Both forms avoid temporary files.
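Both forms in a runnable sketch (printf stands in for any real command):

```shell
# diff two command outputs as if they were files
diff <(printf 'a\nb\nc\n') <(printf 'a\nb\nd\n') || echo "outputs differ"

# Read a command's output line by line in the current shell.
# "< <(cmd)" keeps the loop out of a subshell, unlike "cmd | while ...",
# so $count is still set after the loop ends.
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'one\ntwo\nthree\n')
echo "$count"   # 3
```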

Write one substantial script that uses getopts. Choose your own purpose, but it must: accept at least 2 meaningful flags with getopts, print a usage message with -h or on invalid input, log all actions with timestamps, and solve a real problem you actually have. Examples: a backup script with -s (source), -d (destination), -v (verbose); a file search script with -t (type), -s (size), -n (name pattern); a system report script with -c (cpu), -m (memory), -d (disk), -a (all).

After writing: test every flag combination. Test -h. Test an invalid flag. Test missing required arguments. Fix every failure.

getopts · OPTARG · heredoc (EOF) · process substitution <() · usage message · -h flag convention
Teacher note

The getopts exercise produces the most diverse scripts of any session in Phase 5. Students with organisation tendencies write backup and report scripts. Students with technical curiosity write system monitoring scripts. Let the diversity happen — the goal is getopts fluency, not a specific script. Students who finish early should add a --dry-run mode that logs what would happen without actually doing it. This pattern appears constantly in real-world tooling.

Session 63
Bash scripting — error handling
5 min scenario
20 min concept
30 min exercise
5 min close

Scenario: your backup script runs as a cron job at 3am. It fails halfway through — maybe the destination disk is full, or a source file is locked. You wake up and your backups are half-complete with no indication of what went wrong. This is the problem error handling solves. A script that fails silently is worse than a script that does not exist.

set -e. Add set -e near the top of every script. It causes the script to exit immediately when any command fails (returns non-zero exit code). Without it, a script continues running after errors and may cause cascading damage. With it, the first failure stops execution.

set -u. set -u causes the script to exit on undefined variable references. Without it, rm -rf "$DIR/" where DIR is undefined runs as rm -rf "/". With it, the script exits immediately. Always use set -u in scripts that modify the filesystem.

set -o pipefail. By default, a pipeline's exit code is the exit code of the last command, even if earlier commands failed. set -o pipefail makes the pipeline fail if any command in it fails. Combined: set -euo pipefail at the top of every serious script.
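The difference is easy to demonstrate with a pipeline whose first command fails (a sketch; false stands in for any failing command):

```shell
# Default: the exit status comes from the last command, so the failure is hidden
false | true
echo "default exit status: $?"    # 0

# With pipefail: the pipeline reports the failure
set -o pipefail
false | true
echo "pipefail exit status: $?"   # 1
set +o pipefail                   # restore the default for this demo
```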

trap. Runs a command when the script exits, receives a signal, or encounters an error:

cleanup() {
    rm -f /tmp/lockfile.$$
    log "Script ended"
}

trap 'echo "Error on line $LINENO"; cleanup' ERR
trap 'cleanup' EXIT

trap EXIT ensures cleanup always runs — even if the script is interrupted with Ctrl+C. trap ERR runs on any error. $$ is the script's process ID — useful for unique temporary filenames.

Locking. Prevent a script from running twice simultaneously (important for cron jobs):

LOCKFILE="/tmp/myscript.lock"
if [ -f "$LOCKFILE" ]; then
    echo "Already running. Exiting."
    exit 1
fi
touch "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT
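The if/touch pattern above has a small race window: two instances started at the same moment can both pass the -f check before either creates the file. mkdir checks and creates in a single atomic step, which closes that window. A sketch (the lock path is a placeholder):

```shell
LOCKDIR="/tmp/myscript.lock.d"

# mkdir either creates the directory (lock acquired) or fails (already locked);
# the test and the creation happen as one atomic operation
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Already running. Exiting."
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT
```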
  1. Take the backup script from session 62. Add set -euo pipefail at the top. Test it with a source directory that does not exist. Does it exit cleanly with a message?
  2. Add a trap EXIT that logs when the script ends (success or failure). Add a trap ERR that logs the line number of any error. Test both by introducing an intentional error.
  3. Add locking to the backup script. Run two instances simultaneously — what happens to the second one?
  4. Demonstrate the set -u protection: create a test script that does rm -rf "$UNDEFINED_VAR/tmp" without set -u. Observe what bash does. Now add set -u — what changes?
set -e · set -u · set -o pipefail · set -euo pipefail · trap · trap EXIT / ERR · $$ (PID) · $LINENO · lockfile pattern
Teacher note

Task 4 — the set -u demonstration with rm -rf — needs careful handling. Do it in ~/sandbox only, with a known directory path. The point is not to actually destroy anything but to demonstrate what would happen. Students who see rm -rf "/tmp" execute because UNDEFINED_VAR was empty will be permanently convinced of set -u's value. Make the demonstration controlled: create a directory called ~/sandbox/test-delete/, set UNDEFINED_VAR intentionally, show the command that would run, then show how set -u prevents it.

Session 64
Scripting synthesis — personal automation tool
50 min build
10 min demo

No new content. Each student writes or substantially refactors a script that incorporates the full Phase 5 scripting toolkit: getopts, arrays, string manipulation, set -euo pipefail, trap, and locking. The script must solve a real problem on their machine. It must be something they will actually run again after today.

The script must include all of the following:

  • Comment header: name, purpose, usage, arguments, author, date
  • set -euo pipefail
  • getopts for at least 2 flags
  • At least one array
  • trap EXIT for cleanup
  • Logging with timestamps
  • A lockfile (if the script could be run by cron)
  • Usage message on -h or invalid input
  • bash -x tested
  • At least 3 edge cases tested and handled

Demos: 2 minutes each. Show what it does, one interesting implementation detail, and one edge case you handled.

Teacher note

Collect these scripts. They are significantly more sophisticated than the Phase 3 synthesis scripts. At the end of Phase 6, this comparison — Phase 3 script vs Phase 5 script — is one of the most concrete demonstrations of growth across the course. Students who struggle to see how much they have learned will see it clearly in this comparison.

Week 17
Networking and SSH
"The internet is just computers talking to each other. You can talk too."
Session 65
Networking — what an IP address is
5 min question
25 min concept
25 min exercise
5 min close

Ask: what is your IP address right now? Most students will not know. Some will say "I can Google it." Ask: what would Google tell you, and is that the same address your machine uses on your local network? The answer reveals the distinction between public and private IP — which is the session's foundation.

IP addresses. Every device on a network has an IP address — a numeric identifier that allows other devices to find it. IPv4 addresses are 32-bit numbers written as four octets: 192.168.1.100. IPv6 addresses are 128-bit, written in hexadecimal: 2001:db8::1. Most home networks currently use IPv4 with NAT.

Private vs public addresses. Private address ranges (RFC 1918): 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. These are used inside home and office networks. They are not routable on the internet — your router translates between them and the public internet via NAT (Network Address Translation). When you visit a website, the router substitutes its public IP for your private IP. The website sees the router's IP, not yours.

Key networking commands on Linux:

Essential networking commands
ip addr show          # show all network interfaces and their IPs
ip route show         # show routing table (default gateway)
ping hostname/IP      # test reachability, measure latency
traceroute hostname   # trace the route to a destination
nslookup hostname     # DNS lookup: resolve name to IP
dig hostname          # detailed DNS query
ss -tulpn             # show listening sockets (ports)
hostname              # show machine's hostname
hostname -I           # show machine's IP addresses

Ports. An IP address identifies a machine. A port number identifies a specific service on that machine. Common ports: 22 (SSH), 80 (HTTP), 443 (HTTPS), 25 (SMTP email), 53 (DNS). When you connect to a website, your browser connects to the server's IP address on port 80 or 443. When you SSH into a machine, you connect to port 22. ss -tulpn shows which services are listening on which ports on your own machine.

DNS. Domain Name System — translates human-readable names (example.com) to IP addresses. When you type a URL, your computer asks a DNS resolver for the IP. The resolver checks its cache, then queries a hierarchy of DNS servers. The result is cached for the TTL (time to live) period. /etc/resolv.conf shows which DNS server your machine uses. /etc/hosts is a local override — entries here take precedence over DNS.

  1. Run ip addr show. Identify your machine's local IP address and network interface name. What does "lo" (loopback) do?
  2. Run ip route show. What is the default gateway IP? What does that device do?
  3. Run ping -c 4 8.8.8.8. What is 8.8.8.8? What does the output tell you about latency? What does packet loss mean?
  4. Run ping -c 4 google.com. How does this differ from pinging the IP directly? What resolved the name?
  5. Run ss -tulpn. What services are listening on your machine right now? On which ports?
  6. Read /etc/hosts. What entries are there? Add a custom entry: 127.0.0.1 mycomputer. Test it with ping mycomputer.
  7. Run dig google.com. Read the ANSWER SECTION. What IP did DNS return? What is the TTL?
IP address · IPv4 / IPv6 · private IP (RFC 1918) · NAT · port · DNS · TTL · default gateway · ip addr / ip route · ping / traceroute · ss -tulpn · /etc/hosts · dig
Teacher note

Task 6 — adding an entry to /etc/hosts — is one of the most immediately practical things in this session. Students who understand that /etc/hosts overrides DNS can block domains (add an entry pointing an ad server to 127.0.0.1), create local shortcuts (point a memorable name to a local machine's IP), or test website changes before DNS propagates. All of these are real use cases. Let students explore the implications.

Session 66
SSH — controlling machines remotely
5 min context
20 min concept
30 min exercise
5 min close

SSH is the skill that transforms everything learned so far into remote capability. The terminal on a remote machine is identical to the terminal on a local one. Every command, every script, every Phase 2–3 skill applies. SSH is how real systems are administered — not by sitting in front of a server, but by connecting from anywhere.

In session 17 we established that the terminal emulator (Konsole) and the shell (bash) are separate programs. Ask: "If the shell can run on a remote machine, what does the terminal emulator connect to?" The answer: Konsole still runs locally, but the ssh client it hosts connects it to a shell session on the remote machine. The shell is remote. Keyboard input travels over the network; the output travels back. This is the session 17 payoff.

SSH (Secure Shell) is an encrypted protocol for remote login and command execution. The basic command: ssh username@hostname_or_ip. The connection is encrypted end-to-end — no one observing the network can read the commands or their output. SSH replaced telnet, which was identical but unencrypted.

What happens when you SSH: your client connects to the remote machine on port 22. SSH performs a key exchange — both sides agree on an encryption key without sending the key over the network (Diffie-Hellman). Your credentials are verified. A shell session starts on the remote machine. Everything you type is transmitted encrypted. When you close the session (exit or logout), the connection ends and the remote shell stops.

scp — secure copy. Transfers files over SSH: scp file.txt user@remote:/destination/. Copies from local to remote. scp user@remote:/path/file.txt ./ copies from remote to local. scp -r copies directories recursively.

Setting up SSH server. Install: sudo apt install openssh-server. Start: sudo systemctl enable --now ssh. Verify: sudo systemctl status ssh. The server listens on port 22. Configuration: /etc/ssh/sshd_config.

SSH between classroom machines. Students pair up — one acts as server, one as client. Then switch.

  1. Server machine: install and start the SSH server. Verify it is running with systemctl status ssh. Find your IP with ip addr show. Tell your partner.
  2. Client machine: connect with ssh username@partner_ip. Accept the host key fingerprint. Log in with the partner's password. Verify: run hostname — does it show the remote machine's name?
  3. From the SSH session: navigate, create a file, run a script. Observe that everything works exactly as if sitting at that machine.
  4. From the SSH session: run who — can you see your own session listed?
  5. From the local machine: copy a file to the remote machine with scp. Verify it arrived.
  6. Disconnect (type exit or Ctrl+D). What happens to the remote processes you started?
SSH · port 22 · encrypted protocol · host key fingerprint · openssh-server · scp · sshd_config · systemctl enable · who · Ctrl+D (logout)
Teacher note

Task 4 — running who from the SSH session to see your own connection — is a small but memorable moment. Students can see their own session as a network connection: username, terminal type, login time, and the client IP. This makes the abstract concept of "a remote session" concrete and visible. It also previews the security implications — if someone else were connected, they would also appear here.

Session 67
SSH keys — secure authentication without passwords
5 min problem
20 min concept
30 min exercise
5 min close

Problem: your backup script (from session 62) needs to copy files to another machine over SSH. But SSH requires a password. You cannot type a password at 3am when cron runs the script. The solution is SSH keys — authentication without a password, but more secure than a password. Today we understand why and implement it.

Public key cryptography. You generate a key pair: a private key (stays on your machine, never shared) and a public key (can be freely shared). A message encrypted with the public key can only be decrypted with the private key. This asymmetry is how authentication works without sending a password over the network.

SSH key authentication. Generate a key pair: ssh-keygen -t ed25519 -C "your_comment". This creates ~/.ssh/id_ed25519 (private — never share this) and ~/.ssh/id_ed25519.pub (public — safe to share). Copy the public key to the remote machine: ssh-copy-id username@remote. This appends the public key to ~/.ssh/authorized_keys on the remote machine. Now SSH from this machine to that remote will work without a password — the SSH client proves identity by signing a challenge with the private key.

Key types. ed25519 is the modern recommended type — small, fast, secure. RSA 4096 is older but widely supported. Avoid RSA 1024 (too short) and DSA (deprecated).

SSH config file. ~/.ssh/config stores SSH connection shortcuts:

Host myserver
    HostName 192.168.1.50
    User username
    IdentityFile ~/.ssh/id_ed25519
    Port 22

With this config, ssh myserver is equivalent to the full command. Essential for machines you connect to regularly.

Permissions matter. SSH will refuse to use keys with incorrect permissions: chmod 700 ~/.ssh, chmod 600 ~/.ssh/id_ed25519, chmod 644 ~/.ssh/id_ed25519.pub, chmod 600 ~/.ssh/authorized_keys.

  1. Generate an ed25519 key pair: ssh-keygen -t ed25519 -C "name-course-2024". When asked for a passphrase — add one (optional but recommended for security). What is the passphrase protecting?
  2. View your public key: cat ~/.ssh/id_ed25519.pub. What format is it? What are the three space-separated fields?
  3. Copy your public key to your partner's machine: ssh-copy-id username@partner_ip. Verify: SSH to the partner machine — are you prompted for a password?
  4. Verify permissions are correct: ls -la ~/.ssh/. What permissions does each file have? What happens if you set the private key to 644 and try to SSH? (Try it.)
  5. Create an SSH config entry for your partner's machine. Test that ssh partnername works.
  6. Update your backup script from session 62 to copy a file to the partner machine using scp with key authentication. Verify it runs without prompting for a password.
public key cryptography · private key (~/.ssh/id_ed25519) · public key (.pub) · authorized_keys · ssh-keygen -t ed25519 · ssh-copy-id · ~/.ssh/config · key permissions (600/644/700) · passphrase
Teacher note

Task 4 — setting the private key to 644 and observing the failure — is a critical demonstration. SSH refuses to use a private key with world-readable permissions. The error message says exactly why: "UNPROTECTED PRIVATE KEY FILE." This is SSH enforcing the security model: a private key that anyone can read is not private. The student sets it back to 600 and it works again. That experience — permission mismatch → clear error → fix → success — is the debugging cycle in miniature.

Session 68
Networking tools — curl, wget, and network diagnostics
5 min demo
20 min concept
30 min exercise
5 min close

Live demo: curl https://wttr.in/London. A weather forecast renders in the terminal. No browser, no GUI. The terminal can talk to the internet. That is what this session is about — using the terminal as an HTTP client.

curl transfers data to or from a URL. It supports HTTP, HTTPS, FTP, and many other protocols. Basic usage: curl https://example.com prints the response body. Key flags: -o filename saves to file. -L follows redirects. -I fetches headers only. -X POST sends a POST request. -d '{"key":"value"}' sends data. -H "Authorization: Bearer TOKEN" adds a header. curl is how you call APIs from the terminal or from scripts.
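These flags combine into a pattern scripts use constantly: print only the status code and discard the body. A sketch (example.com is a placeholder; any URL works):

```shell
url="https://example.com"   # placeholder URL

# -s silences progress output, -o /dev/null discards the body,
# -w '%{http_code}' writes just the numeric status code
code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
echo "$code"                # e.g. 200 (000 if the host is unreachable)
```

This is the building block for the reachability-check script in this session's exercises.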

wget downloads files from URLs. Simpler than curl for downloading: wget https://example.com/file.zip. wget -r recursively downloads. wget -c resumes interrupted downloads. wget is better than curl for bulk or recursive downloads; curl is better for API calls and custom HTTP requests.

Network diagnostics:

Diagnostic tools and what they show
ping host           test reachability, measure round-trip time
traceroute host     show each hop between you and destination
mtr host            interactive combination of ping + traceroute
nslookup/dig host   DNS resolution details
curl -I url         HTTP headers only, useful for debugging
ss -tulpn           listening ports on THIS machine
netstat -rn         routing table (older tool, still common)
ip addr             network interfaces and their IPs
  1. Use curl to download a plain text file from the internet and save it to ~/downloads/. Show the command.
  2. Use curl -I https://google.com. Read the HTTP headers. What status code is returned? What does it mean? What is the Location header for?
  3. Use curl https://api.ipify.org. What does this return? What does it tell you about NAT?
  4. Use traceroute 8.8.8.8. Count the hops. What does each line tell you? What does a * mean on a line?
  5. Write a script that checks if a list of websites is reachable using curl, logs the HTTP status code for each, and reports any that return non-200 status or time out.
curl · wget · HTTP status codes · curl -I (headers) · curl -X POST · traceroute · mtr · API call from terminal
Teacher note

Task 3 — curl api.ipify.org — returns the machine's public IP address as seen from the internet. This is the NAT concept made visible: the IP returned is different from ip addr show's local IP. Students who see this difference understand NAT instantly. Follow up: what does a server on the internet see when you connect? It sees the NAT IP, not your local machine's IP. Why does this matter for security and for hosting services?

Week 18
Processes and services
"Everything running on this machine is a process. You can see all of them."
Session 69
Processes — what is running and why
5 min reveal
20 min concept
30 min exercise
5 min close

Live demo: run htop. Ask: how many processes are running right now? Most students will guess 5–10. The actual number is usually 150–200 on a fresh Kubuntu desktop. Ask: what are they all doing? This session answers that question systematically.

Every program running on Linux is a process. Every process has a PID (Process ID) — a unique number assigned at creation. Every process (except PID 1) has a parent process — the process that started it. This forms the process tree. PID 1 is systemd. PID 2 is usually a kernel thread. Everything else descends from one of these.

Key process commands:

Process management commands
ps aux           all processes, all users, detailed
ps -ef           similar, different format
pgrep name       find PIDs by name
pidof name       find PID of a specific program
top              real-time process monitor (interactive)
htop             better top: colours, mouse, kill menu
kill PID         send signal to process (default: SIGTERM)
kill -9 PID      SIGKILL: immediate kill, cannot be ignored
killall name     kill all processes with this name
nice -n 10 cmd   start a command with lower priority
renice +10 PID   change priority of running process
pstree           show the process tree visually

Process states. R: running or runnable (on the CPU or waiting for it). S: sleeping (waiting for something — most processes are here most of the time). D: uninterruptible sleep (usually waiting for I/O — disk or network). Z: zombie (finished but parent hasn't acknowledged it yet). T: stopped (paused).

Signals. Signals are messages sent to processes. SIGTERM (15) — politely asks a process to terminate. The process can ignore it or clean up first. SIGKILL (9) — immediate, unignorable termination. The kernel kills the process without asking. SIGHUP (1) — originally "hang up", now commonly used to tell a daemon to reload its configuration. SIGINT (2) — what Ctrl+C sends. Use SIGTERM first; SIGKILL only when SIGTERM fails.
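The SIGTERM-then-verify cycle in a runnable sketch:

```shell
sleep 300 &     # a long-running background process
pid=$!          # $! holds the PID of the last background job

kill -TERM "$pid"          # polite request to terminate (same as plain kill)
wait "$pid" 2>/dev/null    # reap it; status 143 = 128 + 15 (died from SIGTERM)

# kill -0 delivers no signal; it only tests whether the PID still exists
if ! kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is gone"
fi
```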

  1. Run ps aux | wc -l. How many processes? Run pstree | less. What is at the root? Find your terminal session in the tree.
  2. Open Konsole, Firefox (or another browser), and a text editor simultaneously. Run ps aux | grep -E 'konsole|firefox|kate'. What PIDs are assigned? What user do they run as?
  3. Start a long-running process: sleep 300 &. The & runs it in the background. Find its PID with pgrep sleep. Send SIGTERM: kill $(pgrep sleep). Verify it is gone with pgrep.
  4. Run htop. Sort by CPU, then by memory. Find the top 3 memory consumers. What are they? Press F10 or q to quit.
  5. Run cat /proc/$$/status — this reads the status of the current shell process. What information is there?
  6. Start a process: sleep 600 &. Change its priority: renice +15 $(pgrep sleep). Verify with ps -o pid,ni,comm -p $(pgrep sleep). What does a higher nice value mean?
process / PID · parent process · process tree · process state (R/S/D/Z) · ps aux · htop · kill / kill -9 · SIGTERM / SIGKILL / SIGHUP · nice / renice · background (&) · /proc filesystem
Teacher note

Task 5 — reading /proc/$$/status — introduces the /proc filesystem. Every running process has a directory in /proc named after its PID. These directories contain files that expose process information: memory usage, open files, CPU time, environment variables, etc. This is the "everything is a file" principle taken to its logical conclusion — even process state is a file. Students who explore /proc/self/ are looking at the current process's own metadata in real time.

Session 70
systemd — services and the boot chain
5 min recall
20 min concept
30 min exercise
5 min close

Ask: in the boot sequence from session 55, what starts after the kernel? systemd. Ask: what does systemd do? Most students will remember "starts services." Ask: what is a service? How does systemd know what to start and in what order? This is what the session answers.

systemd is PID 1 — the first process the kernel starts. It is responsible for starting all other processes in the correct order, managing services that run in the background, handling system state transitions (boot, shutdown, sleep), and logging (via journald).

systemd uses unit files — configuration files that describe services, timers, mounts, and other system resources. Unit files live in /lib/systemd/system/ (system defaults) and /etc/systemd/system/ (local overrides). A service unit file has sections: [Unit] (description, dependencies), [Service] (what to run, how to run it), [Install] (when to enable it).

systemctl commands
systemctl status servicename          show status, recent logs
systemctl start servicename           start now (not persistent)
systemctl stop servicename            stop now
systemctl restart servicename         stop then start
systemctl reload servicename          reload config without restart
systemctl enable servicename          start at boot
systemctl disable servicename         don't start at boot
systemctl enable --now service        enable AND start now
systemctl list-units --type=service   list all services
systemctl daemon-reload               reload unit files after editing

Reading service status: systemctl status ssh shows whether the SSH service is active, when it started, its PID, and the last few log lines. The "Active: active (running)" line is the most important — green dot means healthy, red means failed.

Writing a simple service unit: create a script, then create a .service file in /etc/systemd/system/ to run it automatically. This is how you turn a script into a system service that survives reboots.
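A minimal sketch of such a unit file, matching the hello-service exercise below (the script path is a placeholder; Restart= and WantedBy= shown here are one reasonable choice, not the only one):

```
[Unit]
Description=Hello logging demo service
After=network.target

[Service]
# Type defaults to "simple": ExecStart is the service's main process
ExecStart=/home/student/scripts/hello-service.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/hello.service, make sure the script is executable (chmod +x), then run sudo systemctl daemon-reload followed by sudo systemctl enable --now hello.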

  1. Run systemctl list-units --type=service --state=running. How many services are running? Find 3 you can identify and explain what they do.
  2. Run systemctl status ssh. Read the full output. Is it running? When did it start? What is its PID? Read the last 5 log lines.
  3. Stop the SSH service: sudo systemctl stop ssh. Try to SSH to this machine from your partner's machine — what happens? Start it again: sudo systemctl start ssh. Retry.
  4. Read the unit file for SSH: cat /lib/systemd/system/ssh.service. Read each section — what does After= mean? What does ExecStart= contain?
  5. Create a simple service: write a script ~/scripts/hello-service.sh that logs "Hello from service" with a timestamp every 5 seconds (use a while loop and sleep). Create /etc/systemd/system/hello.service to run it. Run sudo systemctl daemon-reload, then start and enable the service. Verify with journalctl -u hello.service -f (Ctrl+C to stop).
  6. Disable and stop your hello service when done: sudo systemctl disable --now hello.
systemd · PID 1 · unit file · service unit · systemctl · enable / disable · start / stop / restart · systemctl status · daemon-reload · journalctl -u
Teacher note

Task 5 — creating a real service — is the session's most valuable exercise. Students who write a unit file and see their script running as a system service have crossed an important threshold: they can now run anything persistently on their machine. This skill is directly relevant to Phase 6, where they will run Ollama as a service. Preview that explicitly: "In Phase 6, Ollama will run as a service so it starts automatically and runs in the background. You now know how to do that."

Week 19
Logs, debugging, and archives
"The system writes everything down. Reading it is the skill."
Session 71
Logs — reading the system's diary
5 minscenario
20 minconcept
30 minexercise
5 minclose

Scenario: a service stopped working overnight. You have no idea why. There was no error message on screen. Where do you look first? The answer is logs. The system writes a record of everything it does — failed logins, service crashes, kernel events, package installations. Today we learn to read them.

Linux has two logging systems: the traditional text-based logs in /var/log/, and the systemd journal managed by journald.

Traditional logs in /var/log/:

Key log files in /var/log/
  • syslog: general system events (main log)
  • auth.log: authentication (logins, sudo use, SSH)
  • kern.log: kernel messages
  • dpkg.log: package installations and removals
  • apt/history.log: apt command history
  • boot.log: boot sequence messages (not all distros)
  • Xorg.0.log: X display server log (if using X11)

journalctl — systemd's log viewer:

journalctl usage
  • journalctl: all logs (newest at bottom)
  • journalctl -f: follow, showing new entries as they arrive
  • journalctl -n 50: last 50 lines
  • journalctl -u servicename: logs for a specific service
  • journalctl --since "1 hour ago": entries from the last hour
  • journalctl --since "2024-01-15" --until "2024-01-16": entries in a date range
  • journalctl -p err: only error level and above
  • journalctl -b: logs from the current boot only
  • journalctl -b -1: logs from the previous boot
  • journalctl --disk-usage: how much disk space the journal uses

Log levels: debug, info, notice, warning, err, crit, alert, emerg. journalctl -p err shows errors and above — this is usually the right starting point when diagnosing problems.

  1. Run sudo tail -f /var/log/syslog in one terminal. In another, run sudo apt update. Watch the log entries appear in real time. What events does apt log?
  2. Find all failed SSH login attempts: sudo grep "Failed password" /var/log/auth.log | tail -20. Are there any? Where did they come from?
  3. Use journalctl to find all errors since yesterday: journalctl -p err --since "yesterday". How many are there? What are the common sources?
  4. Find the log entries from the last boot: journalctl -b. How long did the boot take? (Look for kernel timestamp at start vs login prompt timestamp.)
  5. Find when packages were last installed: cat /var/log/apt/history.log | tail -30. What was the last operation?
  6. Write a script that checks auth.log for failed login attempts in the last hour and sends a summary to the terminal (count of attempts, unique source IPs).
/var/log/syslog · /var/log/auth.log · journalctl · journalctl -f · journalctl -b · journalctl -p err · log levels · failed login attempt
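A starting point for task 6 — a sketch assuming the standard sshd message format ("Failed password for USER from IP port N ssh2"). The last-hour filter is easiest to add by feeding the script the output of journalctl --since "1 hour ago" instead of the whole file:

```shell
#!/usr/bin/env bash
# Sketch for task 6. Reads /var/log/auth.log by default (needs sudo);
# pass another file as $1 to test against a copy.
set -euo pipefail

LOG="${1:-/var/log/auth.log}"

summary() {
  # the source IP is the field right after the word "from"
  { grep "Failed password" "$1" || true; } | awk '
    { count++; for (i = 1; i < NF; i++) if ($i == "from") ips[$(i+1)]++ }
    END {
      printf "Failed attempts: %d\n", count + 0
      for (ip in ips) printf "  %s: %d\n", ip, ips[ip]
    }'
}

if [ -r "$LOG" ]; then
  summary "$LOG"
else
  echo "cannot read $LOG (try sudo)"
fi
```

The { grep … || true; } guard matters under set -o pipefail: a log with zero failures would otherwise make the pipeline, and the script, fail.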
Teacher note

Task 2 — finding failed SSH login attempts — may produce surprising results. If the machine has been internet-accessible at any point (not just on a local network), there are likely automated brute-force attempts in auth.log. Even on a local network, previous attempts from session 66 and 67 may appear. This is a real security visibility moment: the system has been recording every login attempt, and you can read them. Students who see hundreds of attempts from unknown IPs understand immediately why SSH key authentication (session 67) and disabling password authentication are important.

Session 72
Debugging — finding the root cause
5 minmindset
15 minconcept
35 minexercise
5 minclose

Write on the board: "It doesn't work." Ask: what is wrong with this problem description? Everything. It contains no information about what was expected, what actually happened, when it started, or what changed. The debugging mindset starts with precise problem description. "The SSH service fails to start after I edited /etc/ssh/sshd_config" is debuggable. "SSH doesn't work" is not.

The debugging process is a method, not luck:

Debugging method
  1. Describe the problem precisely. What should happen? What actually happens? When did it start? What changed?
  2. Reproduce it reliably. If you cannot reproduce it, you cannot fix it.
  3. Gather information. Read the error message carefully — all of it. Check the relevant log file. Run journalctl -u servicename for services. Run bash -x for scripts.
  4. Hypothesise. What is the most likely cause? What else could cause this symptom?
  5. Test one change at a time. Change one thing. Test. Did it change the symptom? If not, revert and try something else.
  6. Find the root cause, not the symptom. A workaround fixes the symptom. A fix addresses the cause. Know which you are doing.

Useful debugging commands:

  • strace -p PID: traces the system calls a process makes, showing exactly what it is doing
  • lsof -p PID: shows all files open by a process
  • dmesg | tail: shows recent kernel messages
  • journalctl -xe: shows the recent journal with context
  • systemctl status servicename: shows the last log lines and current state
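bash -x, the script tracer from the toolbox above, is worth demonstrating on a tiny script before students need it in anger (the demo file in /tmp is hypothetical):

```shell
# Create a two-line script just for the demo
set -euo pipefail
demo=$(mktemp /tmp/trace-demo-XXXX.sh)
cat > "$demo" <<'EOF'
#!/usr/bin/env bash
name="world"
echo "hello $name"
EOF

# -x prints each command (prefixed with +) before running it, with
# variables already expanded: you see what bash actually executed
bash -x "$demo"
```

The trace goes to stderr, so a script's normal output and the trace can be separated with the usual redirections.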

Broken environment exercise. You will be given 4 deliberately broken scenarios to diagnose and fix. For each: describe the problem precisely, identify the root cause (not just the symptom), fix it, and verify the fix.

  1. Broken service: the hello.service from session 70 has been modified with a syntax error in the unit file. systemctl start hello fails. Diagnose using systemctl status and journalctl -u hello. Fix the unit file. Verify the service starts.
  2. Permission problem: a script exists at ~/scripts/diagnose.sh. Running it gives "permission denied." The file exists. The path is correct. Diagnose the specific cause. Fix it. What is the difference between "file not found" and "permission denied" as error messages?
  3. Missing command: a script calls jq to parse JSON output. Running the script gives "command not found." Diagnose: is jq installed? If not, where does it come from and how do you install it? Fix it.
  4. Configuration error: the SSH service will not start. systemctl status ssh shows a config file error. Use sshd -t to test the config. Find and fix the error in /etc/ssh/sshd_config.

For each scenario, write: the precise problem description, the debugging commands you ran, the root cause, the fix, and the verification command.

root cause · symptom vs cause · reproduce reliably · strace · lsof · dmesg · journalctl -xe · sshd -t (config test) · bash -x
Teacher note

Prepare the four broken scenarios before the session. The SSH config error (scenario 4) is the most instructive — sshd -t tests the config and reports the exact line number of any syntax error. Students who discover this tool independently are learning to look for purpose-built diagnostic commands. Always check if a service has its own config-test mode before reading through the config manually.

Session 73
Archives — tar, gzip, zip
5 mincontext
20 minconcept
30 minexercise
5 minclose

Students already used tar in session 32 — they extracted a .tar.gz file without understanding it. Now we understand it. Ask: what did tar -xzf downloads.tar.gz actually do? Break down each flag. This is the "remember that?" moment that rewards the forward reference.

tar (tape archive) creates archives — a single file containing multiple files and directories, with their metadata (permissions, timestamps, ownership) preserved. tar itself does not compress — it just packs. Compression is added separately or via tar flags.

tar flag mnemonics
  • c — Create a new archive
  • x — eXtract from archive
  • t — lisT contents without extracting
  • v — Verbose (show filenames as processed)
  • f — File (next argument is the archive filename)
  • z — gZip compression (.tar.gz or .tgz)
  • j — bzip2 compression (.tar.bz2): slower, better compression
  • J — xz compression (.tar.xz): slowest, best compression

  • Create: tar -czf archive.tar.gz directory/
  • Extract: tar -xzf archive.tar.gz
  • List: tar -tzf archive.tar.gz
  • Extract single file: tar -xzf archive.tar.gz path/to/file

gzip / gunzip. Compresses individual files: gzip file.txt creates file.txt.gz and removes the original. gunzip file.txt.gz reverses. gzip -k keeps the original. gzip -d is equivalent to gunzip.

zip / unzip. ZIP format — cross-platform, compatible with Windows. zip -r archive.zip directory/ creates a zip. unzip archive.zip extracts. ZIP includes compression internally. Use when the recipient is on Windows or when you need ZIP specifically.

Choosing format: .tar.gz — Linux/macOS standard, preserves Unix permissions. .zip — cross-platform, no permission preservation on extraction. .tar.xz — maximum compression for distribution. For local backups and dotfiles: .tar.gz. For sharing with Windows users: .zip.

  1. Create a .tar.gz backup of your ~/scripts directory: tar -czf scripts-backup-$(date +%Y%m%d).tar.gz ~/scripts/. What does the $(date +%Y%m%d) do? Check the file size.
  2. List the contents of the archive without extracting it. Verify every file is present.
  3. Extract the archive to a different location: tar -xzf scripts-backup-*.tar.gz -C /tmp/. Verify permissions were preserved.
  4. Create a .zip of your ~/projects/linux-course directory for "sharing with a Windows user." Compare the size to a .tar.gz of the same directory.
  5. Update your backup script from session 62 to create a dated .tar.gz archive of the source directory instead of copying individual files. Add rotation: delete archives older than 7 days.
tar · -czf (create gzip) · -xzf (extract gzip) · -tzf (list) · gzip / gunzip · zip / unzip · -C (extract to path) · archive rotation · date +%Y%m%d
Teacher note

Task 5 — adding archive rotation to the backup script — introduces the find -mtime pattern for deleting old files. The command: find /backup/dir -name "*.tar.gz" -mtime +7 -delete. Students who add this to their backup script have a script that could genuinely run as a cron job and not fill the disk. Preview this: "In session 74, you will schedule this to run automatically. That is what a real backup system looks like."
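Task 5 might come together like this sketch (directory names are placeholders; point them at your own layout):

```shell
#!/usr/bin/env bash
# Dated .tar.gz backup with 7-day rotation, for task 5.
set -euo pipefail

backup_and_rotate() {
  local src="$1" dest="$2"
  mkdir -p "$dest"
  local archive="$dest/scripts-backup-$(date +%Y%m%d).tar.gz"
  # -C keeps archive paths relative to the source's parent directory
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
  echo "created $archive"
  # rotation: remove archives whose mtime is more than 7 days old
  find "$dest" -name "scripts-backup-*.tar.gz" -mtime +7 -delete
}

# run against the course layout if it exists on this machine
[ -d "$HOME/scripts" ] && backup_and_rotate "$HOME/scripts" "$HOME/backups" || true
```

Because the function takes its directories as arguments and uses no interactive-shell variables, it is already cron-safe in the sense session 74 requires.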

Session 74
Cron — scheduling recurring tasks
5 minmotivation
20 minconcept
30 minexercise
5 minclose

The backup script is complete, tested, error-handled, and logs everything. One problem: you have to remember to run it. Cron solves this. Cron is the Linux scheduler — it runs commands at specified times, dates, or intervals, without human intervention. It is how real system maintenance works.

Cron reads a crontab (cron table) — a text file specifying when to run each command. Each user has their own crontab. The system also has its own crontabs in /etc/cron.d/ and in the /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories.

Crontab syntax:

Crontab format — five time fields + command
MIN HOUR DOM MON DOW command

  • * * * * * — every minute
  • 0 3 * * * — 3:00am daily
  • 0 3 * * 0 — 3:00am every Sunday
  • */5 * * * * — every 5 minutes
  • 0 9-17 * * 1-5 — 9am-5pm every weekday, hourly

Fields: MIN 0-59 · HOUR 0-23 · DOM 1-31 (day of month) · MON 1-12 (month) · DOW 0-7 (day of week; 0 and 7 = Sunday). * matches any value; */n means every n units.

Edit your crontab: crontab -e — opens in $EDITOR. List current crontab: crontab -l. Remove: crontab -r (careful — removes everything).

Cron environment differences. Cron runs with a minimal environment — different from your interactive shell. PATH is limited. Home directory may differ. Variables from .bashrc are not loaded. Always use absolute paths in cron commands. Always redirect output: command >> /var/log/mycron.log 2>&1. Test your script with a minimal environment before scheduling it.

Verifying cron ran. Cron output (if not redirected) is emailed to the user — check /var/mail/username or the mail command. More usefully: log to a file in the script itself (you already do this with the log() function). Check with grep CRON /var/log/syslog to see when cron ran your job.
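Put together, a crontab entry for the session 73 backup script might look like this sketch (the username and paths are placeholders; note the absolute paths and the output redirect):

```
# min hour dom mon dow  command
0 2 * * * /home/student/scripts/backup.sh >> /home/student/logs/backup.log 2>&1
```

Everything after the fifth field is the command; the 2>&1 captures errors as well as normal output, so a silent cron failure still leaves evidence in the log file.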

  1. Edit your crontab: crontab -e. Add an entry to run your backup script daily at 2am. Use absolute paths throughout. Redirect all output to a log file.
  2. Add a second entry to run a simple test every minute: * * * * * echo "cron ran at $(date)" >> /tmp/cron-test.log. Wait 2 minutes. Does the log file contain entries? Verify cron is running your job.
  3. Check when cron ran: grep CRON /var/log/syslog | tail -10. What information does syslog show about cron activity?
  4. Remove the test entry (the every-minute one) with crontab -e.
  5. For scripts that need root (like some system maintenance scripts), cron can run as root via sudo crontab -e or /etc/cron.d/. Add a system-level cron entry that runs apt update weekly on Sunday at 4am.
cron · crontab · crontab -e / -l / -r · five time fields · * (any) / */n (every n) · cron minimal environment · absolute paths in cron · 2>&1 in cron · /etc/cron.d
Teacher note

The "cron minimal environment" issue causes more confusion for new cron users than the syntax. Scripts that work perfectly in the terminal fail silently in cron because PATH does not include ~/scripts or /usr/local/bin. The solution is always: use absolute paths for every command and every file reference in scripts that will run via cron. The every-minute test job in task 2 is specifically designed to verify cron is actually running within 2 minutes — students who wait 10 minutes to check are too patient.

Week 20
Version control, disk management, and synthesis
"Track your work. Understand your storage. Build something complete."
Session 75
git — version control basics
5 minproblem
20 minconcept
30 minexercise
5 minclose

Ask: have you ever made a change to a script, saved it, and then wanted to go back to the version from yesterday? Or worked on the same file from two different machines and lost changes? These are the problems version control solves. Git is the tool. It is the most widely used software development tool in the world — and it is equally useful for managing dotfiles, scripts, and any text-based work.

Git tracks changes to files over time. Every time you commit, git takes a snapshot of the tracked files and stores it permanently with a message describing what changed. You can return to any previous snapshot at any time. You can see exactly what changed between any two points. You can work on experimental changes without affecting the stable version.

Core git workflow
  • git init — create a new repository in the current directory
  • git status — what has changed, what is staged
  • git add filename — stage a file for the next commit
  • git add . — stage all changed files
  • git commit -m "msg" — save staged changes as a snapshot
  • git log — show commit history
  • git log --oneline — compact history view
  • git diff — show unstaged changes
  • git diff --staged — show staged changes
  • git show HASH — show a specific commit

The three areas. Working directory: where you edit files. Staging area (index): where you prepare the next commit. Repository: where commits are stored permanently. git add moves changes from working directory to staging. git commit moves staged changes to the repository.
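The three-area flow can be walked end-to-end in a throwaway repository (the temp directory and file name are just for the demo):

```shell
set -euo pipefail
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Student"            # local config so the commit works anywhere
git config user.email "student@example.com"

echo 'echo hello' > greet.sh              # edit in the working directory
git add greet.sh                          # working directory -> staging area
git commit -q -m "Add greet.sh"           # staging area -> repository
git log --oneline                         # the snapshot is now permanent
```

Using git config without --global keeps the demo identity inside the throwaway repository instead of touching ~/.gitconfig.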

First time setup. Before first commit: git config --global user.name "Your Name" and git config --global user.email "[email protected]". These are stored in ~/.gitconfig — the dotfile from session 45.

  1. Configure git: set your name and email. Verify with cat ~/.gitconfig.
  2. Initialise a git repository in your ~/scripts directory: cd ~/scripts && git init.
  3. Check status: git status. What does it show? Stage all your scripts with git add . and then run git status again. What changed?
  4. Make your first commit: git commit -m "Initial commit: phase 3 and phase 5 scripts". View the log.
  5. Make a meaningful change to one of your scripts. Check status. Run git diff — read the output. What does + and - indicate? Stage and commit the change with a descriptive message.
  6. View the full history: git log --oneline. Use git show on one of your commits — what information does it display?
git · repository · commit · working directory · staging area · git add · git commit -m · git log / git diff · ~/.gitconfig · git show HASH
Teacher note

The three-area model (working directory → staging → repository) is the conceptual key to git. Students who understand why there is a staging area between editing and committing can answer questions like "why do I git add before git commit?" without memorising a workflow. The staging area exists so you can prepare a coherent commit from a messy set of changes — you can stage only the relevant files, leaving in-progress work unstaged. This is the design philosophy behind it. Explaining the reason makes the workflow stick.

Session 76
git — branches, history, and dotfiles repo
5 minuse case
20 minconcept
30 minexercise
5 minclose

Branches. A branch is an independent line of development. The default branch is called main (or master on older git). Creating a branch: git branch feature-name. Switching: git checkout feature-name or git switch feature-name. Create and switch: git checkout -b feature-name. Changes on a branch do not affect the main branch until merged. Merging: git checkout main && git merge feature-name.

Recovering with git. See what changed between now and a commit: git diff HASH. Discard unstaged changes to a file: git checkout -- filename (note: destructive). Undo the last commit but keep changes staged: git reset --soft HEAD~1. View a previous version of a file without changing anything: git show HASH:path/to/file.

A dotfiles repository. The ~/dotfiles directory from session 45 is exactly right for a git repository. Initialise it, commit the current dotfiles, and from now on every change to your configuration is tracked. This is the professional approach to managing dotfiles. If you ever need to set up a new machine, clone the repository and apply the dotfiles.
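The branch-and-recover cycle from tasks 3 and 4 can be rehearsed in a throwaway repository first (temp directory; the alias lines stand in for real .bashrc content):

```shell
set -euo pipefail
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name "Student"; git config user.email "student@example.com"
main=$(git symbolic-ref --short HEAD)     # "main" or "master", whichever git uses

echo "alias ll='ls -l'" > .bashrc
git add .bashrc && git commit -q -m "Initial .bashrc"

git checkout -q -b experiment             # create and switch
echo "alias la='ls -A'" >> .bashrc
git commit -q -am "Experiment: add la alias"

git checkout -q "$main"                   # the experiment is absent here...
git merge -q experiment                   # ...until merged

sed -i '$d' .bashrc                       # simulate breaking the file
git checkout -- .bashrc                   # the undo button for tracked files
grep -c "alias" .bashrc                   # both aliases are back (prints 2)
```

Reading the default branch name with git symbolic-ref avoids assuming main vs master across git versions.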

  1. Initialise a git repository in ~/dotfiles. Add and commit your .bashrc, .inputrc, and .nanorc. View the log.
  2. Make a meaningful change to .bashrc (add a new alias). Commit it with a descriptive message.
  3. Create a branch called "experiment": git checkout -b experiment. Make a change to .bashrc. Commit it. Switch back to main — is the change there? Merge the branch. Now is it?
  4. Simulate accidentally breaking .bashrc — delete two lines. Run git diff to see what changed. Recover the file: git checkout -- .bashrc. Verify the lines are back.
  5. Use git log --oneline to view the history. Use git show on your first commit. Write: why is a dotfiles git repository more reliable than a manual backup directory?
branch · main / master · git checkout -b · git merge · git checkout -- file · git reset --soft HEAD~1 · dotfiles repo
Teacher note

Task 4 — recovering a broken file with git checkout — is one of the most satisfying git moments for students. The file is visibly broken, git diff shows exactly what changed, and git checkout restores it instantly. This experience — being able to undo any change with one command — changes how students feel about experimenting with configuration files. They will be bolder and more explorative because they know recovery is one command away. Name this explicitly: "git checkout on a file is your undo button for any tracked change."

Session 77
Disk management — df, du, lsblk, and storage health
5 minquestion
15 minconcept
35 minexercise
5 minclose

df -h (disk free) shows mounted filesystems and their usage. The Use% column is the one to watch — when it approaches 100%, the system will have problems writing files, logs, and temporary data. Check regularly: df -h / for the root filesystem specifically.

du -sh (disk usage) shows how much space a directory uses. du -sh * shows each item in the current directory. du -h --max-depth=1 /var shows space by subdirectory to one level deep. Finding what is filling your disk: du -sh /* 2>/dev/null | sort -rh | head -10 shows the 10 largest top-level directories.

lsblk lists block devices — disks, partitions, loop devices. Shows the hierarchy: disk → partitions → mountpoints. Add -f for filesystem type information.

Clearing space. sudo apt autoremove && sudo apt clean removes unused packages and cached package files. sudo journalctl --vacuum-size=500M limits journal size to 500MB. find /tmp -mtime +7 -delete removes old temp files. The journal and apt cache are the most common sources of unexpected disk usage on new installations.

  1. Run df -h. What percentage of the root filesystem is used? What are the other mounted filesystems?
  2. Find the 10 largest directories in /var: sudo du -h --max-depth=1 /var | sort -rh | head -10. What is using the most space?
  3. Find the 10 largest files on the entire system: sudo find / -type f -printf '%s %p\n' 2>/dev/null | sort -rn | head -10 | awk '{printf "%s MB %s\n", $1/1048576, $2}'. Are there any surprises?
  4. Check journal disk usage: journalctl --disk-usage. If it is large, vacuum it: sudo journalctl --vacuum-time=30d (keep only last 30 days).
  5. Clean apt cache: sudo apt clean && sudo apt autoremove. Run df -h again — did the usage change?
  6. Write a disk-check script that monitors the root partition and logs a warning if usage exceeds 80%. This script will be added to cron in the next session's cleanup exercise.
df -h · du -sh · du --max-depth · lsblk -f · apt clean / autoremove · journalctl --vacuum · sort -rh (human-readable sort)
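For task 6, a sketch of the disk-check script (the 80% threshold is from the task; the timestamped output is ready to be appended to a log file once the script runs from cron):

```shell
#!/usr/bin/env bash
# Warn when a partition's usage passes a threshold.
set -euo pipefail

THRESHOLD=80

check_disk() {
  local mount="${1:-/}"
  local used
  # df --output=pcent prints a header line then e.g. " 42%"; keep the digits
  used=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
  if [ "$used" -gt "$THRESHOLD" ]; then
    echo "$(date '+%F %T') WARNING: $mount at ${used}% (threshold ${THRESHOLD}%)"
  else
    echo "$(date '+%F %T') OK: $mount at ${used}%"
  fi
}

check_disk /
```

df --output=pcent is GNU coreutils; it avoids the awk field-counting that parsing plain df -h output would need.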
Session 78
Users and permissions depth — sudo, groups, ACLs
5 minreview
20 minconcept
30 minexercise
5 minclose

Creating and managing users. sudo adduser username creates a user interactively. sudo useradd -m -s /bin/bash username creates with specific options (less interactive). sudo passwd username sets password. sudo userdel -r username removes user and home directory. sudo usermod -aG groupname username adds a user to a group.

sudo configuration. The /etc/sudoers file controls who can use sudo. Edit it only with sudo visudo — this checks syntax before saving. A syntax error in sudoers locks everyone out of sudo. Pattern: username ALL=(ALL:ALL) ALL — full sudo access. Pattern: username ALL=(ALL) NOPASSWD: /usr/bin/apt — can run apt without password. Drop-in files in /etc/sudoers.d/ are safer than editing sudoers directly.
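As a sketch, a drop-in file granting passwordless apt could look like this (the filename and username are placeholders):

```
# /etc/sudoers.d/student-apt  (check syntax before relying on it: sudo visudo -c)
student ALL=(ALL) NOPASSWD: /usr/bin/apt
```

Because it lives in /etc/sudoers.d/, a mistake here can be deleted from a root shell without touching the main sudoers file.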

chown. Change file ownership: sudo chown user:group filename. sudo chown -R user:group directory/ recursively. Ownership changes usually require root (you cannot give a file to another user).

ACLs (Access Control Lists). Standard Unix permissions support only one owner, one group, and "others." ACLs allow finer control — granting specific permissions to specific users or groups beyond the standard model. getfacl filename shows ACL. setfacl -m u:username:rw filename grants read/write to a specific user.

  1. Create a user called "testuser2". Set a password. Switch to them with su - testuser2. What can they do? What can they not? Exit.
  2. Add testuser2 to the sudo group: sudo usermod -aG sudo testuser2. Switch to testuser2 and try a sudo command. Does it work now?
  3. Create a shared directory: sudo mkdir /shared. Create a group: sudo groupadd project. Add yourself and testuser2 to it. Set /shared to be owned by the project group and writable by the group (770). Verify both users can create files there.
  4. Use an ACL to grant testuser2 read access to one of your private files without changing its permissions: setfacl -m u:testuser2:r ~/scripts/greet.sh. Verify with getfacl.
  5. Clean up: sudo deluser --remove-home testuser2. Remove the project group. Remove the /shared directory.
adduser / useradd · usermod -aG · sudoers / visudo · NOPASSWD · chown / chown -R · ACL · getfacl / setfacl · groupadd / deluser
Teacher note

The visudo instruction needs emphasis: never edit /etc/sudoers with a regular text editor. If you introduce a syntax error and save, sudo stops working and you cannot fix the sudoers file without sudo. visudo prevents this by checking syntax before saving. This is a real sysadmin footgun that has locked people out of their own systems. The existence of visudo is the lesson — some files are dangerous enough to warrant a dedicated safe editor.

Session 79
Phase 5 synthesis — system health monitoring script
50 minbuild
10 mincode review

No new content. The synthesis project for Phase 5 is a system health monitoring script — a tool that checks the machine's key health indicators and produces a readable report. This script combines everything from weeks 16–20: scripting with error handling, disk monitoring, process checking, service status, log analysis, and scheduling via cron.

Write ~/scripts/health-check.sh. The script must report all of the following:

  • Uptime and load average
  • Memory usage (used / total / percentage)
  • Disk usage on root partition (with warning if over 80%)
  • Top 5 processes by CPU and by memory
  • Status of SSH and any other services you care about
  • Failed login attempts in the last 24 hours (count)
  • Disk I/O wait (from /proc/stat or iostat)
  • Count of packages with available updates

The script must also: use set -euo pipefail, log the run timestamp, accept a -v (verbose) flag for more detail, produce output that is readable without colour and usable in email, be scheduled to run daily at 8am via cron.
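A hypothetical skeleton showing the report shape for three of the required checks — the remaining checks follow the same pattern, and the thresholds match the requirements above:

```shell
#!/usr/bin/env bash
# Skeleton for ~/scripts/health-check.sh (three checks only).
set -euo pipefail

report() {
  echo "=== Health check: $(date '+%Y-%m-%d %H:%M') ==="

  echo "-- Uptime and load --"
  uptime

  echo "-- Memory --"
  # /proc/meminfo values are in kB; convert to MB for readability
  awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2}
       END {printf "available %d of %d MB\n", a/1024, t/1024}' /proc/meminfo

  echo "-- Root disk --"
  local used
  used=$(df --output=pcent / | tail -1 | tr -dc '0-9')
  echo "root partition at ${used}%"
  if [ "$used" -gt 80 ]; then echo "WARNING: over 80%"; fi
}

report
```

Keeping each check inside one function makes the -v flag easy to add later: a verbose branch per check, selected by a flag parsed before report is called.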

Code review pairs: exchange scripts with a partner. Review for: are all requirements met? Does it run cleanly with bash -x? Are errors handled? Is the output readable? Give written feedback.

Teacher note

The code review pairs format introduces peer code review — a professional practice that almost no beginner programming course teaches. Students who review someone else's script learn as much as students who wrote it. The written feedback requirement prevents the review from being perfunctory. Collect the feedback sheets — they reveal which requirements students understood deeply (they can critique them) vs which they just implemented mechanically.

Session 80
Phase 5 synthesis — demo day and Phase 6 preview
40 mindemos
20 minreflection + preview

Each student: 3 minutes. Show the health check script running. Show one interesting implementation choice. Show the cron entry that schedules it. The audience asks one question each round.

Group reflection questions: What broke during this phase that you had to debug? What was the most satisfying script to write? What would you add to the health check script if you had more time? What do you now understand about your machine that you did not understand before Phase 5?

Phase 6: "You have built the machine, installed the OS, customised it, automated it, and can administer it remotely. One thing is left: running a local AI model on it. Not in the cloud. On this machine. In session 82 you will install Ollama, pull a model, and talk to it from the terminal. In session 83 you will write a script that sends it prompts and processes the response. In session 88 you will build something with it — something of your own choice that solves a problem you actually have. That is the final project. Plan what you want to build."

Teacher note

The Phase 6 preview should be specific and concrete — not "you will learn about AI" but "you will type a command, a model will load, and you will have a conversation with it in the terminal on a machine you built yourself." Students who know exactly what is coming are better motivated for the final phase. The instruction to "plan what you want to build" is genuine — give them time to think about it over the next few days before session 81 starts.

Phase 5 of 6  ·  Linux seminar  ·  Kubuntu LTS  ·  Ages 15–18

Phase 6 — The frontier — begins next session