By the end of these six weeks, students should be able to operate the machine entirely from the terminal when needed. The GUI remains available but stops being necessary. They will navigate the filesystem, create and manipulate files, search for content, install and remove software, and connect commands into pipelines that do real work.
The core shift in this phase is from reading the machine to writing to it. Phase 1 was observation and understanding. Phase 2 is operation. Students will make things, break things, fix things, and build things — all from the command line. By session 40, each student has a machine configured with tools they chose, installed correctly, and can explain.
One principle governs every session in this phase: the mouse is not banned, but it is no longer the default. When a student reaches for Dolphin, the question to ask is "what would the terminal command be?" When they find it, they understand it better. When they use it, it becomes faster than the GUI. The transition from GUI-first to terminal-comfortable does not happen in one session — it accumulates across all six weeks.
Phase 1 gaps to address at the start of this phase: check that every student can navigate the filesystem from the terminal, read a permission string, and use sudo deliberately. These three skills are assumed throughout Phase 2. If anyone is shaky on them, the first session's warm-up will surface it.
Phase 2 opens with a distinction that seems pedantic but matters practically: the terminal emulator and the shell are two separate programs. Most people use "terminal" to mean both. In this course, the distinction will matter when students use SSH in Phase 5 — on a remote machine, there is no emulator, only a shell. Getting this right now prevents confusion later.
This session also addresses the "why bother" question directly. Students have been getting by with Dolphin for four weeks. The case for the terminal has to be made with concrete, demonstrable advantages — not philosophy.
Have Konsole open on the projector. Know where the settings are for changing font size, color scheme, and opacity. Have a second machine or SSH session available if possible — seeing bash run identically inside a remote connection reinforces the separation of emulator and shell.
Open Konsole on the projector. Ask: "What are you actually looking at right now?" Take all answers. Write them on the board. Expect: "the terminal," "the command line," "bash," "the black screen." All partially correct, none fully precise. Tell the class you are going to make this precise because it matters — and explain that you will come back to this distinction in Phase 5 when they connect to a machine that has no screen at all.
Konsole is the terminal emulator. It is a window application — it draws pixels on screen, handles keyboard input, manages fonts and colors, and creates a visual interface. Before personal computers, terminals were physical hardware devices connected to mainframes. Konsole emulates that hardware in software. It is called an "emulator" because it is pretending to be a machine that no longer exists. You could replace Konsole with any other terminal emulator — xterm, Alacritty, Terminator — and the behavior inside would be identical.
Bash is the shell. It runs inside the terminal emulator. Bash is the program that displays the prompt, reads what you type, interprets it as commands, asks the operating system to execute those commands, and shows the results. When you type ls and press Enter, it is bash that processes that input and calls the ls program. The prompt itself — the username@hostname:~$ — is produced by bash, not by Konsole.
Run echo $SHELL to see which shell is running. It will say /bin/bash. Run echo $TERM to see the terminal type being emulated. Run ps to see the processes in this terminal; bash is there. Run ps -e to see every process on the system; Konsole is in that list. They are separate programs.
Why use the terminal at all? Three practical reasons. First, precision: every command does exactly what you say. There is no interpretation, no "are you sure you want to move this file?" ambiguity. Second, repeatability: a command you typed yesterday works identically today and will work identically on any Linux machine. Third, remote access: when you SSH into a server in Phase 5, there is no desktop, no Dolphin, no mouse. There is only a shell. Everything you learn in this phase works exactly the same over SSH. The terminal is not a more difficult way to do what the GUI does — it is access to a different and wider set of capabilities.
Part 1 — exploring Konsole (10 min). Open Konsole settings. Change the font size. Change the color scheme to something different. Open a second tab (Ctrl+Shift+T). Open a split view (View → Split View). Run a command in each pane simultaneously. Observe that two separate bash instances are running inside one Konsole window.
Part 2 — identifying the layers (15 min).
1. Run echo $SHELL. Write what it says. What does this variable contain?
2. Run echo $TERM. Write what it says. Why "xterm-256color" rather than "konsole"?
3. Run ps, then ps -e. Find bash in the first list and Konsole in the second. What do the numbers next to them mean?
4. Run echo "hello" > /tmp/test.txt in one tab. Switch to the other tab and run cat /tmp/test.txt. Did the file appear in both? Why? What does this tell you about where files live versus where bash runs?
Mention if it comes up: chsh -s /bin/zsh changes your login shell. We will not do this in the course because bash is more universally documented, but it is a completely valid personal choice.
Checkpoint: name the two programs, their jobs, and which layer each belongs to in the four-layer model from Phase 1. Preview: "Now that we know what we are looking at, next session we use it."
The emulator vs shell distinction will feel abstract to most students today. That is fine — it will become concrete in Phase 5 when they SSH into a machine and there is no Konsole, no KDE, no mouse. At that point students who understood this session will adapt immediately. Students who did not will be confused about what is missing. Reference this session explicitly when SSH appears.
The split-view exercise is practically useful beyond the lesson: students who discover split terminals start using them naturally during exercises, which makes the exercises go faster because they can have a man page open in one pane while working in the other.
Three commands — pwd, ls, cd — handle the vast majority of filesystem navigation. Students have used these in Phase 1 exercises, but not as the primary focus. This session formalizes them: anatomy of a command, what each one does, and the navigational patterns they enable. By the end, every student should be able to navigate to any location on the system confidently from the terminal.
Quick Phase 1 recall: one student draws the filesystem tree from memory. Focus on the major directories. If anything is missing, fill it in together. This takes 5 minutes and confirms the mental model is intact before adding terminal-first navigation.
Command anatomy. Before the three commands, establish the grammar that applies to every command in Linux. Write on the board:
Options almost always start with a dash. Short options: -l, -a. Long options: --long, --all. Short options can usually be combined: -la is the same as -l -a. Arguments do not start with a dash — they are the thing the command operates on: a filename, a path, a piece of text.
pwd — print working directory. No options needed. No arguments needed. It always tells you exactly where you are. This is the "where am I?" command. Run it whenever you are uncertain. It costs nothing and confirms your position.
ls — list directory contents. With no argument, lists the current directory. With a path argument, lists that location: ls /etc. Options change what is shown and how. Without options, hidden files are not shown and no detail is given. We go deep on ls options in session 19.
cd — change directory. cd /path for absolute paths. cd foldername for relative paths. cd ~ or just cd with no argument returns to home. cd - goes back to the previous directory — the terminal equivalent of the browser back button. cd .. goes up one level. These patterns combine: cd ../../etc goes up two levels then into etc.
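If a quick board demo helps, this sequence exercises every pattern in order (the directories are arbitrary):

cd /var/log    # absolute path
cd ..          # up one level, now in /var
cd ~           # jump home from anywhere
cd -           # back to /var; bash prints the directory it returns to

A pwd between steps confirms each position.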
Navigation challenge. Starting from your home directory, reach each location using cd and verify with pwd. Write the exact command used each time.
1. Navigate to /etc using an absolute path.
2. From /etc, navigate to /var/log using a relative path.
3. From /var/log, navigate to /usr/bin.
4. Navigate to /tmp. Then use cd -. Where did you go? Why?
5. Return home with cd. Then reach /usr/bin using only cd .. and directory names — no absolute paths.
6. Navigate to /etc/apt and run ls. What is inside?
After completing navigation: run ls with at least one argument you choose. Then run ls /etc /var /usr all at once — what happens when you give ls multiple arguments?
Without looking at notes: what does cd - do? What does cd with no arguments do? What is the difference between running ls and ls /etc?
cd - is consistently the most useful surprise in this session. Students who discover it start using it immediately. Make sure the exercise forces everyone to use it at least once so the muscle memory starts forming. It is especially useful when you navigate deep into a directory, make a change, and need to return to where you were without retyping the original path.
The multi-argument ls command at the end — ls /etc /var /usr — shows that many commands accept multiple arguments and act on each in turn. This is a general principle that applies to cp, mv, rm and others. Students who notice the pattern are developing command-line intuition.
ls with no options is a rough sketch. ls with the right options is a complete picture. This session teaches students to read every column of ls -la output and to choose the right flags for different investigative purposes. By the end, ls is a diagnostic tool, not just a listing tool.
Run ls -la ~ on the projector. Ask: how many separate pieces of information are on each line? Count together. The answer is nine: permissions, link count, owner, group, size, month, day, time/year, name. Phase 1 covered permissions, owner, and group. This session covers the rest and adds flags that change what is shown.
Walk through the ls -la output line by line on the projector. Point at each column:
The link count is the number of hard links pointing to this file. For most regular files, this is 1. For directories, it counts the number of subdirectories plus 2 (for . and ..). This is rarely important in daily use but occasionally useful for diagnostics.
The size column shows bytes by default. 4096 bytes for a directory entry is normal — it is the size of the directory entry itself, not its contents. The -h flag makes sizes human-readable: bytes become K, M, G. Always use -h when size matters to you as a human.
The date column shows when the file was last modified. For files modified in the current year, it shows month, day, and time. For files modified in a previous year, it shows month, day, and year instead of time. This matters for diagnosing changes: if a config file was last modified today and you did not touch it, something else did.
Essential flags to know: -l long format. -a show all including hidden. -h human-readable sizes. -t sort by modification time, newest first. -S sort by size, largest first. -r reverse any sort order. -R list recursively into subdirectories. These combine freely: ls -laht shows all files in long format with human-readable sizes sorted by time. ls -lhS sorts by size with readable sizes.
Use ls with different flags to answer each question. Write the exact command you used.
1. What is the largest file in /usr/bin?
2. What is the most recently modified file in /etc?
3. Can you list all files in /var/log including hidden ones?
4. What is the size of .bashrc in human-readable form?
5. List /home recursively — press Ctrl+C after a few seconds to stop it. What does Ctrl+C do?
6. Find a symbolic link in /usr/bin. How do you identify it from the ls output alone? What does the arrow after the name tell you?
7. What is the link count of the /etc directory? What does that number represent?

Ctrl+C — interrupt a running command — appears naturally in task 5 when ls -R runs too long. Do not pre-teach it. Let students sit with the runaway output for a moment, then ask "how would you stop this?" If nobody knows, tell them Ctrl+C. That discovery moment is more memorable than a pre-emptive explanation. Ctrl+C will come up again in tail -f, find /, and any long-running process.
Task 6 — finding a symbolic link — teaches a reading skill. The l at the start of the permission string, plus the -> after the filename, are both indicators. Students who can identify symlinks from ls output will not be confused by them in later sessions when they appear unexpectedly in /usr/bin or /etc.
Four tools for reading file contents. Each has a specific purpose. Using cat on a large log file is the terminal equivalent of opening a 10,000-page document in Word with no scroll bar — technically possible, practically useless. Choosing the right reading tool is as important as knowing they exist.
Live demo: run cat /var/log/syslog without warning the class. Let the output scroll. Do not stop it. After 10 seconds, press Ctrl+C. Ask: "Was that useful? What would have been better?" This creates the need that the session fills.
cat (concatenate) dumps the entire file to stdout instantly. It was originally designed to concatenate multiple files: cat file1 file2 file3 outputs all three in sequence. For reading, it is only appropriate for small files — config files, short scripts, files you know are a few lines long. For large files, it is the wrong tool.
less opens a file in a pager — a program that shows one screenful at a time. Navigation: arrow keys or j/k move line by line. Page Up/Down or Space/b move by screen. /searchterm searches forward. ?searchterm searches backward. n finds the next match, N the previous. G jumps to the end of the file. g jumps to the beginning. q quits. less is the correct default for any file you do not know the size of.
head -n N shows the first N lines. Without the -n option, it shows the first 10. Use it to quickly understand what kind of file you are dealing with — look at the first 5 lines before committing to reading the whole thing.
tail -n N shows the last N lines. For log files, the most recent events are at the bottom, so tail is usually what you want. tail -f filename follows the file — it keeps the file open and prints new lines as they are appended. This is indispensable for watching a log while an event is happening. To stop it, press Ctrl+C.
wc -l counts lines. Before opening a large file, run wc -l filename to know how big it is. A file with 50,000 lines should not be opened with cat.
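The check-then-choose habit fits in two lines (syslog is just an example target, and the count will differ per machine):

wc -l /var/log/syslog    # e.g. 52840: far too many lines for cat
less /var/log/syslog     # so open it in a pager instead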
For each situation, choose the right reading tool and explain why you chose it:
1. Read .bashrc — which tool? Check how long the file is first (use wc -l).
2. Read /var/log/syslog without it scrolling past — open it with less, search for the word "error", navigate between matches, then quit.
3. Show the first 10 lines of /etc/passwd.
4. Show the last 10 lines of /var/log/syslog.
5. Open two terminal panes with split view. In the first, run sudo tail -f /var/log/syslog. In the second, run sudo apt update. Watch the log entries appear in real time in the first pane as apt runs. What kind of events does apt log?
6. How many lines does /etc/passwd have? How many does /var/log/syslog have? Which one would you open with cat?
Checkpoint: when do you use cat vs less? What does tail -f do that tail alone does not? What does wc -l tell you before you open a file?
Task 5 — the live log demo — is visually compelling and hard to forget. Seeing log entries appear in real time as apt runs makes the system feel alive. Students who do this exercise properly start to understand that the system is constantly generating a record of its own activity, and that record is readable. This is the foundation for the debugging session in Phase 5 (session 50 approximately) where they will use logs to diagnose real problems. Reference this moment there.
Tab completion and history are not shortcuts — they are the correct way to use the terminal. Tab completion prevents typos and confirms that paths exist. History search retrieves complex commands instantly without retyping. Together they turn a slow, error-prone interface into a fast, reliable one. These two features more than anything else explain why experienced users prefer the terminal.
Live demo before explaining anything. Type cat /usr/sh and press Tab. Continue: are/doc/a and Tab. Continue: pt/ch and Tab. The full path appears in a few keystrokes. Time yourself. Then ask: how long would typing /usr/share/doc/apt/changelog.gz manually take? And how many opportunities for typos? This is the opening.
Tab completion. Press Tab once with a unique partial match: bash completes it. Press Tab once with an ambiguous match: nothing happens. Press Tab twice with an ambiguous match: bash shows all possibilities. Tab completion works for commands (type ca and Tab — nothing, because several commands match; Tab twice shows cat, cal, and anything else starting with ca), for paths, for filenames, and with plugins for command options. An important side effect: if Tab does not complete, the path almost certainly does not exist. Tab completion is a silent correctness check.
Command history. Bash keeps every command you type in ~/.bash_history. Arrow up retrieves the previous command. Arrow up again retrieves the one before that. Arrow down goes forward again. This is sufficient for recent commands but slow for commands from hours ago.
Ctrl+R is the powerful version: reverse search. Press Ctrl+R and type any part of a previous command. Bash searches backwards through history and shows the most recent match. Press Ctrl+R again to find the next earlier match. Press Enter to run the found command. Press right arrow or any navigation key to edit it first. Press Ctrl+C to cancel without running.
History management: history shows your full command history with line numbers. !42 runs command number 42 from the history. !ls runs the most recent command that started with ls. !! runs the last command — this is the most useful: if you ran a command and got permission denied because you forgot sudo, type sudo !! to rerun it with sudo without retyping everything.
Tab completion drill. Complete each of these using only tab — never type the full path manually:
1. /usr/share/applications
2. /etc/apt/sources.list with cat
3. /usr/share/doc/bash
4. ls /etc/a — how many completions are there?
History exercises:
5. Run history and find a command you ran earlier. Rerun it using !n with its line number.
6. Use Ctrl+R to find the last ls -la you ran. Describe what you typed and what appeared.
7. Run a command that needs sudo, omit the sudo, then fix it with sudo !! without retyping. Which command did you use for this test?
8. How many entries are in your history? Find a way to count them exactly.

Task 7 — sudo !! — is the session's high point. Engineer the moment: ask a student to run a command that requires sudo without it (for example, apt update). They get permission denied. Ask: "How do you fix this without retyping the command?" Most will start typing. Stop them. Show sudo !!. The reaction is consistent: surprise followed by immediate adoption. Students who learn this today use it for the rest of the course.
Task 8 — counting history entries — has the answer: history | wc -l. This is the first real pipeline students build without being explicitly taught pipes yet. If anyone finds it, name it and tell them that is exactly what week 9 is about. The preview primes them.
Every command in Linux follows the same grammar. A student who understands this grammar can read any command they have never seen before and make a reasonable inference about what it does, which options they can look up, and how to modify it. This session teaches the grammar rather than individual commands.
Write on the board: find /home -name "*.txt" -size +10k -type f. Ask: without knowing what find does, what can you guess about what this command is doing? Take answers. Most students will correctly identify /home as a location, .txt as a file pattern, and guess that it searches for something. That intuition is the grammar working — this session makes the grammar explicit.
The grammar. command [options] [arguments]. The command is always first. Options modify the command's behavior — they almost always start with one or two dashes. Arguments are the targets — what the command acts on.
Short vs long options. Short options: one dash, one letter. -l, -a, -r. Multiple short options can be combined after a single dash: -la, -lah. Long options: two dashes, a word. --long, --all, --human-readable. Long options cannot usually be combined — each needs its own --. Short and long options for the same behavior often both exist: -l and --long may do the same thing.
Options with values. Some options take a value immediately after them: -n 10 or --lines=10. Note the difference: short option, space, value vs long option, equals sign, value. This pattern appears in many commands: head -n 20, find -size +100M, grep -m 5.
The -- separator. Two dashes alone signal end-of-options. Everything after -- is treated as an argument, not an option. This is how you handle files whose names start with a dash: rm -- -filename. Without --, rm would interpret -filename as an option.
Command dissection. For each command: identify (a) the command name, (b) all options and what each one does — use man to look them up, (c) the arguments. Then write a plain-English description of what the whole command does.
1. grep -in "error" /var/log/syslog
2. cp -rv ~/documents /tmp/backup
3. ls -lhSt /usr/bin
4. tail -f -n 50 /var/log/syslog
5. find /etc -name "*.conf" -type f -size +1k
Bonus: write a command of your own using at least 2 flags and one argument. Swap with a partner and let them dissect it without you explaining it. Did they get it right?
The partner exercise at the end — writing a command and having someone else dissect it — reverses the usual flow. Students who have to write a command that is dissectable must think clearly about what they are doing. Students who dissect someone else's command learn to read unfamiliar syntax. Both skills transfer directly to real-world Linux use, where you constantly encounter commands written by others.
The -- separator is worth one solid example. Create a file named -test in ~/sandbox before class: touch -- -test. Then show them: ls -test gives an error. ls -- -test works. rm -- -test deletes it. This is a real edge case they will encounter with downloaded files from Windows systems that sometimes have unusual names.
Phase 1 introduced man pages in session 15. This session makes the full help system systematic. Three tools, three purposes, and a workflow for using them in the right order. Students who internalize this workflow can solve most terminal problems without an internet connection — a skill with lasting professional value.
Scenario: you need to find files larger than 100MB somewhere on the system. You know the command is probably find, but you do not know the flag for size. You could Google it — but what if you have no internet? What do you do? This is the session's premise.
--help is the fastest option. Almost every command supports commandname --help or commandname -h. It prints a condensed summary: what the command does, and a list of its options with brief descriptions. The output is one or two screens long, usually not paginated. Use it when you know the command and need a quick flag reminder. It is not a full reference — it is a memory aid.
man commandname is the complete reference. Every standard Linux command has a man page. It covers every option, every edge case, the command's history, its limitations, and often examples. Use it when you need to understand exactly what a command does, or when --help is not detailed enough. Navigation: arrow keys, Page Up/Down, /searchterm to search, n for next match, q to quit. Man pages are organized into sections: section 1 is user commands, section 5 is file formats, section 8 is system administration. man 5 passwd gives the format of the /etc/passwd file, not the passwd command.
apropos keyword (identical to man -k keyword) searches all man page titles and short descriptions for a keyword. Use it when you know what you want to do but not what command does it. apropos compress finds commands related to compression. apropos "disk usage" finds commands that report disk usage. The output lists command name, section, and a one-line description. This is where you start when you do not know where to start.
The workflow: Unknown task → apropos to find the right command → --help to see its flags quickly → man to understand deeply if needed.
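Applied to the opening scenario (files larger than 100MB, no internet), the workflow might run like this; the keyword is a judgment call and other keywords work too:

apropos search | less    # step 1: discover candidates; find is in the list
find --help | less       # step 2: skim the condensed summary for a size test
man find                 # step 3: full detail; type /size and press n until -size is clear
find ~ -size +100M       # apply the answer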
Help system challenge — no internet. Use only man, --help, and apropos.
1. Find the option of find that searches by file size. What exact flag lets you find files larger than 100MB?
2. Find the flag of cp that shows verbose output — each file as it is copied.
3. What does the stat command do? Use --help to find out. Then run it on a file and read the output.
4. Use man to read a section 5 page: man 5 crontab. What does it document — the command or the file format?
5. Use apropos to find a command that reports disk usage.
6. Use apropos to find a command that compresses files. Which keyword did you try?

Task 6 — finding a compression command with apropos — has multiple valid paths. Students who try "apropos compress" will find gzip, bzip2, xz, and tar. Students who try "apropos archive" may find tar directly. The variation in keyword choices produces a useful class discussion: which keyword was most productive? Why? This is the research process itself, not just the answer.
Task 4 — man section 5 — is the session's depth test. Students who find man 5 crontab and read the file format documentation are doing professional-level reference work. Crontab syntax in section 5 will be directly relevant in Phase 5 when they write scheduled tasks.
No new content. A timed, individual challenge using everything from weeks 5–6. Dolphin stays closed for the entire session. The goal is to consolidate the terminal as the natural mode of operation — not because the GUI is forbidden, but because the terminal is faster for these tasks once it is familiar.
Rules: No Dolphin. No file manager. Terminal only. Work individually. Write the exact command used for each task.
1. Find the largest file in /usr/share. What is its name and size?
2. How many lines does /etc/passwd have?
3. Go into /var/log and list only files modified in the last 7 days.
4. List the entries in /etc whose name contains the word "host".
5. Show the last lines of /var/log/syslog.
6. Navigate to /usr/share/doc using tab completion only — no manually typing the full path.
7. Open /etc/hostname in less, search for the hostname text, then quit.
After everyone finishes: compare command choices as a group. For any task where two students used different commands — discuss both. Which is more readable? Which is faster? Are both correct?
Which task was hardest? Which command did you use most? What would you do differently? Preview week 7: "Next week we stop just looking at things and start making them, moving them, and yes — deleting them permanently."
The comparison discussion at the end is the most valuable part. Task 1 can be solved with ls -lhS /usr/share | head -2 or with find /usr/share -type f -printf '%s %p\n' | sort -rn | head -1. Both are correct. The first is simpler. The second is more powerful and generalizable. Having students articulate why they chose their approach is higher-order learning — it is metacognition about tools, not just tool use.
Students who finish early: ask them to write a one-paragraph "cheat sheet" for someone who has never used the terminal, listing the 5 most important commands from weeks 5–6. This produces a useful artifact and forces them to prioritize, which is itself a skill.
Everything in weeks 5–6 was navigating and reading. Week 7 is the first time students write to the filesystem from the terminal. The shift is significant — not just technically but psychologically. Making something exist that did not before is satisfying in a way that reading files is not. Design the session to let that satisfaction land.
No long introduction. Tell the class: "Up to now we have been reading the machine. Today we write to it." Ask: what do you think the command is to create a new folder? A new empty file? Take guesses — the class will get mkdir and probably touch. Verify quickly and move into the concept.
mkdir directoryname creates a directory. It fails if the parent directory does not exist. mkdir -p path/to/new/directory creates the full path including all missing parent directories. The -p flag is the practical one — use it by default for anything more than one level deep. It also does not error if the directory already exists, making it safe to run repeatedly.
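A three-line demo makes the difference concrete (directory names are arbitrary):

mkdir projects/2025/reports      # fails if projects/2025 does not exist yet
mkdir -p projects/2025/reports   # creates every missing level
mkdir -p projects/2025/reports   # running it again is harmless, no error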
touch filename does two things: it creates an empty file if it does not exist, and it updates the modification timestamp if the file already exists. The creation behavior is the one students will use most. Touch is the quickest way to create a placeholder file, prepare a file for a script to write to, or create multiple files at once: touch file1.txt file2.txt file3.txt.
echo 'text' prints text to stdout. By itself it is a print statement. Combined with redirection: echo 'content' > filename creates a file containing that text (or overwrites it). echo 'more content' >> filename appends to the file without overwriting. The > and >> operators are redirection — covered fully in week 9, but useful enough to introduce here practically without full explanation.
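Worth running live; four lines show create, append, and the overwrite trap (the filename is arbitrary):

echo "first line" > notes.txt      # creates the file
echo "second line" >> notes.txt    # appends below it
cat notes.txt                      # both lines are there
echo "third try" > notes.txt       # plain > again: the first two lines are gone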
One advanced preview worth mentioning: brace expansion. mkdir -p week{01..06} creates six directories — week01, week02, through week06 — in one command. Do not teach this formally yet; mention it exists and let students explore if curious. It returns properly in Phase 3 scripting.
Build your Phase 2 workspace. Create the following structure using only terminal commands — no Dolphin. Every command must be typed and understood.
~/projects/linux-course/
└── phase2/
├── week05/
├── week06/
├── week07/
├── week08/
├── week09/
├── week10/
├── notes/
└── README
In each week directory, create a file log.txt containing the text "Week XX — started". Use echo and > for each. Add a line to the README with >>, then write to it with >; run cat after each step and note what happened to the existing content. Verify the final structure with tree ~/projects/linux-course/phase2.

The brace expansion solution — mkdir -p ~/projects/linux-course/phase2/week{05..10} — creates all six directories in one command. Do not teach this directly. Mention it exists, put it in the session guide, and let students who are curious find it. Students who discover brace expansion independently and share it with the class produce a better learning moment than you could engineer. Give them credit and tell them it is a powerful shell feature that reappears in scripting.
Task 6 — appending with >> vs > — is the practical lesson about not overwriting. This distinction will prevent a real mistake later in the course when students write scripts that log output. Make sure every student does both and understands what happened to the README in each case.
Ask: how do you rename a file in Linux? Take guesses. Most students expect a "rename" command. There is not one as a standard utility. mv does it. How? By moving a file to the same directory with a different name. That one fact reframes how mv works and makes it more intuitive.
cp source destination copies a file. The source remains. A new copy is created at destination. If destination is a directory, the file is copied into it with the same name. If destination is a new filename, the copy gets that name. cp -r source/ destination/ copies a directory recursively — without -r, cp refuses to copy directories and exits with an error. cp -v verbose: prints each file as it is copied. cp -p preserves timestamps and permissions. cp -i interactive: asks before overwriting. Warning without -i: cp overwrites silently. If the destination file exists, it is replaced without any confirmation. This is the dangerous default.
mv source destination moves a file. The source no longer exists after. mv is also rename: mv oldname.txt newname.txt renames the file in place. mv file.txt /other/location/ moves it. mv file.txt /other/location/newname.txt moves it and renames it simultaneously. Like cp, mv overwrites silently if the destination exists. Use mv -i for safety on anything important. mv -n never overwrites — it silently does nothing if destination exists, which can be worse. Know your flags.
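The silent overwrite, compressed into a transcript; this is exactly what task 5 below makes tangible (filenames arbitrary):

echo "important work" > a.txt
echo "junk" > b.txt
cp b.txt a.txt       # no prompt, no warning
cat a.txt            # prints "junk"; the important content is gone
cp -i b.txt a.txt    # with -i, cp asks before replacing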
Work in ~/sandbox. Create 6 test files: file1.txt through file6.txt, each with different content.
5. Write the sentence "this file has important content" into file5.txt. Then run cp file1.txt file5.txt. Did cp warn you? Run cat file5.txt. What happened to the content?
6. Create a directory text/ and run mv *.txt text/. Does this work? What does the * do here?

Task 5 — the dangerous demo — must actually be done, not described. Students who watch file5.txt disappear without warning will remember the -i flag. Students who only hear about it will forget. Make the loss tangible by having them write content into file5.txt first ("this file has important content"), then overwrite it with cp, then look inside the result. The content is gone. That is the lesson.
Task 6 introduces wildcards before session 31 covers them formally. Let it land naturally — wildcard behavior is intuitive enough here that students will get it without a formal explanation. "*.txt means all files ending in .txt" is enough for now. The formalization in session 31 reinforces and extends what they already figured out here.
rm is the command you must teach most carefully. It has no safety net. No recycle bin. No undo. Files deleted with rm are gone. This session is as much about habit formation — think before you delete, verify the path, use -i on anything important — as it is about syntax. The rm session should be treated with the same seriousness as the sudo session.
Open question: "What is the most destructive command a regular user can run on a Linux system?" Take answers. Bring the class to rm -rf ~/ — which would delete everything in your home directory instantly, permanently, with no confirmation. That is the operating principle of rm: it trusts you mean what you say. The session is about using that trust deliberately.
rm filename deletes a file. It does not move it to trash. It removes the filesystem entry. The data is still on the disk until overwritten, which is why forensic recovery is sometimes possible — but not reliably, and never easily through normal means. Assume deleted means gone.
rm -r directory deletes a directory and everything inside it recursively. Without -r, rm refuses to delete directories. rm -f forces deletion — it does not prompt even for write-protected files. rm -rf combined: recursive and forced, no prompts. This is the most dangerous combination. It will delete anything you point it at. It is also the combination you will see in legitimate instructions for removing software, clearing caches, and cleaning builds. Use it deliberately, never by habit.
rm -i prompts before each deletion. Appropriate for anything you are not completely certain about. rmdir removes a directory — but only if it is completely empty. This is actually safer than rm -r for that reason: it will fail if you accidentally target a directory with content.
Before running rm on any path with wildcards, always run ls with the same argument first. rm *.txt — run ls *.txt first to see exactly what will be deleted. rm -rf /path/to/thing — run ls /path/to/thing first. The extra second is free. The mistake is permanent.
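The habit in its two-step form, same argument to ls first, then to rm (paths and filenames illustrative):

ls *.txt                 # see exactly what the wildcard expands to
rm *.txt                 # delete exactly what ls just showed
ls /tmp/build-cache      # verify a directory target exists and is what you think
rm -rf /tmp/build-cache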
Never run rm -rf on a path you have not verified. Never run rm -rf with wildcards without checking what the wildcard expands to. And never run any rm command as root unless you are certain about what the path resolves to.
All work in ~/sandbox only.
3. Create protected.txt and run chmod 444 protected.txt. Try to rm it. Read the error. Try rm -f. What is the difference?
4. Create test-delete/ with 5 files inside. Before running any rm command, run ls to see exactly what is there. Write what ls shows. Then run rm -rf test-delete/. Write exactly what happened.
5. Create a few .txt files. Run ls *.txt first. Write what you see. Then run rm *.txt. Run ls again. Write what remains. What did * expand to?
6. Rebuild ~/sandbox from scratch using mkdir. You just deleted everything in it — this is practice for recovering from mistakes.

Task 4 — the deliberate rm -rf — should feel slightly uncomfortable. Students who write their prediction, run the command, and verify the result are learning the habit of intention before action. The discomfort of watching a directory vanish completely is the lesson. Do not soften it. Rebuild immediately after so they are not sitting with a broken environment — but let the moment of disappearance land.
If any student accidentally deletes something outside sandbox — their documents, their project files, anything important — stop the session. Help them check if it is recoverable (sometimes recently deleted files can be recovered with testdisk on ext4 filesystems if the disk has not been written to). Document what happened and use it as a real case study with their permission. Real mistakes are more instructive than simulated ones, and the empathy of the class helping a student recover is a good community moment.
Scenario: you are connected to a server over SSH. There is no desktop, no GUI, no mouse. You need to edit one line in a config file. What do you use? This is not hypothetical — it is Phase 5, which is coming. Nano is the answer for now. A different answer exists (vim, emacs) but the learning curve is a separate course. Nano handles everything needed in this course.
nano is a terminal text editor. Open a file: nano filename. If the file does not exist, nano creates it. The editor opens and the bottom two lines always show keyboard shortcuts — the ^ symbol means Ctrl. Students never need to memorize shortcuts because they are always visible.
Navigation is with arrow keys. There is no mouse mode by default. Ctrl+O saves — it prompts for the filename (press Enter to confirm the current name). Ctrl+X exits — if there are unsaved changes, nano asks whether to save. This saves students from the most common beginner mistake in terminal editors: not knowing how to exit.
Why learn a terminal editor at all? Three scenarios: remote server access via SSH (no GUI), system configuration files that require elevated privileges (GUI editors often cannot save to /etc even with sudo — sudo nano works cleanly), and scripting (writing bash scripts is fastest in a terminal editor when you are already in the terminal). Nano is not the best editor for professional code — VS Code, vim, and emacs all have advantages. For configuration and scripting in a terminal, nano is practical and sufficient.
1. Open ~/projects/linux-course/phase2/README with nano. Add a new line: "Phase 2 — talking to it". Save with Ctrl+O and exit with Ctrl+X. Verify the change with cat.
2. Create ~/scripts/hello.sh with nano. Write exactly these two lines:

#!/bin/bash
echo "Hello from my first script"

Save and exit. Make it executable: chmod +x ~/scripts/hello.sh. Run it: ~/scripts/hello.sh. What output appears?
3. Try nano /etc/hostname without sudo. What happens? Can you save? Now try sudo nano /etc/hostname. Read it. Close without saving (Ctrl+X then N for No).
4. Open ~/.bashrc with nano. Use Ctrl+W to search for the word "alias". Do not change anything. Navigate to the line, read it, then exit without saving.

Task 2 introduces the shebang line — #!/bin/bash — without a formal scripting lesson. Tell students: "This first line tells the system which interpreter to use. We will come back to this in Phase 3 when we write real scripts. For now, include it in every .sh file you create." Planting this habit now makes the scripting sessions easier because the shebang will already feel natural.
Task 3 — nano /etc/hostname without sudo vs with sudo — is a clean demonstration of why sudo nano is the correct pattern for editing system files. Some students will try to open a GUI text editor such as Kate with sudo in future sessions. Redirect them: sudo nano is cleaner, more reliable, and works over SSH. Build the habit now.
Scenario: a config file called nginx.conf exists somewhere on the system. You need to edit it. You do not know where it is. What is your approach? Before this session: open Dolphin, search, hope. After this session: one command, immediate answer. That is the before and after this session delivers.
find starting-path [criteria] searches the filesystem in real time. It walks the directory tree from the starting path and applies criteria to every item it encounters. Results appear as it finds them — no database, no index, always current. The trade-off is speed: searching from / can take seconds or minutes on a full system.
Key criteria: -name 'pattern' matches filename (case-sensitive, supports wildcards). -iname 'pattern' case-insensitive. -type f files only. -type d directories only. -type l symlinks only. -size +100M larger than 100MB. -size -1k smaller than 1KB. -mtime -7 modified in the last 7 days. -mtime +30 not modified in the last 30 days. -user username owned by user. -perm 644 exactly these permissions. Criteria combine with implicit AND by default.
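One sketch shows the chaining (the path and numbers are illustrative):

find /var/log -type f -name "*.log" -mtime -7 -size +1M
# regular files AND name ends in .log AND modified within 7 days AND larger than 1MB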
-exec flag: act on each result. find . -name "*.sh" -exec chmod +x {} \; — the {} is replaced by each found filename, the \; terminates the -exec. This is how you apply a command to every result of a find. The alternative is -exec command {} + which passes all results at once (faster).
locate filename searches a pre-built database. Much faster than find for simple name searches. The database is updated nightly by updatedb. Trade-off: files created or deleted today may not appear in locate results. sudo updatedb manually refreshes the database.
Write the exact command for each task:
6. Find files in your home directory that are owned by root. How did they get there?
7. Compare locate bash with find / -name bash 2>/dev/null. Which is faster? Which shows more results? Why might they differ?

Task 6 — files in home owned by root — reliably surprises students. There are usually some, created by commands run with sudo that inadvertently wrote to ~/. This is a real system hygiene issue: files in your home that you do not own can cause problems when scripts or applications try to read them with your credentials. Name the principle: files in your home should be owned by you.
The 2>/dev/null in task 7 appears before the formal redirection session (34). Introduce it briefly: "this silences the permission denied errors so we can see real results." The full explanation comes in session 34 — this is just practical exposure.
Scenario: a server generated 50 log files overnight. One of them contains a "connection refused" error. You need to find which one. Without grep: open each file in less and search manually — 50 times. With grep: one command, instant answer. That is grep's purpose.
grep 'pattern' filename searches a file for lines matching the pattern and prints them. The pattern can be simple text or a regular expression. grep prints the entire line for each match, not just the matching part — this provides context.
Essential flags: -r recursive — search all files in a directory tree. -i case-insensitive. -n show line numbers. -v invert — show lines that do NOT match. -c count matching lines instead of showing them. -l list only filenames containing a match (not the match content). -w match whole words only — 'error' does not match 'errors'. -A n show n lines After each match. -B n show n lines Before. These provide context around matches.
Finding the right combination: to find all config files in /etc that configure a specific setting, use grep -r 'setting' /etc. To know which files contain it without reading all the matches: grep -rl 'setting' /etc. To count occurrences: grep -c 'pattern' file. To see context: grep -n -A 2 'error' logfile shows each error with the 2 lines that follow it.
1. Search /var/log/syslog for 'error'. How many lines match? Use -c.
3. Show the lines of /etc/passwd that do NOT contain '/bin/bash'. What shells do those accounts use?

Task 3 — lines NOT containing /bin/bash — reveals the other login shells on the system: /bin/sh, /usr/sbin/nologin, /bin/false. Students who look up what /usr/sbin/nologin does are discovering service accounts — users that exist on the system but cannot log in interactively. This is a real security concept: the www-data user runs the web server but cannot be used to log in. Name this if it comes up.
Write on the board: ls *.txt. Ask: does ls know what *.txt means? Or does something else handle it? The correct answer is that bash expands *.txt into a list of matching filenames before ls ever sees the command. ls receives a list of files, not a pattern. This distinction — shell expansion, not command interpretation — is the session's central concept.
Wildcards (also called globs) are expanded by the shell before the command runs. The command never sees the pattern — it receives a list of matched filenames. This means wildcards work with any command: ls, rm, cp, mv, grep, find — any command that accepts filenames as arguments.
* (asterisk) matches any sequence of characters including none. *.txt matches all files ending in .txt. report* matches all files starting with report. *log* matches any file with log anywhere in the name.
? (question mark) matches exactly one character. file?.txt matches file1.txt, fileA.txt, file_.txt but not file10.txt (two characters where one was expected).
[abc] matches exactly one of the listed characters. [aeiou] matches one vowel. [a-z] matches any lowercase letter. [0-9] matches any digit. [!abc] or [^abc] matches any character NOT listed.
Important behaviors: * does not match files starting with a dot (hidden files). Use .* explicitly to match hidden files. If a pattern matches nothing, bash passes the literal pattern string to the command — which usually causes an error. Quote patterns to prevent expansion: grep '*.txt' file searches for the literal string *.txt inside the file, not filenames. This is the wildcard vs regex distinction.
Create these files in ~/sandbox: report1.txt, report2.txt, report10.txt, data.csv, data.json, photo.jpg, photo.png, photo.gif, README, .hidden
3. Run ls * — does .hidden appear in the output? Why not?
6. What happens when you run ls *.xyz when no .xyz files exist? Read the error carefully.
7. Run grep '*.txt' README — does this search for files named *.txt or for the literal text "*.txt" inside README? Verify by putting the text *.txt inside README first.

Task 3 — * not matching hidden files — is a real-world gotcha. A student who runs rm * to clean a directory and then discovers hidden files remain will be confused unless they understand this behavior. Make it explicit: * is not "everything." It is "everything not starting with a dot." For truly everything: rm * .* — but be careful, because .* also matches . and .. (the current and parent directory references), which on some systems causes problems.
Task 7 — grep with a quoted wildcard — is the glob vs regex moment. A shell wildcard *.txt and a regex *.txt mean completely different things. In glob: * means any sequence. In regex: * means zero or more of the previous character. Quoting prevents glob expansion, so grep receives the regex. This distinction trips up experienced users — establishing it clearly now saves confusion later.
No new content. A realistic, messy scenario that requires everything from weeks 7–8. Students organize a chaotic folder using the terminal alone. The scenario is designed to be ambiguous enough that different students will make different organizational decisions — the discussion afterward is as valuable as the task itself.
Prepare a tar archive called downloads.tar.gz containing 40 files with realistic messy names — a mix of .pdf, .jpg, .txt, .sh, .csv, .zip, named with dates, project names, random strings, and some with spaces. Include some files with "temp" or "tmp" in the name, some dated more than 30 days ago (set with touch -d), and some .sh files without execute permission. Distribute via USB or shared network location before the session.
Scenario brief (read this to the class): You have inherited a colleague's downloads folder. They organized nothing for two years. Your task is to make sense of it using only the terminal.
First task — extract the archive: tar -xzf downloads.tar.gz. This command extracts the archive. You have not learned tar yet — that is Phase 5. Just use it. We will come back to what it does.
Requirements: every file sorted into a sensible directory structure (by type, by project, the student decides); anything with "temp" or "tmp" in the name deleted; files older than 30 days moved into an archive directory; the .sh files made executable again; a README describing the chosen structure; and a commands.log recording every command used along the way.
After everyone finishes: two volunteers show their folder structure and README. Compare organizational choices. Which structure would be easiest to navigate in six months?
What commands did you use most? What would wildcards not handle that required individual mv commands? What would you automate with a script if you had to do this every week? That last question is Phase 3's premise.
The tar command appears here without explanation. That is intentional — students should be comfortable using a command based on a single instruction before they fully understand it. This mirrors real-world usage where you follow a command without complete understanding and verify the result. The full tar session is in Phase 5. When it arrives, students will already have a concrete experience of what tar does, which makes the explanation land better.
The commands.log requirement builds documentation habit before scripting makes it essential. Students who log every command are implicitly writing a script. In Phase 3, point back to their commands.log: "What you wrote here is almost a bash script. The next step is making it run automatically."
Write on the board: ls /usr/bin | wc -l. Ask: without running it, predict what this produces. Take predictions. Run it. The answer is the number of executable programs in /usr/bin. Ask: how would you get that answer without a pipe? The alternatives — ls and then manually count, or ls > file and then wc -l file — are both slower and more steps. Two commands, one character between them. That is a pipe.
The pipe | connects the stdout of the command on the left to the stdin of the command on the right. The left command does not know what receives its output. The right command does not know it is receiving piped input rather than a file — it behaves identically in both cases. They are independent programs connected at runtime by the shell.
This is the Unix philosophy: write programs that do one thing well and work with other programs. ls produces a list. grep filters it. wc counts it. sort orders it. head takes the first N. Each tool is simple individually. Combined, they are powerful.
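One illustrative chain, a sketch rather than a prescribed example, using only the tools named above:

ls /etc | grep conf | wc -l
# stage 1 lists /etc, stage 2 keeps only names containing "conf", stage 3 counts them

Swapping the final stage (head -5 instead of wc -l) answers a different question with the same first two stages.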
Build pipelines incrementally: start with the first command and verify its output. Add the second and verify again. Add the third. Never build a complex pipeline in one shot and wonder why it does not work — build it piece by piece and understand each stage before adding the next.
Build each pipeline incrementally — run each stage separately first, then connect them.
5. Run ps aux | grep bash. Something unexpected shows up in the results. What is it, and how do you remove it from the output?
6. Build a pipeline of your own, at least three stages long, that answers a real question about your system. Be ready to explain what each stage does.
Task 6 — build your own pipeline — is the diagnostic. A student who builds cat /etc/passwd | grep -v nologin | cut -d: -f1 | sort to get a sorted list of real user accounts has understood composability. A student who builds ls | ls has not. Peer-sharing these pipelines produces useful comparison. Ask the class: which pipeline solved a real problem? Which is the most elegant? Which is the most powerful?
The ps aux | grep bash | grep -v grep pattern in task 5 introduces a common grep idiom: when you grep for a process by name, grep itself appears in the results because it contains the search string. Piping through grep -v grep removes it. This is a small trick that becomes a reflex and demonstrates how pipelines can fix their own artifacts.
Live demo: run ls /etc > output.txt. Open output.txt with cat — output is in the file. Now run ls /fakepath > output.txt. The error appeared on screen, not in the file. Ask: why did the error go to the screen when everything else went to the file? Because they are different streams. That is today's topic.
Every Linux process has three standard streams, each with a number: stdin (0) — input, keyboard by default. stdout (1) — normal output, terminal by default. stderr (2) — error output, also terminal by default. By default, stdout and stderr both appear on screen. They look identical but are separate streams. Redirection controls where each one goes.
/dev/null is a special device file that discards everything written to it. It is the discard bin of the Linux filesystem. command 2>/dev/null runs the command silently — errors are discarded, normal output appears. command > /dev/null 2>&1 runs completely silently — nothing appears. Useful in scripts when you want to run a command for its side effect but not see its output.
The order of 2>&1 matters: command > file 2>&1 correctly sends stderr to the same place as stdout (the file). command 2>&1 > file does something different — it sends stderr to the terminal (where stdout currently goes) and then redirects stdout to the file. Always write > file 2>&1, not 2>&1 > file.
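A side-by-side demo makes the rule stick; /fakepath just needs to not exist:

ls /etc /fakepath > out.txt 2>&1    # screen stays silent; out.txt holds the listing and the error
ls /etc /fakepath 2>&1 > out.txt    # the error prints to the screen; out.txt holds only the listing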
1. Redirect ls /etc to a file called etc-contents.txt. Verify it is there with cat.
2. Append ls /var to the same file. Verify both sections are in it.
3. Run ls /fakepath — error appears on screen. Redirect only the error to errors.txt. Does any output appear on screen?
4. Run ls /etc /fakepath > output.txt 2> errors.txt. What ends up in each file?
5. Run ls /etc /fakepath > combined.txt 2>&1. What ends up in combined.txt? How is it different from task 4?
6. Run find / -name "*.conf" 2>/dev/null > conf-files.txt. How many .conf files does it find? Why are no errors shown?

Task 6 — find with 2>/dev/null — is a professional pattern. "find / -name pattern 2>/dev/null" appears constantly in documentation, scripts, and Stack Overflow answers. Students who understand why the 2>/dev/null is there — not magic, but deliberately silencing permission denied errors that would otherwise flood the output — read those commands correctly. Students who treat it as boilerplate to copy-paste do not.
The ordering issue — > file 2>&1 vs 2>&1 > file — is worth a brief demo but not a long lecture. The correct form is > file 2>&1. If students ask why, explain that the shell processes redirections left to right: by the time it reaches 2>&1, stdout has already been redirected to the file, so stderr follows it there. In the wrong order, stdout has not been redirected yet when 2>&1 is processed, so stderr goes to the terminal.
Show two grep commands side by side: grep "error" /var/log/syslog and grep "^error" /var/log/syslog. Ask: what is different? One finds 'error' anywhere in a line. The other finds only lines that start with 'error'. That ^ character is regex. Today is the minimal regex introduction — enough to be useful, not enough to be overwhelming.
Regular expressions (regex) are a pattern language for describing text. grep uses them by default for basic patterns. The essential subset: ^ anchors a match to the start of the line. $ anchors it to the end, so '^$' matches empty lines. . matches any single character. * means zero or more of the previous character. [abc] matches one of the listed characters, and [0-9] matches any digit.
Practical applications: grep '^#' file finds all commented lines. grep -v '^#' file | grep -v '^$' finds all non-commented non-empty lines — the active configuration. grep '^[0-9]' file finds lines starting with a number. These patterns appear constantly when reading config files, log files, and scripts.
grep -E enables extended regex with additional syntax: + (one or more), ? (zero or one), | (alternation: this or that), () grouping. grep -E "error|warning" file finds lines containing either word. This is the most useful extended pattern for log analysis.
Task 4 — non-commented non-empty lines in .bashrc — is the session's most practical result. It gives students a clean view of what is actually active in their bash configuration. This technique is applicable to any config file: Apache, SSH, nginx, PostgreSQL. The pattern grep -v '^#' file | grep -v '^$' is one to remember explicitly — it is used constantly in systems administration.
Keep the regex introduction minimal. The goal is functional literacy — enough to read patterns encountered in the wild and write simple ones. Full regex mastery is a separate topic that takes months. If students ask about more complex patterns (lookaheads, backreferences, non-greedy matching), acknowledge those exist and say they are beyond this course's scope but worth exploring in their own time.
Put on the board: cat /etc/passwd | cut -d: -f1 | sort | uniq | wc -l. Work through it left to right with the class before running it. What does each stage produce? What does the final number represent? Run it. Verify the prediction. This is the session in miniature — four tools, one pipeline, one useful answer.
sort sorts lines of text. Default: alphabetical. sort -n numeric sort (so 10 comes after 9, not after 1). sort -r reverse order. sort -u sort and remove duplicates simultaneously. sort -k2 sort by the second field. sort -t: -k3 -n sort on the third colon-delimited field numerically.
wc (word count) counts: wc -l lines, wc -w words, wc -c characters/bytes. Usually used to count lines in a pipeline.
uniq removes consecutive duplicate lines — only consecutive ones. If the same value appears at lines 1, 5, and 10, uniq collapses nothing, because the duplicates never sit next to each other. That is why input must be sorted first: sorting brings identical lines together so uniq can deduplicate fully. Always sort before uniq unless you specifically want consecutive-only deduplication. uniq -c counts occurrences — each line prefixed with how many times it appeared.
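A ten-second demonstration, with printf supplying interleaved values:

printf "a\nb\na\na\nb\n" | uniq            # prints a, b, a, b: only the adjacent pair collapsed
printf "a\nb\na\na\nb\n" | sort | uniq -c  # prints 3 a, 2 b: sorted first, counted correctly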
cut extracts columns. cut -d: -f1 uses colon as delimiter and extracts field 1. cut -d, -f2,4 extracts fields 2 and 4 from CSV. cut -c1-10 extracts characters 1 through 10. This is the tool for parsing structured text files like /etc/passwd (colon-delimited) or CSV exports.
Task 7 — finding the most-used shell — has the solution: cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn | head -1. Students who arrive at this independently have fully internalized the pipeline model. The answer is almost certainly /usr/sbin/nologin or /bin/false (service accounts), which is itself an informative result about how Linux manages system users. Let that result prompt a brief discussion of service accounts.
Ask: when you run sudo apt install tree — what actually happens? Where does the file come from? Who checked it was not malicious? How does apt know where to look? Let the class guess. All reasonable guesses go on the board. This session answers all of them.
A package is a compressed archive containing a program's files plus metadata: its name, version, description, dependencies, and installation scripts. On Debian-based systems, packages are .deb files. The metadata is as important as the files — it tells apt what else needs to be installed for this package to work.
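To make "archive plus metadata" concrete, one option is to download a package without installing it and look inside; tree is just an example package name:

```
apt download tree               # fetch the .deb into the current directory; nothing is installed
dpkg-deb --info tree_*.deb      # the metadata: version, dependencies, description, maintainer
dpkg-deb --contents tree_*.deb  # the files the package would place on the system
```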
A repository is a server that hosts a collection of packages, organized by release, architecture, and section. Each repository maintains an index — a list of all packages it contains, their versions, checksums, and dependencies. Repositories are cryptographically signed by their maintainer. apt verifies the signature before trusting anything in the index. This is how package managers prevent tampered software: every package's integrity can be verified against its signed checksum.
apt reads its repository list from /etc/apt/sources.list and the files in /etc/apt/sources.list.d/. Open sources.list and read one line together:
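A representative line looks like this (the mirror URL and release codename vary by machine, so treat both as placeholders):

```
deb http://archive.ubuntu.com/ubuntu jammy main restricted
# deb              binary packages (deb-src would point at source packages)
# http://...       the mirror that hosts the packages
# jammy            the release codename; check yours with: lsb_release -c
# main restricted  the repository sections enabled by this line
```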
apt update downloads the package index from each repository — not the packages themselves, just the list of what is available and at what version. This is why apt update must be run before apt install on a fresh system: without the index, apt does not know what packages exist.
apt install reads the index, finds the package and its dependencies, downloads all of them, verifies their checksums, and installs in the correct order. The dependency resolution — figuring out what else needs to be installed — is one of apt's most important functions. You never need to manually hunt for library dependencies.
Why repositories beat downloading from the internet: every package is reviewed, built consistently, tested, and signed. Updates flow through the same channel. Security patches are distributed automatically. You trust the repository maintainer, not every individual software author.
1. Open /etc/apt/sources.list with less. Read every uncommented line. What releases and sections are configured? What is the difference between main and universe?
2. Run sudo apt update and read the output carefully. What do "Hit:", "Get:", and "Ign:" each mean?
3. Run apt list --upgradable 2>/dev/null. How many packages have updates available?
4. Run apt show curl. Read the output: what are curl's dependencies? Who maintains it? What does it do?
5. Look inside /var/cache/apt/archives/. What is stored there and why does it exist?
6. The malicious repository question: if an attacker convinced you to add their repository to sources.list, what would stop apt from installing their packages?

The "Hit vs Get vs Ign" question in task 2 teaches students to read apt output as information rather than scrolling past it. Hit: index checked, no change. Get: new version of the index downloaded. Ign: line was skipped (often non-critical metadata like translation files). A system where every line says "Hit" after apt update has not changed since the last update — fully current. A system with many "Get" lines has new packages available.
Task 6 — the malicious repository question — is a critical thinking exercise. The safeguard is the GPG signature: apt requires a trusted signing key for each repository. Adding an unsigned repository requires explicitly overriding security. This is why guides that say "add this PPA" always include a step to add the signing key — without the key, apt refuses the repository's packages. Students who understand this are harder to trick into installing malicious software.
Ask: what is the difference between apt remove and apt purge? Take guesses. Most students will not know. This session makes the distinction clear and explains why it matters — especially when you reinstall a package and want to start clean.
apt search keyword searches package names and descriptions for the keyword. Use it when you know what you want to do but not the package name. The output can be long — pipe it through grep or less. apt search "text editor" returns many results; apt search "text editor" | grep "^nano" finds nano specifically.
apt show packagename displays full information about a package before installing: description, version, size after installation, dependencies, maintainer, homepage. Check this before installing anything you are not familiar with — it takes 5 seconds and prevents installing the wrong thing.
apt install packagename downloads and installs. It lists what will be installed (the package plus any dependencies) and asks for confirmation. Read the list before typing Y. If apt wants to remove something to resolve dependencies, understand why before confirming.
apt remove packagename removes the package but keeps its configuration files. If you reinstall the package later, your settings will still be there. apt purge packagename removes the package and its configuration files. Use purge when you are done with something permanently or when you want to reinstall with a clean configuration.
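A sketch of the difference, with somepackage standing in for anything that ships configuration files:

```
sudo apt remove somepackage   # program files removed; config files under /etc kept
dpkg -l somepackage           # status "rc": removed, config remains
sudo apt purge somepackage    # config files removed as well
```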
apt autoremove removes packages that were installed as dependencies and are no longer needed by any installed package. Run this after removing software to keep the system clean. apt list --installed shows everything currently installed.
Run apt search "text browser". Find lynx or w3m. Run apt show on it before installing. Then install it. Use it to visit a website. What is the experience like compared to a GUI browser?

Task 6 — each student installs something they want — is intentionally open and slightly chaotic. Some will install games (cmatrix, nsnake). Some will install development tools. Some will install media players. All are valid. The point is that apt works the same way for all of them, dependencies are handled transparently, and the choice is theirs. This is a small but real experience of ownership. Let the demos run — five minutes of peer show-and-tell about personal installations builds more engagement than a structured exercise.
Ask: on Windows, when do you typically update? Most answers: when forced to, or never if avoidable. Why do people avoid updates? They restart at bad times, they slow things down, they occasionally break things. Linux updates are different in three specific ways — this session names those differences.
apt update vs apt upgrade — the most commonly confused pair. apt update refreshes the package index — it downloads the current list of available package versions. It does not install or change anything on the system. apt upgrade reads the refreshed index and installs newer versions of all currently installed packages where a newer version is available. It may pull in new dependencies, but it will never remove an existing package; upgrades that would require a removal are flagged as "held back."
apt full-upgrade is more aggressive — it will add new packages or remove old ones if necessary to complete upgrades. On a desktop system, full-upgrade is usually appropriate. On a production server, removing packages to satisfy dependencies is a decision that requires careful review. Know which you are running.
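The three commands side by side, in a safe order to run them:

```
sudo apt update          # refresh the index; installs nothing
apt list --upgradable    # preview what an upgrade would touch
sudo apt upgrade         # install newer versions; never removes a package
sudo apt full-upgrade    # may add or remove packages to complete the upgrade
```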
Why updates matter. Most updates are not new features — they are bug fixes and security patches. A CVE (Common Vulnerabilities and Exposures) is a documented security vulnerability. When a CVE is discovered and a patch is released, the patch reaches your system through apt upgrade. If you have not updated in months, you are running known vulnerabilities that are publicly documented. Not updating is a risk decision, not a neutral one.
When not to update immediately. On a production server during business hours: updates can restart services. Mid-project when you need stability: a package update could change behavior. Immediately after a major release: wait a few days for early adopters to find problems. LTS releases update less frequently precisely for stability — this is why we chose Kubuntu LTS for this course.
Unattended upgrades. Ubuntu enables automatic security updates by default via the unattended-upgrades package. Only security patches are applied automatically — feature updates are not. Show students the configuration file so they know the machine is not randomly updating itself without their knowledge — it is applying only the critical patches that do not require user decisions.
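Where to look on a default Ubuntu install (the paths below are the stock locations; adjust if the classroom image differs):

```
less /etc/apt/apt.conf.d/20auto-upgrades        # is the periodic run switched on?
less /etc/apt/apt.conf.d/50unattended-upgrades  # which update origins are applied automatically
systemctl status unattended-upgrades            # is the service running right now?
```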
1. Run sudo apt update. Count how many package lists were refreshed. Count how many said "Hit" vs "Get".
2. Run apt list --upgradable 2>/dev/null. For each upgradable package, run apt show to understand what it is. Are any security updates in the list?
3. Run sudo apt upgrade. Do not confirm yet — read the complete list of what will change. Are any packages being held back? Confirm and observe the upgrade process.
4. Run apt list --upgradable 2>/dev/null again. Is it empty now?
5. Open /etc/apt/apt.conf.d/50unattended-upgrades in less. What categories of updates happen automatically? What is excluded?

The apt update vs apt upgrade confusion is so common that even experienced Linux users sometimes say "run apt-get update" when they mean "run apt-get upgrade." Make the distinction absolutely explicit with the analogy: apt update is checking the newspaper for what is available in stores. apt upgrade is actually going to the store and buying the new versions. Checking the newspaper changes nothing in your house. Only the store run does.
Task 6c — why LTS — ties back to the distro discussion from Phase 1 session 3. Students who remember the stability vs cutting-edge trade-off discussion will recognize this as the practical consequence of that choice. LTS chose stability. That is why the classroom machine does not surprise you with breaking changes mid-course.
No new content. Each student designs and installs their own tool set, documents it, and demonstrates it. By the end of this session, every machine in the room is slightly different — each reflects the choices of the person sitting in front of it. That is the session's real outcome: not a list of installed packages, but the experience of making deliberate, informed choices about what the machine contains.
Brief: today you decide what goes on your machine. The only rule is that you must justify every choice.
Each student creates ~/projects/linux-course/phase2/my-setup.md containing: every package they installed, the command used to install it, and a short justification for each choice.
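A hypothetical starting template; the headings are suggestions, not requirements:

```
mkdir -p ~/projects/linux-course/phase2
cat > ~/projects/linux-course/phase2/my-setup.md <<'EOF'
# My setup, week 10
## Installed
- package / what it does / why I chose it / command I used
## Tried and removed
## Would install next
EOF
```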
Each student gives a two-minute demo: what they installed and why. After demos:
Group reflection: what was the most interesting thing someone else installed? What did you install that you will actually use? What would you install if you had more time?
Preview of Phase 3: "Your machine now has the tools you chose. Next phase, the machine starts looking and behaving the way you want it. Dotfiles, KDE Plasma customization, your first real script that does something useful. The difference between Phase 2 and Phase 3 is the difference between a machine that works and a machine that works the way you work. That difference is what Phase 3 is about."
The diversity of installations in this session is the point. Some students will install neofetch (shows system info in ASCII art). Some will install a Python environment. Some will install a game. Some will install a music player. All are right. All demonstrate apt working correctly. The comparison of choices during demos is genuinely interesting — it shows that Linux as a platform is neutral about what you use it for, which is itself part of the course's message.
Collect the my-setup.md files or ask students to share them. At the end of Phase 6, they will look back at what they installed in week 10 from the perspective of having built their own computer, customized their environment, written scripts, run a local AI model, and completed a final project. The contrast will be significant and worth naming.
Phase 2 of 6 · Linux seminar · Kubuntu LTS · Ages 15–18
Phase 3 — Making it yours — begins next session