# Why, Scott, WHY?!? > Full content of every published post. Personal tech blog by Scott Pierce. --- # Why AI Can't Make You a Beautiful Logo Date: 2026-05-15 URL: https://ddrscott.github.io/blog/2026/why-ai-cant-make-you-a-beautiful-logo/ Tags: ai, communication, design, llm, prompting Description: AI is a distillation of the internet. It cannot read your mind. The fastest way to better prompts is to strip subjective words and feed it objective context. **TL;DR** — AI is a distillation of the internet. It can't read your mind. The fastest way to get better results is to strip subjective words out of your prompts and feed it objective context instead. The same way you'd talk to a stranger you just met. ## "Make a beautiful logo" Say you ask an AI: *"Make a beautiful logo that reflects the values of this website."* That's it. You hit enter. You're trusting the model with a request that, in your brain, has a clear picture attached to it. Colors. Tone. The vibe of the site. Maybe a feeling you can't even put words to. The AI has none of that. It doesn't know if it should visit the website. It doesn't know what your conception of "beautiful" is. It doesn't even know if the domain name has any correlation to the content you actually care about. So it guesses. And then you're surprised when the guess is wrong. ## AI is a distillation of the internet Here's the framing that fixes this for me: **AI is a distillation of the internet.** It's an extremely broad, general-purpose guesser. When you ask it something subjective, it's averaging across every opinion, style, and aesthetic on the open web and handing you back the centroid. There is essentially zero hope of that centroid matching what's in your head. Try this thought experiment. Four people sit in a room. You say *"create a beautiful woman."* Each of those four people will draw a different person. Different hair, different build, different face, different everything. The word "beautiful" doesn't have a destination. 
It has four. Or four million. Now do the same thing with *"blonde."* How blonde? Strawberry? Platinum? Dirty? You've narrowed it, but only a little. The internet can't answer subjective questions because subjective questions don't have answers. They have preferences. And your preference is a thing only you can supply. ## The Google search rule still applies If you typed *"beautiful woman"* into Google, you'd get a top page of results that are mostly stock photos and whatever the algorithm thinks is safe to surface. You'd scroll, drill in, refine — *"curly hair, round face, this height, this style"* — and slowly converge on what you actually wanted. Same rule with AI. **Take every subjective word out.** Replace it with something measurable. - ~~Beautiful~~ → high contrast, warm tones, geometric sans-serif - ~~Modern~~ → flat color, no gradients, 8px rounded corners - ~~Professional~~ → muted palette, generous whitespace, serif headings This isn't because AI is dumb. It's because subjective words are coordinates without an origin. Objective words are coordinates with an origin. AI needs the origin. ## The trick: let AI teach you its language Here's the move that changed how I prompt. Don't pre-declare anything as good or bad. Don't say *"I like this style"* or *"this is the kind of thing I want."* That's all subjective and you're back to the same problem. Instead: grab a reference image — a logo, a screenshot, a piece of art you respond to — drop it into the model, and say: **"Analyze this. Give me the words you think are appropriate for what I just gave you."** The AI won't tell you if it's good or bad art. It'll do what a computer does. It'll tell you: - The dominant color codes - The tonal range (bright, dark, contrast ratios) - The font family or family characteristics - The composition (negative space, balance, weight) - The mood vocabulary it associates with those properties Now you have the AI's language for the thing you wanted. You can speak it back. 
*"Make me a logo that uses this palette, this kind of weight, this kind of negative space — but swap the serif for a geometric sans-serif and tighten the kerning by 5%."* You've stopped guessing what AI knows. You've made AI tell you what it knows, and then you're working in that vocabulary. ## Prompting is about context The whole thing reduces to two questions: 1. **Where are you starting?** 2. **Where do you need to get to?** If you can answer both with specifics, AI fills in the middle. That's its job. That's what it's good at. The failures happen when you give it the destination ("beautiful logo") without the starting point, or the starting point without a destination. Context isn't decoration on a prompt. It's the prompt. ## We do this with humans too This isn't an AI quirk. It's a communication quirk that AI happens to amplify. When I meet someone new — say I meet Timothy for the first time — we don't immediately get to business. We chat. We figure out what language we both speak. Where the common ground is. Whether we've got shared references — faith, family, profession, taste in music. Once we've calibrated, the actual conversation gets easier, faster, more precise. AI is the same, just a touch more computery. Large language models understand English pretty well now, so you can be a little less robotic than you had to be three years ago. But you still need to be specific. You still need to calibrate. You still need to take out the subjective words and put in the objective ones. ## The whole article in one line > Take subjective words out. Put objective things in. Let AI teach you its language before you try to speak it. That's it. That's the diatribe. Go make better prompts. --- # Postgres via Cloudflare Tunnel Date: 2026-02-19 URL: https://ddrscott.github.io/blog/2026/postgres-via-cloudflare-tunnel/ Tags: devops, postgres Description: Expose your local Postgres to the world without opening firewall ports. 
**Access your home Postgres from anywhere with Cloudflare Tunnels.** ## Start the PG Server using Docker ```bash docker run -d --name pg16 \ -e POSTGRES_PASSWORD=secret \ -p 15432:5432 \ -v pg16_data:/var/lib/postgresql/data \ pgvector/pgvector:pg16 ``` ## Test Local Connection using Docker ```bash docker run -it --rm \ -e PGPASSWORD=secret \ pgvector/pgvector:pg16 \ psql -h 192.168.86.250 -U postgres -p 15432 -c 'select version()' ``` > Replace `192.168.86.250` with your machine's private IP. ## Create the Tunnel Install `cloudflared` on the Docker host following the [official guide](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/create-remote-tunnel/). Configure the tunnel to point to `localhost:15432`. ## Connect from Remote On any remote machine: ```bash cloudflared access tcp --hostname pg16.dataturd.com --url localhost:15432 ``` Then connect your client to `localhost:15432` as if Postgres were local: ```bash # With Docker docker run -it --rm \ -e PGPASSWORD=secret \ --network host \ pgvector/pgvector:pg16 \ psql -h localhost -U postgres -p 15432 -c 'select version()' # Or with local psql PGPASSWORD=secret psql -h localhost -U postgres -p 15432 -c 'select version()' ``` --- # Live Code Highlighting: A Technical Deep Dive Date: 2026-02-10 URL: https://ddrscott.github.io/blog/2026/strudel-live-code-highlighting/ Description: How Strudel implements real-time code highlighting during music playback - a unique approach to connecting notation and sound in live coding environments. **How Strudel implements real-time code highlighting during music playback - a unique approach to connecting notation and sound in live coding environments.** ## Abstract Live coding environments for music have long grappled with a fundamental challenge: how to visually connect the code musicians write with the sounds they hear. 
While most live coding systems provide static syntax highlighting and flash feedback on evaluation, Strudel - a JavaScript port of TidalCycles - implements a sophisticated system that dynamically highlights code elements as they produce sound in real-time. This whitepaper examines Strudel's implementation in detail, surveys prior art in the field, and analyzes the architectural decisions that make this feature possible. ## 1. Introduction ### 1.1 The Problem of Liveness In traditional programming environments, there's a clear separation between writing code and executing it. The feedback loop consists of: write code, run it, observe output. Live coding environments compress this loop dramatically, but even in systems like TidalCycles or Sonic Pi, the visual connection between code and sound remains indirect. Consider a pattern like: ```javascript s("bd hh sd hh").note("c3 e3 g3 b3") ``` When this plays, a musician hears a sequence of sounds. But *which* part of the code is producing the current sound? Is it the `bd` (bass drum) or the `hh` (hi-hat)? The `c3` or the `g3`? Traditional syntax highlighting provides no answer - it colors keywords by their syntactic role, not their temporal activity. ### 1.2 Strudel's Solution Strudel solves this by implementing what we might call **temporal source mapping** - a system that maintains bidirectional links between: 1. **Source code positions** (character ranges in the editor) 2. **Musical events** (called "Haps" in Tidal terminology) When a Hap becomes active (i.e., its time span includes "now"), the corresponding source code is visually highlighted with a colored outline. This creates an immediate, visceral connection between notation and sound.  ## 2. Prior Art and Related Work ### 2.1 Gibber: Annotations and Visualizations Charlie Roberts' research on Gibber represents the closest prior art to Strudel's approach. 
In his 2015 NIME paper "Beyond Editing: Extended Interaction with Textual Code Fragments," Roberts describes techniques for: - **Inline annotations**: Displaying runtime values directly in source code - **Sparkline visualizations**: Showing continuous signals as miniature graphs - **Pattern playhead indicators**: Visual markers showing current position in sequences Roberts' key insight was that source code could serve dual purposes: as notation (what to play) and as visualization (what is playing). His 2018 web essay "Realtime Annotations & Visualizations in Live Coding Performance" catalogs these techniques extensively. However, Gibber's approach differs from Strudel's in a crucial way: Gibber's annotations are **injected widgets** that appear alongside or within code, whereas Strudel's highlighting **decorates the existing code** without modifying its visual structure. ### 2.2 Estuary: Collaborative Projectional Editing The Estuary project (Ogborn et al., 2017) explored collaborative live coding in the browser with TidalCycles-style patterns. While Estuary focused primarily on collaborative editing and "projectional" interfaces, it contributed to the ecosystem of browser-based live coding that Strudel builds upon. ### 2.3 Traditional Debugger Approaches Traditional debugging tools have long provided execution visualization: - **Breakpoint highlighting**: Current line indicators in step debuggers - **Coverage highlighting**: Showing which lines were executed - **Profiler heat maps**: Coloring code by execution frequency These approaches treat code as sequential instructions. Strudel's challenge is different: patterns are **declarative and cyclical**, not imperative and linear. A pattern like `"bd hh"` doesn't execute once - it continuously generates events across time. 
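To make that distinction concrete, here is a toy sketch of cyclic event generation — not Strudel's actual engine; `queryArc` and the event shape are invented for illustration. Querying the same pattern over any time span yields the events falling inside it, cycle after cycle:

```javascript
// Toy model: a cyclic step pattern doesn't "run once" -- each query
// over a time span [begin, end) yields the events inside that span.
function queryArc(steps, begin, end) {
  const events = [];
  const step = 1 / steps.length; // each step's share of one cycle
  for (let cycle = Math.floor(begin); cycle < end; cycle++) {
    steps.forEach((value, i) => {
      const start = cycle + i * step;
      if (start >= begin && start < end) {
        events.push({ start, value });
      }
    });
  }
  return events;
}

// Querying cycles 0..2 of "bd hh" yields four events, two per cycle:
queryArc(['bd', 'hh'], 0, 2);
// [{start: 0, value: 'bd'}, {start: 0.5, value: 'hh'},
//  {start: 1, value: 'bd'}, {start: 1.5, value: 'hh'}]
```

The same query repeated later with a shifted window produces fresh events — which is why Strudel's highlighter must continuously re-query rather than instrument a single execution.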
### 2.4 CodeMirror Decoration System Strudel's implementation leverages CodeMirror 6's decoration system, which provides: - **Mark decorations**: Styling ranges of text - **Widget decorations**: Inserting DOM elements - **Line decorations**: Styling entire lines - **Computed decorations**: Dynamically generated based on state The key innovation in CodeMirror 6 is that decorations are computed from **immutable state**, enabling efficient updates without DOM thrashing. ## 3. Architecture Overview Strudel's live highlighting system consists of five interconnected components: ``` ┌─────────────────────────────────────────────────────────────────┐ │ USER CODE │ │ s("bd hh").note("c") │ └──────────────────────────────┬──────────────────────────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────┐ │ TRANSPILER │ │ Parses code, extracts miniLocations: [[3, 10], [17, 20]] │ └──────────────────────────────┬──────────────────────────────────┘ │ ┌───────────────┴───────────────┐ ▼ ▼ ┌──────────────────────────┐ ┌─────────────────────────────────┐ │ PATTERN ENGINE │ │ CODEMIRROR │ │ Attaches locations to │ │ Stores locations as invisible │ │ Haps via context │ │ decorations with IDs │ └────────────┬─────────────┘ └──────────────┬──────────────────┘ │ │ ▼ │ ┌──────────────────────────┐ │ │ SCHEDULER │ │ │ Queries pattern for │ │ │ active haps @ 60fps │ │ └────────────┬─────────────┘ │ │ │ ▼ │ ┌──────────────────────────┐ │ │ DRAWER │ │ │ Animation loop calls │───────────────────┘ │ highlight(haps, time) │ └────────────┬─────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────┐ │ VISUAL HIGHLIGHTING │ │ CodeMirror computes decorations for active locations │ │ User sees: s("bd hh").note("c") │ │ ^^^^^^ ← outlined while sounding │ └─────────────────────────────────────────────────────────────────┘ ``` ## 4. 
Implementation Deep Dive ### 4.1 Source Location Extraction The transpiler (`packages/transpiler/transpiler.mjs`) uses Acorn for JavaScript parsing and Peggy for mini notation parsing. During transpilation, it captures the character positions of each pattern element: ```javascript export function transpiler(input, options = {}) { const { emitMiniLocations = true } = options; let miniLocations = []; const collectMiniLocations = (value, node) => { const leafLocs = getLeafLocations(`"${value}"`, node.start, input); miniLocations = miniLocations.concat(leafLocs); }; // Walk AST, collect locations for mini notation strings... return { output, miniLocations, widgets, sliders, labels }; } ``` The output format is an array of `[start, end]` tuples representing character ranges: ```javascript // For: s("bd hh").note("c3") miniLocations = [ [3, 5], // "bd" [6, 8], // "hh" [17, 19], // "c3" ] ``` ### 4.2 Location Propagation Through Patterns Patterns in Strudel carry a `context` object that propagates through all transformations. The `withLoc()` method attaches source positions: ```javascript // packages/core/pattern.mjs withLoc(start, end) { const location = { start, end }; return this.withContext((context) => { const locations = (context.locations || []).concat([location]); return { ...context, locations }; }); } ``` This is crucial: when patterns are composed (via `stack`, `cat`, `slow`, `fast`, etc.), the location context is preserved and combined. A Hap created from `s("bd").stack(s("hh"))` will carry locations from *both* source patterns. ### 4.3 The Hap Data Structure A Hap (Happening) is the fundamental event unit in Tidal/Strudel: ```javascript // packages/core/hap.mjs class Hap { constructor(whole, part, value, context = {}, stateful = false) { this.whole = whole; // TimeSpan: when the event "conceptually" occurs this.part = part; // TimeSpan: the portion being queried this.value = value; // The sound/note/control data this.context = context; // Includes locations! 
this.stateful = stateful; } isActive(time) { return this.whole.begin <= time && this.endClipped >= time; } } ``` The `context.locations` array enables reverse-mapping from event to source code. ### 4.4 CodeMirror State Management Strudel uses three CodeMirror 6 StateFields to manage highlighting: #### 4.4.1 `miniLocations` - Structural Storage ```javascript // packages/codemirror/highlight.mjs const miniLocations = StateField.define({ create() { return Decoration.none; }, update(locations, tr) { for (let e of tr.effects) { if (e.is(setMiniLocations)) { const marks = e.value.locations.map(([from, to]) => Decoration.mark({ id: `${from}:${to}`, // Key for matching! attributes: { style: `background-color: #00CA2880` }, }).range(from, to) ); locations = Decoration.set(marks, true); } } return locations; }, }); ``` Each decoration carries an `id` in the format `"start:end"`, enabling O(1) lookup when matching active Haps. #### 4.4.2 `visibleMiniLocations` - Active Hap Tracking ```javascript const visibleMiniLocations = StateField.define({ create() { return { atTime: 0, haps: new Map() }; }, update(visible, tr) { for (let e of tr.effects) { if (e.is(showMiniLocations)) { const haps = new Map(); for (let hap of e.value.haps) { if (!hap.context?.locations || !hap.whole) continue; for (let { start, end } of hap.context.locations) { const id = `${start}:${end}`; // Keep earliest-starting hap for each location if (!haps.has(id) || haps.get(id).whole.begin.lt(hap.whole.begin)) { haps.set(id, hap); } } } visible = { atTime: e.value.atTime, haps }; } } return visible; }, }); ``` This builds a Map from location IDs to their currently active Haps. 
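Stripped of the CodeMirror machinery, the lookup structure can be sketched in plain JavaScript. The hap objects and field names below are simplified mocks, not Strudel's real `Hap` class:

```javascript
// Toy sketch of the active-hap lookup: each hap's source locations are
// keyed as "start:end" strings so the decoration pass can test
// membership in O(1).
function buildActiveMap(haps) {
  const active = new Map();
  for (const hap of haps) {
    if (!hap.locations) continue;
    for (const { start, end } of hap.locations) {
      // Last writer wins here; the real StateField additionally
      // compares hap start times when two haps share a location.
      active.set(`${start}:${end}`, hap);
    }
  }
  return active;
}

// For: s("bd hh").note("c3"), with only "bd" and "hh" currently sounding
const active = buildActiveMap([
  { value: { s: 'bd' }, locations: [{ start: 3, end: 5 }] },
  { value: { s: 'hh' }, locations: [{ start: 6, end: 8 }] },
]);
active.has('3:5');   // true  -> outline "bd"
active.has('17:19'); // false -> "c3" stays undecorated
```

The constant-time membership test is what lets the decoration computation in the next section stay cheap even when it runs every animation frame.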
#### 4.4.3 `miniLocationHighlights` - Computed Decorations ```javascript const miniLocationHighlights = EditorView.decorations.compute( [miniLocations, visibleMiniLocations, displayMiniLocationsState], (state) => { const shouldDisplay = state.field(displayMiniLocationsState); if (!shouldDisplay) return Decoration.none; const iterator = state.field(miniLocations).iter(); const { haps } = state.field(visibleMiniLocations); const builder = new RangeSetBuilder(); while (iterator.value) { const { from, to, value: { spec: { id } } } = iterator; if (haps.has(id)) { const hap = haps.get(id); const color = hap.value?.color ?? 'var(--foreground)'; const style = hap.value?.markcss || `outline: solid 2px ${color}`; builder.add(from, to, Decoration.mark({ attributes: { style } })); } iterator.next(); } return builder.finish(); }, ); ``` This is the heart of the system: it iterates through all stored locations, checks which have active Haps, and generates visible outline decorations for those. ### 4.5 The Animation Loop The Drawer class (`packages/draw/draw.mjs`) runs at `requestAnimationFrame` speed (~60fps): ```javascript export class Drawer { constructor(onDraw, drawTime) { this.framer = new Framer(() => { const phase = this.scheduler.now() + lookahead; // Query the pattern for currently active haps const haps = this.scheduler.pattern.queryArc( Math.max(this.lastFrame, phase - 1/10), phase ); // Maintain visible haps with lookbehind for visual continuity this.visibleHaps = (this.visibleHaps || []) .filter(h => h.whole && h.endClipped >= phase - lookbehind - lookahead) .concat(haps.filter(h => h.hasOnset())); // Call the callback with current state onDraw(this.visibleHaps, phase - lookahead, this, this.painters); }); } } ``` The StrudelMirror class connects this to CodeMirror: ```javascript // packages/codemirror/codemirror.mjs this.drawer = new Drawer((haps, time, _, painters) => { const currentFrame = haps.filter((hap) => hap.isActive(time)); this.highlight(currentFrame, 
time); // Updates CodeMirror! this.onDraw(haps, time, painters); }, drawTime); ``` ## 5. Design Decisions and Trade-offs ### 5.1 Location IDs vs. Direct References Strudel uses string-based location IDs (`"start:end"`) rather than direct object references. This enables: - **Serialization**: Locations can be stored, transmitted, and compared - **Immutability**: CodeMirror state remains pure - **Efficiency**: Map lookups are O(1) The trade-off is potential collisions if two distinct code elements share identical positions (rare in practice). ### 5.2 Outline vs. Background Highlighting Strudel uses CSS `outline` rather than `background-color` for active highlighting: ```javascript const style = hap.value?.markcss || `outline: solid 2px ${color}`; ``` Outlines don't affect layout, don't obscure text, and can overlap without visual confusion. This is important when multiple elements are active simultaneously. ### 5.3 60fps Update Rate The animation loop runs at display refresh rate. This provides smooth visual updates but requires careful performance optimization: - **Query window limiting**: Only query a small time window - **Hap filtering**: Remove old haps from the visible list - **Efficient state updates**: CodeMirror's computed decorations only recompute when dependencies change ### 5.4 Color Propagation Hap colors come from the pattern's `value` object: ```javascript const color = hap.value?.color ?? 'var(--foreground)'; ``` This enables patterns like: ```javascript s("bd hh").color("red blue") ``` Where different elements of the pattern can have different highlight colors. ## 6. Comparison with Alternative Approaches ### 6.1 Widget Injection (Gibber-style) **Approach**: Insert DOM elements alongside or within code to show state. 
**Pros**: - Can display rich information (graphs, sliders) - Doesn't require pattern-to-source mapping **Cons**: - Changes visual layout - Requires custom parser/renderer - Can clutter the editor ### 6.2 Line-Level Highlighting (Debugger-style) **Approach**: Highlight entire lines that are "active." **Pros**: - Simple to implement - Works with any language **Cons**: - Too coarse for pattern-based music - Patterns often span multiple elements on one line ### 6.3 Cursor Following (Sequencer-style) **Approach**: Move a cursor/playhead across the code as time progresses. **Pros**: - Intuitive for linear sequences - Familiar from DAWs **Cons**: - Doesn't work for polyphonic patterns - Assumes left-to-right temporal ordering ### 6.4 Strudel's Approach (Temporal Source Mapping) **Approach**: Map musical events to source positions, highlight active positions. **Pros**: - Works for polyphonic, polyrhythmic patterns - Preserves code readability - Scales to complex compositions **Cons**: - Requires deep integration with pattern engine - Performance overhead from continuous updates - Limited to pattern elements (not arbitrary expressions) ## 7. Performance Characteristics ### 7.1 Time Complexity | Operation | Complexity | |-----------|------------| | Location extraction (transpile) | O(n) where n = AST nodes | | Location storage (CodeMirror) | O(m) where m = locations | | Active hap lookup | O(k) where k = active haps | | Decoration computation | O(m) worst case, typically O(k) | ### 7.2 Space Complexity | Data Structure | Space | |----------------|-------| | miniLocations array | O(m) | | CodeMirror decorations | O(m) | | Active haps Map | O(k) | ### 7.3 Observed Behavior In practice, with typical live coding patterns (10-50 locations), the system maintains smooth 60fps updates. Performance degrades with: - Very long code (thousands of locations) - High-density patterns (hundreds of simultaneous events) - Low-powered devices ## 8. 
Extensibility and Customization ### 8.1 Custom Styling Users can customize highlight appearance via `.markcss()`: ```javascript s("bd hh").markcss("background: rgba(255,0,0,0.3); border-radius: 4px") ``` ### 8.2 Toggle Control Highlighting can be enabled/disabled: ```javascript // Via CodeMirror effect view.dispatch({ effects: displayMiniLocations.of(false) }); ``` ### 8.3 Integration with Other Visualizations The same Hap data drives other visualizations (pianoroll, waveform) via the Drawer's `onDraw` callback, enabling synchronized multi-modal feedback. ## 9. Future Directions ### 9.1 Expression-Level Tracking Currently, only mini notation elements are tracked. Extending to arbitrary JavaScript expressions would enable highlighting for: ```javascript note(Math.random() * 12) // Which random value is playing? ``` ### 9.2 Historical Visualization Showing recently-played elements with fading opacity could provide temporal context beyond the current instant. ### 9.3 Bidirectional Editing Clicking a highlighted element during playback could select that Hap for modification, enabling direct manipulation of playing sounds. ### 9.4 Cross-Editor Support The architecture could be ported to other editors (Monaco, Ace) by abstracting the CodeMirror-specific decoration system. ## 10. Conclusion Strudel's live code highlighting represents a significant advancement in live coding interfaces. By maintaining bidirectional links between source code and musical events, it creates an immediate visual connection between notation and sound that enhances both performance and learning. The key architectural insights are: 1. **Capture locations during parsing** - not execution 2. **Propagate locations through pattern composition** - maintaining context 3. **Use efficient lookup structures** - location IDs enable O(1) matching 4. **Leverage editor decoration systems** - CodeMirror's computed decorations 5. 
**Run at animation frame rate** - smooth visual updates This approach is generalizable to other domains where code produces time-varying output: animation, simulation, robotics, or any system where "which code is active now?" is a meaningful question. ## References 1. Roberts, C., Wright, M., & Kuchera-Morin, J. (2015). "Beyond Editing: Extended Interaction with Textual Code Fragments." NIME 2015. 2. Roberts, C. (2018). "Realtime Annotations & Visualizations in Live Coding Performance." Web Essay. https://charlieroberts.github.io/annotationsAndVisualizations/ 3. Ogborn, D., Beverley, J., del Angel, L. N., Tsabary, E., & McLean, A. (2017). "Estuary: Browser-based Collaborative Projectional Live Coding of Musical Patterns." ICLC 2017. 4. Roos, F. & McLean, A. (2023). "Strudel: Live Coding Patterns on the Web." ICLC 2023. 5. CodeMirror. "Decoration Example." https://codemirror.net/examples/decoration/ 6. Roberts, C. & Wakefield, G. (2017). "gibberwocky: New Live-Coding Instruments for Musical Performance." NIME 2017. 7. McLean, A. (2014). "Making Programming Languages to Dance to: Live Coding with Tidal." FARM 2014. 8. Xambó, A. & Roma, G. (2024). "Human-machine agencies in live coding for music performance." Journal of New Music Research. ## Appendix A: Key Source Files | File | Purpose | |------|---------| | `packages/transpiler/transpiler.mjs` | Source location extraction | | `packages/core/pattern.mjs` | `withLoc()` method, context propagation | | `packages/core/hap.mjs` | Hap class with `isActive()` | | `packages/core/repl.mjs` | Evaluation and location passing | | `packages/draw/draw.mjs` | Animation loop | | `packages/codemirror/highlight.mjs` | StateFields and decorations | | `packages/codemirror/codemirror.mjs` | StrudelMirror integration | ## Appendix B: Minimal Implementation For developers interested in implementing similar functionality, here's a minimal example: ```javascript // 1. 
During parsing, capture source positions const locations = []; ast.walk(node => { if (isPatternElement(node)) { locations.push([node.start, node.end]); } }); // 2. Attach locations to events class Event { constructor(time, value, locations) { this.time = time; this.value = value; this.locations = locations; } isActive(now) { return this.time <= now && now < this.time + this.duration; } } // 3. Animation loop function animate() { const now = scheduler.currentTime(); const activeEvents = events.filter(e => e.isActive(now)); // 4. Update editor decorations const activeLocations = new Set( activeEvents.flatMap(e => e.locations.map(l => `${l[0]}:${l[1]}`)) ); editor.updateDecorations(activeLocations); requestAnimationFrame(animate); } ``` --- *This whitepaper is based on analysis of Strudel version 1.3.0, hosted at [strudel.cc](https://strudel.cc) and [codeberg.org/strudel](https://codeberg.org/strudel).* --- # How to Avoid Shark Bites: A Data-Driven Approach Date: 2026-01-26 URL: https://ddrscott.github.io/blog/2026/shark-bite-prevention/ Tags: data, humor Description: An infographic exploring the most effective shark bite prevention strategy, backed by data from the International Shark Attack File. **The data is in. The results are conclusive.** After reviewing global shark attack statistics, I've discovered the single most effective method for avoiding shark bites: **stay out of the water**. 
## The Numbers According to the [International Shark Attack File (ISAF)](https://www.floridamuseum.ufl.edu/shark-attacks/) maintained by the Florida Museum, which contains over 6,800 individual investigations dating back to the 1500s: - **2024**: 47 unprovoked shark attacks worldwide - **2023**: Unprovoked attacks increased with an uptick in fatalities - **2015**: Record year with 98 unprovoked attacks globally Meanwhile, [Shark Spotters](https://sharkspotters.org.za/safety/shark-safety-advice/shark-bite-stats/) reports these regional statistics: - **South Africa since 1905**: 248 unprovoked attacks total - **Cape Peninsula since 1960**: ~28 attacks (roughly 1 every 2 years) - **Primary species involved**: Great white, tiger, and bull sharks ## The Prevention Strategy | Location | Annual Shark Bites | Risk Level | |----------|-------------------|------------| | Swimming in ocean | ~50-100 | Moderate | | Standing on land | 0 | None | The math is simple: **100% of shark bites occur in water**. ## Context Matters Before you cancel your beach trip, consider the actual risk. The ISAF provides comparative data showing shark attacks are statistically less likely than: - Lightning strikes - Tornadoes - Rip currents - Alligator attacks The increase in reported attacks over time correlates more with increased ocean recreation and better reporting systems than with changes in shark behavior. ## Conclusion Yes, this is tongue-in-cheek. Sharks aren't hunting humans—we're not on their menu. But if you want a 100% guaranteed method to avoid shark bites, the data supports one foolproof strategy. Stay dry, friends. 
--- **Sources:** - [International Shark Attack File - Florida Museum](https://www.floridamuseum.ufl.edu/shark-attacks/) - [Shark Spotters - Shark Bite Statistics](https://sharkspotters.org.za/safety/shark-safety-advice/shark-bite-stats/) --- # Browser Agent - Claude Code Plugin for Playwright Date: 2026-01-06 URL: https://ddrscott.github.io/blog/2026/browser-agent-plugin/ Tags: claude-code Description: Stop fighting your browser automation. A context-friendly Playwright plugin for Claude Code. **Stop fighting your browser automation. Start controlling it.** I built [browser-agent](https://github.com/ddrscott/wiz-marketplace) because existing solutions frustrated me: - **Microsoft's [playwright-mcp](https://github.com/microsoft/playwright-mcp)** dumps ~16k tokens of accessibility tree on every call - **Other Playwright skills** open and close the browser constantly, losing your session I wanted something that felt natural—where I could tell Claude to browse Amazon, take over manually to poke around, then hand control back without losing my session or burning context. ## The Solution A persistent Chrome process that survives script execution. Claude connects, does work, disconnects. The browser stays open. ``` ┌─────────────────┐ ┌──────────────────────┐ │ /browser cmd │──────────│ Chrome (persistent) │ │ or Task agent │ CDP │ --remote-debugging │ └─────────────────┘ └──────────────────────┘ │ ▲ ▼ │ script runs stays open then disconnects user can interact ``` ## Quick Demo ```bash # Navigate /browser go to amazon.com # Search /browser search for VR headsets # Extract info /browser what's the top pick on this page? # Take over manually, browse around... # Hand back to Claude /browser click Add to Cart ``` The browser window stays visible. You can interact with it anytime. Claude reconnects to the same session—cookies, logins, everything persists. 
## Why It Works | Problem | Old Way | Browser Agent | |---------|---------|---------------| | Context bloat | 16k tokens per call | Only script results | | Browser lifecycle | Opens/closes constantly | Persistent via CDP | | User handoff | Locked out during automation | Take over anytime | | API limitations | Predefined commands only | Full Playwright API | Claude writes custom Python scripts for each task using the complete Playwright API. No artificial limits. ## Architecture The plugin uses Claude Code's native patterns: - **Agent** (`agents/browser.md`) - System prompt with Playwright API reference - **Skill** (`skills/browser.md`) - Delegation patterns for the main agent - **Command** (`commands/browser.md`) - The `/browser` slash command - **Script** (`scripts/browser.py`) - Thin executor managing Chrome lifecycle The script is intentionally minimal—it handles the boring infrastructure (starting Chrome, CDP connection, PID tracking) so Claude's generated scripts stay focused on the actual task. ## Get It ```bash # Clone the marketplace git clone https://github.com/ddrscott/wiz-marketplace # Install the plugin claude plugin install wiz-marketplace/browser-agent ``` Prerequisites: Chrome, Python 3.11+, [uv](https://docs.astral.sh/uv/) ## What's Next This is part of [wiz-marketplace](https://github.com/ddrscott/wiz-marketplace), a collection of Claude Code plugins built with context efficiency in mind. More plugins coming. If you build something cool with it, let me know. --- # md-to-print: Because Screen Reading Killed My Eyes Date: 2026-01-03 URL: https://ddrscott.github.io/blog/2026/md-to-print-markdown-pdf-generator/ Tags: cli, python Description: A markdown to PDF converter that actually looks good on paper. 2-column layout, large fonts, automatic file watching, and a launchd setup that makes it invisible. ## The Problem With Reading on Screens I read a lot of markdown. Documentation, notes, LLM output, technical specs. 
And I'm tired of squinting at screens. My eyes are getting older, and backlights are merciless. "Just print it," I thought. So I did. And it looked like garbage. Most markdown-to-PDF tools are designed for web developers who want their docs to look exactly like GitHub renders them. Small fonts. Single column. Infinite scroll mentality. That's great for screens. It's terrible for paper. ## What I Actually Wanted Paper has different constraints than screens: 1. **Fixed width** - no responsive design, just 8.5 inches 2. **Two eyes** - humans can scan across columns efficiently 3. **No backlight** - contrast matters, but you can go bigger 4. **Physical presence** - a printed doc sits on your desk demanding attention So I built `md-to-print` with these opinions baked in: - **2-column layout** like a newspaper or textbook - **Large fonts** for aging eyes (9.5pt base, generous leading) - **High contrast** cream paper with dark ink - **Smart pagination** that doesn't split code blocks or list items mid-page ## The CSS Magic WeasyPrint gives you real CSS `@page` rules, which most web devs have never touched. Here's the good stuff: ```css @page { size: letter portrait; margin: 0.75in 0.6in 0.85in 0.6in; @top-center { content: string(doc-title); font-family: var(--font-mono); } @bottom-center { content: counter(page); } } ``` The `string-set` property lets you pull content from the document into running headers: ```css h1 { string-set: doc-title content(); } ``` So the first `
In 2017, I set out to create the **fastest** CSV viewer known to the Internet. I searched for alternatives to Excel to make sure my problem wasn't already solved. I found some that were IDE-specific, for IDEs I don't use. I found some that were Electron-based, but Electron's startup time was slow back then. And there were some open-source options, but they didn't scratch the itch. Many of the tools still required me to know the structure of the file before opening it.
## My Deepest Needs
- **Speed!** I need to view the data before it finishes loading.
- **Good Guessing**. I don't want to pick the column separator, line delimiter, or quoting before previewing.
- **Fault tolerance**. Let me see the data even if the guessed format is bad so I can be more informed for the next attempt.
## Command Line Junkie
Did you know CSVs can be read from the command line? This is the way I lived before Golumn. If you're in a POSIX (Linux, Mac, Unix) environment, you can try:
```sh
head -100 /path/to/file.csv | column -ts, | less -NS
```
**Command Breakdown**:
* `head -100 /path/to/file.csv` gets the first 100 lines of the file
* `column -ts,` prints in table `-t` format separated by comma `-s,`
* `less -NS` paginates the result with line numbers `-N` and horizontal scrolling `-S`
The command-line way checks all 3 needs. It's fast, it lets me see results, and it's fault tolerant. It _may_ look like crap if the delimiter is wrong, but that's easy to remediate with different flags, and because I can see something, I can adjust quickly. For instance, `column -ts\|` would split on pipe instead of comma.
This is nice and all, but every once in a while a graphical interface is needed to get a better feel for the data. GUIs provide smooth scrolling and higher resolution than the terminal. It's also nice to have numbers aligned properly and clickable things.
## Golumn is Born
Golumn is a Python project built with wxPython, which wraps wxWidgets. It leverages Python's CSV sniffer and SQLite for data processing.
https://pypi.org/project/golumn/
The command `golumn /path/to/file.csv` opens a GUI window in about 0.1 seconds, showing the data in a simple grid that scrolls until my fingers get tired.
Over time, I've added keyboard shortcuts to quickly filter/unfilter data based on the current selection, sort columns, and search and filter any column for a piece of text. But at its core, it's a sharp tool that does one thing well: **view CSV files**.
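The "good guessing" leans on Python's standard-library `csv.Sniffer`. A minimal sketch of the idea (the sample data here is invented for illustration):

```python
import csv
import io

sample = "name,age\nalice,30\nbob,25\n"

# Sniff the dialect (delimiter, quoting) from a sample of the file,
# then parse the data with whatever dialect was detected.
dialect = csv.Sniffer().sniff(sample)
rows = list(csv.reader(io.StringIO(sample), dialect))
```

If the guess is wrong, you still get rows back, just ugly ones, which is exactly the fault tolerance I wanted: see something, then adjust.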
## Open Source and Legacy
[Golumn](https://github.com/ddrscott/golumn) is open source. One day, someone other than myself can contribute to it. Someone other than myself could have the same impatience for CSV load times. Someone could have similar hotkey preferences. Or maybe not. I don't put it out there for fame or fortune. I put it out there because it's my favorite pet project and everyone should have a chance to play with it. `Golumn` is a buddy that has served me well for 7+ years, and I want it to have a home beyond me. History is made by sharing, not hoarding.
## Conclusion
Thanks for getting this far. I hope you find Golumn useful, and reach out to me if you have any questions or
suggestions. I'm always looking for ways to make it better.
---
# Bowling Scores the SQL Way
Date: 2024-10-09
URL: https://ddrscott.github.io/blog/2024/sql-bowling/
Tags: sql
Description: How to calculate bowling scores using SQL. No Stored Procedures allowed!
The scoring rules for American bowling are simple on paper but can be a chore to calculate by hand. This is a fun exercise to calculate scores en masse using SQL. Because if you can't do it in SQL, your life is too easy!
If you don't know the rules, here's a quick summary: https://www.kidslearntobowl.com/how-to-keep-score/
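Before translating the rules to SQL, it helps to see them in imperative form. Here's a minimal reference scorer sketched in Python (not part of the original post) over a flat list of rolls:

```python
def score_game(rolls):
    """Score one bowling game from a flat list of pin counts."""
    total, i = 0, 0
    for _frame in range(10):
        if rolls[i] == 10:                   # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                # open frame: just the pins
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total
```

`score_game([10] * 12)` comes out to 300, and the realistic fourth game later in the post comes out to 139, which is what the SQL should reproduce.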
## Data Preparation
We'll make a table to store the raw data for the individual games.
We're not going into full data modelling by adding players and timestamps. We're keeping it simple for this example.
```sql
DROP TABLE IF EXISTS game_rolls;
CREATE TABLE game_rolls (
game int,
frame int,
roll int,
pins int
);
```
### Add a Perfect Game
A perfect game is 12 strikes in a row: one strike in each of the first 9 frames and three strikes in the 10th frame.
```sql
-- Perfect Game
-- Total: 300
INSERT INTO game_rolls (game, frame, roll, pins)
VALUES
(1, 1, 1, 10),
(1, 2, 1, 10),
(1, 3, 1, 10),
(1, 4, 1, 10),
(1, 5, 1, 10),
(1, 6, 1, 10),
(1, 7, 1, 10),
(1, 8, 1, 10),
(1, 9, 1, 10),
(1, 10, 1, 10),
(1, 10, 2, 10),
(1, 10, 3, 10);
```
### Add an Unlucky Second Game
We managed to knock down 1 pin on the first roll of each frame and missed the second roll. Repeat this for all 10 frames.
```sql
-- 1 pin each frame.
-- Total: 10
INSERT INTO game_rolls (game, frame, roll, pins)
VALUES
(2, 1, 1, 1),
(2, 1, 2, 0),
(2, 2, 1, 1),
(2, 2, 2, 0),
(2, 3, 1, 1),
(2, 3, 2, 0),
(2, 4, 1, 1),
(2, 4, 2, 0),
(2, 5, 1, 1),
(2, 5, 2, 0),
(2, 6, 1, 1),
(2, 6, 2, 0),
(2, 7, 1, 1),
(2, 7, 2, 0),
(2, 8, 1, 1),
(2, 8, 2, 0),
(2, 9, 1, 1),
(2, 9, 2, 0),
(2, 10, 1, 1),
(2, 10, 2, 0);
```
### Add Gutter Ball and Spare
Now, we're showing off our skills. Throw the first ball into the gutter on purpose and take down the rest in the second roll for a spare.
```sql
-- 0 then 10 each frame
-- Total: 100
INSERT INTO game_rolls (game, frame, roll, pins)
VALUES
(3, 1, 1, 0),
(3, 1, 2, 10),
(3, 2, 1, 0),
(3, 2, 2, 10),
(3, 3, 1, 0),
(3, 3, 2, 10),
(3, 4, 1, 0),
(3, 4, 2, 10),
(3, 5, 1, 0),
(3, 5, 2, 10),
(3, 6, 1, 0),
(3, 6, 2, 10),
(3, 7, 1, 0),
(3, 7, 2, 10),
(3, 8, 1, 0),
(3, 8, 2, 10),
(3, 9, 1, 0),
(3, 9, 2, 10),
(3, 10, 1, 0),
(3, 10, 2, 10);
```
### Add Realistic Game with Strikes and Spares
This is my typical night out, never quite getting halfway to perfection (300). If I spent more time bowling and less time writing SQL maybe I'd get better!
```sql
INSERT INTO game_rolls (game, frame, roll, pins)
VALUES
(4, 1, 1, 7), -- First frame: 7 pins
(4, 1, 2, 2), -- Open frame with 2 pins => 9
(4, 2, 1, 10), -- Second frame: Strike => 9+10+5+4=28
(4, 3, 1, 5), -- Third frame: 5 pins
(4, 3, 2, 4), -- Open frame with 4 pins => 28+9=37
(4, 4, 1, 8), -- Fourth frame: 8 pins
(4, 4, 2, 1), -- Open frame with 1 pin => 37+9=46
(4, 5, 1, 10), -- Fifth frame: Strike => 46+10+6+3=65
(4, 6, 1, 6), -- Sixth frame: 6 pins
(4, 6, 2, 3), -- Open frame with 3 pins => 65+9=74
(4, 7, 1, 9), -- Seventh frame: 9 pins
(4, 7, 2, 0), -- Open frame with 0 pins => 74+9=83
(4, 8, 1, 10), -- Eighth frame: Strike => 83+10+9=102
(4, 9, 1, 7), -- Ninth frame: 7 pins
(4, 9, 2, 2), -- Open frame with 2 pins => 102+9=111
(4, 10, 1, 10), -- Tenth frame: Strike
(4, 10, 2, 10), -- Bonus roll: Strike
(4, 10, 3, 8); -- Bonus roll: 8 pins => 111+10+10+8=139
```
## Query to Calculate Scores
There are many ways to calculate the scores.
We're going to make use of [Window Functions](https://www.postgresql.org/docs/current/tutorial-window.html) to look ahead for bonus pins and look behind for spares.
```sql
SELECT
*,
COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
roll = 1 AND pins = 10 AS is_strike,
roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare
FROM game_rolls
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
;
```
**Results**:
|game|frame|roll|pins|next_pins|next_next_pins|is_strike|is_spare|
|----|-----|----|----|---------|--------------|---------|--------|
| 4| 1| 1| 7| 2| 10|false |false |
| 4| 1| 2| 2| 10| 5|false |false |
| 4| 2| 1| 10| 5| 4|true |false |
| 4| 3| 1| 5| 4| 8|false |false |
| 4| 3| 2| 4| 8| 1|false |false |
| 4| 4| 1| 8| 1| 10|false |false |
| 4| 4| 2| 1| 10| 6|false |false |
| 4| 5| 1| 10| 6| 3|true |false |
| 4| 6| 1| 6| 3| 9|false |false |
| 4| 6| 2| 3| 9| 0|false |false |
| 4| 7| 1| 9| 0| 10|false |false |
| 4| 7| 2| 0| 10| 7|false |false |
| 4| 8| 1| 10| 7| 2|true |false |
| 4| 9| 1| 7| 2| 10|false |false |
| 4| 9| 2| 2| 10| 10|false |false |
| 4| 10| 1| 10| 10| 8|true |false |
| 4| 10| 2| 10| 8| 0|false |false |
| 4| 10| 3| 8| 0| 0|false |false |
That looks right. The future pins are correct and the strikes and spares are correctly identified.
If we wanted, we could make a view, but we're going to continue with the query by using [Common Table Expressions
(CTE)](https://www.postgresql.org/docs/current/queries-with.html).
We'll call the first CTE `stats` and build on it with the next CTE to calculate the bonus based on the rules of
strikes and spares. Strikes get the next two rolls and spares get the next roll.
There is a special condition for the last frame. We accept the pins as final since there are no future frames to
look at. We don't have to 'wait' to score the last frame. We can score it immediately.
> I have to admit, this was the hardest part to figure out and I was stuck on it for longer than I'd like to admit.
> The perfect game scenario was totalling 320 for the longest time! (Perfect games of 12 strikes in a row are 300 points, not 320)
```sql
WITH stats AS (
SELECT
*,
COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
roll = 1 AND pins = 10 AS is_strike,
roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
FROM game_rolls
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
)
SELECT
*,
CASE
WHEN is_last_frame THEN pins
WHEN is_strike THEN pins + next_pins + next_next_pins
WHEN is_spare THEN pins + next_pins
ELSE
pins
END
AS score
FROM stats
;
```
**Results with pins per frame**:
|game|frame|roll|pins|next_pins|next_next_pins|is_strike|is_spare|is_last_frame|score|
|----|-----|----|----|---------|--------------|---------|--------|-------------|-----|
| 4| 1| 1| 7| 2| 10|false |false |false | 7|
| 4| 1| 2| 2| 10| 5|false |false |false | 2|
| 4| 2| 1| 10| 5| 4|true |false |false | 19|
| 4| 3| 1| 5| 4| 8|false |false |false | 5|
| 4| 3| 2| 4| 8| 1|false |false |false | 4|
| 4| 4| 1| 8| 1| 10|false |false |false | 8|
| 4| 4| 2| 1| 10| 6|false |false |false | 1|
| 4| 5| 1| 10| 6| 3|true |false |false | 19|
| 4| 6| 1| 6| 3| 9|false |false |false | 6|
| 4| 6| 2| 3| 9| 0|false |false |false | 3|
| 4| 7| 1| 9| 0| 10|false |false |false | 9|
| 4| 7| 2| 0| 10| 7|false |false |false | 0|
| 4| 8| 1| 10| 7| 2|true |false |false | 19|
| 4| 9| 1| 7| 2| 10|false |false |false | 7|
| 4| 9| 2| 2| 10| 10|false |false |false | 2|
| 4| 10| 1| 10| 10| 8|true |false |true | 10|
| 4| 10| 2| 10| 8| 0|false |false |true | 10|
| 4| 10| 3| 8| 0| 0|false |false |true | 8|
That looks good. The scores are correct and the last frame is scored correctly.
We could simply `SUM` everything at this point:
```sql
WITH stats AS (
SELECT
*,
COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
roll = 1 AND pins = 10 AS is_strike,
roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
FROM game_rolls
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
),
scores AS (
SELECT
*,
CASE
WHEN is_last_frame THEN pins
WHEN is_strike THEN pins + next_pins + next_next_pins
WHEN is_spare THEN pins + next_pins
ELSE
pins
END
AS score
FROM stats
)
SELECT
game,
SUM(score)
FROM scores
GROUP BY game
;
```
**Results final `SUM`**:
|game|sum|
|----|---|
| 1|300|
| 2| 10|
| 3|100|
| 4|139|
Notice the use of CTEs again so we can build on the previous step. This is an important pattern that I follow
strictly when building up queries. It allows me to extract parts into views in the future and provides a good
place for meaningful names.
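If you'd like to sanity-check the query without spinning up Postgres, SQLite (3.25+, bundled with modern Python) supports the same window functions. A condensed harness, trimmed to the perfect game and the scoring CTEs above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE game_rolls (game int, frame int, roll int, pins int)")

# Perfect game: a strike in each of frames 1-9, three strikes in frame 10.
rolls = [(1, f, 1, 10) for f in range(1, 10)] + [(1, 10, r, 10) for r in (1, 2, 3)]
conn.executemany("INSERT INTO game_rolls VALUES (?, ?, ?, ?)", rolls)

query = """
WITH stats AS (
    SELECT
        *,
        COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
        COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
        roll = 1 AND pins = 10 AS is_strike,
        roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
        frame = (MAX(frame) OVER (PARTITION BY game)) AS is_last_frame
    FROM game_rolls
    WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
),
scores AS (
    SELECT
        *,
        CASE
            WHEN is_last_frame THEN pins
            WHEN is_strike THEN pins + next_pins + next_next_pins
            WHEN is_spare THEN pins + next_pins
            ELSE pins
        END AS score
    FROM stats
)
SELECT game, SUM(score) FROM scores GROUP BY game
"""
print(conn.execute(query).fetchall())  # [(1, 300)]
```

The same `WITH ... WINDOW` syntax runs unchanged, which makes this an easy regression test while iterating on the CTEs.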
In bowling, we normally want to see a running score frame by frame. We can do this by adding a cumulative score to each frame with yet another window function.
```sql
WITH stats AS (
SELECT
*,
COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
roll = 1 AND pins = 10 AS is_strike,
roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
FROM game_rolls
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
),
scores AS (
SELECT
*,
CASE
WHEN is_last_frame THEN pins
WHEN is_strike THEN pins + next_pins + next_next_pins
WHEN is_spare THEN pins + next_pins
ELSE
pins
END
AS score
FROM stats
),
sums AS (
-- Note: nothing references this CTE, so the planner skips it entirely.
-- We keep it here so the final query can choose which CTE to select from.
SELECT
game,
SUM(score)
FROM scores
GROUP BY game
),
cumulative_scores AS (
SELECT
*,
SUM(score) OVER (w) AS cumulative_score,
CASE
WHEN is_strike THEN 'X'
WHEN is_spare THEN ' /'
ELSE REPEAT(' ', roll - 1) || pins
END display
FROM scores
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
)
SELECT
game,
frame,
display,
cumulative_score
FROM cumulative_scores
;
```
**Results Game 1**:
|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 1| 1|X | 30|
| 1| 2|X | 60|
| 1| 3|X | 90|
| 1| 4|X | 120|
| 1| 5|X | 150|
| 1| 6|X | 180|
| 1| 7|X | 210|
| 1| 8|X | 240|
| 1| 9|X | 270|
| 1| 10|X | 280|
| 1| 10| 10 | 290|
| 1| 10| 10 | 300|
**Results Game 2**:
|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 2| 1|1 | 1|
| 2| 1| 0 | 1|
| 2| 2|1 | 2|
| 2| 2| 0 | 2|
| 2| 3|1 | 3|
| 2| 3| 0 | 3|
| 2| 4|1 | 4|
| 2| 4| 0 | 4|
| 2| 5|1 | 5|
| 2| 5| 0 | 5|
| 2| 6|1 | 6|
| 2| 6| 0 | 6|
| 2| 7|1 | 7|
| 2| 7| 0 | 7|
| 2| 8|1 | 8|
| 2| 8| 0 | 8|
| 2| 9|1 | 9|
| 2| 9| 0 | 9|
| 2| 10|1 | 10|
| 2| 10| 0 | 10|
**Results Game 3**:
|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 3| 1|0 | 0|
| 3| 1| / | 10|
| 3| 2|0 | 10|
| 3| 2| / | 20|
| 3| 3|0 | 20|
| 3| 3| / | 30|
| 3| 4|0 | 30|
| 3| 4| / | 40|
| 3| 5|0 | 40|
| 3| 5| / | 50|
| 3| 6|0 | 50|
| 3| 6| / | 60|
| 3| 7|0 | 60|
| 3| 7| / | 70|
| 3| 8|0 | 70|
| 3| 8| / | 80|
| 3| 9|0 | 80|
| 3| 9| / | 90|
| 3| 10|0 | 90|
| 3| 10| / | 100|
**Results Game 4**:
> I'm putting this in a preformatted text block so you can see the formatting. Markdown tables aren't doing
> this justice.
```
|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 4| 1|7 | 7|
| 4| 1| 2 | 9|
| 4| 2|X | 28|
| 4| 3|5 | 33|
| 4| 3| 4 | 37|
| 4| 4|8 | 45|
| 4| 4| 1 | 46|
| 4| 5|X | 65|
| 4| 6|6 | 71|
| 4| 6| 3 | 74|
| 4| 7|9 | 83|
| 4| 7| 0 | 83|
| 4| 8|X | 102|
| 4| 9|7 | 109|
| 4| 9| 2 | 111|
| 4| 10|X | 121|
| 4| 10| 10 | 131|
| 4| 10| 8 | 139|
```
I narrowed the columns a little to make it easier to follow. In the final query, I would include all the columns
to let the client decide how to display it.
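The `cumulative_score` column is just a running sum over the per-roll scores. For the perfect game, that's easy to see in Python terms (an illustration, not part of the post):

```python
from itertools import accumulate

# Per-roll scores for the perfect game, as emitted by the `scores` CTE:
# frames 1-9 score 30 each; the 10th frame's three rolls score 10 each.
roll_scores = [30] * 9 + [10] * 3
running = list(accumulate(roll_scores))
```

`running` ends at 300, matching the Game 1 table: 30, 60, ..., 270, 280, 290, 300.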
## Conclusion
If you wanted to use this from a Business Intelligence (BI) tool, you should create a view for each CTE and then query the final view.
Should bowling be calculated in SQL? I'll leave that up to you. It was a fun exercise to figure out the logic and I hope you enjoyed the ride!
---
# Karaoke
Date: 2024-09-24
URL: https://ddrscott.github.io/blog/2024/tts-comparision/
Tags: life
Description: What it's like when robots campaign for your Site Reliability position
I've been doing a lot of comparisons between Text to Speech (TTS) engines recently and ran into a post
from [Julien Bicknell](https://www.linkedin.com/in/juliensydney) who had ChatGPT "write a job ad in the style of Donald
Trump". I couldn't resist passing the generated script through to the Text to Speech engines in front of me.
**WARNING**: This is NOT a comprehensive analysis.
I'm simply posting these recordings so I don't forget about them and can listen to them in the future for a laugh. If someone wants a thorough analysis, make me an offer. In the meantime, enjoy the robots trying to get an SRE job.
## Audio Outputs from Various Vendors
- OpenAI - Nova
- Google Vertex AI - Casual K
- Google Vertex AI - Journey-O
- Play.HT - Play v3
- Microsoft Copilot
> Listen, folks, I’ve talked to many, many people, and they all agree – we need the BEST Site Reliability Engineers.
>
> This is going to be the most incredible team, and we’re going to win. No one knows more about uptime and reliability than we do – nobody.
>
> What you’ll do: We’re talking cloud infrastructures, folks – AWS, Azure, GCP – you know it better than anyone. Automation? We’re going to automate like never before. Monitoring? It’s going to be beautiful. We’re going to watch everything, 24/7, and it’s going to work. Some people don’t know how to do this, but we do.
>
> You’ll need:
> Cloud platforms experience. AWS, Azure, Google Cloud – we’re the best with all of them.
> Automation skills. We’re talking Ansible, Terraform, Python – and folks, we automate more than anyone, believe me.
> Kubernetes expertise – everyone’s talking about it, folks. And we’re doing more with it than anyone.
> Monitoring tools like Prometheus and Grafana. We see everything – everything.
> CI/CD pipelines? You build them, you run them, you win with them.
>
> Some people say it can’t be done. But let me tell you something – it can. And we’re going to do it better than anyone ever has. We’re going to make reliability so good, you’ll get tired of being reliable!
>
> Here’s the deal:
> Tremendous salary – no one pays better, believe me.
> Amazing benefits. You’ll say, “Mr. SRE Manager, I’ve never seen benefits like these before!”.
> Work from the best places – remote, flexible, we’ve got it all. No one works better.
>
> Folks, this is the kind of role where people are talking about it.
>
> I was at a debate the other day, and let me tell you, everyone’s talking about how important it is to keep systems running. We’re going to bring back uptime. We’re going to bring back stability. And we're going to make systems so reliable, you’ll get tired of being reliable.
>
> Believe me!
>
---
# Karaoke
Date: 2024-09-23
URL: https://ddrscott.github.io/blog/2024/karaoke-podcast/
Tags: life
Description: Surprisingly Good AI-Generated Podcast Summary of Karaoke Article
This podcast was created using [Google's Notebook LM](https://notebooklm.google.com)
I fed in my Karaoke article and these were the results.
[Audio File](https://cdn.dataturd.com/audio/karaoke-article.mp3)
## Feedback
### Pros
- The speakers are lively and engaging.
- Interesting conclusions
- Good storytelling
- Unique perspective on the original article, which gives me, the author, some feedback on how readers might interpret the post.
### Cons
- Misinterpreted the complications I had with the "middle" channel
- Wrong conclusion about the contest. I did share that he got a silver medal.
## Text Transcription (provided by [AssemblyAI](https://www.assemblyai.com))
**Alex**: Ever been listening to music and think, man, I wish I could take the lead singer out of this so I could sing it at karaoke night.
**Pat**: Happens all the time.
**Alex**: Well, that's what we're diving into today. We found this blog post by a self proclaimed, get this, recovering ten x developer, okay. Who used his coding powers to, well, take karaoke night to a whole new level.
**Pat**: Wow.
**Alex**: Helping his son prep for a singing competition using AI. The blog post is called Karaoke y. Scott. W h y. Catchy.
**Pat**: I like it.
**Alex**: So Scott son has this competition. They need to isolate vocals from these pre recorded tracks.
**Pat**: Okay.
**Alex**: And, I mean, I guess most people would just try to find a karaoke version, right?
**Pat**: Or use some audio software, you know, try to minimize the vocals.
**Alex**: Yeah, just like reduce the vocal track or something.
**Pat**: Exactly.
**Alex**: But not Scott. This guy goes all out.
**Pat**: Oh, I see this coming. I can tell.
**Alex**: He's like, if it's worth doing, it's worth overdoing.
**Pat**: Love it.
**Alex**: Which I thought was hilarious. So he decides to, you know what? I am going to build a whole karaoke system from scratch using AI, of course.
**Pat**: Of course. Why not? Why not?
**Alex**: And it's interesting because it is a good example of why those simpler solutions, they don't always work, especially for something. If you want to be serious about it, if you have a vocalist who actually wants to use this to train.
**Pat**: Exactly. And those traditional methods, they kind of rely on just removing certain frequencies, which for casual singing along, it's fine. But if you really want to hear yourself, it just makes everything muddy. It's not good.
**Alex**: So Scott jumps into this whole world of AI music source separation with this tool called splitter. Have you heard of this?
**Pat**: Yeah. So spleeter uses a neural network, basically, it's trained on tons and tons of songs to learn to identify and separate all the different parts of the music.
**Alex**: Okay.
**Pat**: To vocals, instruments, even drums.
**Alex**: Wow. So we're talking about Scott, who's a coder, not like an audio engineer or anything, right? Diving into collab, notebooks, APIs, static site building, all to just, you know, wow, give his son a shot.
**Pat**: That's the part I think is so fascinating. Right?
**Alex**: Yeah.
**Pat**: Not even just the tech, but that he was willing to just go for it. Like it's such a specific and kind of personal goal.
**Alex**: Totally.
**Pat**: I don't know, it just shows you how accessible these tools are becoming. That anyone can use them.
**Alex**: Yeah. And that you can use them for such, you know, exactly. Weird, specific things.
**Pat**: Totally.
**Alex**: Okay. But like, anything involving AI, it's not.
**Pat**: Always, you know, it's not gonna be easy.
**Alex**: Smooth sailing. And Scott's actually really honest about, like, the challenges he ran into.
**Pat**: Of course. Yeah. I mean, it can be amazing, this AI stuff, but it still needs that human touch.
**Alex**: Yeah.
**Pat**: You know, especially if you want something that sounds really polished.
**Alex**: Right. It can't do everything for you.
**Pat**: Not yet, at least.
**Alex**: Yeah. One of the things that tripped him up was this whole thing, this middle channel.
**Pat**: Oh, yeah.
**Alex**: What is that? And why did it, like, totally derail his whole AI karaoke machine?
**Pat**: Okay, so the middle channel. Imagine listening to a song, right? Like, all the instruments and the vocals, they're kind of laid out in front of you, almost like a 3d map of the sound.
**Alex**: Okay.
**Pat**: And the middle channel is that center part. It's where you'll find a lot of the stuff that blends everything together.
**Alex**: Okay.
**Pat**: It's not just about, like, panning things to the center. It's reverb vocal harmonies, instruments that are kind of meant to glue the whole mix together.
**Alex**: I'm picturing it.
**Pat**: You might not really notice them on their own, but they really add to the overall feel. And that's where these AI tools, like splitters, that's where they can sometimes struggle. Cause it all gets a little intertwined with the main vocals in that middle channel.
**Alex**: So it's like you baked a cake, and now you're trying to take out the sugar.
**Pat**: Yeah, exactly. Good luck with that.
**Alex**: Yep.
**Pat**: So that's what happened with Scott. The AI was doing its thing, doing great, but it really needed a human to come in and fine tune to make sure those vocals were, you know, crystal clear. Especially for a competition.
**Alex**: Yeah, for a competition, you need it to be perfect.
**Pat**: Right.
**Alex**: And he actually says in the blog that the final product was, like, 80% AI generated. 20% human stitched.
**Pat**: Yeah.
**Alex**: Which, I don't know, that just seems like AI these days, you know?
**Pat**: Totally.
**Alex**: It can do a lot, but it still needs us.
**Pat**: Yeah. It's like it can do the heavy lifting, but we still need to, you know, kind of guide it.
**Alex**: It's like when I'm prepping a presentation.
**Pat**: Okay.
**Alex**: Like, I might use AI to help me get my thoughts down.
**Pat**: Yeah.
**Alex**: Maybe even design some slides or whatever. But it's still me, you know? It's my voice.
**Pat**: Exactly.
**Alex**: My perspective that makes it all work together.
**Pat**: It's that human element, not just in the final product, but in the whole process.
**Alex**: Yeah.
**Pat**: Think about Scott. I mean, diving into all that new tech. He wouldn't have learned all that if he just downloaded a karaoke track.
**Alex**: That's a really good point. So, okay, all this tech talk, did it actually work? Did his son nail the audition? Was it a success?
**Pat**: Well, that's the thing. He doesn't actually say if he won or not.
**Alex**: Oh, really?
**Pat**: He kind of leaves us hanging, so.
**Alex**: We don't know if it was a standing. After all that work, he really focuses.
**Pat**: More on the journey, what he learned along the way.
**Alex**: Interesting.
**Pat**: I think it says a lot about how the project was about more than just winning. It was about supporting his kid, doing something cool with technology, maybe learning something new himself.
**Alex**: Yeah, yeah, for sure.
**Pat**: And speaking of which, his big takeaway from all this is 99% practice, 1% technology, which I love, because it's so easy to get caught up in the fancy new thing.
**Alex**: Totally.
**Pat**: But at the end of the day, whether you're singing or coding or, you know, podcasting, it's all about putting in the time, the work.
**Alex**: He even jokes that next time, instead of building a whole karaoke system, he'll just sneak into his son's teacher's suitcase and record practice directly. Well, there you go, hiding in a suitcase. I mean, talk about extreme lengths for your kid's karaoke career.
**Pat**: That's dedication for you. Maybe a little over the top.
**Alex**: He's got a good sense of humor about it, though.
**Pat**: Definitely.
**Alex**: Yeah.
**Pat**: But it's a cool story, right?
**Alex**: Yeah, it really shows how technology can bring families together. You know, like how many dads would build a whole AI karaoke system just for their kid? It's pretty sweet.
**Pat**: Totally. And it's easy to focus on the big, impressive AI stuff, but I think it's these smaller stories that are more interesting.
**Alex**: Yeah.
**Pat**: More personal.
**Alex**: It's not even just about the end result, you know?
**Pat**: No, exactly.
**Alex**: Like all that learning and problem solving, the whole process can be just as important.
**Pat**: I think it goes back to what we were saying about AI and human input, that 80 20 split. AI isn't going to replace us. It's more like it helps us do things we couldn't do before.
**Alex**: And it's like Scott said, 99% practice, 1% technology. You still got to put in a work.
**Pat**: It's a good reminder for all of us.
**Alex**: It is. So I think this deep dive taught us a lot more than how to make a karaoke track. It's about using technology in creative ways.
**Pat**: Definitely, and not being afraid to experiment, even if it seems impossible.
**Alex**: And who knows? Maybe it even inspired some people listening to, I don't know, dust off their old karaoke machines or maybe build their own or build their own. There you go.
**Pat**: That's a wrap.
---
# Karaoke
Date: 2024-09-19
URL: https://ddrscott.github.io/blog/2024/karaoke/
Tags: life
Description: How to leverage technology to win at singing!
TL;DR - Karaoke!
My 10-year-old had a singing competition coming up in 2 weeks. He had two songs to perfect, and one was in Latin. His teacher provided an audio recording of herself performing to use as a reference when she wasn't around. The recording was good enough, but I thought we could do better. Karaoke!
Traditional karaoke machines perform brute-force audio filtering by clipping middle frequencies, clipping identical frequencies found in the left and right channels, or removing the center channel, but that's not good enough for my son. I need AI magic!
> If it's worth doing, it's worth overdoing!
> - Me
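For reference, the center-channel trick those machines rely on is tiny. A toy sketch in pure Python (samples as floats; this is the approach the post rejects, not the one it uses):

```python
def remove_center(left, right):
    """Cancel anything mixed identically into both channels
    (typically the lead vocal) by subtracting right from left."""
    return [0.5 * (l - r) for l, r in zip(left, right)]
```

Identical channels cancel to silence, but anything panned off-center survives, and reverb tails and harmonies get mangled along with the vocal, which is exactly why the result sounds muddy.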
## Get It Done!
1. **Get a recording with accompaniment of each song.** ✅ (His teacher sent this before the project started)
2. **Find an open-source project that uses AI to segment music files.** [Spleeter](https://github.com/deezer/spleeter) ✅
3. **Create a notebook in Google Colab to take the library out for a spin.** ✅
4. **Glue the separate wav files back together into a single MP3 file.** ✅
5. **Create an API for the project.** (Screenshot below. I can't publish it since it costs $$$) ✅
6. **Create a [static site audio player](https://ddrscott.github.io/prompt-games/singing) that allows my son to mix in or out vocals as needed.** ✅
7. **PRACTICE!**
## Lessons Learned
The whole project stemmed from the thought: what can AI do for me? It taught me about the middle channel and filtering techniques. It tried its best to implement some Python code using [librosa](https://librosa.org/doc/latest/index.html), but the output was never satisfactory. A simple web search turned up the [Spleeter project](https://github.com/deezer/spleeter), and ChatGPT was fairly good at implementing snippets I could copy/paste.
I was impressed with its implementation of the JavaScript visualization in the audio player. It used [Flexbox](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_flexible_box_layout) predictably, but I had to interject to get the player on the bottom as a different component. The audio slider to mix the vocals and accompaniment was not trivial, and AI failed many, many times. I ultimately had to read the JS Audio API to figure out the nuances.
Overall, it was probably 80% AI-generated and 20% human-stitched and corrected. Time-wise, I think it would have been a wash regardless of AI. AI forced me to spend time troubleshooting unfamiliar code vs. traditionally reading unfamiliar docs.
- **Useful:** 5 of 5
- **Productizable:** 3 of 5
- **Bug-Free:** 2 of 5
- **Fun:** 5 of 5
## Screenshots
### Frontend for audio splitter
_I can't provide the URL since it costs $$$ to keep it alive_
**TL;DR** - The goal is to ~~understand how~~ make a website. Enter Github and Codespaces!
## Steps
### Create a Github Account
I made one for my friend: TheChickenCow.
### Start a Codespaces
Use `Codespaces` instead of `Create Repository`.
We'll eventually use Codespaces to push to the repository. Otherwise, it'll take more steps to import the repository into Codespaces. Trust us!
### Create the Content for the Website
#### docs/index.html
This is the primary content of the webpage.
```html
<!DOCTYPE html>
<html>
  <head>
    <title>My First Website</title>
  </head>
  <body>
    <!-- Minimal starting point; replace with your own content -->
    <h1>Hello, Internet!</h1>
  </body>
</html>
```
All files in the project will be selected automatically; hit `OK` to confirm the upload.
### Open on Github
Notice the notifications at the bottom of Codespaces. It will show the repository has been published to Github.
Select `Open on Github` to see the source code.
### Publish the Final Site
1. Select the `Settings` tab near the top.
2. Select the `Pages` tab in the left sidebar.
3. In the Branch section, choose `main`
4. Next to it, choose `/docs`
5. Finally, click `Save`
The publish will kick off a deployment Action.
### Wait About 2 Minutes
The initial deployment of the site takes about 2 minutes.
Actions are observable in the top `Actions` tab.
1. Select `Actions` from the top tab bar.
2. Select `pages build and deployment`
### Get the Link
When the deployment is finished, the `deploy` step
will reveal the final website link that you can share with friends
and family or the rest of the Internet.
https://ddrscott.github.io/dune-scroller/
## Conclusion
We hope this step-by-step tutorial helped give you a start on your web development journey. We know it probably gave you more questions than answers, but the goal was to get you started and have a site at the end.
To learn more about Codespaces try their documentation: https://docs.github.com/en/codespaces
### Languages Used
- html
- css
- javascript
Learn about all 3 from https://www.w3schools.com/default.asp
## Connect
Let us know what you're working on, publishing, or stuck on in the comments below. Happy coding!
---
# Larry Pierce, aka Dad
Date: 2023-02-01
URL: https://ddrscott.github.io/blog/2023/larry-pierce-aka-dad/
Tags: life
Description: Lawrence Roy Pierce age 76, of Lake in the Hills, Illinois passed away on February 1, 2023, at Northwestern Medicine - Huntley Hospital.
```sh
cowsay "Best Dad I ever had. I'll miss you"
# ____________________________________
# < Best Dad I ever had. I'll miss you >
# ------------------------------------
# \ ^__^
# \ (oo)\_______
# (__)\ )\/\
# ||----w |
# || ||
```
## Computers Are Easy, Feelings Are Hard
Today, I received the news my Dad had passed away.
Two days ago he was hospitalized for falling.
When he retired 10 years ago, he got his first computer and a non-work email address so we could email him pictures from time to time.
He stayed on top of the news and current events through the TV and was just getting used to reading articles on [MSN](https://www.msn.com).
He used a landline as his primary number and had a voicemail machine.
His favorite feature of the landline was how the Caller-ID showed up on the TV before he answered it.
He always knew where the phone was located because it was attached to the wall.
Phones may be smart, but no phone is wiser than Dad.
## Humor
My Dad loved to laugh, but couldn't tell a joke.
He had a boisterous one-of-a-kind laugh that went higher pitch during the inhale phase. (Note to self: turn his laugh into a ringtone)
He was a literal person that kept things simple.
All my jokes would go over his head. Sometimes he would look up to catch them.
## Adoption
We can't choose our parents, but my Dad chose me. I was 4 years old at the time, and after logistics, I finally came over at 5. He chose my sister, too.
Here we are around 1985.
He would take us bike riding around [Lake Opeka in Des Plaines, IL](https://web.archive.org/web/20230205141050/https://www.dpparks.org/parks-facilities/lake-park/). We would feed the ducks with stale bread. He'd encourage us to be outside by being outside with us.
## Fiscally Responsible
My Dad loved to save money. He kept the same wardrobe from his 40s and would only replace articles when they were overworn, never having more than what could fit into a small closet. I believe his only real source of new clothing was gifts.
He would try to fix everything himself and with moderate success. Our Internet was "The Readers Digest Fix-It Yourself Manual" and "Complete Do-It Yourself Manual".
Our Youtube was [This Old House hosted by Bob Vila](https://en.wikipedia.org/wiki/This_Old_House).
## Hard Worker
My Dad was a hard worker. He got up early and came home late. He went back to school after realizing a B.A. in Political Science was a dead end. He worked at companies until they [closed down](https://en.wikipedia.org/wiki/A._B._Dick_Company) or he [retired](https://www.packaging-by-design.com). Loyal to the end, he even came out of retirement to help train the next generation.
## Retirement
In his retirement, he preferred to stay around town, visit grand kids, make sandwiches, and catch up on TV shows, movies, and the news. A simple man with simple pleasures.
Here we are 30+ years later in Harvard, IL:
And here's the final picture I have with him and Morgan.
## Timeline
* 1946-11-15 - Born
* 2002-12-31 - Baptized in the name of the Father, Son, and Holy Spirit
* 2023-02-01 - Meets Jesus
## Official Obituary
Lawrence Roy "Larry" Pierce age 76, of Lake in the Hills, Illinois passed away on February 1, 2023, at Northwestern Medicine – Huntley Hospital.
Larry was a supportive loving father that enjoyed hikes, bike riding and traveling. He lived a simple life and indulged in simple pleasures like books and movies. Although he couldn't tell a joke, he had a one-of-a-kind laugh.
He was born in Chicago, Illinois on November 15, 1946, the loving son of Archie Roy and Grace (Thoring) Pierce. Larry is survived by his children, Scott (Karen) Pierce, Mark Pierce, and Julie Pierce; his grandchildren, Nathan, Morgan, Jeremy, and Kaili; his brother, Steve (Nancy) Pierce.
## Not The End
This is only one side of my Dad's story and from one narrow perspective.
I'm planning on having a more historical/objective write up soon.
But in the meantime, please comment if Larry has impacted your life in any way.
Cheers to the best Dad I ever had!
---
# Wordy Passwords
Date: 2023-01-05
URL: https://ddrscott.github.io/blog/2023/wordy-passwords/
Tags: security
Description: How to Generate a Wordy Password
## TL;DR
```sh
grep -E '^[a-z]{3,}$' /usr/share/dict/words | shuf -n4 | paste -sd- -
```
## Intro
It is 2023 and passwords are still a thing. They are hard to remember and easy to crack.
Many sites talk about using words instead of letters, numbers and symbols:
- https://www.useapassphrase.com/
- https://www.nexcess.net/web-tools/secure-password-generator/
- https://xkpasswd.net/s/
Most of them were inspired by this comic:
We recently needed to generate hard to guess, but recognizable URLs to share
some data analysis with clients. We also planned to speak the URLs to our clients via video conferencing.
GUIDs (Globally Unique Identifiers) are nearly impossible to comprehend and speak accurately:
`f95b9390f8fe5d7a6aecd9d563b5f8a5` and `b987370e-6906-498b-a5d4-84c1c92c6e64`,
so Wordy Passwords™ (TM cause you heard it here first!) came to mind to meet all the requirements.
There are sites that can generate these things which we could scrape as needed, but we thought there's
got to be a simpler way.
## Simpler Way
If you're on Mac or Linux, the simpler way is already in your terminal.
```sh
grep -E '^[a-z]{3,}$' /usr/share/dict/words | shuf -n4 | paste -sd- -
```
Let's explain:
- `/usr/share/dict/words` is a file with a list of all English words, about 100k of them.
- `grep -E '^[a-z]{3,}$' /usr/share/dict/words` filters for lowercase words with 3 or more letters.
- `shuf -n4` picks 4 random entries from the filtered list. Change `4` to whatever is needed for your use case.
- `paste -sd- -` joins all the lines with `-` symbol. This can also be changed to `-sd ' '` for spaces or `-sd ','` for commas, etc.
Note, the last `-` tells the command to read from `/dev/stdin` and is necessary on Mac.
Use `man grep`, `man shuf`, or `man paste` to get additional details about each command.
Example Output:
- `mentions-grouses-datelines-mouses`
- `cellulite-sukiyaki-ardner-rusted`
- `naturalness-indicate-dependent-beset`
- `irony-excursion-flashily-crawl`
We can then use these in our data generation URLs to get something like:
- `https://your.dataturd.com/mentions-grouses-datelines-mouses`
- `https://your.dataturd.com/cellulite-sukiyaki-ardner-rusted`
- `https://your.dataturd.com/naturalness-indicate-dependent-beset`
- `https://your.dataturd.com/irony-excursion-flashily-crawl`
In case `/usr/share/dict/words` doesn't exist, try installing with `apt install wamerican`.
To pick another language try `apt search wordlist` which will return a bunch of other options for specific languages.
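The one-liner can also be wrapped in a small shell function for reuse (a sketch; `wordy` and its arguments are names we're inventing here, and the second argument lets you point at any word list):

```shell
# wordy [count] [wordlist] -- print COUNT random words joined by dashes.
wordy() {
  count="${1:-4}"
  words="${2:-/usr/share/dict/words}"
  grep -E '^[a-z]{3,}$' "$words" | shuf -n "$count" | paste -sd- -
}
```

Drop it in your `.bashrc` or `.zshrc` and run `wordy 5` for a five-word password.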
### Windows PC
If you're in Windows, we'd recommend using [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
This will provide access to a Linux shell which has all the aforementioned commands.
If you're not sure where to type in these fancy commands, try searching for "learn linux terminal". We'll eventually post additional
articles regarding our love for the terminal and why commands like these keep us in it.
## Alternative Word List
We experimented with other word lists. [doyle.txt](https://www.dataturd.com/words/doyle.txt) contains words extracted from books
featuring Sherlock Holmes. Source code for how the word list was built is located at https://github.com/ddrscott/wordy-passwords .
Feel free to search the Internet for other word lists.
## Closing
The terminal is a magical place where mountains of code can be replaced with a few razor sharp commands.
`shuf` and `paste` can make short work of many tasks. Let us know what you come up with in the comments below!
---
# How to Setup Airflow in its own Kubernetes namespace using Helm
Date: 2022-01-01
URL: https://ddrscott.github.io/blog/2022/how-to-airflow-with-helm/
Create a Helm values file to contain settings specific to your needs.
The main settings we listed demonstrate a few things:
1. pull dags from a git repository
2. enable Airflow API endpoints
3. enable Airflow to use KubernetesPodOperator on itself
4. use simple database setup
5. disable Redis and enable KubernetesExecutor
```yaml
git:
  dags:
    enabled: true
    repositories:
      - repository: https://ro_user:seeeecret@bitbucket.org/some-project/dags-repo.git
        name: some-project
        branch: main
scheduler:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
web:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
    - name: AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX
      value: "True"
    - name: AIRFLOW_BASE_URL
      value: https://airflow.example.com
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.default"
    - name: AIRFLOW__API__ENABLE_EXPERIMENTAL_API
      value: "True"
worker:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
# begin https://artifacthub.io/packages/helm/bitnami/airflow#kubernetesexecutor
executor: KubernetesExecutor
redis:
  enabled: false
serviceAccount:
  create: true
rbac:
  create: true
# end https://artifacthub.io/packages/helm/bitnami/airflow#kubernetesexecutor
postgresql:
  # standalone is the default, but it's nice to call it out in case we forget.
  architecture: standalone
```
Install it using Helm
```sh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install --namespace funland \
airflow bitnami/airflow \
--version 12.0.14 \
--values deploy/h2m-airflow-prod.yml
```
Get the admin password
```sh
echo $(kubectl get secret --namespace "funland" airflow -o jsonpath="{.data.airflow-password}" | base64 --decode)
```
## References
- https://docs.bitnami.com/kubernetes/infrastructure/apache-airflow/
- https://artifacthub.io/packages/helm/bitnami/airflow
---
# How to use Kubernetes Namespaces and Service Accounts for Fun and Profit
Date: 2022-01-01
URL: https://ddrscott.github.io/blog/2022/how-to-k8s-service-account/
We'll create a namespace named `funland` and show how to use a token to access it.
## Create a Namespace
```sh
kubectl create namespace funland
```
## Create a Service Account
This creates a new service account, an "owner" role, and assigns the owner to the new service account _and_ to the `default`
service account.
```sh
kubectl -n funland apply -f- <
```
**Problem**:
The Sales Team would like a list of people's names with their regions.
Write a query that correlates each person to their zip code's `region`.
!!! success "Solution"
```sql
SELECT
    people.id,
    people.name,
    people.zip_code,
    zip_codes.region
FROM people
JOIN zip_codes ON zip_codes.code = people.zip_code;
```
| id | name | zip_code | region
|----|-------|----------|--------
| 1 | Candy | 60001 | North
| 2 | Mandy | 70001 | South
| 3 | Randy | 80001 | East
**Problem**:
List all the people who belong to an unknown zip code.
!!! success "Solution"
```sql
SELECT
    people.id,
    people.name,
    people.zip_code,
    zip_codes.region
FROM people
LEFT JOIN zip_codes ON zip_codes.code = people.zip_code
WHERE zip_codes.code IS NULL;
```
id | name | zip_code | region
-- | ---- | -------- | ------
4 | Andy | 89991 |
### JOIN Notes
- `INNER JOIN` and `JOIN` are interpreted identically. The latter saves typing 6 characters.
- `INNER JOIN` is great for finding correlation, but bad for finding unmatched records.
- `LEFT JOIN` tells the query engine we want to return all data on the `LEFT` or earlier tables even though the correlated value doesn't exist.
- `RIGHT JOIN` is the opposite of `LEFT`. It retains rows from tables mentioned later in the query.
- Mixing `LEFT JOIN` with `RIGHT JOIN` in the same query should be avoided.
- The `WHERE` condition states we only want the uncorrelated rows, the `people` missing a `zip_codes` record.
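The anti-join pattern from these notes is easy to verify locally. Here's a sketch using Python's built-in `sqlite3` with the same sample data (the table definitions are our reconstruction from the result sets above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE people (id INTEGER, name TEXT, zip_code TEXT);
    CREATE TABLE zip_codes (code TEXT, region TEXT);
    INSERT INTO people VALUES
        (1, 'Candy', '60001'), (2, 'Mandy', '70001'),
        (3, 'Randy', '80001'), (4, 'Andy', '89991');
    INSERT INTO zip_codes VALUES
        ('60001', 'North'), ('70001', 'South'),
        ('80001', 'East'), ('90001', 'West');
""")

# LEFT JOIN keeps every person; the WHERE filters down to the unmatched ones.
rows = con.execute("""
    SELECT people.id, people.name
    FROM people
    LEFT JOIN zip_codes ON zip_codes.code = people.zip_code
    WHERE zip_codes.code IS NULL
""").fetchall()
```

Only Andy, whose zip code has no `zip_codes` row, should come back.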
## UNION Problems
The `UNION` clause allows us to conform 2 or more unrelated results into a single uniform result set.
**Problem**
The search department wants a list of all keywords for their search index.
Return a list of keywords from both the people and the zip code tables.
!!! success "Solution"
```sql
SELECT
    name AS keyword
FROM people
UNION
SELECT
    region
FROM zip_codes;
```
| keyword |
| --------- |
| East |
| North |
| South |
| Mandy |
| Candy |
| Randy |
| Andy |
| West |
**Problem**
The search department likes the results, but wants to be able to correlate the row back to its source table and row.
!!! success "Solution"
```sql
SELECT
    'people' AS source,
    id::varchar AS key,
    name AS keyword
FROM people
UNION
SELECT
    'zip_codes',
    code,
    region
FROM zip_codes
ORDER BY 1, 2, 3;
```
source | key | keyword
-----------|-------|---------
people | 1 | Candy
people | 2 | Mandy
people | 3 | Randy
people | 4 | Andy
zip_codes | 60001 | North
zip_codes | 70001 | South
zip_codes | 80001 | East
zip_codes | 90001 | West
### UNION Notes
- The `UNION` clause requires all the columns from its sub queries to be the same number and type. This is why `::varchar` casting was added.
- `UNION` defaults to only returning distinct rows between the result sets. If you want all rows regardless of duplicates try `UNION ALL`.
- `ORDER BY` with `UNION` works on the final results. When ordering is needed within a sub query use parenthesis. Ex: `UNION (SELECT ... ORDER BY ... LIMIT 1)`
## Bonus Problem
Return the people with invalid zip codes __without__ using `JOIN`.
Hints:
- `EXCEPT` is the inverse of a `UNION`. `EXCEPT` removes the results from the source query instead of appending them.
- `IN` can be used as a matching clause in a `WHERE` filter.
!!! success "Solution"
```sql
SELECT
    people.*
FROM people
WHERE zip_code IN (
    SELECT
        zip_code
    FROM people
    EXCEPT
    SELECT
        code
    FROM zip_codes
);
```
id | name | zip_code
----|------|----------
4 | Andy | 89991
## Closing
!!! success "Clean Up"
```sql
DROP TABLE people;
DROP TABLE zip_codes;
```
`JOIN` and `UNION` are an essential part of a SQL user's tool belt. On a day to day basis, `JOIN` is used more often than `UNION`, so we would recommend understanding it first. `UNION` gets its power when we realize our data is more alike than originally intended. Both clauses should be withheld at dinner parties and other social events.
Happy SQL-ing!
---
# Move to MkDocs
Date: 2018-05-04
URL: https://ddrscott.github.io/blog/2018/move-to-mkdocs/
Tags: blog
I've upgraded to [MkDocs][] from [Octopress][]. The [Octopress][] installation I've been using was from 2014 (4 years old) and was missing features for search, long form writing, multi-part posts, and books.
!!! note "http://www.mkdocs.org/"
MkDocs is a **fast**, **simple** and **downright gorgeous** static site
generator that's geared towards building project documentation. Documentation
source files are written in Markdown, and configured with a single YAML
configuration file.
## Features
To see all the features of [MkDocs][] and [mkdocs-material] jump to their respective sites.
Here are the features I really care about:
[octopress]: http://octopress.org/
[mkdocs]: http://www.mkdocs.org/
[pandoc]: http://pandoc.org/
[mkdocs-material]: https://squidfunk.github.io/mkdocs-material/
## Transition
The migration process wasn't as easy as I'd hoped. I wanted all the old posts to move over, but there wasn't an easy path for my content prior to 2014. That's because some of those posts are in pure html and [MkDocs][] only wants to deal in Markdown. I tried [Pandoc][], but that created Markdown which isn't compatible with [MkDocs][]. It'll take some time before all that old stuff gets through. Fortunately, because it's _so_ old, most of the material isn't relevant anymore and gets very few hits.
[MkDocs][] doesn't build the site hierarchy automatically like [Octopress][]. [Octopress][] stores all articles in a flat directory with file names prefixed with a date. [MkDocs][] uses the file name as part of the URL, so I needed to rename the [Octopress][] files and put them in a directory structure to match the URL format I had in the original site.
**Octopress Posts Directory**:
```
├── source
│   ├── _posts
│   │   ├── 2016-05-28-negative-modulo.markdown
│   │   ├── 2016-07-11-photography-lightening-talk.markdown
│   │   ├── 2016-07-26-ansi-codes-with-character.markdown
│   │   ├── 2017-02-07-how-to-get-better-at-anything.markdown
│   │   ├── 2017-03-08-what-the-sql-lateral.markdown
│   │   ├── 2017-03-15-what-the-sql-recursive.markdown
│   │   ├── 2017-03-22-what-the-sql-window.markdown
│   │   ├── 2017-04-10-vim-send-text.markdown
│   │   ├── 2017-04-13-base16-shell.markdown
│   │   ├── 2017-06-01-gnu-screen.markdown
│   │   ├── 2017-06-12-fzf-dictionary.markdown
│   │   ├── 2018-03-04-getting-rusty-with-vim.markdown
│   │   ├── 2018-03-12-stream-stats-in-rust.markdown
│   │   └── 2018-03-22-blog-setup.markdown
```
**MkDocs Docs Directory**:
```
├── docs
│   ├── blog
│   │   ├── 2016
│   │   │   ├── negative-modulo.markdown
│   │   │   └── yank-without-jank.markdown
│   │   ├── 2017
│   │   │   ├── base16-shell.markdown
│   │   │   ├── fzf-dictionary.markdown
│   │   │   ├── gnu-screen.markdown
│   │   │   ├── how-to-get-better-at-anything.markdown
│   │   │   ├── vim-send-text.markdown
│   │   │   ├── what-the-sql-lateral.markdown
│   │   │   ├── what-the-sql-recursive.markdown
│   │   │   └── what-the-sql-window.markdown
│   │   └── 2018
│   │       ├── blog-setup.markdown
│   │       ├── getting-rusty-with-vim.markdown
│   │       ├── move-to-mkdocs.markdown
│   │       └── stream-stats-in-rust.markdown
```
After getting the files to the correct directory, the site hierarchy needs to be configured in `mkdocs.yml`. That file contains much more than the page structure. You'll need to read more about it from their [documentation][mkdocs].
**mkdocs.yml**:
```yaml
# .. stuff before this section ..
pages:
  - Home: index.md
  - Projects: projects.md
  - '2018':
      - 'Move to MkDocs': blog/2018/move-to-mkdocs.markdown
      - 'A Rustic Journey Through Stream Stats': blog/2018/stream-stats-in-rust.markdown
      - 'Getting Rusty with Vim': blog/2018/getting-rusty-with-vim.markdown
      - 'Dev Blog Tools :: A Quick Tour of My Setup': blog/2018/blog-setup.markdown
  - '2017':
      - 'Base16 Shell': blog/2017/base16-shell.markdown
      - 'How to Get Better At Anything': blog/2017/how-to-get-better-at-anything.markdown
      - 'FZF + WordNet = Dictionary': blog/2017/fzf-dictionary.markdown
      - 'GNU Screen': blog/2017/gnu-screen.markdown
      - 'What the SQL?!? Lateral Joins': blog/2017/what-the-sql-lateral.markdown
      - 'What the SQL?!? WINDOW': blog/2017/what-the-sql-window.markdown
      - 'What the SQL?!? Recursive': blog/2017/what-the-sql-recursive.markdown
      - 'Vim Send Text': blog/2017/vim-send-text.markdown
# .. more stuff ..
```
Once `mkdocs.yml` is configured we can run:
```sh
# start a web server to preview the site
mkdocs serve
# or generate the site files for deployment
mkdocs build
# or deploy to GitHub Pages
mkdocs gh-deploy
```
## Conclusion
Here's a breakdown of the pros and cons that I've experienced so far:
### Pros
+ Full site search without using an external service (Google, Algolia, etc.)
+ Site hierarchy.
+ Automatic table of contents on all pages.
+ Various code blocks and annotations.
+ Less magic.
### Cons
+ Older posts don't translate well, because they have to be in Markdown format.
+ [MkDocs][] is a Python project, a language I'm not familiar with, yet.
+ Less magic.
Hopefully this is the catalyst for more content. Stay tuned!
Questions? Comments? Trolls?!? Let me know! Tweet
---
# Dev Blog Tools :: A Quick Tour of My Setup
Date: 2018-03-22
URL: https://ddrscott.github.io/blog/2018/blog-setup/
Description: Tools I use to create posts, screenshots, code snippets, gifs, screencasts, and other interesting things.
I've been asked to share about my blogging setup a few times, so in the spirit of keeping things [DRY][], it's time to make a post about it.
**TL;DR** -- Mac, iTerm, NeoVim, LICEcap, Octopress, Base16, Input Font, Skitch, OBS
> **WARNING** This is an atypical post. I normally prefer to go over a single feature, but this time I'm going to under explain a lot of features. Here's a short list of typical posts in case chaos is not your thing:
>
> + https://ddrscott.github.io/blog/2017/vim-send-text/
> + https://ddrscott.github.io/blog/2017/what-the-sql-lateral/
> + https://ddrscott.github.io/blog/2016/negative-modulo/
## Computer
I do all my writing and coding on a MacBook Pro Retina 15 inch. It's pretty maxed out, but I don't think it needs to be that way. The only stats I really care about are the size and the clarity of the screen.
## Terminal
I like to live in a terminal. I chose [iTerm2][] for its split panes and independent font size per pane. At times I wish it was as fast as the native Terminal.app, but Terminal.app doesn't support splits. I've been told many times that [tmux][] supports splits, too, but I can't change the text size independently between splits. I'd also rather use [GNU screen][screen] anyway.
> Old timers like old things™
## Editor
[NeoVim][] is always loaded in one of my terminal panes. It's the most efficient editor for me. I switched away from Vim 7 because it couldn't do background jobs. Prior to Vim I used [RubyMine][] and [IntelliJ][] which were great for code completion and navigating projects, but felt heavy for notes, free form writing, and editing system files. I'll save the rest of the Vim sermon for another post.
When composing posts, I use [vim-markdown][] to get syntax highlighting, folding, TOC, and other goodies.
My entire NeoVim config can be found in its [Github repository][config-nvim]. I don't recommend using it outright; borrow parts of it and slowly integrate them into your own setup. One size doesn't fit all. It barely fits me!
## Colors
I lived in [Solarized Dark](http://ethanschoonover.com/solarized) for many years. It is so commonplace in development shops that it became the [new green screen][] of the 70's and 80's. So when folks see yellow and green highlights against an off-black chalkboard beaming from my terminal, there's pause and self reflection. They wonder how they were nestled into a monotonous monoculture. They ponder when they traded emotional delight for ocular comfort. They realize depression encroached silently on them like mold in a damp attic. But I digress…
Here's how Base16 describes itself:
> An architecture for building themes based on carefully chosen syntax highlighting using a base of sixteen colours. Base16 provides a set of guidelines detailing how to style syntax and how to code a builder for compiling base16 schemes and templates.
>
> -- https://github.com/chriskempson/base16
[base16-ocean][] is the color scheme I use. (It's scheme #74; type `j` or `k` to change the theme.) The scheme is perfectly in sync between shell and Vim due to [base16-shell][] and [base16-vim][]. I wrote a [longer post about using Base16 a while back](/blog/2017/base16-shell/).
## Font
I've been using the same code font for as long as I can remember: Input Mono Condensed.
> Input is a flexible system of fonts designed specifically for code by David Jonathan Ross. It offers both monospaced and proportional fonts, all with a large range of widths, weights, and styles for richer code formatting.
>
> -- Font Bureau
I enjoy the fancy 'a' and 'g' characters along with the easy-to-distinguish 'l', '1', and 'I'. (Helvetica can't do it justice.) It also has several different character width options so I can squeeze more code into one eye shot.
Speaking of shots…
## Screenshots
I have a few ways of taking screenshots and it's mainly the Mac way:
+ To snap a small portion of the screen, I use `⇧+⌘+4`, then select region to snap.
+ To snap a window with the shadow, I use `⇧+⌘+4`, then `space` and select a window.
[Apple's support page][apple1] has the gruesome details on both.
When I want to annotate a screenshot, I use [Evernote's Skitch App][skitch]. The featured image was created using a combination of OSX screenshot and then editing in [Skitch][skitch].
## Animated Gifs and Videos
For short demos, I like to use [LICEcap][] to record an animated Gif. I wouldn't use Gifs for anything longer than a few seconds since it doesn't support video playback controls without extra magic.
Here's an example [LICEcap][] Gif:
And here's a video of how I produced it:
For full screen video recording, I use [OBS Studio][obs]. (You might have noticed me stopping the recording at the end of the video. Should have used a hotkey.) It produces small file sizes and has a lot of features including webcam overlays, filters, transforms, etc. It deserves a whole book of its own.
> **Note to Self**
>
> 1. write book about OBS Studio
> 2. retire
## Blog Generator
I use [Octopress][octopress1] to generate the static HTML pages you're reading now. I haven't updated it since 2014 ([SHA 71e4d40b][octopress2]) and I'm terrified to do so. The setup Just Works™. I write in [Markdown][] and it does the rest.
The source code for this exact post is [here](https://github.com/ddrscott/octopress/blob/master/source/_posts/2018-03-22-blog-setup.markdown).
That being said, I'm strongly considering switching to [Mkdocs][] to have more structure and better search capabilities built-in.
## Closing
You made it to the end! I hope at least one of these will benefit your daily computing life.
Questions? Comments? Trolls?!? Let me know! Tweet
[Hacker News](https://news.ycombinator.com/item?id=16648381)
**Links**
+ iTerm2 -- https://www.iterm2.com/
+ Tmux -- https://github.com/tmux/tmux/wiki
+ GNU Screen -- https://www.gnu.org/software/screen/
+ RubyMine -- https://www.jetbrains.com/ruby/
+ IntelliJ -- https://www.jetbrains.com/idea/
+ Vim -- https://www.vim.org
+ NeoVim -- https://neovim.io/
+ vim-markdown -- https://github.com/plasticboy/vim-markdown
+ My NeoVim Config -- https://github.com/ddrscott/config-nvim
+ base16-shell -- https://github.com/chriskempson/base16-shell
+ base16-vim -- https://github.com/chriskempson/base16-vim
+ base16-ocean -- http://chriskempson.com/projects/base16/
+ Input Font -- http://input.fontbureau.com/
+ Skitch -- https://evernote.com/products/skitch
+ LICEcap -- https://www.cockos.com/licecap/
+ OBS Studio -- https://obsproject.com/
+ Octopress -- http://octopress.org/
+ Mkdocs -- http://www.mkdocs.org/
**Edits**
+ added Hacker News link
[DRY]: https://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[iTerm2]: https://www.iterm2.com/
[tmux]: https://github.com/tmux/tmux/wiki
[screen]: https://www.gnu.org/software/screen/
[rubymine]: https://www.jetbrains.com/ruby/
[intellij]: https://www.jetbrains.com/idea/
[vim]: https://www.vim.org
[NeoVim]: https://neovim.io/
[vim-markdown]: https://github.com/plasticboy/vim-markdown
[config-nvim]: https://github.com/ddrscott/config-nvim
[new green screen]: https://en.wikipedia.org/wiki/Monochrome_monitor#/media/File:IBM_PC_5150.jpg
[base16-shell]: https://github.com/chriskempson/base16-shell
[base16-vim]: https://github.com/chriskempson/base16-vim
[base16-ocean]: http://chriskempson.com/projects/base16/
[input_font]: http://input.fontbureau.com/
[apple1]: https://support.apple.com/en-us/HT201361
[skitch]: https://evernote.com/products/skitch
[licecap]: https://www.cockos.com/licecap/
[obs]: https://obsproject.com/
[octopress1]: http://octopress.org/
[octopress2]: https://github.com/imathis/octopress/tree/71e4d40ba7aef73da65936bc9a77e432609811b2
[mkdocs]: http://www.mkdocs.org/
[markdown]: https://en.wikipedia.org/wiki/Markdown
---
# A Rustic Journey Through Stream Stats
Date: 2018-03-17
URL: https://ddrscott.github.io/blog/2018/stream-stats-in-rust/
Tags: rust
Description: How I made a CLI program in Rust which combines tee, wc -l, and watch.
After playing [Guessing Game](/blog/2018/getting-rusty-with-vim/) from the [Rust Book](https://doc.rust-lang.org/book/first-edition/guessing-game.html) a few times, it was time to make something a little more substantial. We're going to create `stream_stats`, a CLI program which prints throughput statistics from `stdin` while redirecting through `stdout`. Think `tee` + `wc -l` + `watch` all at the same time.
**TL;DR** - `cargo install stream_stats`
Here is a quick demo of the program:
Today, we'll build this program up in 6 smallish steps. The minimum requirements of this program were the live feedback as seen in the demo _and_ minimal impact on the overall performance.
## Step 1 - Reproducing `cat` Inefficiently
First step is to replicate `cat`. We'll do it as demonstrated by Rust's own [documentation](https://doc.rust-lang.org/std/io/struct.Stdin.html#method.read_line).
```rust
use std::io;

fn main() {
    let mut buffer = String::new();
    while io::stdin().read_line(&mut buffer).unwrap() > 0 {
        print!("{}", buffer);
        buffer.clear();
    }
}
```
> I'm using `unwrap` to keep our program short and sweet.
Save the code as `stream_stats.rs` and build it using `rustc -O stream_stats.rs`. This will
compile the program into `stream_stats`. We can now run the program with
`./stream_stats < stream_stats.rs` or `cat stream_stats.rs | ./stream_stats`. This should output the source code we just wrote.
The program is sufficient for small streams, but will perform horribly on large files.
## Step 2 - Reproducing `cat` Efficiently with Buffering
> It can be excessively inefficient to work directly with a Read instance. For example, every call to read on TcpStream results in a system call. A BufReader performs large, infrequent reads on the underlying Read and maintains an in-memory buffer of the results.
>
> -- https://doc.rust-lang.org/std/io/struct.BufReader.html
Let's add some buffering to increase performance and get it near the speed of `cat`. Replace the contents of `stream_stats.rs` with the following, recompile, and run the program.
```rust
use std::io::{self, BufRead, BufReader, BufWriter, Write};

static READ_BUF_SIZE: usize = 1024 * 1024;

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        buffer.clear();
    }
    writer.flush().unwrap();
}
```
The exact difference is viewable on Github.
Here's a one-liner to help with the build/run cycle:
```sh
rustc -O ./stream_stats.rs && ./stream_stats < stream_stats.rs
```
For a few extra lines, we get a lot of performance. There are ways to get even more
performance, but it won't be worth the code complexity at this time.
## Step 3 - Count the Lines
We're ready to start counting lines. We'll introduce a `struct` to hold a start
time and line count.
```rust
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;
static READ_BUF_SIZE: usize = 1024 * 1024;
struct Stats {
started: Instant,
lines: usize,
}
fn main() {
let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
let mut writer = BufWriter::new(io::stdout());
let mut buffer = vec![];
let mut stats = Stats {
started: Instant::now(),
lines: 0,
};
while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
writer.write_all(&buffer).unwrap();
stats.lines += 1;
buffer.clear();
}
writer.flush().unwrap();
eprintln!("lines: {}, {:?}", stats.lines, stats.started.elapsed());
}
```
Again the exact difference is viewable on Github.
## Step 4 - Write to `/dev/tty`
Using `eprintln!` is easy, but bad practice for non-error output. The next step is moving the output to `/dev/tty`. As a reminder, we're also not using `println!` because we're reserving it for the original content piped from `stdin`.
```rust
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;
static READ_BUF_SIZE: usize = 1024 * 1024;
struct Stats {
started: Instant,
lines: usize,
tty: File,
}
impl Stats {
fn new(tty: &str) -> Stats {
Stats {
started: Instant::now(),
lines: 0,
tty: OpenOptions::new()
.write(true)
.append(true)
.open(tty)
.expect("Cannot open tty for writing!"),
}
}
}
fn main() {
let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
let mut writer = BufWriter::new(io::stdout());
let mut buffer = vec![];
let mut stats = Stats::new("/dev/tty");
while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
writer.write_all(&buffer).unwrap();
stats.lines += 1;
buffer.clear();
}
writer.flush().unwrap();
writeln!(
stats.tty,
"lines: {}, {:?}",
stats.lines,
stats.started.elapsed()
).expect("Could not write to tty!");
}
```
Exact difference is viewable on Github.
## Step 5 - Beautify Stats Output
The display logic is going to get a little more complex. We want to move the string formatting code into a `fmt::Display` implementation. We'll also add kilobytes to the displayed stats.
```rust
use std::fmt;
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;
static READ_BUF_SIZE: usize = 1024 * 1024;
static CLEAR_LINE: &str = "\x1B[1G\x1B[2K";
struct Stats {
started: Instant,
lines: usize,
bytes: usize,
tty: File,
}
impl Stats {
fn new(tty: &str) -> Stats {
Stats {
started: Instant::now(),
lines: 0,
bytes: 0,
tty: OpenOptions::new()
.write(true)
.append(true)
.open(tty)
.expect("Cannot open tty for writing!"),
}
}
}
impl fmt::Display for Stats {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let elapsed = self.started.elapsed();
let seconds: f64 = elapsed.as_secs() as f64 + elapsed.subsec_nanos() as f64 * 1e-9;
if seconds == 0.0 {
return write!(f, "");
}
let kb = self.bytes as f64 / 1024.0;
let kb_per_sec = kb / seconds;
let lines_per_sec = self.lines as f64 / seconds;
write!(
f,
"{}{:.1} sec | {:.0} kb [ {:.1}/s ] | {} lines [ {:.0}/s ]",
CLEAR_LINE,
seconds,
kb,
kb_per_sec,
self.lines,
lines_per_sec
)
}
}
fn main() {
let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
let mut writer = BufWriter::new(io::stdout());
let mut buffer = vec![];
let mut stats = Stats::new("/dev/tty");
while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
writer.write_all(&buffer).unwrap();
stats.lines += 1;
stats.bytes += buffer.len();
buffer.clear();
}
writer.flush().unwrap();
writeln!(&stats.tty, "{}", &stats).expect("Could not write to tty!");
}
```
Exact difference is viewable on Github.
## Step 6 - Display the stats 10 times per second
We're finally at the most useful part of the program. Viewing the stats while
the stream is still going.
For this task, we introduce a thread which loops forever sleeping a little and
waking to output the stats. Because of the thread, we need to use `Arc` to
safely tell Rust another thread is going to have a pointer to the stats object.
To be honest, I don't fully understand why I need to use `AtomicUsize`. When I
tried to keep the plain `usize` fields, I got errors regarding mutability. If
someone out there can remove the `AtomicUsize` without introducing `unsafe`,
please let me know!
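For what it's worth, the mutability errors come from sharing: `Arc` only hands out shared `&` references, and `usize += 1` needs `&mut`. `AtomicUsize` permits mutation through a shared reference, so no `unsafe` is needed (a `Mutex<usize>` would also work, at the cost of locking). A minimal sketch of the pattern, my example rather than the post's:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Spawn `threads` workers that each bump a shared counter `per_thread`
// times. `Arc` shares the pointer; `AtomicUsize` makes `fetch_add`
// legal through a shared `&` reference, which plain `usize` is not.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let count = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    count.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    count.load(Ordering::Relaxed)
}

fn main() {
    println!("count: {}", parallel_count(4, 1000));
}
```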
Here's the final code listing:
```rust
use std::fmt;
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread::{self, sleep};
use std::time::{Duration, Instant};
static READ_BUF_SIZE: usize = 1024 * 1024;
static CLEAR_LINE: &str = "\x1B[1G\x1B[2K";
static UPDATE_INTERVAL_MS: u64 = 100;
struct Stats {
started: Instant,
lines: AtomicUsize,
bytes: AtomicUsize,
tty: File,
}
impl Stats {
fn new(tty: &str) -> Stats {
Stats {
started: Instant::now(),
lines: AtomicUsize::new(0),
bytes: AtomicUsize::new(0),
tty: OpenOptions::new()
.write(true)
.append(true)
.open(tty)
.expect("Cannot open tty for writing!"),
}
}
    fn add(&self, buffer: &Vec<u8>) {
        self.lines.fetch_add(1, Ordering::Relaxed);
        self.bytes.fetch_add(buffer.len(), Ordering::Relaxed);
    }
}
impl fmt::Display for Stats {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let elapsed = self.started.elapsed();
        let seconds = elapsed.as_secs() as f64 + elapsed.subsec_nanos() as f64 * 1e-9;
        if seconds == 0.0 {
            return write!(f, "");
        }
        let lines = self.lines.load(Ordering::Relaxed);
        let kb = self.bytes.load(Ordering::Relaxed) as f64 / 1024.0;
        write!(
            f,
            "{}{:.1} sec | {:.0} kb [ {:.1}/s ] | {} lines [ {:.0}/s ]",
            CLEAR_LINE,
            seconds,
            kb,
            kb / seconds,
            lines,
            lines as f64 / seconds
        )
    }
}
fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    let stats = Arc::new(Stats::new("/dev/tty"));
    let ticker = Arc::clone(&stats);
    // Stats thread: wake every 100ms and repaint the stats line on the tty.
    thread::spawn(move || loop {
        sleep(Duration::from_millis(UPDATE_INTERVAL_MS));
        write!(&ticker.tty, "{}", &ticker).expect("Could not write to tty!");
    });
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write_all(&buffer).unwrap();
        stats.add(&buffer);
        buffer.clear();
    }
    writer.flush().unwrap();
    writeln!(&stats.tty, "{}", &stats).expect("Could not write to tty!");
}
```
@rustlang responded quickly with insightful information. They're totally getting
the Raving Fan Award this weekend!
## Conclusion
Rust is worth pursuing with or without Vim. The feedback from the compiler and
runtime errors is clear. The [racer-rust](https://github.com/racer-rust/vim-racer)
completion utility gives Vim all the IDE super powers. And finally, I hear it's
a pretty good language, too.
https://www.rust-lang.org
---
# FZF + WordNet = Dictionary
Date: 2017-06-12
URL: https://ddrscott.github.io/blog/2017/fzf-dictionary/
`FZF + WordNet = Dictionary`. FZF is a fuzzy finding command line tool. WordNet
is a dictionary structured for developers. When married together, we can get a
snappy dictionary to help us find just the right word for any occasion.
## Install Required Programs
Before making our new shell function, let's install the required programs.
1. https://github.com/junegunn/fzf
2. http://wordnetweb.princeton.edu/perl/webwn
These directions are for Mac OS X with `homebrew` installed. If you're on a
different system, read the docs from the sites above to get the programs for
your operating system.
```sh
brew install fzf
brew cask install xquartz
brew install wordnet
```
## FZF
FZF stands for Fuzzy Finder. It is a program which enables the user to filter a
set of lines from standard in and feed those lines back to standard out. A basic
example is: `find . | fzf`. This presents a list of all files in the current
working directory and prompts the user for input. As you type letters, the list
will narrow, keeping only the items matching the search criteria. After
selecting an entry from the list, the chosen line or lines are printed to
standard out. It provides a nifty argument `--preview` which can execute a
program and display its output as an aside in the terminal. We'll write more
about FZF in the future.
## WordNet
> WordNet is a large lexical database of English.
> Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive
> synonyms (synsets), each expressing a distinct concept. Synsets are
> interlinked by means of conceptual-semantic and lexical relations. The
> resulting network of meaningfully related words and concepts can be navigated
> with the browser. WordNet is also freely and publicly available for download.
> WordNet's structure makes it a useful tool for computational linguistics and
> natural language processing.
>
> WordNet superficially resembles a thesaurus, in that it groups words together
> based on their meanings. However, there are some important distinctions.
> First, WordNet interlinks not just word forms—strings of letters—but specific
> senses of words. As a result, words that are found in close proximity to one
> another in the network are semantically disambiguated. Second, WordNet labels
> the semantic relations among words, whereas the groupings of words in
> a thesaurus does not follow any explicit pattern other than meaning
> similarity.
Using WordNet we can find information similar to what is in a dictionary and
thesaurus combined. Example output of looking up `happy` is:
```text
% wn happy -over

Overview of adj happy

The adj happy has 4 senses (first 2 from tagged texts)

1. (37) happy -- (enjoying or showing or marked by joy or pleasure; "a happy smile"; "spent many happy days on the beach"; "a happy marriage")
2. (2) felicitous, happy -- (marked by good fortune; "a felicitous life"; "a happy outcome")
3. glad, happy -- (eagerly disposed to act or to be of service; "glad to help")
4. happy, well-chosen -- (well expressed and to the point; "a happy turn of phrase"; "a few well-chosen words")
```
`xquartz` is needed since the WordNet package also works in GUI mode. The GUI
app has a useful interface and could be more intuitive to use if you don't mind
clicking around. The GUI app can be started with `wnb`, but we're not here for
GUI stuff.
Let's move on to making our terminal script.
## Shell Script
The following 3 functions can be used individually and are helpful all on their
own. Add these directly into your shell profile, or put them in a separate file
and source it from the profile.
`fold` is normally a built-in command. By default it adds newlines when text
overflows the terminal, but it doesn't do it in an easy to read fashion. The
default behavior can break in the middle of a word and assumes a terminal width
of 80 columns. Our `fold` function breaks at spaces and passes the whole
terminal width to it when no other arguments are provided.
```sh
# Default `fold` to screen width and break at spaces
function fold {
  if [ $# -eq 0 ]; then
    /usr/bin/fold -w $COLUMNS -s
  else
    /usr/bin/fold $*
  fi
}
```
`spell` is the FZF portion of our script. This fuzzy matches the built-in Mac
dictionary with a preview window containing the WordNet overview of the
selected word.
```sh
# Use `fzf` against system dictionary
function spell {
  cat /usr/share/dict/words | fzf --preview 'wn {} -over | fold' --preview-window=up:60%
}
```
The `dic` function uses `spell` to help find a word, then outputs WordNet's
definition.
```sh
# Lookup definition of word using `wn $1 -over`.
# If $1 is not provided, we'll use the `spell` command to pick a word.
#
# Requires:
#   brew install wordnet
#   brew install fzf
function dic {
  if [ $# -eq 0 ]; then
    wn `spell` -over | fold
  else
    wn $1 -over | fold
  fi
}
```
Here's another demo of the `dic` function:
## Conclusion
Gluing programs together with `fzf --preview` is fun. Let us know what other
recipes you come up with in the comments below.
---
# GNU Screen
Date: 2017-06-01
URL: https://ddrscott.github.io/blog/2017/gnu-screen/
> Screen is a full-screen window manager that multiplexes a physical terminal
> between several processes, typically interactive shells.
TL;DR - Screen keeps your ssh sessions alive on a host.
## Installation
Most servers have `screen` installed already. If they don't, it can be installed
via `apt-get install screen`, `yum install screen`, or `brew install screen`. Sorry
Windows, try Remote Desktop.
## Startup
Get a terminal on a remote host (or local) then run `screen`
```sh
screen
```
If you're brave, try `man screen` to read more about it.
Once `screen` has started, you'll want to remember `
## TMUX
A strong competitor to `screen` is `tmux`. It has a more modern code base and is
actively maintained. The reason I personally don't use it is out of habit and
it's not installed everywhere. `screen` just works for my work flow.
## References
- https://www.gnu.org/software/screen/
- http://www.pixelbeat.org/lkdb/screen.html
- http://aperiodic.net/screen/quick_reference
---
# Base16 Shell
Date: 2017-04-13
URL: https://ddrscott.github.io/blog/2017/base16-shell/
After many years using the excellent Solarized color scheme, it has
started to feel stale. Sometimes I think the dark blueish tint brings
down my mood. Other times, I wonder what life could be like if I stared at more
cheerful colors. Thus begins my farewell to Solarized, and hello to
Base16.
From Base16's [Github README](https://github.com/chriskempson/base16):
> Base16 provides carefully chosen syntax highlighting and a default set of
> sixteen colors suitable for a wide range of applications. Base16 is not a
> single theme but a set of guidelines with numerous implementations.
Which means after integrating into Base16 once, I'll have access to an
unlimited supply of themes in the future!
## Installation
Base16 has perfect iTerm and shell integration. Once the repo was installed
locally, I called `base16_ocean` and was greeted by a brand new palette. No iTerm
tweaking, no downloading this other thing and importing stuff into iTerm. It was
literally 2 steps performed in the shell, then picking a theme.
Here's what you do. (FYI. This is pretty much copy/paste from their repo)
```sh
# 1. clone the repo to `~/.config/base16-shell`
git clone https://github.com/chriskempson/base16-shell.git ~/.config/base16-shell
# 2. update ~/.bashrc or ~/.zshrc
cat >> ~/.zshrc <<'SH'
BASE16_SHELL=$HOME/.config/base16-shell/
[ -n "$PS1" ] && [ -s $BASE16_SHELL/profile_helper.sh ] && eval "$($BASE16_SHELL/profile_helper.sh)"
SH
```
After you're done with those steps, start a new terminal session or source the
file, and start choosing a theme. Try `base16_ocean` to see what I'm seeing. Try
`base16_
After pairing with some Sublime users, I noticed a neat feature. Or more
accurately, they were rubbing it in my face that their cute editor was better
than mine. The feature was [SendText](https://github.com/wch/SendText). Well, I
couldn't let Sublime users have all the fun, and apparently neither could a few
other people.
## History
There have been a few other implementations of this feature. These
implementations sent the text to a screen or tmux split. Since I don't use
either, I couldn't use them a la carte.
- https://github.com/vim-scripts/tslime.vim
- https://github.com/jpalardy/vim-slime
- https://github.com/ervandew/screen
This next implementation was good. Its only flaw, IMHO, was its mappings and
naming. The name "ISlime2" is impossible for me to type on the first try. The
mappings overlapped my existing mappings.
[ISlime2](https://github.com/matschaffer/vim-islime2) did all the hard
AppleScript work and provides the Vim function to pass into the AppleScript.
Enter [vim-sendtext](https://github.com/ddrscott/vim-sendtext).
[vim-sendtext](https://github.com/ddrscott/vim-sendtext) is a fork of [ISlime2](https://github.com/matschaffer/vim-islime2).
My fork removes all the mappings, exposes useful internal functions, and adds
recommended mappings to the README.md.
## Recommended Mappings
```vim
" Send current line
nnoremap
```
## That doesn't look very DRY. Finally, a `WINDOW`
The `WINDOW` keyword allows us to alias the options of the `OVER` clause,
namely the expression `(...)`, parentheses included.
In the following example we add the use of `RANGE` to provide additional
direction to the windowing clause.
```sql
SELECT
*,
(count(1) OVER w) - 1 AS others,
first_value(name) OVER w AS first_moved,
last_value(name) OVER w AS last_moved
FROM sample_moves
WINDOW w AS (
PARTITION BY address ORDER BY moved_at
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
);
```
#### Results
| id | name | address | moved_at | others | first_moved | last_moved
|----|---------|-----------|------------|--------|-------------|------------
| 1 | Alice | 1 Main St | 2017-01-01 | 0 | Alice | Alice
| 2 | Bob | 2 Main St | 2017-02-01 | 1 | Bob | Cat
| 3 | Cat | 2 Main St | 2017-03-01 | 1 | Bob | Cat
| 4 | Dan Sr | 3 Main St | 1970-04-01 | 2 | Dan Sr | Dan 3rd
| 5 | Dan Jr | 3 Main St | 2001-04-01 | 2 | Dan Sr | Dan 3rd
| 6 | Dan 3rd | 3 Main St | 2017-04-01 | 2 | Dan Sr | Dan 3rd
```sql
-- Previous and Next Record
SELECT
*,
(count(1) OVER w) - 1 AS others,
first_value(name) OVER w AS first_moved,
last_value(name) OVER w AS last_moved,
lag(id) OVER (ORDER BY id) AS prev_id,
lead(id) OVER (ORDER BY id) AS next_id
FROM sample_moves
WINDOW w AS (
PARTITION BY address
ORDER BY moved_at
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
)
ORDER BY address;
```
#### Results
| id | name | address | moved_at | others | first_moved | last_moved | prev_id | next_id
|----|---------|-----------|------------|--------|-------------|------------|---------|---------
| 1 | Alice | 1 Main St | 2017-01-01 | 0 | Alice | Alice | | 2
| 2 | Bob | 2 Main St | 2017-02-01 | 1 | Bob | Cat | 1 | 3
| 3 | Cat | 2 Main St | 2017-03-01 | 1 | Bob | Cat | 2 | 4
| 4 | Dan Sr | 3 Main St | 1970-04-01 | 2 | Dan Sr | Dan 3rd | 3 | 5
| 5 | Dan Jr | 3 Main St | 2001-04-01 | 2 | Dan Sr | Dan 3rd | 4 | 6
| 6 | Dan 3rd | 3 Main St | 2017-04-01 | 2 | Dan Sr | Dan 3rd | 5 |
## List Window Functions
Here is a list from [Postgres docs](https://www.postgresql.org/docs/9.3/static/functions-window.html)
of all the window functions. In addition to these, *any regular aggregate function* can be used within a window.
| Function | Description
| -------- | -----------
| row_number() | number of the current row within its partition, counting from 1
| rank() | rank of the current row with gaps; same as row_number of its first peer
| dense_rank() | rank of the current row without gaps; this function counts peer groups
| percent_rank() | relative rank of the current row: (rank - 1) / (total rows - 1)
| cume_dist() | relative rank of the current row: (number of rows preceding or peer with current row) / (total rows)
| ntile() | integer ranging from 1 to the argument value, dividing the partition as equally as possible
| lag() | returns value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead return default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
| lead() | returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead return default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
| first_value() | returns value evaluated at the row that is the first row of the window frame
| last_value() | returns value evaluated at the row that is the last row of the window frame
| nth_value() | returns value evaluated at the row that is the nth row of the window frame (counting from 1); null if no such row
## References
- Postgres Window Tutorial: https://www.postgresql.org/docs/9.3/static/tutorial-window.html
- Postgres Window Functions: https://www.postgresql.org/docs/9.3/static/functions-window.html
- Postgres Window Syntax: https://www.postgresql.org/docs/9.3/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
---
# What the SQL?!? Recursive
Date: 2017-03-15
URL: https://ddrscott.github.io/blog/2017/what-the-sql-recursive/
Tags: sql
Today's "What the SQL?!?" features the keyword `RECURSIVE`. This clause allows
us to elegantly select results from the previous results from the previous results
from the previous results...
Please note, our target database is PostgreSQL. These examples may work with
other databases, but might need some massaging to get them to work properly.
Search online for the specific vendor's documentation if errors pop up.
Try searching for "RECURSIVE queries $DB_VENDOR_NAME". Not all database vendors
support the keyword `RECURSIVE`.
## Fibonacci Sequence
According to [Wikipedia](https://en.wikipedia.org/wiki/Fibonacci_number):
> In mathematics, the Fibonacci numbers are the numbers in the following integer
> sequence, called the Fibonacci sequence, and characterized by the fact that
> every number after the first two is the sum of the two preceding ones:
>
> 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 ...
## SQL Solution
Our SQL solution will make use of the `RECURSIVE` [CTE](https://www.postgresql.org/docs/9.3/static/queries-with.html)
keyword.
```sql
WITH RECURSIVE t(i, fi, fib) AS (
SELECT
1,
0::NUMERIC,
1::NUMERIC
UNION ALL
SELECT
i + 1,
fib,
fi + fib
FROM t
WHERE i < 10
)
SELECT
i,
fib
FROM t
```
## The Ins and Outs
Here's some inline colorful comments to explain the sections:
Maybe arrows will help a little more with the flow of data:
## Fibonacci Results
When you run the query, you'll get the following results:
```
i | fib
----+-----
1 | 1
2 | 1
3 | 2
4 | 3
5 | 5
6 | 8
7 | 13
8 | 21
9 | 34
10 | 55
(10 rows)
```
If you want to see the results for a high number, update `i < 10` to a higher
value. If you go above `i < 793`, Postgres gives up and returns `NaN`, which
means "not a number", which means the computed value is larger than your
computer can handle and still treat like a number. Sorry, get a new computer or
work with numbers less than 166 digits long.
## A Real World Example with Hierarchical Data
Fibonacci sequence is nice and all, but you have real data concerns. You're
thinking, "Show me the DATA!". So here's the data...
```sql
-- Build `sample_people` table
CREATE TABLE sample_people AS
SELECT
column1::int AS id,
column2::varchar AS name,
column3::int AS parent_id
FROM (
VALUES
(0, 'Root' , null),
(1, 'Alice', 0),
(2, 'Bob' , 1),
(3, 'Cat' , 1),
(4, 'Dan' , 3),
(5, 'Evan' , 0),
(6, 'Frank', 5)
) as t
;
SELECT * FROM sample_people;
-- id | name | parent_id
-- ----+-------+-----------
-- 0 | Root |
-- 1 | Alice | 0
-- 2 | Bob | 1
-- 3 | Cat | 1
-- 4 | Dan | 3
-- 5 | Evan | 0
-- 6 | Frank | 5
```
Our `sample_people` table represents a person by name and that person may have a
parent. The parent of all the parents is `Root`.
And finally our recursive query to get a nice display of the hierarchy.
```sql
WITH RECURSIVE tree -- `tree` is the table alias.
-- It must be used as part of the `UNION` statement.
AS (
-- 1) Initialize table with all the top level rows.
-- Anything without a parent is a parent. Is that apparent?
SELECT
0 AS level, -- 2) Set the level to 0.
sample_people.* -- 3) Return the initial row
FROM sample_people
WHERE parent_id = 0 -- 4) Top level rows report directly to Root.
UNION ALL
-- 5) Union all the parents with their children.
SELECT
tree.level + 1, -- 6) Increment the level every time we loop.
sample_people.* -- 7) Return the current row - the child row.
FROM tree -- 8) `tree` is populated with the previous results.
-- Every loop gets a new record from current result.
JOIN sample_people ON sample_people.parent_id = tree.id
)
SELECT
repeat(' ', level * 4) || name AS display
FROM tree
ORDER BY level, name
;
-- display
-- -------------
-- Alice
-- Evan
--     Bob
--     Cat
--     Frank
--         Dan
-- (6 rows)
```
## Bait and Switch
`RECURSIVE` is not actually recursive. It isn't a function calling itself.
Sorry, not sorry. It's much closer to a `while` loop. Here's what Postgres has to say about it:
> Note: Strictly speaking, this process is **iteration** not recursion, but RECURSIVE
> is the terminology chosen by the SQL standards committee. [emphasis added]
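Since it's iteration under the hood, the Fibonacci CTE maps directly onto a plain loop. Here's a sketch in Rust, my illustration rather than anything from the post, carrying the same `(i, fi, fib)` working row the CTE threads through each pass:

```rust
// Mirror of the CTE's working table t(i, fi, fib), seeded with (1, 0, 1).
// Each loop pass plays the role of one UNION ALL iteration.
fn fib_rows(limit: u64) -> Vec<(u64, u64)> {
    let (mut i, mut fi, mut fib) = (1u64, 0u64, 1u64);
    let mut rows = vec![(i, fib)]; // the non-recursive seed row
    while i < limit {
        let next = fi + fib; // SELECT i + 1, fib, fi + fib FROM t
        fi = fib;
        fib = next;
        i += 1;
        rows.push((i, fib));
    }
    rows
}

fn main() {
    // Prints the same (i, fib) pairs as the query's result set.
    for (i, fib) in fib_rows(10) {
        println!("{:>2} | {}", i, fib);
    }
}
```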
## Closing
So the next time you try to crawl a hierarchy of data, we hope `RECURSIVE` comes
to mind. It's a great way to save round trips to the database and query what is
needed based on the data's structure. Think of all the nested subqueries we can
save together!
## References
- Postgres WITH Queries: https://www.postgresql.org/docs/9.3/static/queries-with.html
- Wikipedia Fibonacci number: https://en.wikipedia.org/wiki/Fibonacci_number
---
# What the SQL?!? Lateral Joins
Date: 2017-03-08
URL: https://ddrscott.github.io/blog/2017/what-the-sql-lateral/
Tags: sql
Today's "What the SQL?!?" features the keyword `LATERAL`. A prerequisite to
understanding lateral joins are regular joins and subqueries. I'll explain those
briefly to see how `LATERAL` can simplify a complicated SQL query.
Please note, our target database is PostgreSQL. These examples may work with
other databases, but might need some massaging to get them to work properly.
Search online for the specific vendor's documentation if errors pop up.
Try searching for "lateral joins $DB_VENDOR_NAME". Not all database vendors
support the keyword `LATERAL`.
## A Problem to Solve
We have a table with system uptimes. The table records a start timestamp and an
end timestamp. If the service is still running, the end timestamp is left null
because it hasn't ended. We want a query to display an overview of this data.
Our final solution will return a row per day and 24 columns containing an uptime
percentage for each hour in the day. It will look like the following.
```
cal_date | hour_0 | hour_1 | hour_2 | hour_3 | ... | hour_21 | hour_22 | hour_23
------------+--------+--------+--------+--------+-----+---------+---------+---------
2017-03-01 | 0 | 0.75 | 0.25 | 0 | ... | 0 | 0 | 0
2017-03-02 | 0 | 0 | 0 | 0 | ... | 1 | 1 | 1
(2 rows)
```
Please note we'll use `...` to abbreviate some of the results. All queries are
schema independent and should be copy/paste-able into any `psql` session.
## Sample Uptime Data
The sample uptime data is derived from a virtual table built from the following query:
```sql
SELECT
*
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
```
The data looks like:
```plain
start_ts | end_ts
---------------------+---------------------
2017-03-01 01:15:00 | 2017-03-01 02:15:00
2017-03-01 08:00:00 | 2017-03-01 20:00:00
2017-03-02 19:00:00 |
(3 rows)
```
We want to plot the time against a time sliced table representing all the
effective hours in the uptime window. We'll make use of another virtual table to
build up all the time slices:
```sql
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
```
Here we make use of PostgreSQL's [generate_series](https://www.postgresql.org/docs/9.3/static/functions-srf.html)
to return all the hours between a time range. The data looks like:
```plain
start_ts | end_ts
---------------------+---------------------
2017-03-01 00:00:00 | 2017-03-01 01:00:00
2017-03-01 01:00:00 | 2017-03-01 02:00:00
2017-03-01 02:00:00 | 2017-03-01 03:00:00
2017-03-01 03:00:00 | 2017-03-01 04:00:00
-- ... many more rows ...
2017-03-02 22:00:00 | 2017-03-02 23:00:00
2017-03-02 23:00:00 | 2017-03-03 00:00:00
(48 rows)
```
## Left Join
We use a left join to glue together overlapping time ranges between these two
data sets. We want all the data on the `LEFT` side in the `FROM` clause to return
regardless of an uptime record existing within its time slice.
```sql
SELECT
*
FROM (
-- build virtual table of all hours between
-- a date range
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series(
'2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
) AS cal
LEFT JOIN (
-- build virtual table of uptimes
SELECT *
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
) AS uptime ON cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
```
The result set shows we have some variety in our sample data, with 3 slices of
uptime and 3 slices of downtime.
```plain
start_ts | end_ts | start_ts | end_ts
---------------------+---------------------+---------------------+---------------------
2017-03-01 00:00:00 | 2017-03-01 01:00:00 | |
2017-03-01 01:00:00 | 2017-03-01 02:00:00 | 2017-03-01 01:15:00 | 2017-03-01 02:15:00
2017-03-01 02:00:00 | 2017-03-01 03:00:00 | 2017-03-01 01:15:00 | 2017-03-01 02:15:00
2017-03-01 03:00:00 | 2017-03-01 04:00:00 | |
...
2017-03-01 07:00:00 | 2017-03-01 08:00:00 | |
2017-03-01 08:00:00 | 2017-03-01 09:00:00 | 2017-03-01 08:00:00 | 2017-03-01 20:00:00
...
2017-03-01 20:00:00 | 2017-03-01 21:00:00 | 2017-03-01 08:00:00 | 2017-03-01 20:00:00
2017-03-01 21:00:00 | 2017-03-01 22:00:00 | |
...
2017-03-02 18:00:00 | 2017-03-02 19:00:00 | |
2017-03-02 19:00:00 | 2017-03-02 20:00:00 | 2017-03-02 19:00:00 |
...
2017-03-02 23:00:00 | 2017-03-03 00:00:00 | 2017-03-02 19:00:00 |
(48 rows)
```
If we try without the `LEFT` clause, we'll only see 20 rows containing the up slices.
## Time to compute some timing
Let's add some times and sensible column names and replace the `*`
```sql
SELECT
-- will use `first_ts` and `last_ts` to calculate uptime duration
CASE WHEN uptime.start_ts IS NOT NULL THEN
greatest(uptime.start_ts, cal.start_ts)
END AS first_ts,
least(cal.end_ts, uptime.end_ts) AS last_ts,
date_trunc('day', cal.start_ts)::date AS cal_date,
extract(hour from cal.start_ts) AS cal_hour,
extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
FROM (
-- build virtual table of all hours between
-- a date range
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
) AS cal
LEFT JOIN (
-- build virtual table of uptimes
SELECT *
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
) AS uptime ON cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
```
```plain
first_ts | last_ts | cal_date | cal_hour | cal_seconds
---------------------+---------------------+------------+----------+-------------
| 2017-03-01 01:00:00 | 2017-03-01 | 0 | 3600
2017-03-01 01:15:00 | 2017-03-01 02:00:00 | 2017-03-01 | 1 | 3600
2017-03-01 02:00:00 | 2017-03-01 02:15:00 | 2017-03-01 | 2 | 3600
| 2017-03-01 04:00:00 | 2017-03-01 | 3 | 3600
| 2017-03-01 05:00:00 | 2017-03-01 | 4 | 3600
| 2017-03-01 06:00:00 | 2017-03-01 | 5 | 3600
| 2017-03-01 07:00:00 | 2017-03-01 | 6 | 3600
| 2017-03-01 08:00:00 | 2017-03-01 | 7 | 3600
2017-03-01 08:00:00 | 2017-03-01 09:00:00 | 2017-03-01 | 8 | 3600
...
2017-03-01 20:00:00 | 2017-03-01 20:00:00 | 2017-03-01 | 20 | 3600
| 2017-03-01 22:00:00 | 2017-03-01 | 21 | 3600
...
| 2017-03-02 19:00:00 | 2017-03-02 | 18 | 3600
2017-03-02 19:00:00 | 2017-03-02 20:00:00 | 2017-03-02 | 19 | 3600
...
2017-03-02 23:00:00 | 2017-03-03 00:00:00 | 2017-03-02 | 23 | 3600
(48 rows)
```
## Subquery, Subquery, What's the Worry?
SQL is all about nested subqueries. It's hard to escape without creating
views, but who has time to lookup that [syntax](https://www.postgresql.org/docs/9.3/static/sql-createview.html)
_and_ get their [DBA's](https://imgflip.com/i/1kzzyn) permission to run the DDL?!?
Let's add some duration times to the result set. We'll use a traditional
subquery for it.
```sql
SELECT
-- calculate uptime seconds
coalesce(
extract(epoch FROM age(last_ts, first_ts)),
0
) AS up_seconds,
*
FROM (
SELECT
-- will use `first_ts` and `last_ts` to calculate uptime duration
CASE WHEN uptime.start_ts IS NOT NULL THEN
greatest(uptime.start_ts, cal.start_ts)
END AS first_ts,
least(cal.end_ts, uptime.end_ts) AS last_ts,
date_trunc('day', cal.start_ts)::date AS cal_date,
extract(hour from cal.start_ts) AS cal_hour,
extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
FROM (
-- build virtual table of all hours between
-- a date range
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
) AS cal
LEFT JOIN (
-- build virtual table of uptimes
SELECT *
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
) AS uptime ON cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
) t1
```
```plain
up_seconds
------------
0
2700
900
0
0
...
3600
(48 rows)
```
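The `up_seconds` arithmetic is plain interval overlap. A sketch of the same logic in Rust, my illustration with timestamps reduced to epoch seconds and a helper name of my own choosing:

```rust
// Overlap in seconds between a calendar slice [cal_start, cal_end) and an
// optional uptime record (start_ts, end_ts). Mirrors the SQL:
//   first_ts   = greatest(uptime.start_ts, cal.start_ts)
//   last_ts    = least(cal.end_ts, coalesce(uptime.end_ts, now))
//   up_seconds = coalesce(extract(epoch from age(last_ts, first_ts)), 0)
fn up_seconds(cal: (i64, i64), uptime: Option<(i64, Option<i64>)>, now: i64) -> i64 {
    match uptime {
        None => 0, // no uptime row joined onto this slice
        Some((start, end)) => {
            let first = start.max(cal.0);
            let last = end.unwrap_or(now).min(cal.1);
            (last - first).max(0)
        }
    }
}

fn main() {
    // The 01:00-02:00 slice against the 01:15-02:15 uptime record.
    let cal = (3_600, 7_200);
    let uptime = Some((4_500, Some(8_100)));
    println!("up_seconds: {}", up_seconds(cal, uptime, 0));
}
```

Running it against the sample data reproduces the 2700- and 900-second rows from the result set above.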
Without the subquery we'd be getting into even more deeply nested function calls
and would have to double-compute values or lose visibility into the intermediate
steps. We could have calculated `up_seconds` directly in the first query which
introduced `first_ts` and `last_ts`. That would look like this:
```sql
SELECT
coalesce(
extract(epoch FROM
age(
least(cal.end_ts, uptime.end_ts),
CASE WHEN uptime.start_ts IS NOT NULL THEN
greatest(uptime.start_ts, cal.start_ts)
END
)
),
0
) AS up_seconds
FROM --- ...
```
It's not for the weak of stomach, but frankly speaking, neither is the subquery...
## Enough Nesting, `LATERAL` join save me!
Lateral joins can give us the best of both worlds: reduced subquery nesting and
traceable computed values. We'll move the initial computed values, `first_ts` and
`last_ts`, into their own virtual table via `JOIN LATERAL` so they get their own
table alias. Then we'll do it again for `up_seconds`, which can reference
`first_ts` and `last_ts` from its sibling table.
```sql
SELECT
t2.up_seconds
FROM (
-- build virtual table of all hours between
-- a date range
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
) AS cal
LEFT JOIN (
-- build virtual table of uptimes
SELECT *
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
) AS uptime ON cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
JOIN LATERAL (
SELECT
-- will use `first_ts` and `last_ts` to calculate uptime duration
CASE WHEN uptime.start_ts IS NOT NULL THEN
greatest(uptime.start_ts, cal.start_ts)
END AS first_ts,
least(cal.end_ts, uptime.end_ts) AS last_ts,
date_trunc('day', cal.start_ts)::date AS cal_date,
extract(hour from cal.start_ts) AS cal_hour,
extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
) t1 ON true
JOIN LATERAL (
-- calculate uptime seconds for the time slice
SELECT
coalesce(
extract(epoch FROM age(last_ts, first_ts)),
0
) AS up_seconds
) t2 ON true
```
This gives us the same results but without the deep nesting.
```plain
up_seconds
------------
0
2700
900
0
0
3600
...
3600
(48 rows)
```
What's great about this strategy is we can quickly choose which columns to see
as we build up the query.
```sql
SELECT
t2.up_seconds
...
-- or --
SELECT
t2.*,
t1.*
```
Let's build up the final calculation using the same strategy:
```sql
SELECT
t2.*,
t3.*
FROM ...
JOIN LATERAL (
-- calculate percentage between uptime seconds and available seconds
-- within the time slice
SELECT
up_seconds / cal_seconds AS up_pct
) t3 ON true
```
```plain
up_seconds | up_pct
------------+--------
0 | 0
2700 | 0.75
900 | 0.25
0 | 0
...
3600 | 1
(48 rows)
```
## Plot the Hours
Now we have all the computed data we need. Let's plot it as a cross tab (without
actually using [`crosstab`](https://www.postgresql.org/docs/9.3/static/tablefunc.html)).
We'll consolidate the long list of data by `cal_date`, pivoting `cal_hour` into
columns with `up_pct` as the values. In case of overlapping uptimes, the query
takes the highest, or `max`, uptime percentage for each hour.
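The pivot itself is just conditional aggregation. Here's the same idea sketched in Ruby on a few hypothetical rows (`nil` plays the role of SQL's `NULL` for hours with no data):

```ruby
rows = [
  {cal_date: '2017-03-01', cal_hour: 1,  up_pct: 0.75},
  {cal_date: '2017-03-01', cal_hour: 2,  up_pct: 0.25},
  {cal_date: '2017-03-02', cal_hour: 23, up_pct: 1.0},
]

# max(CASE WHEN cal_hour = h THEN up_pct END), grouped by cal_date
pivot = rows.group_by { |r| r[:cal_date] }.transform_values do |day|
  (0..23).map { |h| day.select { |r| r[:cal_hour] == h }.map { |r| r[:up_pct] }.max }
end

pivot['2017-03-01'][1]  # => 0.75
pivot['2017-03-01'][0]  # => nil
```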
The final query looks like:
```sql
SELECT
cal_date,
max(CASE WHEN cal_hour = 0 THEN up_pct END) AS hour_0,
max(CASE WHEN cal_hour = 1 THEN up_pct END) AS hour_1,
max(CASE WHEN cal_hour = 2 THEN up_pct END) AS hour_2,
max(CASE WHEN cal_hour = 3 THEN up_pct END) AS hour_3,
max(CASE WHEN cal_hour = 4 THEN up_pct END) AS hour_4,
max(CASE WHEN cal_hour = 5 THEN up_pct END) AS hour_5,
max(CASE WHEN cal_hour = 6 THEN up_pct END) AS hour_6,
max(CASE WHEN cal_hour = 7 THEN up_pct END) AS hour_7,
max(CASE WHEN cal_hour = 8 THEN up_pct END) AS hour_8,
max(CASE WHEN cal_hour = 9 THEN up_pct END) AS hour_9,
max(CASE WHEN cal_hour = 10 THEN up_pct END) AS hour_10,
max(CASE WHEN cal_hour = 11 THEN up_pct END) AS hour_11,
max(CASE WHEN cal_hour = 12 THEN up_pct END) AS hour_12,
max(CASE WHEN cal_hour = 13 THEN up_pct END) AS hour_13,
max(CASE WHEN cal_hour = 14 THEN up_pct END) AS hour_14,
max(CASE WHEN cal_hour = 15 THEN up_pct END) AS hour_15,
max(CASE WHEN cal_hour = 16 THEN up_pct END) AS hour_16,
max(CASE WHEN cal_hour = 17 THEN up_pct END) AS hour_17,
max(CASE WHEN cal_hour = 18 THEN up_pct END) AS hour_18,
max(CASE WHEN cal_hour = 19 THEN up_pct END) AS hour_19,
max(CASE WHEN cal_hour = 20 THEN up_pct END) AS hour_20,
max(CASE WHEN cal_hour = 21 THEN up_pct END) AS hour_21,
max(CASE WHEN cal_hour = 22 THEN up_pct END) AS hour_22,
max(CASE WHEN cal_hour = 23 THEN up_pct END) AS hour_23
FROM (
-- build virtual table of all hours between
-- a date range
SELECT
start_ts,
start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
'2017-03-03'::timestamp - interval '1 hour',
interval '1 hour'
) AS t(start_ts)
) AS cal
LEFT JOIN (
-- build virtual table of uptimes
SELECT *
FROM (
VALUES
('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
) AS uptime ON cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
JOIN LATERAL (
SELECT
-- will use `first_ts` and `last_ts` to calculate uptime duration
CASE WHEN uptime.start_ts IS NOT NULL THEN
greatest(uptime.start_ts, cal.start_ts)
END AS first_ts,
least(cal.end_ts, uptime.end_ts) AS last_ts,
date_trunc('day', cal.start_ts)::date AS cal_date,
extract(hour from cal.start_ts) AS cal_hour,
extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
) t1 ON true
JOIN LATERAL (
SELECT
coalesce(
extract(epoch FROM age(last_ts, first_ts)),
0
) AS up_seconds
) t2 ON true
JOIN LATERAL (
-- calculate percentage between uptime seconds and available seconds
-- within the time slice
SELECT
up_seconds / cal_seconds AS up_pct
) t3 ON true
GROUP BY cal_date
```
```plain
cal_date | hour_0 | hour_1 | hour_2 | hour_3 | ... | hour_23
------------+--------+--------+--------+--------+ ... +---------
2017-03-01 | 0 | 0.75 | 0.25 | 0 | ... | 0
2017-03-02 | 0 | 0 | 0 | 0 | ... | 1
(2 rows)
```
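The interval-overlap arithmetic at the heart of the query, `greatest` of the starts to `least` of the ends, is easy to sanity-check outside the database. A minimal Ruby sketch (`up_seconds` here is a hypothetical helper, not part of the query):

```ruby
require 'time'

# Mirrors the SQL: first_ts = greatest(uptime.start_ts, cal.start_ts),
# last_ts = least(cal.end_ts, uptime.end_ts), up_seconds = last - first.
def up_seconds(slice_start, slice_end, up_start, up_end)
  return 0 if up_start.nil?  # LEFT JOIN produced no matching uptime row
  first_ts = [up_start, slice_start].max
  last_ts  = [slice_end, up_end].min
  last_ts - first_ts
end

hour_1 = [Time.parse('2017-03-01 01:00'), Time.parse('2017-03-01 02:00')]
uptime = [Time.parse('2017-03-01 01:15'), Time.parse('2017-03-01 02:15')]

up_seconds(*hour_1, *uptime)  # => 2700.0, i.e. 45 minutes of the 01:00 hour
```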
## More than CTE and Cross Join
This example only scratches the surface of `LATERAL`'s superpowers. On the
surface, `LATERAL` can do things CTEs, cross joins, and window functions can do.
PostgreSQL describes `LATERAL` as:
> Subqueries appearing in FROM can be preceded by the key word LATERAL.
> This allows them to reference columns provided by preceding FROM items.
> (Without LATERAL, each subquery is evaluated independently and so cannot
> cross-reference any other FROM item.)
TL;DR - `LATERAL` allows subqueries to reference earlier tables.
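A rough procedural analogy, sketched in Ruby on toy data: a plain cross join pairs rows with an independently computed set, while `LATERAL` is like a `flat_map` whose block can see the current row.

```ruby
cal = [{start_ts: 1}, {start_ts: 2}]

# Without LATERAL: the subquery is evaluated once, blind to cal's rows.
independent = [{n: 10}]
cross = cal.product(independent).map { |row, sub| row.merge(sub) }

# With LATERAL: the "subquery" re-runs per row and may reference it.
lateral = cal.flat_map { |row| [{n: row[:start_ts] * 10}].map { |sub| row.merge(sub) } }

lateral  # => [{start_ts: 1, n: 10}, {start_ts: 2, n: 20}]
```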
## References
- [Postgres Lateral Joins](https://www.postgresql.org/docs/9.6/static/queries-table-expressions.html#QUERIES-LATERAL)
---
# How to Get Better At Anything
Date: 2017-02-07
URL: https://ddrscott.github.io/blog/2017/how-to-get-better-at-anything/
## RTFM
* WTFM, Write The Flip'n Manual
* Do a lightning talk about it :)
## Make it Fun
* Gamification
* Trophy anyone?
* If it can't be made fun, make it fun.
## Practice
* Slowly with intent
* Pressure cycles: none, some, actual, intolerable.
* The practice of practice deserves its own talk.
## Better not good
* Good is the killer of better.
* People tend to stop because good is unreachable.
* People tend to stop after they're good enough.
* If we aim to get better, we'll eventually be better than good.
* Progressive goals. OMG! I forgot goal setting!
## Learn, don't Memorize
* Learning is understanding
* Understanding is connecting the new thing to an old thing.
* Computers memorize. You're not a computer.
## Learn with others
* With the Internet you're never alone and always alone.
* Actually talk with people!
## Cheat
* If you're not cheating, you're not trying.
* Reverse engineer the cheat, don't actually cheat!
## How I got better at Vim
* **RTFM** - `:help`
* **WTFM** - Blogging, figure out other people's problems :/
* **Practice** - Wrote stuff that wasn't needed immediately, my own notes.
* **Don't Memorize** - Vim has grammar like English. Oooh... Reconnection!
* **Learn with Others** - We have vim users, teach them (but don't let them know you're only one step ahead)
* **Cheat** - Look at other's Vimscript to figure out that cool thing they did.
## What do you want to get better at next?!?
* lightning talks?
* Shuffling cards? (@devin)
* [SQL?](http://www.postgresql.org/docs/9.6/static/index.html)
* [Drawing?](https://www.amazon.com/Drawing-Right-Side-Brain-Definitive/dp/1585429201)
* [Piano?](http://www.pianofundamentals.com/book/en/chapter_1)
---
# ANSI Codes with Character
Date: 2016-07-26
URL: https://ddrscott.github.io/blog/2016/ansi-codes-with-character/
This was a lightning talk given to the office about ANSI escape codes. Most of
the time, all 5 minutes of it, was spent explaining the code snippets.
[Wiki about ANSI Codes](https://en.wikipedia.org/wiki/ANSI_escape_code)
## What is an ANSI code?
ANSI Escape Codes are a nearly universal means of embedding display options in
computer terminals.
`\e[` is how we tell the terminal we're giving it a command instead of just
outputting text.
What does that mean?!? Let the examples do the talking.
## Color Examples
```bash
echo -e "\e[2J\e[32m It's not easy being green \e[0m"
echo -e "\e[2J\e[31m Apples are red \e[0m"
```
```ruby
(30..37).each{|i| puts "i: \e[#{i}m #{i} \e[0m"}
```
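The color codes above are SGR ("Select Graphic Rendition") sequences of the shape `\e[<p1>;<p2>m`, and the parameters compose. A tiny helper to show that (`sgr` is my name for it, not a standard API):

```ruby
# Wrap text in an SGR escape sequence, then reset attributes.
def sgr(text, *params)
  "\e[#{params.join(';')}m#{text}\e[0m"
end

puts sgr('bold yellow', 1, 33)    # emits \e[1;33m...\e[0m
puts sgr('red on white', 31, 47)
```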
## Position Examples
```ruby
x=`tput cols`.to_i
y=`tput lines`.to_i
loop do
print "\e[s" # Save current cursor position
print "\e[#{rand(y)};#{rand(x)}H" # move to row/column
print "💩" # print POOP!
print "\e[u" # restore position
sleep(rand)
end
```
```bash
# Ruby oneliner troll
ruby -e 'x=`tput cols`.to_i; y=`tput lines`.to_i; loop {print "\e[s\e[#{rand(y)};#{rand(x)}H💩\e[u"; sleep(rand)}'
```
## ANSI-nine Examples
```bash
# Poop-field
curl -s https://raw.githubusercontent.com/ddrscott/ansinine/master/stars | ruby
# Smoke
curl -s https://raw.githubusercontent.com/ddrscott/ansinine/master/fire.rb | ruby
```
---
# Photography Refactored
Date: 2016-07-11
URL: https://ddrscott.github.io/blog/2016/photography-lightening-talk/
Someone convinced me to do a lightning talk about photography. Here's the slide deck.
---
Look at the `quickfix` window above. It spends most of its space showing the file name
of the hit, then the remainder is spent on text around it. In projects using
[Rails Engines](http://guides.rubyonrails.org/engines.html) with deeply nested
directory structures, this often leaves me with just a bunch of paths in the `quickfix`.
## Solution A - The Unix Way
Some may argue Vim isn't supposed to do search. Vim rightly delegates to the
[Unix philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) by
allowing an external program to do its searching. Let's try that for this
solution using `grep`, `ack`, and `ag`.
We've run the 3 separate programs (normally, I would only use `ag`), then
browsed the results to see if there's an interesting file. At this point I start
using my handy-dandy mouse to scroll around, precisely highlight the path of
interest, copy it, and open it with `vim` in another terminal.
Use `:cnext`, `:cprev`, `:cfirst`, `:clast` to go to next, previous, first, and
last `quickfix` result respectively. Map those to keys to make it easier to
navigate.
```
nnoremap [q :cprev<CR>
nnoremap ]q :cnext<CR>
```
Things to notice:
- `ag` output is in a buffer with additional syntax highlighting!
- `n` and `N` used to jump to matches. Regular Vim navigation works, too!
---
## Solution A
Turn on word wrapping. `set wrap`. Boring, but effective. You might also want
to make word wrapping look nicer. I do that with the following settings.
```vim
set breakindent
set breakindentopt=sbr
" I use a unicode curly arrow with a trailing space:
let &showbreak = '↪ '
```
Here's the help doc to clear things up:
```plain
'sidescroll' 'ss' number (default 0)
global
The minimal number of columns to scroll horizontally. Used only when
the 'wrap' option is off and the cursor is moved off of the screen.
When it is zero the cursor will be put in the middle of the screen.
When using a slow terminal set it to a large number or 0. When using
a fast terminal use a small number or 1. Not used for "zh" and "zl"
commands.
'sidescrolloff' 'siso' number (default 0)
global
The minimal number of screen columns to keep to the left and to the
right of the cursor if 'nowrap' is set. Setting this option to a
value greater than 0 while having |'sidescroll'| also at a non-zero
value makes some context visible in the line you are scrolling in
horizontally (except at beginning of the line). Setting this option
to a large value (like 999) has the effect of keeping the cursor
horizontally centered in the window, as long as one does not come too
close to the beginning of the line.
Example: Try this together with 'sidescroll' and 'listchars' as
in the following example to never allow the cursor to move
onto the "extends" character:
:set nowrap sidescroll=1 listchars=extends:>,precedes:<
:set sidescrolloff=1
```
Seems like the default was intended for a "slow terminal". If you're using a
slow terminal while editing a large amount of unwrapped text, I'd recommend
getting a computer from this millennium and enabling `sidescroll`.
Also note that a sensible example is shown in the `sidescrolloff` section.
## Off Topic...
It's interesting to study all the decisions made due to slow terminals.
Try `:help slow-terminal` for a quick look and try `:helpgrep slow` to see way
more mentions. Use `:help helpgrep` if you didn't know about `helpgrep` :)
## Closing
I'm sure you're wondering why so many words were written for a single setting.
Similar to my previous post about [Yank without Jank](/blog/2016/yank-without-jank/),
these unexpected janky behaviors cause anxiety. Anxiety that usually can't be
identified or resolved in the heat of a coding session, but is there, waiting in
the weeds, ready to pounce at your next stray keystroke. As a student of Vim, I
want to identify and resolve these issues so I can get back to why I like Vim:
using the dot operator.
---
# Ruby Abuse: How Not to Write Ruby, But Still Have Fun
Date: 2016-04-30
URL: https://ddrscott.github.io/blog/2016/ruby-abuse/
```ruby
alias λ lambda
sadd = λ {|ns, a| [*ns, a].sort }
hsort = λ {|h| Hash[h.sort] }
hadd = λ {|h, n, g| h.merge(g => sadd.(h[g], n)) }
school = λ {|gs| School.new(gs) }
School = Struct.new(:gs) do
define_method :add, λ {|n, g| (school . (hsort . (hadd . (to_hash, n, g)))) }
define_method :to_hash, λ {| | gs || {} }
define_method :grade, λ {|g| to_hash[g] || [] }
end
```
---
# Making a Window Submode in Vim
Date: 2016-04-29
URL: https://ddrscott.github.io/blog/2016/making-a-window-submode/
Tags: vim
I found a plugin that is changing my Vim-tire life! This
[plugin](https://github.com/kana/vim-submode) is so awesome it should be
built into default Vim. What does the [plugin](https://github.com/kana/vim-submode)
do? It enables the creation of new submodes. Why would a person want *more* modes?!?
Isn't dealing with modes the main deterrent for new Vim users? Isn't Normal,
Insert, Command-line, Visual, Select, and Operator-pending enough? (Did I miss one?)
Let's try out a new submode and see what happens.
## Problem
Window commands are prefixed with `<C-w>`.
I know you can't see what keys I'm pressing, but I guarantee I only pressed
`<C-w>` once.
---
## Solution B
Let's try using marks to keep things in place:
```vim
vnoremap y myy`y
vnoremap Y myY`y
```
The capital `Y` mapping is just in case we want to do a linewise yank from a
characterwise selection.
Like a well-trained dog, the cursor stays even though you yank it.
**BONUS** This snippet also takes over the `y` mark, so you can manually jump
back to the last yank position with `` `y ``.
---
I could swap the functionality of `0` and `^`:
```
nnoremap 0 ^
nnoremap ^ 0
```
This still forces me to reach for `^` when I need to get to that left
edge. There must be a better way!
## The Solution
Let's give `0` some super toggling powers. When I hit it the first time, I want
it to behave like `^`. If I hit it again, I want it to finish its travels and go
to the first column.
### Solution A
```vim
function! ToggleHomeZero()
let pos = getpos('.')
execute "normal! ^"
if pos == getpos('.')
execute "normal! 0"
endif
endfunction
nnoremap 0 :call ToggleHomeZero()<CR>
```

---
After nearly a year without Posterous, I've finally got around to migrating to another system.
This time around it's a static blog builder: http://octopress.org/
Octopress is a wrapper around Jekyll, which is a utility for creating a static blogging site. A blog that can be
hosted on anything that can serve files: Apache, AWS, Dropbox, etc...
## Q: Why not use WordPress, Tumblr, or Blogger?!?
Those cater to non-developers, and their editors aren't the one I use every day: my IDE. Octopress lets me
write my articles the same way I write code: plain text. It uses Markdown or any other HTML generator I configure.
Static pages are easier for me to deploy when I have to switch hosting providers. Hopefully GitHub sticks around
for a while, but if they don't, I can rest assured the articles can be hosted somewhere else faster than DNS propagation.
``` ruby
puts "I can use code snippets"
# And they will be nicely formatted.
maybe_one_day do
use(CodePen) or use(jsFiddle)
end
```
The initial theme I selected is https://github.com/sevenadrian/MediumFox. It's clean, simple, and pretty much
what I would have *tried* to make if I had time. They seem to have omitted some features from the classic theme, but
as with all open source projects, it's easy to remedy.
Time to do my first deploy... then convert all the Posterous archives.