# Why, Scott, WHY?!? > Full content of every published post. Personal tech blog by Scott Pierce. --- # Why AI Can't Make You a Beautiful Logo Date: 2026-05-15 URL: https://ddrscott.github.io/blog/2026/why-ai-cant-make-you-a-beautiful-logo/ Tags: ai, communication, design, llm, prompting Description: AI is a distillation of the internet. It cannot read your mind. The fastest way to better prompts is to strip subjective words and feed it objective context. **TL;DR** — AI is a distillation of the internet. It can't read your mind. The fastest way to get better results is to strip subjective words out of your prompts and feed it objective context instead. The same way you'd talk to a stranger you just met. ## "Make a beautiful logo" Say you ask an AI: *"Make a beautiful logo that reflects the values of this website."* That's it. You hit enter. You're trusting the model with a request that, in your brain, has a clear picture attached to it. Colors. Tone. The vibe of the site. Maybe a feeling you can't even put words to. The AI has none of that. It doesn't know if it should visit the website. It doesn't know what your conception of "beautiful" is. It doesn't even know if the domain name has any correlation to the content you actually care about. So it guesses. And then you're surprised when the guess is wrong. ## AI is a distillation of the internet Here's the framing that fixes this for me: **AI is a distillation of the internet.** It's an extremely broad, general-purpose guesser. When you ask it something subjective, it's averaging across every opinion, style, and aesthetic on the open web and handing you back the centroid. There is essentially zero hope of that centroid matching what's in your head. Try this thought experiment. Four people sit in a room. You say *"create a beautiful woman."* Each of those four people will draw a different person. Different hair, different build, different face, different everything. The word "beautiful" doesn't have a destination. 
It has four. Or four million. Now do the same thing with *"blonde."* How blonde? Strawberry? Platinum? Dirty? You've narrowed it, but only a little. The internet can't answer subjective questions because subjective questions don't have answers. They have preferences. And your preference is a thing only you can supply. ## The Google search rule still applies If you typed *"beautiful woman"* into Google, you'd get a top page of results that are mostly stock photos and whatever the algorithm thinks is safe to surface. You'd scroll, drill in, refine — *"curly hair, round face, this height, this style"* — and slowly converge on what you actually wanted. Same rule with AI. **Take every subjective word out.** Replace it with something measurable. - ~~Beautiful~~ → high contrast, warm tones, geometric sans-serif - ~~Modern~~ → flat color, no gradients, 8px rounded corners - ~~Professional~~ → muted palette, generous whitespace, serif headings This isn't because AI is dumb. It's because subjective words are coordinates without an origin. Objective words are coordinates with an origin. AI needs the origin. ## The trick: let AI teach you its language Here's the move that changed how I prompt. Don't pre-declare anything as good or bad. Don't say *"I like this style"* or *"this is the kind of thing I want."* That's all subjective and you're back to the same problem. Instead: grab a reference image — a logo, a screenshot, a piece of art you respond to — drop it into the model, and say: **"Analyze this. Give me the words you think are appropriate for what I just gave you."** The AI won't tell you if it's good or bad art. It'll do what a computer does. It'll tell you: - The dominant color codes - The tonal range (bright, dark, contrast ratios) - The font family or family characteristics - The composition (negative space, balance, weight) - The mood vocabulary it associates with those properties Now you have the AI's language for the thing you wanted. You can speak it back. 
*"Make me a logo that uses this palette, this kind of weight, this kind of negative space — but swap the serif for a geometric sans-serif and tighten the kerning by 5%."* You've stopped guessing what AI knows. You've made AI tell you what it knows, and then you're working in that vocabulary. ## Prompting is about context The whole thing reduces to two questions: 1. **Where are you starting?** 2. **Where do you need to get to?** If you can answer both with specifics, AI fills in the middle. That's its job. That's what it's good at. The failures happen when you give it the destination ("beautiful logo") without the starting point, or the starting point without a destination. Context isn't decoration on a prompt. It's the prompt. ## We do this with humans too This isn't an AI quirk. It's a communication quirk that AI happens to amplify. When I meet someone new — say I meet Timothy for the first time — we don't immediately get to business. We chat. We figure out what language we both speak. Where the common ground is. Whether we've got shared references — faith, family, profession, taste in music. Once we've calibrated, the actual conversation gets easier, faster, more precise. AI is the same, just a touch more computery. Large language models understand English pretty well now, so you can be a little less robotic than you had to be three years ago. But you still need to be specific. You still need to calibrate. You still need to take out the subjective words and put in the objective ones. ## The whole article in one line > Take subjective words out. Put objective things in. Let AI teach you its language before you try to speak it. That's it. That's the diatribe. Go make better prompts. --- # Postgres via Cloudflare Tunnel Date: 2026-02-19 URL: https://ddrscott.github.io/blog/2026/postgres-via-cloudflare-tunnel/ Tags: devops, postgres Description: Expose your local Postgres to the world without opening firewall ports. 
**Access your home Postgres from anywhere with Cloudflare Tunnels.** ## Start the PG Server using Docker ```bash docker run -d --name pg16 \ -e POSTGRES_PASSWORD=secret \ -p 15432:5432 \ -v pg16_data:/var/lib/postgresql/data \ pgvector/pgvector:pg16 ``` ## Test Local Connection using Docker ```bash docker run -it --rm \ -e PGPASSWORD=secret \ pgvector/pgvector:pg16 \ psql -h 192.168.86.250 -U postgres -p 15432 -c 'select version()' ``` > Replace `192.168.86.250` with your machine's private IP. ## Create the Tunnel Install `cloudflared` on the Docker host following the [official guide](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/create-remote-tunnel/). Configure the tunnel to point to `localhost:15432`. ## Connect from Remote On any remote machine: ```bash cloudflared access tcp --hostname pg16.dataturd.com --url localhost:15432 ``` Then connect your client to `localhost:15432` as if Postgres were local: ```bash # With Docker docker run -it --rm \ -e PGPASSWORD=secret \ --network host \ pgvector/pgvector:pg16 \ psql -h localhost -U postgres -p 15432 -c 'select version()' # Or with local psql PGPASSWORD=secret psql -h localhost -U postgres -p 15432 -c 'select version()' ``` --- # Live Code Highlighting: A Technical Deep Dive Date: 2026-02-10 URL: https://ddrscott.github.io/blog/2026/strudel-live-code-highlighting/ Description: How Strudel implements real-time code highlighting during music playback - a unique approach to connecting notation and sound in live coding environments. **How Strudel implements real-time code highlighting during music playback - a unique approach to connecting notation and sound in live coding environments.** ## Abstract Live coding environments for music have long grappled with a fundamental challenge: how to visually connect the code musicians write with the sounds they hear. 
While most live coding systems provide static syntax highlighting and flash feedback on evaluation, Strudel - a JavaScript port of TidalCycles - implements a sophisticated system that dynamically highlights code elements as they produce sound in real-time. This whitepaper examines Strudel's implementation in detail, surveys prior art in the field, and analyzes the architectural decisions that make this feature possible. ## 1. Introduction ### 1.1 The Problem of Liveness In traditional programming environments, there's a clear separation between writing code and executing it. The feedback loop consists of: write code, run it, observe output. Live coding environments compress this loop dramatically, but even in systems like TidalCycles or Sonic Pi, the visual connection between code and sound remains indirect. Consider a pattern like: ```javascript s("bd hh sd hh").note("c3 e3 g3 b3") ``` When this plays, a musician hears a sequence of sounds. But *which* part of the code is producing the current sound? Is it the `bd` (bass drum) or the `hh` (hi-hat)? The `c3` or the `g3`? Traditional syntax highlighting provides no answer - it colors keywords by their syntactic role, not their temporal activity. ### 1.2 Strudel's Solution Strudel solves this by implementing what we might call **temporal source mapping** - a system that maintains bidirectional links between: 1. **Source code positions** (character ranges in the editor) 2. **Musical events** (called "Haps" in Tidal terminology) When a Hap becomes active (i.e., its time span includes "now"), the corresponding source code is visually highlighted with a colored outline. This creates an immediate, visceral connection between notation and sound. (For an interactive demonstration of this mapping, see Strudel's [mini-notation documentation](https://strudel.cc/learn/mini-notation).) ## 2. Prior Art and Related Work ### 2.1 Gibber: Annotations and Visualizations Charlie Roberts' research on Gibber represents the closest prior art to Strudel's approach.
In his 2015 NIME paper "Beyond Editing: Extended Interaction with Textual Code Fragments," Roberts describes techniques for: - **Inline annotations**: Displaying runtime values directly in source code - **Sparkline visualizations**: Showing continuous signals as miniature graphs - **Pattern playhead indicators**: Visual markers showing current position in sequences Roberts' key insight was that source code could serve dual purposes: as notation (what to play) and as visualization (what is playing). His 2018 web essay "Realtime Annotations & Visualizations in Live Coding Performance" catalogs these techniques extensively. However, Gibber's approach differs from Strudel's in a crucial way: Gibber's annotations are **injected widgets** that appear alongside or within code, whereas Strudel's highlighting **decorates the existing code** without modifying its visual structure. ### 2.2 Estuary: Collaborative Projectional Editing The Estuary project (Ogborn et al., 2017) explored collaborative live coding in the browser with TidalCycles-style patterns. While Estuary focused primarily on collaborative editing and "projectional" interfaces, it contributed to the ecosystem of browser-based live coding that Strudel builds upon. ### 2.3 Traditional Debugger Approaches Traditional debugging tools have long provided execution visualization: - **Breakpoint highlighting**: Current line indicators in step debuggers - **Coverage highlighting**: Showing which lines were executed - **Profiler heat maps**: Coloring code by execution frequency These approaches treat code as sequential instructions. Strudel's challenge is different: patterns are **declarative and cyclical**, not imperative and linear. A pattern like `"bd hh"` doesn't execute once - it continuously generates events across time. 
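That declarative, cyclical behavior can be made concrete in a few lines. The sketch below is not Strudel's API — `pattern` and the hap shape are invented for illustration — but it captures the key property: a pattern is a pure function of a query window that can answer for any span of time, so cycle 1 contains the same events as cycle 0, shifted forward.

```javascript
// A pattern as a pure function of a query window [begin, end):
// it returns the events ("haps") that overlap the window.
// Nothing executes once; the same pattern answers any window, forever.
const pattern = (steps) => (begin, end) => {
  const haps = [];
  for (let cycle = Math.floor(begin); cycle < end; cycle++) {
    steps.forEach((value, i) => {
      const start = cycle + i / steps.length;      // step begins here
      const stop = cycle + (i + 1) / steps.length; // and ends here
      if (start < end && stop > begin) haps.push({ start, stop, value });
    });
  }
  return haps;
};

const bdHh = pattern(['bd', 'hh']); // loosely analogous to "bd hh"
console.log(bdHh(0, 1)); // bd over [0, 0.5), hh over [0.5, 1)
console.log(bdHh(1, 2)); // the same two events, one cycle later
```

This is why debugger-style "current line" highlighting cannot work here: there is no instruction pointer to follow, only a set of events that happen to overlap "now".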
### 2.4 CodeMirror Decoration System Strudel's implementation leverages CodeMirror 6's decoration system, which provides: - **Mark decorations**: Styling ranges of text - **Widget decorations**: Inserting DOM elements - **Line decorations**: Styling entire lines - **Computed decorations**: Dynamically generated based on state The key innovation in CodeMirror 6 is that decorations are computed from **immutable state**, enabling efficient updates without DOM thrashing. ## 3. Architecture Overview Strudel's live highlighting system consists of five interconnected components: ``` ┌─────────────────────────────────────────────────────────────────┐ │ USER CODE │ │ s("bd hh").note("c") │ └──────────────────────────────┬──────────────────────────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────┐ │ TRANSPILER │ │ Parses code, extracts miniLocations: [[3, 10], [17, 20]] │ └──────────────────────────────┬──────────────────────────────────┘ │ ┌───────────────┴───────────────┐ ▼ ▼ ┌──────────────────────────┐ ┌─────────────────────────────────┐ │ PATTERN ENGINE │ │ CODEMIRROR │ │ Attaches locations to │ │ Stores locations as invisible │ │ Haps via context │ │ decorations with IDs │ └────────────┬─────────────┘ └──────────────┬──────────────────┘ │ │ ▼ │ ┌──────────────────────────┐ │ │ SCHEDULER │ │ │ Queries pattern for │ │ │ active haps @ 60fps │ │ └────────────┬─────────────┘ │ │ │ ▼ │ ┌──────────────────────────┐ │ │ DRAWER │ │ │ Animation loop calls │───────────────────┘ │ highlight(haps, time) │ └────────────┬─────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────┐ │ VISUAL HIGHLIGHTING │ │ CodeMirror computes decorations for active locations │ │ User sees: s("bd hh").note("c") │ │ ^^^^^^ ← outlined while sounding │ └─────────────────────────────────────────────────────────────────┘ ``` ## 4. 
Implementation Deep Dive ### 4.1 Source Location Extraction The transpiler (`packages/transpiler/transpiler.mjs`) uses Acorn for JavaScript parsing and Peggy for mini notation parsing. During transpilation, it captures the character positions of each pattern element: ```javascript export function transpiler(input, options = {}) { const { emitMiniLocations = true } = options; let miniLocations = []; const collectMiniLocations = (value, node) => { const leafLocs = getLeafLocations(`"${value}"`, node.start, input); miniLocations = miniLocations.concat(leafLocs); }; // Walk AST, collect locations for mini notation strings... return { output, miniLocations, widgets, sliders, labels }; } ``` The output format is an array of `[start, end]` tuples representing character ranges: ```javascript // For: s("bd hh").note("c3") miniLocations = [ [3, 5], // "bd" [6, 8], // "hh" [17, 19], // "c3" ] ``` ### 4.2 Location Propagation Through Patterns Patterns in Strudel carry a `context` object that propagates through all transformations. The `withLoc()` method attaches source positions: ```javascript // packages/core/pattern.mjs withLoc(start, end) { const location = { start, end }; return this.withContext((context) => { const locations = (context.locations || []).concat([location]); return { ...context, locations }; }); } ``` This is crucial: when patterns are composed (via `stack`, `cat`, `slow`, `fast`, etc.), the location context is preserved and combined. A Hap created from `s("bd").stack(s("hh"))` will carry locations from *both* source patterns. ### 4.3 The Hap Data Structure A Hap (Happening) is the fundamental event unit in Tidal/Strudel: ```javascript // packages/core/hap.mjs class Hap { constructor(whole, part, value, context = {}, stateful = false) { this.whole = whole; // TimeSpan: when the event "conceptually" occurs this.part = part; // TimeSpan: the portion being queried this.value = value; // The sound/note/control data this.context = context; // Includes locations! 
this.stateful = stateful; } isActive(time) { return this.whole.begin <= time && this.endClipped >= time; } } ``` The `context.locations` array enables reverse-mapping from event to source code. ### 4.4 CodeMirror State Management Strudel uses three CodeMirror 6 StateFields to manage highlighting: #### 4.4.1 `miniLocations` - Structural Storage ```javascript // packages/codemirror/highlight.mjs const miniLocations = StateField.define({ create() { return Decoration.none; }, update(locations, tr) { for (let e of tr.effects) { if (e.is(setMiniLocations)) { const marks = e.value.locations.map(([from, to]) => Decoration.mark({ id: `${from}:${to}`, // Key for matching! attributes: { style: `background-color: #00CA2880` }, }).range(from, to) ); locations = Decoration.set(marks, true); } } return locations; }, }); ``` Each decoration carries an `id` in the format `"start:end"`, enabling O(1) lookup when matching active Haps. #### 4.4.2 `visibleMiniLocations` - Active Hap Tracking ```javascript const visibleMiniLocations = StateField.define({ create() { return { atTime: 0, haps: new Map() }; }, update(visible, tr) { for (let e of tr.effects) { if (e.is(showMiniLocations)) { const haps = new Map(); for (let hap of e.value.haps) { if (!hap.context?.locations || !hap.whole) continue; for (let { start, end } of hap.context.locations) { const id = `${start}:${end}`; // Keep the most recently started hap for each location if (!haps.has(id) || haps.get(id).whole.begin.lt(hap.whole.begin)) { haps.set(id, hap); } } } visible = { atTime: e.value.atTime, haps }; } } return visible; }, }); ``` This builds a Map from location IDs to their currently active Haps.
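Stripped of CodeMirror's state machinery, the matching step reduces to a small pure function. This is an illustration of the `"start:end"` keying scheme only — `activeIdsFor` and the flat hap objects are invented for the sketch, and plain numbers stand in for Strudel's Fraction-based time values:

```javascript
// Given haps (each carrying the source ranges it came from) and the
// current time, compute the set of location IDs that should be outlined.
// The `${start}:${end}` string is the same key the decorations carry,
// which is what makes the lookup O(1) per active hap.
const activeIdsFor = (haps, time) => {
  const ids = new Set();
  for (const hap of haps) {
    if (!hap.locations || !(hap.begin <= time && time < hap.end)) continue;
    for (const [start, end] of hap.locations) {
      ids.add(`${start}:${end}`);
    }
  }
  return ids;
};

// For: s("bd hh") — "bd" lives at [3, 5], "hh" at [6, 8]
const haps = [
  { begin: 0.0, end: 0.5, locations: [[3, 5]] }, // bd
  { begin: 0.5, end: 1.0, locations: [[6, 8]] }, // hh
];
console.log(activeIdsFor(haps, 0.25)); // contains '3:5' — "bd" is sounding
console.log(activeIdsFor(haps, 0.75)); // contains '6:8' — "hh" is sounding
```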
#### 4.4.3 `miniLocationHighlights` - Computed Decorations ```javascript const miniLocationHighlights = EditorView.decorations.compute( [miniLocations, visibleMiniLocations, displayMiniLocationsState], (state) => { const shouldDisplay = state.field(displayMiniLocationsState); if (!shouldDisplay) return Decoration.none; const iterator = state.field(miniLocations).iter(); const { haps } = state.field(visibleMiniLocations); const builder = new RangeSetBuilder(); while (iterator.value) { const { from, to, value: { spec: { id } } } = iterator; if (haps.has(id)) { const hap = haps.get(id); const color = hap.value?.color ?? 'var(--foreground)'; const style = hap.value?.markcss || `outline: solid 2px ${color}`; builder.add(from, to, Decoration.mark({ attributes: { style } })); } iterator.next(); } return builder.finish(); }, ); ``` This is the heart of the system: it iterates through all stored locations, checks which have active Haps, and generates visible outline decorations for those. ### 4.5 The Animation Loop The Drawer class (`packages/draw/draw.mjs`) runs at `requestAnimationFrame` speed (~60fps): ```javascript export class Drawer { constructor(onDraw, drawTime) { this.framer = new Framer(() => { const phase = this.scheduler.now() + lookahead; // Query the pattern for currently active haps const haps = this.scheduler.pattern.queryArc( Math.max(this.lastFrame, phase - 1/10), phase ); // Maintain visible haps with lookbehind for visual continuity this.visibleHaps = (this.visibleHaps || []) .filter(h => h.whole && h.endClipped >= phase - lookbehind - lookahead) .concat(haps.filter(h => h.hasOnset())); // Call the callback with current state onDraw(this.visibleHaps, phase - lookahead, this, this.painters); }); } } ``` The StrudelMirror class connects this to CodeMirror: ```javascript // packages/codemirror/codemirror.mjs this.drawer = new Drawer((haps, time, _, painters) => { const currentFrame = haps.filter((hap) => hap.isActive(time)); this.highlight(currentFrame, 
time); // Updates CodeMirror! this.onDraw(haps, time, painters); }, drawTime); ``` ## 5. Design Decisions and Trade-offs ### 5.1 Location IDs vs. Direct References Strudel uses string-based location IDs (`"start:end"`) rather than direct object references. This enables: - **Serialization**: Locations can be stored, transmitted, and compared - **Immutability**: CodeMirror state remains pure - **Efficiency**: Map lookups are O(1) The trade-off is potential collisions if two distinct code elements share identical positions (rare in practice). ### 5.2 Outline vs. Background Highlighting Strudel uses CSS `outline` rather than `background-color` for active highlighting: ```javascript const style = hap.value?.markcss || `outline: solid 2px ${color}`; ``` Outlines don't affect layout, don't obscure text, and can overlap without visual confusion. This is important when multiple elements are active simultaneously. ### 5.3 60fps Update Rate The animation loop runs at display refresh rate. This provides smooth visual updates but requires careful performance optimization: - **Query window limiting**: Only query a small time window - **Hap filtering**: Remove old haps from the visible list - **Efficient state updates**: CodeMirror's computed decorations only recompute when dependencies change ### 5.4 Color Propagation Hap colors come from the pattern's `value` object: ```javascript const color = hap.value?.color ?? 'var(--foreground)'; ``` This enables patterns like: ```javascript s("bd hh").color("red blue") ``` Where different elements of the pattern can have different highlight colors. ## 6. Comparison with Alternative Approaches ### 6.1 Widget Injection (Gibber-style) **Approach**: Insert DOM elements alongside or within code to show state. 
**Pros**: - Can display rich information (graphs, sliders) - Doesn't require pattern-to-source mapping **Cons**: - Changes visual layout - Requires custom parser/renderer - Can clutter the editor ### 6.2 Line-Level Highlighting (Debugger-style) **Approach**: Highlight entire lines that are "active." **Pros**: - Simple to implement - Works with any language **Cons**: - Too coarse for pattern-based music - Patterns often span multiple elements on one line ### 6.3 Cursor Following (Sequencer-style) **Approach**: Move a cursor/playhead across the code as time progresses. **Pros**: - Intuitive for linear sequences - Familiar from DAWs **Cons**: - Doesn't work for polyphonic patterns - Assumes left-to-right temporal ordering ### 6.4 Strudel's Approach (Temporal Source Mapping) **Approach**: Map musical events to source positions, highlight active positions. **Pros**: - Works for polyphonic, polyrhythmic patterns - Preserves code readability - Scales to complex compositions **Cons**: - Requires deep integration with pattern engine - Performance overhead from continuous updates - Limited to pattern elements (not arbitrary expressions) ## 7. Performance Characteristics ### 7.1 Time Complexity | Operation | Complexity | |-----------|------------| | Location extraction (transpile) | O(n) where n = AST nodes | | Location storage (CodeMirror) | O(m) where m = locations | | Active hap lookup | O(k) where k = active haps | | Decoration computation | O(m) worst case, typically O(k) | ### 7.2 Space Complexity | Data Structure | Space | |----------------|-------| | miniLocations array | O(m) | | CodeMirror decorations | O(m) | | Active haps Map | O(k) | ### 7.3 Observed Behavior In practice, with typical live coding patterns (10-50 locations), the system maintains smooth 60fps updates. Performance degrades with: - Very long code (thousands of locations) - High-density patterns (hundreds of simultaneous events) - Low-powered devices ## 8. 
Extensibility and Customization ### 8.1 Custom Styling Users can customize highlight appearance via `.markcss()`: ```javascript s("bd hh").markcss("background: rgba(255,0,0,0.3); border-radius: 4px") ``` ### 8.2 Toggle Control Highlighting can be enabled/disabled: ```javascript // Via CodeMirror effect view.dispatch({ effects: displayMiniLocations.of(false) }); ``` ### 8.3 Integration with Other Visualizations The same Hap data drives other visualizations (pianoroll, waveform) via the Drawer's `onDraw` callback, enabling synchronized multi-modal feedback. ## 9. Future Directions ### 9.1 Expression-Level Tracking Currently, only mini notation elements are tracked. Extending to arbitrary JavaScript expressions would enable highlighting for: ```javascript note(Math.random() * 12) // Which random value is playing? ``` ### 9.2 Historical Visualization Showing recently-played elements with fading opacity could provide temporal context beyond the current instant. ### 9.3 Bidirectional Editing Clicking a highlighted element during playback could select that Hap for modification, enabling direct manipulation of playing sounds. ### 9.4 Cross-Editor Support The architecture could be ported to other editors (Monaco, Ace) by abstracting the CodeMirror-specific decoration system. ## 10. Conclusion Strudel's live code highlighting represents a significant advancement in live coding interfaces. By maintaining bidirectional links between source code and musical events, it creates an immediate visual connection between notation and sound that enhances both performance and learning. The key architectural insights are: 1. **Capture locations during parsing** - not execution 2. **Propagate locations through pattern composition** - maintaining context 3. **Use efficient lookup structures** - location IDs enable O(1) matching 4. **Leverage editor decoration systems** - CodeMirror's computed decorations 5. 
**Run at animation frame rate** - smooth visual updates This approach is generalizable to other domains where code produces time-varying output: animation, simulation, robotics, or any system where "which code is active now?" is a meaningful question. ## References 1. Roberts, C., Wright, M., & Kuchera-Morin, J. (2015). "Beyond Editing: Extended Interaction with Textual Code Fragments." NIME 2015. 2. Roberts, C. (2018). "Realtime Annotations & Visualizations in Live Coding Performance." Web Essay. https://charlieroberts.github.io/annotationsAndVisualizations/ 3. Ogborn, D., Beverley, J., del Angel, L. N., Tsabary, E., & McLean, A. (2017). "Estuary: Browser-based Collaborative Projectional Live Coding of Musical Patterns." ICLC 2017. 4. Roos, F. & McLean, A. (2023). "Strudel: Live Coding Patterns on the Web." ICLC 2023. 5. CodeMirror. "Decoration Example." https://codemirror.net/examples/decoration/ 6. Roberts, C. & Wakefield, G. (2017). "gibberwocky: New Live-Coding Instruments for Musical Performance." NIME 2017. 7. McLean, A. (2014). "Making Programming Languages to Dance to: Live Coding with Tidal." FARM 2014. 8. Xambó, A. & Roma, G. (2024). "Human-machine agencies in live coding for music performance." Journal of New Music Research. ## Appendix A: Key Source Files | File | Purpose | |------|---------| | `packages/transpiler/transpiler.mjs` | Source location extraction | | `packages/core/pattern.mjs` | `withLoc()` method, context propagation | | `packages/core/hap.mjs` | Hap class with `isActive()` | | `packages/core/repl.mjs` | Evaluation and location passing | | `packages/draw/draw.mjs` | Animation loop | | `packages/codemirror/highlight.mjs` | StateFields and decorations | | `packages/codemirror/codemirror.mjs` | StrudelMirror integration | ## Appendix B: Minimal Implementation For developers interested in implementing similar functionality, here's a minimal example: ```javascript // 1. 
During parsing, capture source positions const locations = []; ast.walk(node => { if (isPatternElement(node)) { locations.push([node.start, node.end]); } }); // 2. Attach locations to events class Event { constructor(time, duration, value, locations) { this.time = time; this.duration = duration; this.value = value; this.locations = locations; } isActive(now) { return this.time <= now && now < this.time + this.duration; } } // 3. Animation loop function animate() { const now = scheduler.currentTime(); const activeEvents = events.filter(e => e.isActive(now)); // 4. Update editor decorations const activeLocations = new Set( activeEvents.flatMap(e => e.locations.map(l => `${l[0]}:${l[1]}`)) ); editor.updateDecorations(activeLocations); requestAnimationFrame(animate); } ``` --- *This whitepaper is based on analysis of Strudel version 1.3.0, hosted at [strudel.cc](https://strudel.cc) and [codeberg.org/strudel](https://codeberg.org/strudel).* --- # How to Avoid Shark Bites: A Data-Driven Approach Date: 2026-01-26 URL: https://ddrscott.github.io/blog/2026/shark-bite-prevention/ Tags: data, humor Description: An infographic exploring the most effective shark bite prevention strategy, backed by data from the International Shark Attack File. **The data is in. The results are conclusive.** After reviewing global shark attack statistics, I've discovered the single most effective method for avoiding shark bites: **stay out of the water**.
## The Numbers According to the [International Shark Attack File (ISAF)](https://www.floridamuseum.ufl.edu/shark-attacks/) maintained by the Florida Museum, which contains over 6,800 individual investigations dating back to the 1500s: - **2024**: 47 unprovoked shark attacks worldwide - **2023**: Unprovoked attacks increased with an uptick in fatalities - **2015**: Record year with 98 unprovoked attacks globally Meanwhile, [Shark Spotters](https://sharkspotters.org.za/safety/shark-safety-advice/shark-bite-stats/) reports these regional statistics: - **South Africa since 1905**: 248 unprovoked attacks total - **Cape Peninsula since 1960**: ~28 attacks (roughly 1 every 2 years) - **Primary species involved**: Great white, tiger, and bull sharks ## The Prevention Strategy | Location | Annual Shark Bites | Risk Level | |----------|-------------------|------------| | Swimming in ocean | ~50-100 | Moderate | | Standing on land | 0 | None | The math is simple: **100% of shark bites occur in water**. ## Context Matters Before you cancel your beach trip, consider the actual risk. The ISAF provides comparative data showing shark attacks are statistically less likely than: - Lightning strikes - Tornadoes - Rip currents - Alligator attacks The increase in reported attacks over time correlates more with increased ocean recreation and better reporting systems than with changes in shark behavior. ## Conclusion Yes, this is tongue-in-cheek. Sharks aren't hunting humans—we're not on their menu. But if you want a 100% guaranteed method to avoid shark bites, the data supports one foolproof strategy. Stay dry, friends. 
--- **Sources:** - [International Shark Attack File - Florida Museum](https://www.floridamuseum.ufl.edu/shark-attacks/) - [Shark Spotters - Shark Bite Statistics](https://sharkspotters.org.za/safety/shark-safety-advice/shark-bite-stats/) --- # Browser Agent - Claude Code Plugin for Playwright Date: 2026-01-06 URL: https://ddrscott.github.io/blog/2026/browser-agent-plugin/ Tags: claude-code Description: Stop fighting your browser automation. A context-friendly Playwright plugin for Claude Code. **Stop fighting your browser automation. Start controlling it.** I built [browser-agent](https://github.com/ddrscott/wiz-marketplace) because existing solutions frustrated me: - **Microsoft's [playwright-mcp](https://github.com/microsoft/playwright-mcp)** dumps ~16k tokens of accessibility tree on every call - **Other Playwright skills** open and close the browser constantly, losing your session I wanted something that felt natural—where I could tell Claude to browse Amazon, take over manually to poke around, then hand control back without losing my session or burning context. ## The Solution A persistent Chrome process that survives script execution. Claude connects, does work, disconnects. The browser stays open. ``` ┌─────────────────┐ ┌──────────────────────┐ │ /browser cmd │──────────│ Chrome (persistent) │ │ or Task agent │ CDP │ --remote-debugging │ └─────────────────┘ └──────────────────────┘ │ ▲ ▼ │ script runs stays open then disconnects user can interact ``` ## Quick Demo ```bash # Navigate /browser go to amazon.com # Search /browser search for VR headsets # Extract info /browser what's the top pick on this page? # Take over manually, browse around... # Hand back to Claude /browser click Add to Cart ``` The browser window stays visible. You can interact with it anytime. Claude reconnects to the same session—cookies, logins, everything persists. 
## Why It Works | Problem | Old Way | Browser Agent | |---------|---------|---------------| | Context bloat | 16k tokens per call | Only script results | | Browser lifecycle | Opens/closes constantly | Persistent via CDP | | User handoff | Locked out during automation | Take over anytime | | API limitations | Predefined commands only | Full Playwright API | Claude writes custom Python scripts for each task using the complete Playwright API. No artificial limits. ## Architecture The plugin uses Claude Code's native patterns: - **Agent** (`agents/browser.md`) - System prompt with Playwright API reference - **Skill** (`skills/browser.md`) - Delegation patterns for the main agent - **Command** (`commands/browser.md`) - The `/browser` slash command - **Script** (`scripts/browser.py`) - Thin executor managing Chrome lifecycle The script is intentionally minimal—it handles the boring infrastructure (starting Chrome, CDP connection, PID tracking) so Claude's generated scripts stay focused on the actual task. ## Get It ```bash # Clone the marketplace git clone https://github.com/ddrscott/wiz-marketplace # Install the plugin claude plugin install wiz-marketplace/browser-agent ``` Prerequisites: Chrome, Python 3.11+, [uv](https://docs.astral.sh/uv/) ## What's Next This is part of [wiz-marketplace](https://github.com/ddrscott/wiz-marketplace), a collection of Claude Code plugins built with context efficiency in mind. More plugins coming. If you build something cool with it, let me know. --- # md-to-print: Because Screen Reading Killed My Eyes Date: 2026-01-03 URL: https://ddrscott.github.io/blog/2026/md-to-print-markdown-pdf-generator/ Tags: cli, python Description: A markdown to PDF converter that actually looks good on paper. 2-column layout, large fonts, automatic file watching, and a launchd setup that makes it invisible. ## The Problem With Reading on Screens I read a lot of markdown. Documentation, notes, LLM output, technical specs. 
And I'm tired of squinting at screens. My eyes are getting older, and backlights are merciless.

"Just print it," I thought. So I did. And it looked like garbage.

Most markdown-to-PDF tools are designed for web developers who want their docs to look exactly like GitHub renders them. Small fonts. Single column. Infinite scroll mentality. That's great for screens. It's terrible for paper.

## What I Actually Wanted

Paper has different constraints than screens:

1. **Fixed width** - no responsive design, just 8.5 inches
2. **Two eyes** - humans can scan across columns efficiently
3. **No backlight** - contrast matters, but you can go bigger
4. **Physical presence** - a printed doc sits on your desk demanding attention

So I built `md-to-print` with these opinions baked in:

- **2-column layout** like a newspaper or textbook
- **Large fonts** for aging eyes (9.5pt base, generous leading)
- **High contrast** cream paper with dark ink
- **Smart pagination** that doesn't split code blocks or list items mid-page

## The CSS Magic

WeasyPrint gives you real CSS `@page` rules, which most web devs have never touched. Here's the good stuff:

```css
@page {
  size: letter portrait;
  margin: 0.75in 0.6in 0.85in 0.6in;

  @top-center {
    content: string(doc-title);
    font-family: var(--font-mono);
  }

  @bottom-center {
    content: counter(page);
  }
}
```

The `string-set` property lets you pull content from the document into running headers:

```css
h1 {
  string-set: doc-title content();
}
```

So the first `<h1>` becomes your document title in the header. Magic.

## The Real Win: Watch Mode

Converting files manually is for chumps. The real power is in watch mode:

```bash
md-to-print --watch ~/Downloads
```

Now any markdown file that lands in your Downloads folder automatically becomes a PDF. But who wants to remember to start that every time?

## Set It and Forget It: launchd

macOS has a perfectly good service manager that nobody uses. Here's how to make `md-to-print` run forever:

```bash
# Install globally
uv tool install md-to-print

# Create the LaunchAgent
cat > ~/Library/LaunchAgents/com.md-to-print.watch.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.md-to-print.watch</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/YOU/.local/bin/md-to-print</string>
        <string>--watch</string>
        <string>/Users/YOU/Downloads</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
EOF

# Load it
launchctl load ~/Library/LaunchAgents/com.md-to-print.watch.plist
```

Now it starts on login and restarts if it crashes. You'll never think about it again until you notice PDFs appearing next to your markdown files.

## The Aesthetic

I didn't want generic. The styling uses my personal design system:

- **JetBrains Mono** for headings and code
- **Source Serif 4** for body text
- **Signal Orange** (`#E85D00`) for accents
- Headers prefixed with `//` like code comments

Every `<h2>` looks like this:

```css
h2::before {
  content: "// ";
  color: var(--orange);
}
```

It's opinionated. It's mine. Fork it and make it yours.

## Bonus: Mermaid Diagrams

If you have `mmdc` (mermaid-cli) installed, diagrams Just Work:

~~~markdown
```mermaid
graph LR
    A[Markdown] --> B[HTML]
    B --> C[PDF]
```
~~~

The diagrams render to SVG and span both columns so they don't get squished.

## Installation

```bash
git clone https://github.com/ddrscott/md-to-print
cd md-to-print
uv sync

# Convert a file
uv run md-to-print document.md

# Watch a directory
uv run md-to-print --watch ./docs/
```

Or install globally and use anywhere:

```bash
uv tool install -e /path/to/md-to-print
md-to-print --watch ~/Downloads
```

## The Takeaway

Stop fighting screens. Some things are better on paper.

`md-to-print` is extremely opinionated because I built it for myself. Large fonts for old eyes. Two columns for efficient reading. Orange accents because I like orange.

If your eyes are tired and your printer is collecting dust, give it a shot. At minimum, you'll have prettier documentation. At best, you'll remember what it's like to read without a backlight.

---

Source: [github.com/ddrscott/md-to-print](https://github.com/ddrscott/md-to-print)

---

# Go SQL Injection: A Tale of Two Queries

Date: 2025-12-13
URL: https://ddrscott.github.io/blog/2025/go-sql-injection-the-wrong-way/
Tags: sql
Description: Learn how to build dynamic SQL filters in Go the right way by first seeing the wrong way. A practical guide to avoiding SQL injection with parameterized queries.

Wouldn't it be great if every developer just *knew* not to concatenate user input into SQL strings? Like, built into our DNA alongside the instinct to breathe and the compulsion to argue about tabs vs spaces?

Alas, we're not there yet. So let's build a car search filter in Go and do it wrong first. Because nothing teaches quite like watching things explode.

## The Setup

Picture this: You're building a car dealership website.
Users want to filter cars by cost, color, and year. Simple enough. You reach for `fmt.Sprintf` because it's right there, it's familiar, it's... about to ruin your day.

## The Wrong Way (Please Don't Do This)

```go
package main

import (
    "fmt"
)

// DON'T DO THIS - SQL INJECTION VULNERABLE
func buildQueryWrong(filters map[string]string) string {
    query := "SELECT * FROM cars WHERE 1=1"

    // Allowed filter fields - at least we tried
    allowedFields := map[string]bool{
        "cost":  true,
        "color": true,
        "year":  true,
    }

    for field, value := range filters {
        if allowedFields[field] && value != "" {
            // DANGER: Direct string concatenation!
            // This is the line that gets you on the news.
            query += fmt.Sprintf(" AND %s = '%s'", field, value)
        }
    }

    return query
}

func main() {
    // Normal usage looks fine...
    filters := map[string]string{
        "color": "red",
        "year":  "2020",
    }
    fmt.Println("Normal query:")
    fmt.Println(buildQueryWrong(filters))

    // But an attacker can inject SQL!
    maliciousFilters := map[string]string{
        "color": "red'; DROP TABLE cars; --",
    }
    fmt.Println("\nMalicious query (INJECTION!):")
    fmt.Println(buildQueryWrong(maliciousFilters))
}
```

Run this and weep:

```
Normal query:
SELECT * FROM cars WHERE 1=1 AND color = 'red' AND year = '2020'

Malicious query (INJECTION!):
SELECT * FROM cars WHERE 1=1 AND color = 'red'; DROP TABLE cars; --'
```

See that `DROP TABLE cars`? That's not a filter. That's a resume-generating event. The `--` at the end comments out anything after it, so your carefully crafted query becomes a two-statement demolition derby. First it selects red cars, then it deletes every car you've ever known.

## Why This Happens

The fundamental problem: we're mixing **code** (SQL structure) with **data** (user values).
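You can watch both sides of that mix play out against a real database engine. Here's a quick illustration using Python's stdlib `sqlite3` (Python only so the snippet is self-contained; the idea is identical in Go):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (color TEXT)")
conn.execute("INSERT INTO cars VALUES ('red')")

payload = "red'; DROP TABLE cars; --"

# Concatenated: the payload is parsed as SQL. (executescript allows
# multiple statements, as many real drivers do.)
conn.executescript(f"SELECT * FROM cars WHERE color = '{payload}'")
dropped = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 'cars'"
).fetchone()[0] == 0
print(dropped)  # True - the injected DROP TABLE ran

# Parameterized: the exact same payload is bound as plain data.
conn.execute("CREATE TABLE cars (color TEXT)")
conn.execute("INSERT INTO cars VALUES ('red')")
rows = conn.execute("SELECT * FROM cars WHERE color = ?", (payload,)).fetchall()
print(rows)  # [] - no car has that "color", and the table survives
```

Same payload, two outcomes: concatenation hands the attacker the SQL parser; a bound parameter never reaches it.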
When you concatenate strings, the database can't tell the difference between:

- `'red'` - a color the user wants
- `'red'; DROP TABLE cars; --'` - a cry for help wrapped in malice

## The Right Way (Parameterized Queries)

The fix is elegant: keep code and data separate. Let the database driver handle escaping.

Here's a proper implementation:

```go
package main

import (
    "context"
    "database/sql"
    "fmt"
    "strings"

    _ "github.com/lib/pq" // PostgreSQL driver
)

// FilterField defines allowed fields and their SQL operators
type FilterField struct {
    Column   string
    Operator string // "=", ">=", "<=", "LIKE", etc.
}

// Whitelist of allowed filter fields
// This is your bouncer. Nobody gets in without being on the list.
var allowedFilters = map[string]FilterField{
    "cost":     {Column: "cost", Operator: "<="},
    "cost_min": {Column: "cost", Operator: ">="},
    "color":    {Column: "color", Operator: "="},
    "year":     {Column: "year", Operator: "="},
    "year_min": {Column: "year", Operator: ">="},
    "year_max": {Column: "year", Operator: "<="},
}

// QueryBuilder constructs parameterized SQL queries
type QueryBuilder struct {
    conditions []string
    args       []any
    paramCount int
}

func NewQueryBuilder() *QueryBuilder {
    return &QueryBuilder{
        conditions: make([]string, 0),
        args:       make([]any, 0),
        paramCount: 0,
    }
}

func (qb *QueryBuilder) AddFilter(filterKey string, value any) bool {
    field, ok := allowedFilters[filterKey]
    if !ok || value == nil || value == "" {
        return false // Not on the list? Not my problem.
    }

    qb.paramCount++
    // $1, $2, $3... for PostgreSQL
    // Use ? for MySQL, @p1 for SQL Server
    placeholder := fmt.Sprintf("$%d", qb.paramCount)
    condition := fmt.Sprintf("%s %s %s", field.Column, field.Operator, placeholder)

    qb.conditions = append(qb.conditions, condition)
    qb.args = append(qb.args, value)
    return true
}

func (qb *QueryBuilder) Build(baseQuery string) (string, []any) {
    if len(qb.conditions) == 0 {
        return baseQuery, qb.args
    }
    whereClause := strings.Join(qb.conditions, " AND ")
    return baseQuery + " WHERE " + whereClause, qb.args
}

// SearchCars demonstrates real database usage
func SearchCars(ctx context.Context, db *sql.DB, filters map[string]any) (*sql.Rows, error) {
    qb := NewQueryBuilder()
    for key, value := range filters {
        qb.AddFilter(key, value)
    }
    query, args := qb.Build("SELECT id, make, model, color, year, cost FROM cars")

    // Safe! The driver handles all escaping.
    return db.QueryContext(ctx, query, args...)
}

func main() {
    // Demo the query building
    filters := map[string]any{
        "color":    "red",
        "year_min": 2018,
        "cost":     50000,
    }

    qb := NewQueryBuilder()
    for key, value := range filters {
        qb.AddFilter(key, value)
    }
    query, args := qb.Build("SELECT * FROM cars")

    fmt.Println("Safe Query:")
    fmt.Println(query)
    fmt.Printf("Parameters: %v\n", args)

    // Even malicious input is safe now
    fmt.Println("\n--- Attempted Attack ---")
    maliciousFilters := map[string]any{
        "color": "red'; DROP TABLE cars; --",
    }

    qb2 := NewQueryBuilder()
    for key, value := range maliciousFilters {
        qb2.AddFilter(key, value)
    }
    query2, args2 := qb2.Build("SELECT * FROM cars")

    fmt.Println("Query:")
    fmt.Println(query2)
    fmt.Printf("Parameters: %v\n", args2)

    fmt.Println("\nThe malicious string is just a literal value now.")
    fmt.Println("Good luck finding a car with that color.")
}
```

Output:

```
Safe Query:
SELECT * FROM cars WHERE color = $1 AND year >= $2 AND cost <= $3
Parameters: [red 2018 50000]

--- Attempted Attack ---
Query:
SELECT * FROM cars WHERE color = $1
Parameters: [red'; DROP TABLE cars; --]

The malicious string is just a literal value now.
Good luck finding a car with that color.
```

The attacker's payload becomes a literal string. The database looks for a car with the color `red'; DROP TABLE cars; --` and finds nothing. Your tables survive. Your job survives.

## The Cheat Sheet

| Wrong Way | Right Way |
|-----------|-----------|
| `fmt.Sprintf("... = '%s'", value)` | `db.Query("... = $1", value)` |
| Values embedded in SQL string | Values passed as separate arguments |
| Database sees `DROP TABLE` as SQL | Database sees `DROP TABLE` as text |
| Career-limiting | Career-enhancing |

## Rules to Live By

1. **Never** use string concatenation for SQL with user input
2. **Always** use parameterized queries (`$1`, `?`, `@p1`)
3. **Whitelist** column names - don't let users specify arbitrary columns
4. **Test** with malicious input before your users do

## Placeholder Syntax by Database

Because nothing in databases is ever consistent:

| Database | Placeholder |
|----------|-------------|
| PostgreSQL | `$1`, `$2`, `$3` |
| MySQL | `?`, `?`, `?` |
| SQL Server | `@p1`, `@p2`, `@p3` |
| SQLite | `?` or `$1` (both work) |

## Closing Thoughts

SQL injection has been in the OWASP Top 10 since... well, since there was an OWASP Top 10. It's a solved problem. The solution has been known for decades. And yet, here we are, still writing blog posts about it.

Maybe someday we'll have programming languages that make the wrong thing impossible instead of just difficult. Until then, we have parameterized queries and the eternal vigilance of code review.

Now go audit your codebase. I'll wait.

---

*Have a SQL war story? Found this useful? Drop a comment below. Misery loves company, and so does learning.*

---

# The Greatness of Nano Banana 2

Date: 2025-11-23
URL: https://ddrscott.github.io/blog/2025/nano-banana-2/
Tags: ai
Description: Fal.ai's Nano Banana 2 model generates images from text in seconds. Here's how I made a professional business card with a single API call.
## When Life Gives You Bananas

Fal.ai recently released [Nano Banana 2](https://fal.ai/models/fal-ai/nano-banana-pro), and I'm here to tell you: this thing actually *reads*. Not "reads" as in "it's entertaining." I mean the text it generates is actually readable. If you've spent any time with image generation models, you know this is borderline miraculous. Most models turn "Hello World" into "HƎLLO WØRLD" like they're having a typographic seizure.

I needed a business card mockup for an AI answering service I'm playing with. In the old days (you know, six months ago), I would have:

1. Opened Figma
2. Stared at a blank canvas
3. Questioned my life choices
4. Googled "business card dimensions"
5. Actually designed something mediocre
6. Spent another hour tweaking kerning

Instead, I wrote a prompt and let the banana do its thing.

## The Code

```python
import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/nano-banana-pro",
    arguments={
        "prompt": """Minimalist business card design, horizontal orientation,
white background with subtle texture. Clean corporate style inspired by
Dieter Rams and Swiss design principles.

Left side: Bold headline text "Never Miss Another Call" in dark charcoal
gray (#333), lightweight sans-serif font. Below that, three small bullet
points in lighter gray: "24/7 AI-powered answering", "Trained on your
business", "Books appointments automatically".

Right side: Large phone number "847-686-3116" in vibrant orange (#ff8c00)
as the focal point, bold weight. Above it in small uppercase gray text:
"FREE DEMO". Below in small italic gray text: "Discover what you're missing".

Bottom: Thin light gray separator line. Footer text in small caps:
"Scott @ Left Join Studio, Inc. // scott@leftjoinstudio.com"

Style: Professional, modern, high contrast, generous whitespace, no
gradients or shadows, print-ready at 3.5 x 2 inches, 300 DPI.
Typography-focused design only.

Color palette: White background, charcoal gray text (#333), medium gray
accents (#666), single orange accent color (#ff8c00).""",
        "num_images": 1,
        "aspect_ratio": "16:9",
        "output_format": "png",
        "resolution": "2K"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)
```

That's it. The entire "design" process was writing a detailed prompt describing what I wanted. No layers, no export settings, no "wait, where did I put that font file?"

## Why Nano Banana?

A few things make this model stand out:

1. **Text rendering that actually works**: Look at that business card. Every word is legible. The phone number is correct. The email address doesn't have random Unicode characters sprinkled in. This shouldn't be impressive, but here we are in 2025 celebrating an AI that can spell.
2. **Following instructions**: I asked for specific hex colors, specific layout, specific typography style. It delivered. No "creative interpretation" where my minimalist design suddenly has a gradient sunset and a wolf howling at the moon.
3. **Typography awareness**: The model understands visual hierarchy. Headlines are bold, body text is lighter, the phone number pops because it's the call to action. It's not just placing text - it's *designing* with text.
4. **The API**: Fal.ai's client is dead simple. Subscribe to a model, pass arguments, get results. No wrestling with base64 encoding or multipart form uploads.

## The Result

The business card came out surprisingly usable. Would I send it to a print shop as-is? Maybe not. But for mockups, presentations, or "let me show you what I'm thinking" conversations, it's perfect.

The real win is that every piece of text is correct. The phone number matches what I asked for. The email is spelled right. The bullet points say exactly what I typed.

For anyone who's rage-quit Midjourney after the fifteenth attempt to get "OPEN 24 HOURS" to not render as "OPƎN 24 HØURS", this is the promised land.
## The Banana Bunch

Fal.ai has a whole ecosystem of these fast inference models. Nano Banana is just one flavor. They've got text-to-image, image-to-image, upscaling, background removal - the whole smoothie bar.

The pricing is also reasonable enough that you can actually experiment without selling a kidney. Which is important when you're generating 47 variations of a business card because "maybe the orange should be more... orangey."

---

Give [Nano Banana 2](https://fal.ai/models/fal-ai/nano-banana-pro) a spin. Even if you don't need AI-generated business cards (and let's be honest, who knew they needed that?), it's worth seeing how fast image generation has become.

What a time to be alive!

---

# RingLite: A Virtual Ring Light That Doesn't Exist

Date: 2025-11-21
URL: https://ddrscott.github.io/blog/2025/ringlite-virtual-ring-light/
Tags: cli
Description: Introducing RingLite - a virtual ring light for video recording that's invisible to screen recorders. Built with Tauri, it floats above your content so you can see your script while looking well-lit.

## The Problem With Real Ring Lights

If you've ever done video recording or live streaming, you know the struggle. You set up your fancy ring light, position it *just* right, and then realize you can't see your notes behind it. Or worse, it's sitting off to the side casting weird shadows that make you look like a villain in a budget horror film.

Physical ring lights are great at one thing: being in the way.

Wouldn't it be great if you could have a ring light that:

1. Floats right where you need it
2. Lets you see your script/notes through the center
3. Doesn't show up in your screen recordings

Well, I got annoyed enough to build exactly that.

## Introducing RingLite

RingLite is a virtual ring light that lives on your screen. It's a simple glowing white ring that you can drag anywhere, resize to fit your face, and - here's the magic part - **it's completely invisible to screen recorders**.
That last bit is the whole point. You can position it right over your webcam preview, read your notes behind it, and your audience never sees it. It's like having a teleprompter and ring light in one invisible package.

## How Does the Invisibility Work?

Both macOS and Windows have APIs that let you exclude windows from screen capture. Most apps don't use them, but they're perfect for this use case.

On macOS, it's a matter of telling the window "don't share yourself":

```rust
// NSWindowSharingType.none
let _: () = msg_send![ns_window, setSharingType: 0];
```

On Windows, there's a similar API:

```rust
SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE);
```

The window still exists, you can still see it, but OBS, Zoom, and every other screen recorder just... don't. It's like the ring light has an invisibility cloak.

## The Controls

I kept it simple:

| Action | Control |
|--------|---------|
| Move | Drag anywhere |
| Resize | Scroll wheel or `+`/`-` |
| Nudge | Arrow keys |
| Help | `H` |
| Quit | `Esc` |

The ring remembers its size between sessions because nobody wants to resize their ring light every single time.

## Built With Tauri

I went with [Tauri](https://tauri.app/) for this because:

1. Cross-platform (macOS and Windows) with native performance
2. Tiny binary size (no Electron bloat)
3. Rust backend means easy access to native APIs for the screen capture exclusion

The whole UI is just HTML/CSS/JS - a glowing ring with a box-shadow. The glow effect is achieved with layered shadows:

```css
#ring {
  border: var(--ring-thickness, 40px) solid white;
  box-shadow:
    0 0 60px 20px rgba(255, 255, 255, 0.3),
    inset 0 0 60px 20px rgba(255, 255, 255, 0.1);
}
```

Nothing fancy, but it looks the part.

## Get It

Download from [GitHub Releases](https://github.com/ddrscott/ringlite/releases).

**macOS users**: Right-click and choose "Open" the first time to bypass Gatekeeper. Apple doesn't trust me, and honestly, that's fair.
## The Takeaway

Sometimes the best tools are the ones that get out of your way - or in this case, stay in your way while pretending they don't exist.

If you're recording videos or doing live streams and fighting with your ring light placement, give RingLite a try. At worst, you've got a free floating glow ring. At best, you'll wonder why this wasn't a thing sooner.

---

Have questions or want to see other "invisible overlay" tools? Let me know in the comments.

---

# tmate is Dead, Long Live tmate

Date: 2025-11-21
URL: https://ddrscott.github.io/blog/2025/tmate-is-dead-long-live-tmate/
Tags: cli
Description: When your favorite terminal sharing tool stops working on your phone, it's time to find something new. A pragmatic guide to browser-based alternatives.

I've been a `tmate` loyalist for years. SSH into a shared tmux session? Chef's kiss. But here's the thing nobody tells you: try pulling that off on your phone while waiting at the DMV and you'll understand why I'm writing this post.

## The Problem

Picture this: I'm debugging a production issue from my kid's tennis match. I've got my phone, a spotty 4G connection, and a terminal sharing link from a colleague.

The SSH client on my phone? It's... fine. The tiny keyboard? Torture. The inability to pinch-zoom on terminal output? Criminal.

Wouldn't it be great if I could just open a link in my phone's browser and see the terminal? You know, like how every other modern collaboration tool works?

## The Research Rabbit Hole

I did what any reasonable developer would do: I spent way too long researching alternatives. Here's what I found that actually works for Mac-to-phone terminal sharing via browser.
| Tool | Stars | Language | Last Release | Browser Access | Install | Key Feature |
|------|-------|----------|--------------|----------------|---------|-------------|
| [sshx] | 6,300+ | Rust | Feb 2025 | Shareable URLs | `brew install sshx` | E2E encrypted, infinite canvas |
| [ttyd] | 9,900+ | C | Mar 2024 | localhost:7681 | `brew install ttyd` | Fast, ZMODEM file transfer |
| [tty-share] | 857 | Go | Jan 2025 | Public URLs | `brew install tty-share` | Zero dependencies |
| [upterm] | 988 | Go | Nov 2025 | SSH over WebSocket | `brew install --cask upterm` | GitHub auth integration |
| [gotty] | 19,000+ | Go | Aug 2017 | localhost:8080 | `brew install gotty` | **Abandoned** |
| [wetty] | ~5,000 | TypeScript | Sep 2023 | SSH in browser | `npm -g i wetty` | Requires Node.js |
| [Warp] | 25,300+ | Rust | Weekly | Cloud-based | `brew install --cask warp` | AI Agent Mode (commercial) |

[sshx]: https://github.com/ekzhang/sshx
[ttyd]: https://github.com/tsl0922/ttyd
[tty-share]: https://github.com/elisescu/tty-share
[upterm]: https://github.com/owenthereal/upterm
[gotty]: https://github.com/yudai/gotty
[wetty]: https://github.com/butlerx/wetty
[Warp]: https://www.warp.dev/

### The Top 3 (That I'd Actually Use)

**1. sshx - The Winner (It's Rust!)**

```sh
brew install sshx
sshx
```

One command. Shareable URL. Done. No ngrok, no port forwarding, no fuss.

The URL has your encryption key embedded in the fragment (the part after `#`), so the server never sees your plaintext. End-to-end encryption without thinking about it.

The "infinite canvas" UI is wild - you can have multiple terminals floating around like some kind of terminal mood board. Live cursors show where other people are looking. There's even a chat. It's like Figma had a baby with tmux.
Why I chose it:

- **Written in Rust** - I try to support the Rust ecosystem when I can
- End-to-end encrypted (paranoid developers rejoice)
- One command to shareable URL
- Active development (Feb 2025 release)
- That infinite canvas thing is actually useful for pair programming

**2. ttyd - The Battle-Tested Alternative**

If you want something more established:

```sh
brew install ttyd
ttyd bash
```

Open `http://localhost:7681` in any browser. For remote access, pair it with ngrok:

```sh
ttyd -p 7681 bash
# In another terminal:
ngrok http 7681
```

Why it's solid:

- Written in C, so it's blazing fast
- Read-only by default (won't accidentally `rm -rf /` from your phone)
- ZMODEM file transfer if you're into that sort of thing
- 9,900+ GitHub stars means someone else debugged the hard stuff

**3. tty-share - The Lightweight One**

```sh
brew install tty-share
tty-share
```

Generates a public URL. Zero dependencies - it's a static Go binary. This is what you want on a Raspberry Pi or any resource-constrained environment.

Why I like it:

- Smallest footprint
- Works anywhere Go compiles
- January 2025 release, so it's not abandoned

### The One I'm Watching

**Warp Terminal** is doing interesting things with AI and collaboration, but it's commercial software and cloud-dependent. If your threat model allows it and you've got $20/month burning a hole in your pocket, it's worth a look. Their "Agent Mode" that accepts natural language commands is genuinely impressive. But I'm a POSIX curmudgeon, so YMMV.

### The Ones to Avoid

**gotty** - 19,000 stars but abandoned since 2017. There's an active fork by sorenisanerd, but why start with technical debt?

**wetty** - Last release September 2023. Requires Node.js. I have enough JavaScript in my life, thank you.

## My Actual Setup

Here's what I ended up with:

```sh
# ~/.zshrc or ~/.bashrc
alias share='sshx'
alias share-readonly='ttyd -R bash'
```

`sshx` for when I'm pair programming with someone I trust.
`ttyd -R` for when I want to show my phone something without risking accidental keystrokes.

## The Security Bit

Look, sharing a terminal to your phone over the internet is inherently risky. Some ground rules:

1. **Read-only by default** - Both ttyd and tty-share support this
2. **Encrypted transport** - sshx does end-to-end, others use TLS
3. **Don't share production servers** - Obvious, but worth saying
4. **Kill the session when done** - `Ctrl+C` is your friend

## Conclusion

Browser-based terminal sharing has gotten good enough that I don't miss the SSH-based workflow on my phone. The tools are mature, actively maintained, and respect the Unix philosophy of doing one thing well.

**sshx is my pick.** It's Rust, it's encrypted, it's actively maintained, and it just works. I like supporting the Rust ecosystem when I can - the language attracts developers who care about correctness and performance, and it shows in the tooling.

If Rust isn't your thing, **ttyd** is the safe bet with years of battle-testing behind it.

What are you using? I'm curious if I missed anything good. Hit me up in the comments or on the socials.

---

# Double Quotes in Novels and Code

Date: 2025-10-29
URL: https://ddrscott.github.io/blog/2025/what-is-a-string/
Tags: beginners, programming, teaching
Description: Strings aren't complicated - they're dialogue for your code

After 20 years of teaching beginners to code, I've noticed they stumble on strings not because strings are hard, but because the quotation marks seem arbitrary. "Why quotes here but not there?" It's a fair question.

Then I realized: **they already know this pattern from reading**. Strings work exactly like dialogue in a novel.

## The Narrator vs. The Character

Look at this sentence from any book:

> The old woman walked into the room. "Hello," she said warmly.

Notice the pattern? The first part tells you **what happened** - it directs your imagination.
The quoted part tells you **what was said** - the actual content, preserved exactly.

Code works the same way:

```java
print("Hello, World!");
```

- `print` (no quotes) = instruction for what to DO
- `"Hello, World!"` (quotes) = content to DISPLAY

The narrator's words direct you. The character's words inform you. In code: instructions direct the computer. Quotes preserve content.

## The Common Mistake

Here's what trips up every beginner:

```java
System.out.println(Hello);
```

Error: `cannot find symbol: Hello`

The computer sees `Hello` without quotes and thinks it's a variable name. But there's no variable called `Hello`, so it fails.

What you meant:

```java
System.out.println("Hello");
```

Now it's clearly data to display, not an instruction.

In writing, this would be like reading:

> The woman walked into the room. Hello, she said warmly.

Your brain stumbles because "Hello" looks like part of the narration, but it's supposed to be the spoken words. Same confusion.

## Variables: References vs. Content

This is where the analogy gets interesting:

```java
String name = "Alice";
System.out.println(name);   // Prints: Alice
System.out.println("name"); // Prints: name
```

`name` without quotes means "the container called name" (which holds "Alice"). `"name"` with quotes is literally just the text "name".

In a story:

```
Alice opened the door.
```

versus

```
"My name is Alice," she said.
```

First case: `Alice` refers to the character (an entity in the story). Second case: `"Alice"` is just text being spoken. Same pattern.

## The Mystery Novel Problem

Here's the interesting part. In fiction, misdirection creates drama:

> Alice approached the security desk.
>
> "Name?" the guard asked.
>
> "My name is Lily," Alice said smoothly.

Great writing. The reader knows Alice is lying. Tension builds.

In code? This creates problems:

```java
String age = "Bob";
int name = 25;
```

Technically valid - Java doesn't care.
But six months later when you're reading this code, you'll waste time figuring out why someone stored a name in a variable called `age`.

### Be Obvious, Not Clever

Unlike mystery novels where plot twists engage readers, in code we want to be obvious.

**Mystery novel approach:**

```java
String x = "data";
int temp = 42;
String s = "user@email.com";
```

**Instruction manual approach:**

```java
String userData = "Alice Smith";
int userAge = 42;
String emailAddress = "user@email.com";
```

Boring? Yes. Clear at 3am? Absolutely.

## A Variable Name is a Promise

When you write `String email`, you're telling everyone (including future-you) that this variable contains an email address. Breaking that promise costs time.

I've seen production bugs caused by variables named `customerEmail` that actually contained names, and `customerName` that contained email addresses. The code worked fine - until someone assumed the names meant what they said.

## The Pattern

**In writing:**

- Narrator's words (no quotes) = directions for what to imagine
- Character's words (quotes) = content to preserve

**In code:**

- Instructions (no quotes) = directions for what to do
- Strings (quotes) = content to preserve

**Variable names in both:**

- Should accurately describe what they reference
- Create clarity, not mystery

## Quick Test

What will these print?

```java
System.out.println("Hello");
System.out.println(Hello);

String word = "Goodbye";
System.out.println(word);
System.out.println("word");
```

**Answers:**

1. `Hello` (the string data)
2. Error (`Hello` without quotes = undefined variable)
3. `Goodbye` (content of the variable)
4. `word` (literally the text "word")

If those make sense, you've got it.

## The Takeaway

Quotes in code work like quotes in writing - they mark content rather than instructions. The computer treats everything outside quotes as commands, everything inside quotes as data.

And like good writing, good code is clear about what it means. Save the plot twists for novels.
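The distinction isn't a Java quirk. Here's the same Quick Test sketched in Python, where the undefined name fails with `NameError` instead of `cannot find symbol` (variable names here are just for illustration):

```python
word = "Goodbye"

print("Hello")   # the string data
print(word)      # contents of the variable
print("word")    # literally the text "word"

# Without quotes, an undefined name is an instruction the
# computer can't follow - Python's version of the same error:
try:
    print(Hello)
except NameError as err:
    print("error:", err)
```

Narration outside the quotes, dialogue inside: the pattern survives the language change.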
Questions? Comments? I'm curious if this parallel works for other people or if I just spent too much time reading mystery novels.

---

# Why are LLMs bad at SQL?

Date: 2025-01-31
URL: https://ddrscott.github.io/blog/2025/why-are-llms-bad-at-sql/
Tags: sql

In response to a [LinkedIn](https://www.linkedin.com/feed/update/urn:li:activity:7290390986973712384?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7290390986973712384%2C7291128101659402240%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287291128101659402240%2Curn%3Ali%3Aactivity%3A7290390986973712384%29) comment by [Sasha](https://www.linkedin.com/in/aleksandar-sasha-dzeletovic/)

Did I hear someone say SQL and LLMs?!

**TL;DR** - Publicly available LLMs are **inherently** bad at SQL. Furthermore, they have little hope of ever getting good at it.

## Why I'm not surprised by the low scores

I'd imagine that since the dawn of the language and the Internet, the amount of high-quality SQL examples with high-quality explanations is zero. Compare that with code in other languages; the ratio must be ∞ to 0. I'm surprised LLMs can even spell SQL :). Even with my own musings on SQL, I round down the quality examples to zero.

When developers are first getting started, they're not looking for tutorials on SQL. They're looking to more popular languages to make Snake games, Flappy Birds, and sort binary trees to cheat on their homework. It's a fool's errand to write about SQL. SQL has a public audience of none.

From business is born SQL, and business likes to keep its expensive SQL a secret. From business is bred the deepest knowledge of SQL, and it's rarely tied to a reasonable explanation. It'll never be a business priority to properly teach complex SQL.

## SQL is my Love Language

SQL is a language that transcends understanding by human intelligence and artificial intelligence. It's tied so closely to the data it's trying to manage that it's almost inseparable. In the same way DNA describes an animal, SQL describes our data.
The permutations of SQL sequences are infinitely complex and mean nothing on their own. SQL can only be understood with its data. Yet, there is hope for SQL and LLMs to live in harmony. It'll be in the private sector, when businesses invest in creating good training data - pairs of questions and answers tailored to their private datasets.

The years I've invested in the SQL language have made me oddly sentimental. I'll close by borrowing from [Calum Scott](https://genius.com/Calum-scott-you-are-the-reason-lyrics):

```
I'd climb every mountain
And swim every ocean
Just to write SQL
And fix what I've broken
'Cause I need data please
That you are the reason
```

---

# How to Say the Impossible with Voice Cloning

Date: 2025-01-06
URL: https://ddrscott.github.io/blog/2025/chatterbox-tts-voice-cloning/
Tags: ai, tts
Description: Make AI say tongue twisters for you

So I found myself with some free time and a dangerous combination: access to [Chatterbox TTS](https://github.com/resemble-ai/chatterbox) and an irresistible urge to make AI say ridiculous things in my own voice. What could possibly go wrong?

## What is Chatterbox TTS?

Chatterbox TTS is a text-to-speech model that supports voice cloning - you feed it an audio sample of someone's voice, and it can generate new speech that sounds like that person. It's like having a digital ventriloquist where you are your own dummy.

## The Setup

Getting started was surprisingly straightforward:

```python
from chatterbox.tts import ChatterboxTTS
import torch

# Automatically detect the best available device
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

model = ChatterboxTTS.from_pretrained(device=device)
```

The model loaded without complaint and immediately informed me it was using CUDA. Good start - my GPU was ready to make me sound like a robot trying to be me.
## The Voice Sample

I used a voice sample called `spierce-sample-2.wav` - presumably a recording of myself saying something coherent enough for the AI to learn from. The beauty of voice cloning is that you only need a short sample, maybe 5-10 seconds, and the model can extrapolate from there.

## The Torture Test: Tongue Twisters

Because I'm apparently 12 years old at heart, I decided the best way to test this voice cloning was with tongue twisters. If you're going to make an AI impersonate you, might as well make it suffer through the same verbal gymnastics that made you stumble in elementary school. Here's what I fed it:

```python
text = """\
Near a ear, a nearer ear, a nearly eerie ear.
Roberta ran rings around the Roman ruins.
He threw three free throws.
A happy hippo hopped and hiccupped.
Toy boat. Toy boat. Toy boat.
A synonym for cinnamon is a cinnamon synonym.
One-one was a race horse. Two-two was one too. One-one won one race. Two-two won one too.
"""
```

## The Results

The generated audio was... well, it was definitely trying. The AI gamely attempted each tongue twister with the determination of someone who doesn't actually have a tongue to twist.

### Audio Samples

Here are the results from two different voice samples:

**Sample 2 (Original voice):**

**Sample 3 (Refined voice):**

### Observations

- **"Near a ear, a nearer ear"** - The AI handled the repetitive "ear" sounds reasonably well, though it had that slightly mechanical cadence that screams "I am definitely a robot"
- **"Toy boat" repetition** - This is where things got interesting. The AI seemed to understand that this was supposed to be rapid-fire, but the timing felt just a bit off
- **The race horse one** - Surprisingly coherent for such a complex tongue twister

The voice definitely sounded like it was trying to be me, with that uncanny valley quality where you think "that's almost right, but something's off."
It's like hearing yourself on a recording, but if that recording was made by aliens who had only heard human speech through a tin can telephone.

## What's Actually Happening Here?

Voice cloning works by analyzing the acoustic characteristics of your voice sample - things like pitch patterns, formant frequencies, and speaking rhythm. The model then tries to apply these characteristics to new text. It's not perfect, but it's genuinely impressive how much personality it can capture from a short sample.

The tongue twisters were a good stress test because they require precise timing and articulation. Where a human would stumble and recover, the AI just... continues with robotic determination. There's something both admirable and slightly unsettling about that.

## The Verdict

Chatterbox TTS is surprisingly capable for voice cloning. While it's not going to fool anyone into thinking it's actually me speaking, it's close enough to be both impressive and mildly disturbing. The fact that I can generate this on my local GPU using CUDA makes it feel like we're living in the future - a slightly weird future where I can make digital copies of myself recite tongue twisters.

Would I use this for anything practical? Maybe. Could I use it to prank my friends? Absolutely. Will I resist the temptation to make it read my old blog posts in my own voice? Probably not.

The democratization of voice cloning technology is both exciting and terrifying. On one hand, it opens up creative possibilities for content creation and accessibility. On the other hand, well... let's just say we're living in interesting times.

Now if you'll excuse me, I need to go make my AI clone recite Shakespeare. For science, obviously.

---

*Have you experimented with voice cloning? Found any good use cases beyond making yourself say ridiculous things?
Let me know in the comments - I promise it's actually me writing this, not my AI clone.*

---

# Column with a G

Date: 2024-12-20
URL: https://ddrscott.github.io/blog/2024/column-with-a-g/
Tags: life
Description: How to view large CSV files.

I have a pet named `Golumn`, pronounced like "column" with a "G". It's a pet project to be more precise, and out of all my pets Golumn is my favorite, just don't tell the others.

Have you ever received a CSV (Comma Separated Value) file that's a gigabyte or more from a data vendor you're just checking out for the first time, tried to open it, and waited while Excel read everything into memory only to crash with an "Out of Memory" error? So then you have to split up the file into something smaller and try again. But how can you make it smaller if you can't open it in Excel? So you roll up your sleeves and break out some ETL tools to view it. Then you realize you have to write a script because ETL needs the data in a slightly different format. Well, after repeating this pattern for 19 years (2017 - 1998) or so, I had to fix this once and for all. No more work before the work. I needed a way to preview the data before doing any heavy lifting.

Even in 2024, this is still a problem!

- Excel Crashing with 1.5GB File - Any Suggestions? [Reddit](https://www.reddit.com/r/excel/comments/1f74uyo/excel_crashing_with_15gb_file_any_suggestions/)
- What to do if a data set is too large for the Excel grid [Microsoft](https://support.microsoft.com/en-us/office/what-to-do-if-a-data-set-is-too-large-for-the-excel-grid-976e6a34-9756-48f4-828c-ca80b3d0e15c)

In 2017, I set out to create the **fastest** CSV viewer known to the Internet. I searched for alternatives to Excel to make sure my problem wasn't already solved. I found some that were IDE specific, for IDEs that I don't use. I found some that were Electron based, but Electron's startup time was slow back then.
And there were some open source options, but they didn't scratch the itch. Many of the tools still required me to know the structure of the file before opening it.

## My Deepest Needs

- **Speed!** I need to view the data before it finishes loading.
- **Good Guessing.** I don't want to pick the column separator, line delimiter, or quoting before preview.
- **Fault tolerance.** Let me see the data even if the guessed format is bad so I can be more informed for the next attempt.

## Command Line Junkie

Did you know CSVs can be read from the command line? This is the way I lived before Golumn. If you're in a POSIX (Linux, Mac, Unix) environment, you can try:

```sh
head -100 /path/to/file.csv | column -ts, | less -NS
```

**Command Breakdown**:

* `head -100 /path/to/file` get the first 100 lines of a file
* `column -ts,` print in table `-t` format separated by comma `-s,`
* `less -NS` paginate the result with line numbers `-N` and horizontal scrolling `-S`

The command line way checks all 3 needs. It's fast, it lets me see results, and it's fault tolerant. It _may_ look like crap if the delimiter is wrong, but that's easy to remediate with different flags, and because I can see something, it allows for quick adjustments. For instance, `column -ts\|` would split on pipe instead of comma.

This is nice and all, but every once in a while, a graphical interface is needed to get a better feel for the data. GUIs provide smooth scrolling and higher resolution than the terminal. It's also nice to have numbers aligned properly and clickable things.

## Golumn is Born

Golumn is a Python project built with wxPython, which is built on wxWidgets. It leverages Python's CSV sniffer and SQLite for data processing.

https://pypi.org/project/golumn/

The command, `golumn /path/to/file.csv`, will open a GUI window showing data in 0.1 seconds with a simple grid of data scrollable till my fingers get tired.
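The "Good Guessing" requirement maps directly onto Python's built-in `csv.Sniffer`, which the post says Golumn leverages. A minimal sketch (the sample data here is made up):

```python
import csv
import io

# A made-up pipe-delimited sample standing in for a vendor file's first lines.
sample = "name|city|age\nAlice|Chicago|42\nBob|Austin|37\n"

# Sniff the dialect from a small prefix of the file; restricting the
# candidate delimiters makes the frequency-based guess more reliable.
dialect = csv.Sniffer().sniff(sample, delimiters=",;|\t")

# Parse using whatever dialect was detected.
rows = list(csv.reader(io.StringIO(sample), dialect))
```

On a multi-gigabyte file you'd sniff only the first few kilobytes, which is exactly why a previewer can open in a fraction of a second while Excel is still loading.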
Over time, I've added keyboard shortcuts to quickly filter/unfilter data based on the current selection, sort columns, and search and filter any column for a piece of text, but at its core, it's a sharp tool that does 1 thing well: **View CSV files**

## Open Source and Legacy

[Golumn](https://github.com/ddrscott/golumn) is open source. One day, someone other than myself can contribute to it. Someone other than myself could have the same impatience for CSV load times. Someone could have similar hotkey preferences. Or maybe not. I don't put it out there for fame or fortune. I put it out there because it's my favorite pet project and everyone should have a chance to play with it. `Golumn` is a buddy that has served me well for 7+ years, and I want it to have a home beyond me. History is made by sharing, not hoarding.

## Conclusion

Thanks for getting this far. I hope you find Golumn useful, and reach out to me if you have any questions or suggestions. I'm always looking for ways to make it better.

---

# Bowling Scores the SQL Way

Date: 2024-10-09
URL: https://ddrscott.github.io/blog/2024/sql-bowling/
Tags: sql
Description: How to calculate bowling scores using SQL. No Stored Procedures allowed!

The scoring rules for American Bowling are simple on paper, but can be a chore to calculate by hand. This is a fun exercise to calculate the scores en masse using SQL. Because if you can't do it in SQL, your life is too easy! If you don't know the rules, here's a quick summary:

https://www.kidslearntobowl.com/how-to-keep-score/

## Data Preparation

Let's make a table to store the raw data for the individual games. We're not going into full data modelling by adding players and timestamps. We're keeping it simple for this example.

```sql
DROP TABLE IF EXISTS game_rolls;

CREATE TABLE game_rolls (
    game int,
    frame int,
    roll int,
    pins int
);
```

### Add a Perfect Game

A perfect game is 12 strikes in a row: one strike in each of the first 9 frames and 3 strikes in the 10th frame.
```sql
-- Perfect Game
-- Total: 300
INSERT INTO game_rolls (game, frame, roll, pins) VALUES
(1, 1, 1, 10),
(1, 2, 1, 10),
(1, 3, 1, 10),
(1, 4, 1, 10),
(1, 5, 1, 10),
(1, 6, 1, 10),
(1, 7, 1, 10),
(1, 8, 1, 10),
(1, 9, 1, 10),
(1, 10, 1, 10),
(1, 10, 2, 10),
(1, 10, 3, 10);
```

### Add an Unlucky Second Game

We managed to knock down 1 pin each frame and missed the second roll. Repeat this for all 10 frames.

```sql
-- 1 pin each frame.
-- Total: 10
INSERT INTO game_rolls (game, frame, roll, pins) VALUES
(2, 1, 1, 1), (2, 1, 2, 0),
(2, 2, 1, 1), (2, 2, 2, 0),
(2, 3, 1, 1), (2, 3, 2, 0),
(2, 4, 1, 1), (2, 4, 2, 0),
(2, 5, 1, 1), (2, 5, 2, 0),
(2, 6, 1, 1), (2, 6, 2, 0),
(2, 7, 1, 1), (2, 7, 2, 0),
(2, 8, 1, 1), (2, 8, 2, 0),
(2, 9, 1, 1), (2, 9, 2, 0),
(2, 10, 1, 1), (2, 10, 2, 0);
```

### Add Gutter Ball and Spare

Now, we're showing off our skills. Throw the first ball into the gutter on purpose and take down the rest in the second roll for a spare.

```sql
-- 0 then 10 each frame
-- Total: 100
INSERT INTO game_rolls (game, frame, roll, pins) VALUES
(3, 1, 1, 0), (3, 1, 2, 10),
(3, 2, 1, 0), (3, 2, 2, 10),
(3, 3, 1, 0), (3, 3, 2, 10),
(3, 4, 1, 0), (3, 4, 2, 10),
(3, 5, 1, 0), (3, 5, 2, 10),
(3, 6, 1, 0), (3, 6, 2, 10),
(3, 7, 1, 0), (3, 7, 2, 10),
(3, 8, 1, 0), (3, 8, 2, 10),
(3, 9, 1, 0), (3, 9, 2, 10),
(3, 10, 1, 0), (3, 10, 2, 10);
```

### Add Realistic Game with Strikes and Spares

This is my typical night out, never quite getting halfway to perfection (300). If I spent more time bowling and less time writing SQL maybe I'd get better!
```sql
INSERT INTO game_rolls (game, frame, roll, pins) VALUES
(4, 1, 1, 7),   -- First frame: 7 pins
(4, 1, 2, 2),   -- Open frame with 2 pins => 9
(4, 2, 1, 10),  -- Second frame: Strike => 9+10+5+4=28
(4, 3, 1, 5),   -- Third frame: 5 pins
(4, 3, 2, 4),   -- Open frame with 4 pins => 28+9=37
(4, 4, 1, 8),   -- Fourth frame: 8 pins
(4, 4, 2, 1),   -- Open frame with 1 pin => 37+9=46
(4, 5, 1, 10),  -- Fifth frame: Strike => 46+10+6+3=65
(4, 6, 1, 6),   -- Sixth frame: 6 pins
(4, 6, 2, 3),   -- Open frame with 3 pins => 65+9=74
(4, 7, 1, 9),   -- Seventh frame: 9 pins
(4, 7, 2, 0),   -- Open frame with 0 pins => 74+9=83
(4, 8, 1, 10),  -- Eighth frame: Strike => 83+10+7+2=102
(4, 9, 1, 7),   -- Ninth frame: 7 pins
(4, 9, 2, 2),   -- Open frame with 2 pins => 102+9=111
(4, 10, 1, 10), -- Tenth frame: Strike
(4, 10, 2, 10), -- Bonus roll: Strike
(4, 10, 3, 8);  -- Bonus roll: 8 pins => 111+10+10+8=139
```

## Query to Calculate Scores

There are many ways to calculate the scores. We're going to make use of [Window Functions](https://www.postgresql.org/docs/current/tutorial-window.html) to look ahead for bonus pins and look behind for spares.
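Before the SQL, it helps to see that `LEAD` is just offset indexing with a default. A tiny illustrative Python sketch (my addition, not part of the query) over the first five rolls of the realistic game:

```python
# LEAD(pins, k) looks k rows ahead in the window; COALESCE(..., 0)
# substitutes 0 when the offset runs past the end of the partition.
pins = [7, 2, 10, 5, 4]  # first five rolls of game 4

def lead(values, k, default=0):
    return [values[i + k] if i + k < len(values) else default
            for i in range(len(values))]

next_pins = lead(pins, 1)       # [2, 10, 5, 4, 0]
next_next_pins = lead(pins, 2)  # [10, 5, 4, 0, 0]
```

`LAG` is the same idea pointed backwards, which is how we'll spot spares.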
```sql
SELECT *,
    COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
    COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
    roll = 1 AND pins = 10 AS is_strike,
    roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare
FROM game_rolls
WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
;
```

**Results**:

|game|frame|roll|pins|next_pins|next_next_pins|is_strike|is_spare|
|----|-----|----|----|---------|--------------|---------|--------|
| 4| 1| 1| 7| 2| 10|false |false |
| 4| 1| 2| 2| 10| 5|false |false |
| 4| 2| 1| 10| 5| 4|true |false |
| 4| 3| 1| 5| 4| 8|false |false |
| 4| 3| 2| 4| 8| 1|false |false |
| 4| 4| 1| 8| 1| 10|false |false |
| 4| 4| 2| 1| 10| 6|false |false |
| 4| 5| 1| 10| 6| 3|true |false |
| 4| 6| 1| 6| 3| 9|false |false |
| 4| 6| 2| 3| 9| 0|false |false |
| 4| 7| 1| 9| 0| 10|false |false |
| 4| 7| 2| 0| 10| 7|false |false |
| 4| 8| 1| 10| 7| 2|true |false |
| 4| 9| 1| 7| 2| 10|false |false |
| 4| 9| 2| 2| 10| 10|false |false |
| 4| 10| 1| 10| 10| 8|true |false |
| 4| 10| 2| 10| 8| 0|false |false |
| 4| 10| 3| 8| 0| 0|false |false |

That looks right. The future pins are correct, and the strikes and spares are correctly identified. If we wanted, we could make a view, but we're going to continue building the query with [Common Table Expressions (CTE)](https://www.postgresql.org/docs/current/queries-with.html). We'll call the first CTE `stats` and build on it with the next CTE to calculate the bonus based on the rules of strikes and spares. Strikes get the next two rolls and spares get the next roll.

There is a special condition for the last frame. We accept the pins as final since there are no future frames to look at. We don't have to 'wait' to score the last frame. We can score it immediately.

> I have to admit, this was the hardest part to figure out and I was stuck on it for longer than I'd like to admit.
> The perfect game scenario was totalling 320 for the longest time!
> (Perfect games of 12 strikes in a row are 300 points, not 320.)

```sql
WITH stats AS (
    SELECT *,
        COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
        COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
        roll = 1 AND pins = 10 AS is_strike,
        roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
        frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
    FROM game_rolls
    WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
)
SELECT *,
    CASE WHEN is_last_frame THEN pins
         WHEN is_strike THEN pins + next_pins + next_next_pins
         WHEN is_spare THEN pins + next_pins
         ELSE pins
    END AS score
FROM stats
;
```

**Results with pins per frame**:

|game|frame|roll|pins|next_pins|next_next_pins|is_strike|is_spare|is_last_frame|score|
|----|-----|----|----|---------|--------------|---------|--------|-------------|-----|
| 4| 1| 1| 7| 2| 10|false |false |false | 7|
| 4| 1| 2| 2| 10| 5|false |false |false | 2|
| 4| 2| 1| 10| 5| 4|true |false |false | 19|
| 4| 3| 1| 5| 4| 8|false |false |false | 5|
| 4| 3| 2| 4| 8| 1|false |false |false | 4|
| 4| 4| 1| 8| 1| 10|false |false |false | 8|
| 4| 4| 2| 1| 10| 6|false |false |false | 1|
| 4| 5| 1| 10| 6| 3|true |false |false | 19|
| 4| 6| 1| 6| 3| 9|false |false |false | 6|
| 4| 6| 2| 3| 9| 0|false |false |false | 3|
| 4| 7| 1| 9| 0| 10|false |false |false | 9|
| 4| 7| 2| 0| 10| 7|false |false |false | 0|
| 4| 8| 1| 10| 7| 2|true |false |false | 19|
| 4| 9| 1| 7| 2| 10|false |false |false | 7|
| 4| 9| 2| 2| 10| 10|false |false |false | 2|
| 4| 10| 1| 10| 10| 8|true |false |true | 10|
| 4| 10| 2| 10| 8| 0|false |false |true | 10|
| 4| 10| 3| 8| 0| 0|false |false |true | 8|

That looks good. The scores are correct and the last frame is scored correctly.
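If you'd like to double-check this scoring logic outside the database, the same CTE chain runs unchanged on SQLite from Python's standard library (a verification sketch I'm adding, not part of the original post; it assumes a SQLite build of 3.25 or newer, which introduced window functions). It uses the perfect and realistic games from above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE game_rolls (game int, frame int, roll int, pins int)")

# Game 1: the perfect game. Game 4: the realistic game. Rows copied from above.
perfect = [(1, f, 1, 10) for f in range(1, 11)] + [(1, 10, 2, 10), (1, 10, 3, 10)]
realistic = [
    (4, 1, 1, 7), (4, 1, 2, 2), (4, 2, 1, 10), (4, 3, 1, 5), (4, 3, 2, 4),
    (4, 4, 1, 8), (4, 4, 2, 1), (4, 5, 1, 10), (4, 6, 1, 6), (4, 6, 2, 3),
    (4, 7, 1, 9), (4, 7, 2, 0), (4, 8, 1, 10), (4, 9, 1, 7), (4, 9, 2, 2),
    (4, 10, 1, 10), (4, 10, 2, 10), (4, 10, 3, 8),
]
conn.executemany("INSERT INTO game_rolls VALUES (?, ?, ?, ?)", perfect + realistic)

totals = dict(conn.execute("""
    WITH stats AS (
        SELECT *,
            COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
            COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
            roll = 1 AND pins = 10 AS is_strike,
            roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
            frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
        FROM game_rolls
        WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
    ), scores AS (
        SELECT *,
            CASE WHEN is_last_frame THEN pins
                 WHEN is_strike THEN pins + next_pins + next_next_pins
                 WHEN is_spare THEN pins + next_pins
                 ELSE pins
            END AS score
        FROM stats
    )
    SELECT game, SUM(score) FROM scores GROUP BY game
""").fetchall())
# totals -> {1: 300, 4: 139}
```

Getting 300 and 139 from a second engine is a nice confidence boost that the last-frame special case is right.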
We could simply `SUM` everything at this point:

```sql
WITH stats AS (
    SELECT *,
        COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
        COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
        roll = 1 AND pins = 10 AS is_strike,
        roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
        frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
    FROM game_rolls
    WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
), scores AS (
    SELECT *,
        CASE WHEN is_last_frame THEN pins
             WHEN is_strike THEN pins + next_pins + next_next_pins
             WHEN is_spare THEN pins + next_pins
             ELSE pins
        END AS score
    FROM stats
)
SELECT game, SUM(score)
FROM scores
GROUP BY game
;
```

**Results final `SUM`**:

|game|sum|
|----|---|
| 1|300|
| 2| 10|
| 3|100|
| 4|139|

Notice the use of CTEs again so we can build on the previous step. This is an important pattern that I follow strictly when building up queries. It allows me to extract parts into views in the future and provides a good place for meaningful names.

In bowling, we normally want to see a running score frame by frame. We can do this by adding a cumulative score to each frame with yet another Window Function.

```sql
WITH stats AS (
    SELECT *,
        COALESCE(LEAD(pins, 1) OVER (w), 0) AS next_pins,
        COALESCE(LEAD(pins, 2) OVER (w), 0) AS next_next_pins,
        roll = 1 AND pins = 10 AS is_strike,
        roll = 2 AND (pins + LAG(pins, 1) OVER (w) = 10) AS is_spare,
        frame = (max(frame) OVER (PARTITION BY game)) AS is_last_frame
    FROM game_rolls
    WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
), scores AS (
    SELECT *,
        CASE WHEN is_last_frame THEN pins
             WHEN is_strike THEN pins + next_pins + next_next_pins
             WHEN is_spare THEN pins + next_pins
             ELSE pins
        END AS score
    FROM stats
), sums AS (
    -- Note: This CTE will get optimized away by the query planner.
    -- Nothing is calling it, and we can choose which to call in the final query.
    SELECT game, SUM(score)
    FROM scores
    GROUP BY game
), cumulative_scores AS (
    SELECT *,
        SUM(score) OVER (w) AS cumulative_score,
        CASE WHEN is_strike THEN 'X'
             WHEN is_spare THEN ' /'
             ELSE REPEAT(' ', roll - 1) || pins
        END display
    FROM scores
    WINDOW w AS (PARTITION BY game ORDER BY game, frame, roll)
)
SELECT game, frame, display, cumulative_score
FROM cumulative_scores
;
```

**Results Game 1**:

|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 1| 1|X | 30|
| 1| 2|X | 60|
| 1| 3|X | 90|
| 1| 4|X | 120|
| 1| 5|X | 150|
| 1| 6|X | 180|
| 1| 7|X | 210|
| 1| 8|X | 240|
| 1| 9|X | 270|
| 1| 10|X | 280|
| 1| 10| 10 | 290|
| 1| 10| 10 | 300|

**Results Game 2**:

|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 2| 1|1 | 1|
| 2| 1| 0 | 1|
| 2| 2|1 | 2|
| 2| 2| 0 | 2|
| 2| 3|1 | 3|
| 2| 3| 0 | 3|
| 2| 4|1 | 4|
| 2| 4| 0 | 4|
| 2| 5|1 | 5|
| 2| 5| 0 | 5|
| 2| 6|1 | 6|
| 2| 6| 0 | 6|
| 2| 7|1 | 7|
| 2| 7| 0 | 7|
| 2| 8|1 | 8|
| 2| 8| 0 | 8|
| 2| 9|1 | 9|
| 2| 9| 0 | 9|
| 2| 10|1 | 10|
| 2| 10| 0 | 10|

**Results Game 3**:

|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 3| 1|0 | 0|
| 3| 1| / | 10|
| 3| 2|0 | 10|
| 3| 2| / | 20|
| 3| 3|0 | 20|
| 3| 3| / | 30|
| 3| 4|0 | 30|
| 3| 4| / | 40|
| 3| 5|0 | 40|
| 3| 5| / | 50|
| 3| 6|0 | 50|
| 3| 6| / | 60|
| 3| 7|0 | 60|
| 3| 7| / | 70|
| 3| 8|0 | 70|
| 3| 8| / | 80|
| 3| 9|0 | 80|
| 3| 9| / | 90|
| 3| 10|0 | 90|
| 3| 10| / | 100|

**Results Game 4**:

> I'm putting this in a preformatted text block so you can see the formatting. Markdown tables aren't doing
> this justice.
```
|game|frame|display|cumulative_score|
|----|-----|-------|----------------|
| 4| 1|7 | 7|
| 4| 1| 2 | 9|
| 4| 2|X | 28|
| 4| 3|5 | 33|
| 4| 3| 4 | 37|
| 4| 4|8 | 45|
| 4| 4| 1 | 46|
| 4| 5|X | 65|
| 4| 6|6 | 71|
| 4| 6| 3 | 74|
| 4| 7|9 | 83|
| 4| 7| 0 | 83|
| 4| 8|X | 102|
| 4| 9|7 | 109|
| 4| 9| 2 | 111|
| 4| 10|X | 121|
| 4| 10| 10 | 131|
| 4| 10| 8 | 139|
```

I narrowed the columns a little to make it easier to follow. In the final query, I would include all the columns and let the client decide how to display it.

## Conclusion

If you wanted to use this with a Business Intelligence (BI) tool, you should create a view for each CTE and then query the final view. Should bowling be calculated in SQL? I'll leave that up to you. It was a fun exercise to figure out the logic, and I hope you enjoyed the ride!

---

# Karaoke

Date: 2024-09-24
URL: https://ddrscott.github.io/blog/2024/tts-comparision/
Tags: life
Description: What it's like when robots campaign for your Site Reliability position

I've been doing a lot of comparisons between Text to Speech (TTS) engines recently and ran into a post from [Julien Bicknell](https://www.linkedin.com/in/juliensydney) who had ChatGPT "write a job ad in the style of Donald Trump". I couldn't resist passing the generated script through the Text to Speech engines in front of me.

**WARNING**: This is NOT a comprehensive analysis. I'm simply posting these recordings so I don't forget about them, and can listen to them in the future for a laugh. If someone wants a thorough analysis, make me an offer. In the meantime, enjoy the robots trying to get an SRE job.

## Audio Outputs from Various Vendors
- Open AI - Nova
- Google Vertex AI - Casual K
- Google Vertex AI - Journey-O
- Play.HT - Play v3
- Microsoft Copilot
## Conclusion

They all feel robotic to me, but Open AI feels the most natural at the moment. I'm sure I could use Speech Synthesis Markup Language (SSML) tags to improve the Vertex AI output a bit, but that would require me to add those tags to the voice script or prompt ChatGPT to generate it with SSML tags.

## Original Script

[LinkedIn Post](https://www.linkedin.com/posts/juliensydney_makeadvertsgreatagain-activity-7243775950448947200-xzYH)

> Listen, folks, I’ve talked to many, many people, and they all agree – we need the BEST Site Reliability Engineers.
>
> This is going to be the most incredible team, and we’re going to win. No one knows more about uptime and reliability than we do – nobody.
>
> What you’ll do: We’re talking cloud infrastructures, folks – AWS, Azure, GCP – you know it better than anyone. Automation? We’re going to automate like never before. Monitoring? It’s going to be beautiful. We’re going to watch everything, 24/7, and it’s going to work. Some people don’t know how to do this, but we do.
>
> You’ll need:
> Cloud platforms experience. AWS, Azure, Google Cloud – we’re the best with all of them.
> Automation skills. We’re talking Ansible, Terraform, Python – and folks, we automate more than anyone, believe me.
> Kubernetes expertise – everyone’s talking about it, folks. And we’re doing more with it than anyone.
> Monitoring tools like Prometheus and Grafana. We see everything – everything.
> CI/CD pipelines? You build them, you run them, you win with them.
>
> Some people say it can’t be done. But let me tell you something – it can. And we’re going to do it better than anyone ever has. We’re going to make reliability so good, you’ll get tired of being reliable!
>
> Here’s the deal:
> Tremendous salary – no one pays better, believe me.
> Amazing benefits. You’ll say, “Mr. SRE Manager, I’ve never seen benefits like these before!”.
> Work from the best places – remote, flexible, we’ve got it all. No one works better.
>
> Folks, this is the kind of role where people are talking about it.
>
> I was at a debate the other day, and let me tell you, everyone’s talking about how important it is to keep systems running. We’re going to bring back uptime. We’re going to bring back stability. And we're going to make systems so reliable, you’ll get tired of being reliable.
>
> Believe me!

---

# Karaoke

Date: 2024-09-23
URL: https://ddrscott.github.io/blog/2024/karaoke-podcast/
Tags: life
Description: Surprisingly Good AI Generated Podcast Summary of Karaoke Article

This podcast was created using [Google's Notebook LM](https://notebooklm.google.com). I fed in my Karaoke article, and this was the result.

[Audio File](https://cdn.dataturd.com/audio/karaoke-article.mp3)

## Feedback

### Pros

- The speakers are lively and engaging.
- Interesting conclusions
- Good storytelling
- Unique perspective on the original article, which provides some feedback to me, the author, for how my readers could interpret the post.

### Cons

- Misinterpreted the complications I had with the "middle" channel
- Wrong conclusion about the contest. I did share that he got a silver medal.

## Text Transcription (provided by [AssemblyAI](https://www.assemblyai.com))

**Alex**: Ever been listening to music and think, man, I wish I could take the lead singer out of this so I could sing it at karaoke night.
**Pat**: Happens all the time.
**Alex**: Well, that's what we're diving into today. We found this blog post by a self proclaimed, get this, recovering ten x developer, okay. Who used his coding powers to, well, take karaoke night to a whole new level.
**Pat**: Wow.
**Alex**: Helping his son prep for a singing competition using AI. The blog post is called Karaoke y. Scott. W h y. Catchy.
**Pat**: I like it.
**Alex**: So Scott's son has this competition. They need to isolate vocals from these pre recorded tracks.
**Pat**: Okay.
**Alex**: And, I mean, I guess most people would just try to find a karaoke version, right?
**Pat**: Or use some audio software, you know, try to minimize the vocals.
**Alex**: Yeah, just like reduce the vocal track or something.
**Pat**: Exactly.
**Alex**: But not Scott. This guy goes all out.
**Pat**: Oh, I see this coming. I can tell.
**Alex**: He's like, if it's worth doing, it's worth overdoing.
**Pat**: Love it.
**Alex**: Which I thought was hilarious. So he decides to, you know what? I am going to build a whole karaoke system from scratch using AI, of course.
**Pat**: Of course. Why not? Why not?
**Alex**: And it's interesting because it is a good example of why those simpler solutions, they don't always work, especially for something. If you want to be serious about it, if you have a vocalist who actually wants to use this to train.
**Pat**: Exactly. And those traditional methods, they kind of rely on just removing certain frequencies, which for casual singing along, it's fine. But if you really want to hear yourself, it just makes everything muddy. It's not good.
**Alex**: So Scott jumps into this whole world of AI music source separation with this tool called splitter. Have you heard of this?
**Pat**: Yeah. So spleeter uses a neural network, basically, it's trained on tons and tons of songs to learn to identify and separate all the different parts of the music.
**Alex**: Okay.
**Pat**: To vocals, instruments, even drums.
**Alex**: Wow. So we're talking about Scott, who's a coder, not like an audio engineer or anything, right? Diving into collab, notebooks, APIs, static site building, all to just, you know, wow, give his son a shot.
**Pat**: That's the part I think is so fascinating. Right?
**Alex**: Yeah.
**Pat**: Not even just the tech, but that he was willing to just go for it. Like it's such a specific and kind of personal goal.
**Alex**: Totally.
**Pat**: I don't know, it just shows you how accessible these tools are becoming. That anyone can use them.
**Alex**: Yeah. And that you can use them for such, you know, exactly.
Weird, specific things.
**Pat**: Totally.
**Alex**: Okay. But like, anything involving AI, it's not.
**Pat**: Always, you know, it's not gonna be easy.
**Alex**: Smooth sailing. And Scott's actually really honest about, like, the challenges he ran into.
**Pat**: Of course. Yeah. I mean, it can be amazing, this AI stuff, but it still needs that human touch.
**Alex**: Yeah.
**Pat**: You know, especially if you want something that sounds really polished.
**Alex**: Right. It can't do everything for you.
**Pat**: Not yet, at least.
**Alex**: Yeah. One of the things that tripped him up was this whole thing, this middle channel.
**Pat**: Oh, yeah.
**Alex**: What is that? And why did it, like, totally derail his whole AI karaoke machine?
**Pat**: Okay, so the middle channel. Imagine listening to a song, right? Like, all the instruments and the vocals, they're kind of laid out in front of you, almost like a 3d map of the sound.
**Alex**: Okay.
**Pat**: And the middle channel is that center part. It's where you'll find a lot of the stuff that blends everything together.
**Alex**: Okay.
**Pat**: It's not just about, like, panning things to the center. It's reverb vocal harmonies, instruments that are kind of meant to glue the whole mix together.
**Alex**: I'm picturing it.
**Pat**: You might not really notice them on their own, but they really add to the overall feel. And that's where these AI tools, like splitters, that's where they can sometimes struggle. Cause it all gets a little intertwined with the main vocals in that middle channel.
**Alex**: So it's like you baked a cake, and now you're trying to take out the sugar.
**Pat**: Yeah, exactly. Good luck with that.
**Alex**: Yep.
**Pat**: So that's what happened with Scott. The AI was doing its thing, doing great, but it really needed a human to come in and fine tune to make sure those vocals were, you know, crystal clear. Especially for a competition.
**Alex**: Yeah, for a competition, you need it to be perfect.
**Pat**: Right.
**Alex**: And he actually says in the blog that the final product was, like, 80% AI generated. 20% human stitched.
**Pat**: Yeah.
**Alex**: Which, I don't know, that just seems like AI these days, you know?
**Pat**: Totally.
**Alex**: It can do a lot, but it still needs us.
**Pat**: Yeah. It's like it can do the heavy lifting, but we still need to, you know, kind of guide it.
**Alex**: It's like when I'm prepping a presentation.
**Pat**: Okay.
**Alex**: Like, I might use AI to help me get my thoughts down.
**Pat**: Yeah.
**Alex**: Maybe even design some slides or whatever. But it's still me, you know? It's my voice.
**Pat**: Exactly.
**Alex**: My perspective that makes it all work together.
**Pat**: It's that human element, not just in the final product, but in the whole process.
**Alex**: Yeah.
**Pat**: Think about Scott. I mean, diving into all that new tech. He wouldn't have learned all that if he just downloaded a karaoke track.
**Alex**: That's a really good point. So, okay, all this tech talk, did it actually work? Did his son nail the audition? Was it a success?
**Pat**: Well, that's the thing. He doesn't actually say if he won or not.
**Alex**: Oh, really?
**Pat**: He kind of leaves us hanging, so.
**Alex**: We don't know if it was a standing. After all that work, he really focuses.
**Pat**: More on the journey, what he learned along the way.
**Alex**: Interesting.
**Pat**: I think it says a lot about how the project was about more than just winning. It was about supporting his kid, doing something cool with technology, maybe learning something new himself.
**Alex**: Yeah, yeah, for sure.
**Pat**: And speaking of which, his big takeaway from all this is 99% practice, 1% technology, which I love, because it's so easy to get caught up in the fancy new thing.
**Alex**: Totally.
**Pat**: But at the end of the day, whether you're singing or coding or, you know, podcasting, it's all about putting in the time, the work.
**Alex**: He even jokes that next time, instead of building a whole karaoke system, he'll just sneak into his son's teacher's suitcase and record practice directly. Well, there you go, hiding in a suitcase. I mean, talk about extreme lengths for your kid's karaoke career. **Pat**: That's dedication for you. Maybe a little over the top. **Alex**: He's got a good sense of humor about it, though. **Pat**: Definitely. **Alex**: Yeah. **Pat**: But it's a cool story, right? **Alex**: Yeah, it really shows how technology can bring families together. You know, like how many dads would build a whole AI karaoke system just for their kid? It's pretty sweet. **Pat**: Totally. And it's easy to focus on the big, impressive AI stuff, but I think it's these smaller stories that are more interesting. **Alex**: Yeah. **Pat**: More personal. **Alex**: It's not even just about the end result, you know? **Pat**: No, exactly. **Alex**: Like all that learning and problem solving, the whole process can be just as important. **Pat**: I think it goes back to what we were saying about AI and human input, that 80/20 split. AI isn't going to replace us. It's more like it helps us do things we couldn't do before. **Alex**: And it's like Scott said, 99% practice, 1% technology. You still got to put in the work. **Pat**: It's a good reminder for all of us. **Alex**: It is. So I think this deep dive taught us a lot more than how to make a karaoke track. It's about using technology in creative ways. **Pat**: Definitely, and not being afraid to experiment, even if it seems impossible. **Alex**: And who knows? Maybe it even inspired some people listening to, I don't know, dust off their old karaoke machines, or maybe build their own. There you go. **Pat**: That's a wrap. --- # Karaoke Date: 2024-09-19 URL: https://ddrscott.github.io/blog/2024/karaoke/ Tags: life Description: How to leverage technology to win at singing! TL;DR - Karaoke! 
My 10-year-old had a singing competition coming up in 2 weeks. He had two songs to perfect, and one was in Latin. His teacher provided an audio recording of herself performing to use as a reference when she wasn't around. The recording was good enough, but I thought we could do better. Karaoke! Traditional karaoke machines perform brute-force audio filtering by clipping middle frequencies, clipping identical frequencies found in the left and right channels, or removing the center channel, but that's not good enough for my son. I need AI magic! > If it's worth doing, it's worth overdoing! > - Me ## Get It Done! 1. **Get a recording with accompaniment of each song.** ✅ (His teacher sent this before the project started) 2. **Find an open-source project that uses AI to segment music files.** [Spleeter](https://github.com/deezer/spleeter) ✅ 3. **Create a notebook in Google Colab to take the library out for a spin.** ✅ 4. **Glue the separate wav files back together into a single MP3 file.** ✅ 5. **Create an API for the project.** (Screenshot below. I can't publish it since it costs $$$) ✅ 6. **Create a [static site audio player](https://ddrscott.github.io/prompt-games/singing) that allows my son to mix in or out vocals as needed.** ✅ 7. **PRACTICE!** ## Lessons Learned The whole project stemmed from the thought: What can AI do for me? AI taught me about the middle channel and filtering techniques. It tried its best to implement some Python code using [librosa](https://librosa.org/doc/latest/index.html), but the output was never satisfactory. A simple web search turned up the [Spleeter project](https://github.com/deezer/spleeter), and ChatGPT was fairly good at implementing some snippets I could copy/paste. I was impressed with its implementation of the JavaScript visualization in the audio player. 
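Step 4 above, gluing the separated stems back into one file, can be sketched with nothing but the Python standard library. This is a hypothetical, minimal mixer (the function name and file layout are mine, not the project's), assuming mono 16-bit WAV stems like the `vocals.wav` and `accompaniment.wav` that Spleeter's 2-stem model writes; producing an actual MP3 would still need an external encoder such as ffmpeg:

```python
import struct
import wave

def mix_wavs(path_a, path_b, out_path):
    """Mix two mono 16-bit WAV stems (e.g. vocals + accompaniment) into one WAV."""
    with wave.open(path_a) as a, wave.open(path_b) as b:
        # stems must agree on channel count, sample width, and sample rate
        assert a.getnchannels() == 1 and a.getsampwidth() == 2
        assert a.getparams()[:3] == b.getparams()[:3]
        framerate = a.getframerate()
        n = min(a.getnframes(), b.getnframes())
        sa = struct.unpack(f"<{n}h", a.readframes(n))
        sb = struct.unpack(f"<{n}h", b.readframes(n))
        # average the two signals, clamping to the signed 16-bit range
        mixed = [max(-32768, min(32767, (x + y) // 2)) for x, y in zip(sa, sb)]
    with wave.open(out_path, "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(framerate)
        out.writeframes(struct.pack(f"<{len(mixed)}h", *mixed))
```

A fixed 50/50 average is the crudest possible mix; scaling each stem before summing is roughly what the audio slider in the static-site player does, just adjustable and in the browser.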
It used [Flexbox](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_flexible_box_layout) predictably, but I had to interject to get the player on the bottom as a different component. The audio slider to mix the vocals and accompaniment was not trivial, and AI failed many, many times. I ultimately had to read the Web Audio API docs to figure out the nuances. Overall, it was probably 80% AI-generated and 20% human-stitched and corrected. Time-wise, I think it would have been a wash regardless of AI. AI forced me to spend time troubleshooting unfamiliar code vs. traditionally reading unfamiliar docs. - **Useful:** 5 of 5 - **Productizable:** 3 of 5 - **Bug-Free:** 2 of 5 - **Fun:** 5 of 5 ## Screenshots ### Frontend for audio splitter _I can't provide the URL since it costs $$$ to keep it alive_
Screenshot of Audio Splitter Site
Drag n' Drop MP3 or Wav or Whatever Audio File
### Frontend for Karaoke Player [Static Site](https://ddrscott.github.io/prompt-games/singing)
Screenshot of Audio Split Player
Add file to static file for playback.
- No Lyrics - No upload - Everything is statically built off of my blog. Duct-tape and bubble gum till the bitter end. ## P.S. So you wanna know how he did in his _first_ singing competition? Silver medalist 🥈. 99% Practice. 1% Technology. --- # Taking Humour Seriously Date: 2024-08-14 URL: https://ddrscott.github.io/blog/2024/taking-humor-seriously/ Tags: life Description: Ever wonder what goes into making a joke that lasts just a few seconds? Spoiler: it's a lifetime of experiences.
I can finally understand what people see when they watch me develop!
Ever wonder what goes into making a joke that lasts just a few seconds? Spoiler: it's a lifetime of experiences. When a comedian delivers a joke, it often only takes a few moments. There's a short story that leads the audience in one direction, a conflict, and an unexpected ending. The whole affair only takes a minute or so, and then they're off to another joke. The best comedians refine their craft off-stage, spending countless hours writing, editing, and ruminating over their material. A rare few can improvise and come up with lines on demand, but even they needed a lifetime of experiences to refine their craft and read the room. ## How I Made my Joke My joke video lasts 19 seconds; you'll hopefully laugh for 3. If the news spreads, maybe 1,000 people will laugh, and we'll have delivered 3,000 seconds of laughter. But this joke didn't take 19 seconds to create. It took a lifetime: 1. Survive childhood 2. Learn to code 3. Learn Vim 4. Watch movies 5. Learn AI 6. Keep up with AI news, triggering a funny idea 7. Find Bruce Almighty clip 8. Create video by recalling everything I've learned up until this point 9. Write article to share my experience 10. Post on LinkedIn When an expert is at the mic, they make it seem effortless. Jerry Seinfeld's standup. Roger Federer's backhand. Scott's Vim jokes. So next time you see a seemingly effortless performance, remember the countless hours, the dedication, and the lifetime of experiences that made it possible. It's a testament to the power of persistence and the beauty of honing one's craft. **What’s something in your life that seems effortless now but took years of hard work to master? Share your story in the comments—I’d love to hear about your journey and the skills you’ve refined over time.** --- # Recovering 10x Developer Date: 2024-07-29 URL: https://ddrscott.github.io/blog/2024/recovering-10x-developer/ Tags: life Description: Ever wondered why being the fastest coder in the room isn't enough to make a real impact? 
Discover the journey of a "Recovering 10x Developer" who learned the hard way that speed isn't everything - value is. ## Introduction It all started over 20 years ago, writing code to win friends and influence people. It's fun to materialize apps out of thin air. Need a game? Need a report? Need a search engine? Need a cloud? Need AI? Yep, I've got it covered. All these things seem great on the surface. I've made a decent living building stuff, slowly at first, then eventually reaching the magical 10x threshold. But then it came crashing down. 10x wasn't good enough. If 10x isn't good enough, 20x won't be, 30x won't be. I've been looking at the wrong problem. I've had a magic wand and used it to conjure ponies and rainbows, but no one wants those. People want value. ### What was your initial motivation for becoming a developer? I'm an artist at heart. I've always been creative. Shapes and spaces have always made sense to me. I can recall fixing stuff around the house with my Dad and creating paper craft transformers when I was in elementary school. In High School, I was taking all the drafting electives and was planning on going down the mechanical engineer path. Then I discovered my friend's computer, which had programming languages installed, and realized I could take creation to another level since I didn't need to worry about running out of materials and could undo/redo to my heart's content until the vision was complete. ### Can you share a memorable project or achievement that highlights your skills? I was at a startup which attempted to compete with Amazon Kindle. I created a white-labeled multi-language, multi-currency, digital reading platform that could be skinned to any publisher, author, or brand to deliver their catalog instead of Kindle. The vision was to return the customer to the source and away from Amazon. I created an iOS Reader, Android Reader, Web Reader, Catalog Browser, and had proprietary DRM for the Web Reader. 
It was a beautiful platform, but we couldn't convince publishers to sign up. ## The Realization ### When did you first start feeling that 10x wasn't enough? At the last company, I realized it didn't matter how fast I could produce something if it wasn't the right thing. I often debated with product owners and executives on what we should be working on, but more often than not, I conceded my points because I was getting paid for execution, not strategy. Eventually, I discovered we could change strategies midway by creating prototypes that would prove or disprove a point to fail fast. This ultimately led me down the path of prototyping everything and getting my final speed increase simply by reducing the requirements to the essentials. ### What specific event or project made you realize the importance of value? With the heat-up of AI in 2023 due to OpenAI's ChatGPT, everyone wanted a chat interface in their product line, but the value proposition never lined up for me. A company could shift their resources to chase after the wind or get down to basics and realize there are thousands of successful companies that figured out how to do things without a chatbot. No need to reinvent the wheel. Simply use the wheel that's already in front of us. ## The Shift in Mindset ### How did you start identifying the real needs of your clients? At one of my past employers, I recall someone bringing up the XY Problem where someone is struggling with solution X, but it won't solve problem Y. For instance, someone goes to the doctor asking for pain killers because their foot hurts, but what they really need is to take the thorn out of it. This issue is rampant in corporate settings because people don't often have safe spaces to question the tasks. They worry about their jobs and closing tickets. ### Can you provide an example where you successfully identified and delivered true value? In the chatbot era, I agreed to implement a chatbot solution for a client. 
However, during the discovery phase, we realized they weren't asking for a chatbot at all. They needed a way to flag key indicators among 100,000 historical contracts. They required a quick method to review these contracts and assign attributes to them. Essentially, they needed a pipeline, not a chatbot. The solution and the ultimate value of automation vastly outweighed the value of chatting about a contract. No one wanted to type the same things to the chatbot 100,000 times. The automation saves them millions of dollars annually in litigation and insurance claims. ## Lessons Learned ### What were the biggest challenges in shifting your mindset? The hardest part of the shift is overcoming the urge to write my own code and implementation. I need to trust that there are others who can do it just as well, if not better. It's simply a matter of finding them and building up the team. ### How has this new approach benefited your clients and your career? It's interesting; I've been discussing the value of projects at the onset for a long time, but I never had the right words or framework to drive the point home. My job as an execution expert was to make things happen, not to question why. If the client wanted a time machine, I'd get the first version out faster than anyone else. But now, I know better. The time machine, as cool as it is to make, doesn't solve the real pain of messy data or whatever solved problem it ends up being. Just like the chatbot case above, discovering the real problem created the real value. A time machine creates coolness and distracts from getting to the real value. ## Conclusion ### How has this journey changed you personally and professionally? It's scary for sure. Imposter syndrome sets in quickly but wears away with each success. I feel more brave with each client call. I feel less anxiety away from the keyboard. 
I feel like this is the way entrepreneurship is supposed to be: discover a problem no one is willing or able to solve, and solve it. ### What are the next steps in your journey, and how can your readers join you? If you or someone you know is struggling with projects and needs a value-driven approach, [reach out](https://calendly.com/leftjoinstudio/meet-and-greet) for a free initial consultation. Let's get to the root and solve real problems together and find real value. I'm excited to connect with people who are ready to focus on what truly matters and make impactful changes. Join me on this journey, and let's create something meaningful together. [Schedule an appointment](https://calendly.com/leftjoinstudio/meet-and-greet) for all your tech consulting needs. If you'd rather send us a message, [contact us here](https://www.leftjoinstudio.com/contact/) --- # How to Make a Website from scratch in 7:59 seconds with Github Date: 2023-03-15 URL: https://ddrscott.github.io/blog/2023/how-to-make-a-website-via-github/ Tags: life Description: Wordpress? Squarespace? Wix? Not today. Those are great products for hosting articles and marketing, but you want to make an application. You want to make something the old-fashioned way. You want to know what's under the hood, and you want to make it easy for others to help you along the way. Wordpress? Squarespace? Wix? Not today. Those are great products for hosting articles and marketing, but you want to make an application. You want to make something the old-fashioned way. You want to know what's under the hood, and you want to make it easy for others to help you along the way. Featured Banner **TL;DR** - The goal is to ~understand how~ make a website. Enter Github and Codespaces! ## Steps ### Create a Github Account I made one for my friend: TheChickenCow.
### Start a Codespace Use `Codespaces` instead of `Create Repository`. use codespaces We'll eventually use Codespaces to push to the repository. Otherwise, it'll take more steps to import the repository into Codespaces. Trust us! ### Create the Content for the Website create files #### docs/index.html This is the primary content of the webpage.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Dune</title>
  <link rel="stylesheet" href="app.css">
</head>
<body>
  <h1>Dune</h1>
  <p>by Frank Herbert 1965</p>
  <!-- cover.jpg is a placeholder path; use the cover art credited in the README -->
  <img src="cover.jpg" alt="Book Cover">
  <div class="content"></div>
  <script src="app.js"></script>
</body>
</html>
```

#### docs/app.css This provides the styling and coloring.

```css
:root {
  /* https://paletton.com/#uid=30A0u0k4ftn0mRD1TBP7no+bil4 */
  --base-color: #EADBCB;
  --text-color: #4F5573;
  --highlight-color: #70927C;
}

body {
  background-color: var(--base-color);
  color: var(--text-color);
  padding: 0 10vw;
  margin: 0;
  font-family: serif;
  font-size: 18px;
  border-left: 10px solid var(--highlight-color);
}

h1 {
  margin-top: 0;
}

.content {
  white-space: pre-line;
  line-height: 2;
  text-align: justify;
}
```

#### docs/app.js This defines the behavior, which fetches the book contents from a different Github page to display on our website. This is for demonstration purposes only. Do this at your own risk.

```javascript
const book_url = 'https://raw.githubusercontent.com/ganesh-k13/shell/master/test_search/www.glozman.com/TextPages/Frank%20Herbert%20-%20Dune.txt';

fetch(book_url).then(function (response) {
  return response.text();
}).then(function (html) {
  // collapse double newlines into a break tag
  document.querySelector('.content').innerHTML = html.replace(/\n\n/g, '<br/>');
}).catch(function (err) {
  console.warn('Something went wrong.', err);
});
```

#### README.md

```markdown
# Dune Scroller

 _____________________
< Yep. We went there. >
 ---------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Read Dune by Frank Herbert via scrolling the whole freaking thing.

## Future Features

As an exercise to do at home:

- Pick up where you left off, because no one can finish this book in a single sitting.
- Auto-scroll based on momentum of initial drag scroll.
- Change font size.

## Thanks

- [ganesh-k13](https://github.com/ganesh-k13/shell/blob/master/test_search/www.glozman.com/TextPages/Frank%20Herbert%20-%20Dune.txt) for the Dune text
- [Wikipedia](https://en.wikipedia.org/wiki/Dune_(novel\)) for the book cover
```

### Publish to Github 1. Select the Branch tab on the far left side. 2. Select `Publish to Github` 3. Rename `codespaces-blank` to `dune-scroller` 4. Select Publish to Github public (not private) Publish to Github All files in the project will get automatically selected; hit `OK` to confirm the upload. ### Open on Github Notice the notifications at the bottom of Codespaces. They will show the repository has been published to Github. Select `Open on Github` to see the source code. Open on Github ### Publish the Final Site 1. Select the `Settings` tab near the top. 2. Select the `Pages` tab in the left sidebar. 3. In the Branch section, choose `main` 4. Next to it, choose `/docs` 5. Finally, click `Save` Publish Pages The publish will kick off a deployment Action. ### Wait About 2 Minutes The initial deployment of the site takes about 2 minutes. Actions are observable in the top `Actions` tab. 1. Select `Actions` from the top tab bar. 2. Select `pages build and deployment` Wait ### Get the Link When the deployment is finished, the `deploy` step will reveal the final website link that you can share with friends and family or the rest of the Internet. 
Get the Link https://ddrscott.github.io/dune-scroller/ ## Conclusion We hope this step-by-step tutorial helped give you a start on your web development journey. We know this probably gave you more questions than answers, but the goal was to get you started and have a site at the end. To learn more about Codespaces, try their documentation: https://docs.github.com/en/codespaces ### Languages Used - html - css - javascript Learn about all 3 from https://www.w3schools.com/default.asp ## Connect Let us know what you're working on, publishing, or stuck on in the comments below. Happy coding! --- # Larry Pierce, aka Dad Date: 2023-02-01 URL: https://ddrscott.github.io/blog/2023/larry-pierce-aka-dad/ Tags: life Description: Lawrence Roy Pierce age 76, of Lake in the Hills, Illinois passed away on February 1, 2023, at Northwestern Medicine - Huntley Hospital.

```sh
cowsay "Best Dad I ever had. I'll miss you"
#  ____________________________________
# < Best Dad I ever had. I'll miss you >
#  ------------------------------------
#         \   ^__^
#          \  (oo)\_______
#             (__)\       )\/\
#                 ||----w |
#                 ||     ||
```

## Computers Are Easy, Feelings Are Hard Today, I received the news my Dad had passed away. Two days ago he was hospitalized after a fall.
Yesterday, I called to check up.
Today, he's gone.
On my last phone call with him in the hospital, he told me, "I'm fine. Just really tired from the medicine. How's Karen's parents in China?" I told him, "They're okay, and thanks for asking! Get some rest. We'll call tomorrow." Rest he did. The first and last time he took my advice. ## Not a Tech Person My Dad was not a tech person. He still preferred his 10-year-old [Garmin GPS](https://web.archive.org/web/20190805205649/http://garmangps.blogspot.com/) and refused to have a phone smarter than him. Garmin GPS When he retired 10 years ago, he got his first computer and a non-work email address so we could email him pictures from time to time. He stayed on top of the news and current events through the TV and was just getting used to reading articles on [MSN](https://www.msn.com). He used a landline as his primary number and had a voicemail machine. His favorite feature of the landline was how the Caller-ID showed up on the TV before he answered it. He always knew where the phone was located because it was attached to the wall. Phones may be smart, but no phone is wiser than Dad. ## Humor My Dad loved to laugh, but couldn't tell a joke. He had a boisterous one-of-a-kind laugh that went higher in pitch during the inhale phase. (Note to self: turn his laugh into a ringtone) He was a literal person who kept things simple. All my jokes would go over his head. Sometimes he would look up to catch them. ## Adoption We can't choose our parents, but my Dad chose me. I was 4 years old at the time, and after logistics, I finally came over at 5. He chose my sister, too. Here we are around 1985. Early Pierce Family He would take us bike riding around [Lake Opeka in Des Plaines, IL](https://web.archive.org/web/20230205141050/https://www.dpparks.org/parks-facilities/lake-park/). We would feed the ducks with stale bread. He'd encourage us to be outside by being outside with us. ## Fiscally Responsible My Dad loved to save money. 
He kept the same wardrobe from his 40s and would only replace articles when they were overworn, never having more than what could fit into a small closet. I believe his only real source of new clothing was gifts. He would try to fix everything himself, with moderate success. Our Internet was "The Readers Digest Fix-It Yourself Manual" and "Complete Do-It Yourself Manual". The Readers Digest Fix-It Yourself Manual and Complete Do-It Yourself Manual Our Youtube was [This Old House hosted by Bob Vila](https://en.wikipedia.org/wiki/This_Old_House). ## Hard Worker My Dad was a hard worker. He got up early and came home late. He went back to school after realizing a B.A. in Political Science was a dead end. He worked at companies until they [closed down](https://en.wikipedia.org/wiki/A._B._Dick_Company) or he [retired](https://www.packaging-by-design.com). Loyal to the end, he even came out of retirement to help train the next generation. ## Retirement In his retirement, he preferred to stay around town, visit grandkids, make sandwiches, and catch up on TV shows, movies, and the news. A simple man with simple pleasures. Here we are 30+ years later in Harvard, IL: Grandpa Larry in Harvard, IL And here's the final picture I have with him and Morgan. Larry and Morgan ## Timeline * 1946-11-15 - Born * 2002-12-31 - Baptized in the name of the Father, Son, and Holy Spirit * 2023-02-01 - Meets Jesus ## Official Obituary Lawrence Roy "Larry" Pierce age 76, of Lake in the Hills, Illinois passed away on February 1, 2023, at Northwestern Medicine – Huntley Hospital. Larry was a supportive, loving father who enjoyed hikes, bike riding and traveling. He lived a simple life and indulged in simple pleasures like books and movies. Although he couldn't tell a joke, he had a one-of-a-kind laugh. He was born in Chicago, Illinois on November 15, 1946, the loving son of Archie Roy and Grace (Thoring) Pierce. 
Larry is survived by his children, Scott (Karen) Pierce, Mark Pierce, and Julie Pierce; his grandchildren, Nathan, Morgan, Jeremy, and Kaili; his brother, Steve (Nancy) Pierce. ## Not The End This is only one side of my Dad's story and from one narrow perspective. I'm planning on having a more historical/objective write-up soon. But in the meantime, please comment if Larry has impacted your life in any way. Cheers to the best Dad I ever had! Dad at Grand Canyon National Park --- # Wordy Passwords Date: 2023-01-05 URL: https://ddrscott.github.io/blog/2023/wordy-passwords/ Tags: security Description: How to Generate a Wordy Password ## TL;DR

```sh
grep -E '^[a-z]{3,}$' /usr/share/dict/words | shuf -n4 | paste -sd- -
```

## Intro It is 2023 and passwords are still a thing. They are hard to remember and easy to crack. Many sites talk about using words instead of letters, numbers and symbols: - https://www.useapassphrase.com/ - https://www.nexcess.net/web-tools/secure-password-generator/ - https://xkpasswd.net/s/ Most of them were inspired by this comic: XKCD We recently needed to generate hard-to-guess but recognizable URLs to share some data analysis with clients. We also planned to speak the URLs to our clients via video conferencing. GUIDs (Globally Unique Identifiers) are nearly impossible to comprehend and speak accurately: `f95b9390f8fe5d7a6aecd9d563b5f8a5` and `b987370e-6906-498b-a5d4-84c1c92c6e64`, so Wordy Passwords™ (TM cause you heard it here first!) came to mind to meet all the requirements. There are sites that can generate these things which we could scrape as needed, but we thought there's got to be a simpler way. ## Simpler Way If you're on Mac or Linux, the simpler way is already in your terminal.

```sh
grep -E '^[a-z]{3,}$' /usr/share/dict/words | shuf -n4 | paste -sd- -
```

Let's explain: - `/usr/share/dict/words` is a file with a list of all English words, about 100k of them. 
- `grep -E '^[a-z]{3,}$' /usr/share/dict/words` filters for lowercase words with 3 or more letters. - `shuf -n4` picks 4 random entries from the filtered list. Change `4` to whatever is needed for your use case. - `paste -sd- -` joins all the lines with the `-` symbol. This can also be changed to `-sd ' '` for spaces or `-sd ','` for commas, etc. Note, the last `-` tells the command to read from `/dev/stdin` and is necessary on Mac. Use `man grep`, `man shuf`, or `man paste` to get additional details about each command. Example Output: - `mentions-grouses-datelines-mouses` - `cellulite-sukiyaki-ardner-rusted` - `naturalness-indicate-dependent-beset` - `irony-excursion-flashily-crawl` We can then use these in our data generation URLs to get something like: - `https://your.dataturd.com/mentions-grouses-datelines-mouses` - `https://your.dataturd.com/cellulite-sukiyaki-ardner-rusted` - `https://your.dataturd.com/naturalness-indicate-dependent-beset` - `https://your.dataturd.com/irony-excursion-flashily-crawl` In case `/usr/share/dict/words` doesn't exist, try installing with `apt install wamerican`. To pick another language, try `apt search wordlist` which will return a bunch of other options for specific languages. ### Windows PC If you're on Windows, we'd recommend using [WSL](https://learn.microsoft.com/en-us/windows/wsl/install). This will provide access to a Linux shell which has all the aforementioned commands. If you're not sure where to type in these fancy commands, try searching for "learn linux terminal". We'll eventually post additional articles regarding our love for the terminal and why commands like these keep us in it. ## Alternative Word List We experimented with other word lists. [doyle.txt](https://www.dataturd.com/words/doyle.txt) contains words extracted from books featuring Sherlock Holmes. Source code for how the word list was built is located at https://github.com/ddrscott/wordy-passwords . Feel free to search the Internet for other word lists. 
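If you'd rather do this from a script (say, inside the app generating those URLs), the same pipeline is a few lines of Python. A hypothetical sketch, where `words` is the word list already read from a file like `/usr/share/dict/words`:

```python
import random
import re

def wordy_password(words, n=4, sep="-"):
    """Pick n distinct random lowercase words of 3+ letters and join them.

    Mirrors: grep -E '^[a-z]{3,}$' words | shuf -nN | paste -sd- -
    """
    pool = [w for w in words if re.fullmatch(r"[a-z]{3,}", w)]
    return sep.join(random.sample(pool, n))
```

With a ~100k-word dictionary, four words give roughly log2(100000^4) ≈ 66 bits of entropy, which is the same intuition behind the xkcd comic above.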
## Closing The terminal is a magical place where mountains of code can be replaced with a few razor-sharp commands. `shuf` and `paste` can make short work of many tasks. Let us know what you come up with in the comments below! --- # How to Setup Airflow in its own Kubernetes namespace using Helm Date: 2022-01-01 URL: https://ddrscott.github.io/blog/2022/how-to-airflow-with-helm/ Create a Helm values file to contain settings specific to your needs. The main settings we listed demonstrate a few things: 1. pull dags from a git repository 2. enable Airflow API endpoints 3. enable Airflow to use KubernetesPodOperator on itself 4. use simple database setup 5. disable Redis and enable KubernetesExecutor

```yaml
git:
  dags:
    enabled: true
    repositories:
      - repository: https://ro_user:seeeecret@bitbucket.org/some-project/dags-repo.git
        name: some-project
        branch: main
scheduler:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
web:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
    - name: AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX
      value: "True"
    - name: AIRFLOW_BASE_URL
      value: https://airflow.example.com
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.default"
    - name: AIRFLOW__API__ENABLE_EXPERIMENTAL_API
      value: "True"
worker:
  extraEnvVars:
    - name: PYTHONPATH
      value: /opt/bitnami/airflow/dags/git_some-project/dags
# begin https://artifacthub.io/packages/helm/bitnami/airflow#kubernetesexecutor
executor: KubernetesExecutor
redis:
  enabled: false
serviceAccount:
  create: true
rbac:
  create: true
# end https://artifacthub.io/packages/helm/bitnami/airflow#kubernetesexecutor
postgresql:
  # standalone is default, but it's nice to call it out in case we forget.
  architecture: standalone
```

Install it using Helm:

```sh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm upgrade --install --namespace funland \
  airflow bitnami/airflow \
  --version 12.0.14 \
  --values deploy/h2m-airflow-prod.yml
```

Get the admin password:

```sh
echo $(kubectl get secret --namespace funland airflow -o jsonpath="{.data.airflow-password}" | base64 --decode)
```

## References - https://docs.bitnami.com/kubernetes/infrastructure/apache-airflow/ - https://artifacthub.io/packages/helm/bitnami/airflow --- # How to use Kubernetes Namespaces and Service Accounts for Fun and Profit Date: 2022-01-01 URL: https://ddrscott.github.io/blog/2022/how-to-k8s-service-account/ We'll create a namespace named `funland` and show how to use a token to access it. ## Create a Namespace

```sh
kubectl create namespace funland
```

## Create a Service Account This creates a new service account, an "owner" role, and assigns the owner to the new service account _and_ to the `default` service account.

```sh
kubectl -n funland apply -f- <
```

Bad news: It doesn't support CBR (Comic Book Reader) files.
Good news: I recently read the [manual](https://www.gnu.org/software/make/manual/make.html) on `make` because I had trouble sleeping.
Bad news: It didn't put me to sleep. ## Problem I have a bunch of CBR ([Comic Book Reader](https://en.wikipedia.org/wiki/Comic_book_archive)) files that I need to convert to PDF ([Portable Document Format](https://en.wikipedia.org/wiki/PDF)) files. The general steps for a single file would look something like:

```sh
rar x naruto-001.cbr .images             # 1) unpack archive into .images/*.jpg
magick convert .images/* naruto-001.pdf  # 2) convert all the images into a pdf file
rm -rf .images                           # 3) remove the images
```

But we're not here to convert a single file. We're here to convert a ton of them. Without `make`, we could probably: 1. extract the snippet to a bash script named `foo.sh` 2. run the script on one file with `./foo.sh naruto-001.cbr` 3. run the script on all the files with `find . -name '*.cbr' -exec ./foo.sh {} \;`. Then we'd curse when errors come up because: 1. we forgot to make the script executable. `chmod +x foo.sh` 2. we forgot to add `-e` so the script stops when something goes wrong. `#!/bin/bash -e` 3. something with `find` never works first try! 4. file `naruto-301.cbr` failed, and we don't want to run `naruto-{000-300}.cbr` again! ## Solution Make way for `make`! A `Makefile` can handle most of our iteration problems automatically. It already has the facilities to handle multiple files, error handling, and skipping finished files. ### A Full `Makefile`

```make
cbr_files = $(wildcard *.cbr)
pdf_files = $(cbr_files:.cbr=.pdf)
image_dir = .images-$<

all: ${pdf_files}
	@echo All done

clean:
	rm ${pdf_files}

%.pdf: %.cbr
	@echo Building $< into $@
	rar x $< ${image_dir}/
	find ${image_dir}/ -size 0 -ls -delete
	magick convert ${image_dir}/* $@
	rm -rf ${image_dir}
	@echo $@ is ready
```

### `Makefile` with in-Line Comments

```make
# 1. Get a list of all CBR files in the current directory
cbr_files = $(wildcard *.cbr)

# 2. Get a list of all PDFs I want to build by renaming CBR list to PDF
pdf_files = $(cbr_files:.cbr=.pdf)

# 3. Set where the unpacked images should go.
We'll delete them after each build image_dir = .images-$< # 4. The first target specified is always the default when we run `make` without any arguments all: ${pdf_files} @echo All done # 5. `make clean` will remove all the generated PDFs clean: rm ${pdf_files} # 6. Use Pattern Rules to build an individual PDF file %.pdf: %.cbr # use @ to prevent `make` from printing the command it is executing @echo Building $< into $@ # ^ ^ # | | # | | # | +- foo.pdf # | # +- foo.cbr # rar x $< ${image_dir}/ # ^ ^ ^ # | | | # | | +- unpack to destination directory # | | # | +- foo.cbr # | # +- extract # find ${image_dir}/ -size 0 -ls -delete # ^ ^ ^ ^ # | | | | # | | | +- delete the file # | | | # | | +- print `ls -l` of the file # | | # | +- only files with zero size # +- find in image directory # magick convert ${image_dir}/* $@ # ^ ^ # | | # | +- foo.pdf # | # +- all files in image directory # rm -rf ${image_dir} # ^ # | # +- delete entire image director. We don't need it anymore. # @echo $@ is ready ``` Code the contents of either snippet into a `Makefile` and put the `Makefile` in the same directory as all the `cbr` files. Then execute `make` and a pile of `pdf` files should get created. If we run with `make -j 4` it will even run multiple files in parallel where `4` is the number of parallel jobs you'd like to run. Revel in the code we _didn't_ have to write to enable parallelism! ## A Little Deeper The `make` magic occurs in step 6 with `%.pdf: %.cbr`. This is officially called a [pattern rule](https://www.gnu.org/software/make/manual/make.html#Pattern-Rules) according to the documentation. It tells `make` to build a target for any file suffixed with `.pdf` and set a dependency for the same file suffixed with `.cbr`. With the pattern rule in place and using `$<` to represent the first dependency and `$@` to represent the target file, we can script out the remaining build. Another piece of magic is in step #3. 
`image_dir = .images-$<` is a variable assignment referencing `$<`. The `=` assignment operator in `make` creates a macro (a recursively expanded variable), meaning it won't get evaluated until the variable is referenced in a target. This can be surprising at first, but in this case, it provides a nice unique directory for our images to get unpacked. More details about Makefile variable use can be found in their [docs](https://www.gnu.org/software/make/manual/html_node/Using-Variables.html#Using-Variables).

## Conclusion

`make` isn't the shiniest build system around. [Wikipedia](https://en.wikipedia.org/wiki/List_of_build_automation_software) says there are over 40 of them, but in my opinion, `make` gets the most value for the least amount of effort. `make` provides dependency checking, multiple job support, and simple bash-like syntax without the need to `npm install the_world`.

Thanks for `make`ing it this far. Let me know how you `make` use of `make` in the comments below.

---

# Slice and Dice SQL with SQL Ninja
Date: 2020-06-08
URL: https://ddrscott.github.io/blog/2020/sql-ninja/
Tags: python, sql
Description: SQL + Jinja = SQL Templating Done Right™

I write a lot of SQL. It's my primary language for various reasons which we won't get into here, but even after using it for so long, SQL can still be a pain for some repetitive tasks.
## Problem

Here's a SQL expression for pivoting some expenses by category:

```sql
SELECT
  DATEFORMAT(entry_dt, 'yyyy-mm') as year_month,
  SUM(CASE WHEN category = 'katanas' THEN amount END) AS katanas,
  SUM(CASE WHEN category = 'shurikens' THEN amount END) AS shurikens,
  SUM(CASE WHEN category = 'hooks' THEN amount END) AS hooks,
  SUM(CASE WHEN category = 'shoes' THEN amount END) AS shoes,
  SUM(CASE WHEN category = 'disguises' THEN amount END) AS disguises,
  SUM(
    CASE WHEN category NOT IN ('katanas','shurikens','hooks','shoes','disguises') THEN amount END
  ) AS other_amount,
  SUM(amount) AS total_amount
FROM expenses
ORDER BY year_month, total_amount
```

Can you see the repetitive SQL going on? Here's a hint: `SUM(CASE WHEN(...)) AS ...`. Wouldn't it be nice to make a list of categories and have the SQL generated for you? Wouldn't it be nice to use the generated SQL directly from the command line?

## Solution

SQL is a declarative language. HTML is a declarative language. HTML has so many template languages to choose from, but SQL has so few. Enter SQL Ninja. SQL Ninja uses the popular [Jinja Templating Engine](https://jinja.palletsprojects.com/) which typically generates HTML, but we can use it to generate SQL.

```sh
pip install sql-ninja
```

Now, let's create a simple SQL file:

```sh
cat > hello.sql <<SQL
SELECT 'world'
SQL
```

That wasn't very exciting. Let's try to template it:

```sh
cat > hello.sql <<SQL
SELECT '{{ msg }}'
SQL
sql hello.sql
# => SELECT ''
```

What happened? We got a blank message. This is because we didn't provide one to the template engine. Try this instead:

```sh
sql hello.sql msg='World!!!'
# => SELECT 'World!!!'
```

We did it!

### Starting a New Project

In the real world, we work with projects that have a lot of other files. We probably don't want our main project's root directory littered with `*.sql` files. SQL Ninja recommends putting templates in the following directory:

```
.
└── sql
    └── templates
        └── hello.sql
```

So let's do that with our `hello.sql`:

```sh
mkdir -p sql/templates
mv hello.sql sql/templates/
```

And from here, we can run our exact `sql` command from before:

```sh
sql hello.sql msg='World!!!'
# => SELECT 'World!!!'
```

It still works because SQL Ninja has sensible defaults. Files in the current working directory and `sql/templates` are the default search paths.

!!! note "Default Searched Directories"
    1. Current working directory
    2. `sql/templates`

Please use one or the other, but not both.

### Build The Query

Create `sql/templates/expenses.sql`:

```sql
{%- set categories = categories or 'katanas shurikens hooks shoes disguises' -%}
{%- set pivots = categories.split(' ') -%}
SELECT
  DATEFORMAT(entry_dt, 'yyyy-mm') as year_month,
  {%- for pivot in pivots %}
  SUM(CASE WHEN category = '{{ pivot }}' THEN amount END) AS {{ pivot }},
  {%- endfor %}
  SUM(
    CASE WHEN category NOT IN (
      {%- for pivot in pivots -%}
      '{{pivot}}'{{',' if not loop.last}}
      {%- endfor -%}) THEN amount END
  ) AS other_amount,
  SUM(amount) AS total_amount
FROM expenses
ORDER BY year_month, total_amount
```

Let's run it:

```sh
sql expenses.sql categories='food transportation'
#=>
SELECT
  DATEFORMAT(entry_dt, 'yyyy-mm') as year_month,
  SUM(CASE WHEN category = 'food' THEN amount END) AS food,
  SUM(CASE WHEN category = 'transportation' THEN amount END) AS transportation,
  SUM(
    CASE WHEN category NOT IN ('food','transportation') THEN amount END
  ) AS other_amount,
  SUM(amount) AS total_amount
FROM expenses
ORDER BY year_month, total_amount
```

### Subquery Workflow

Jinja supports including templates in other templates. This is perfect for subqueries! Let's break up the expenses query into two: 1. summaries by category 2.
pivot the results ```sql -- sql/templates/expenses/summary.sql SELECT DATEFORMAT(entry_dt, 'yyyy-mm') as year_month, category, SUM(amount) AS amount FROM expenses ORDER BY year_month, category ``` ```sql -- sql/templates/expenses/pivot.sql {%- set categories = categories or 'katanas shurikens hooks shoes disguises' -%} {%- set pivots = categories.split(' ') -%} SELECT DATEFORMAT(entry_dt, 'yyyy-mm') as year_month, {%- for pivot in pivots %} SUM(CASE WHEN category = '{{ pivot }}' THEN amount END) AS {{ pivot }}, {%- endfor %} SUM( CASE WHEN category NOT IN ( {%- for pivot in pivots -%} '{{pivot}}'{{',' if not loop.last}} {%- endfor -%}) THEN amount END ) AS other_amount, SUM(amount) AS total_amount FROM ( {{"{% include 'expenses/summary.sql' %}"}} ) AS summary ORDER BY year_month, total_amount ``` Render the new template with: `sql expenses/pivot.sql` and override `categories` or not: ```sql SELECT DATEFORMAT(entry_dt, 'yyyy-mm') as year_month, SUM(CASE WHEN category = 'katanas' THEN amount END) AS katanas, SUM(CASE WHEN category = 'shurikens' THEN amount END) AS shurikens, SUM(CASE WHEN category = 'hooks' THEN amount END) AS hooks, SUM(CASE WHEN category = 'shoes' THEN amount END) AS shoes, SUM(CASE WHEN category = 'disguises' THEN amount END) AS disguises, SUM( CASE WHEN category NOT IN ('katanas','shurikens','hooks','shoes','disguises') THEN amount END ) AS other_amount, SUM(amount) AS total_amount FROM ( SELECT DATEFORMAT(entry_dt, 'yyyy-mm') as year_month, category, SUM(amount) AS amount FROM expenses ORDER BY year_month, category ) AS summary ORDER BY year_month, total_amount ``` ## Docker A [docker image](https://hub.docker.com/repository/docker/ddrscott/sql-ninja) has been built if installing Python is a problem: ```sh docker pull ddrscott/sql-ninja ``` Docker containers don't have access to local file system, so we need to mount the volume into the container. 
```sh
docker run --rm -v $PWD:/app -w /app ddrscott/sql-ninja expenses/pivot.sql
#          ^    ^            ^       ^                  ^
#          |    |            |       |                  |
#          |    |            |       |                  + the template
#          |    |            |       |
#          |    |            |       + the image
#          |    |            |
#          |    |            + start in /app path
#          |    |
#          |    + volume mount current path to /app
#          |
#          + remove container when complete
```

Make an alias if ya want:

```sh
# Pick one or name it whatever is most memorable to you:
alias sql='docker run --rm -v $PWD:/app -w /app ddrscott/sql-ninja'
alias sqln='docker run --rm -v $PWD:/app -w /app ddrscott/sql-ninja'
alias sqlninja='docker run --rm -v $PWD:/app -w /app ddrscott/sql-ninja'
```

## Why not [Jinja SQL](https://github.com/hashedin/jinjasql)?

The `jinjasql` project has 452 stars as of this writing and it uses Jinja, too. But the project requires the user to write their SQL as Python strings. If we're going to write SQL in Python strings, then we don't need a templating engine. Just use format strings and plain Python. With SQL Ninja we want to write SQL in `.sql` files. Period. (Their introduction post: https://hashedin.com/blog/introducing-jinjasql-generate-sql-using-jinja-templates/)

## Conclusion

I'm currently using it in my day job and it has helped parameterize several aspects of some large SQL statements. Jinja's `macro` and `include` features really stand out and make writing SQL almost dreamy. Almost.

Please let us know how this project is working for you and how to make it better!

---

# What the SQL?! JOIN me at UNION Station
Date: 2018-07-31
URL: https://ddrscott.github.io/blog/2018/join-me-at-union-station/
Tags: sql

*(Image: Chicago Union Station hall)*

JOIN and UNION are staples in SQL. In English they're synonyms of each other, but in SQL they behave very differently. They work in different directions.

**TL;DR** - Use `JOIN` to add columns. Use `UNION` to add rows.

According to Wikipedia a `JOIN` is:

> An SQL join clause combines columns from one or more tables in a relational database. It creates a set that can be saved as a table or used as it is.
A JOIN is a means for combining columns from one (self-join) or more tables by using values common to each. ANSI-standard SQL specifies five types of JOIN: INNER, LEFT OUTER, RIGHT OUTER, FULL OUTER and CROSS. As a special case, a table (base table, view, or joined table) can JOIN to itself in a self-join.
>
> -- https://en.wikipedia.org/wiki/Join_(SQL)

And here's their definition for `UNION`:

> In SQL the UNION clause combines the results of two SQL queries into a single table of all matching rows. The two queries must result in the same number of columns and compatible data types in order to unite.
>
> -- https://en.wikipedia.org/wiki/Set_operations_(SQL)#UNION_operator

!!! tip "Memory Tip"
    `UNION` adds rows `UNder` the results. `JOIN` has no tip, but it's not a `UNION`.

## Data Setup

We're going to use a two-table database. People belong to a zip code and zip codes have a named region.

!!! success "Schema"
    ```sql
    CREATE TABLE people AS
    SELECT * FROM (
        VALUES
        (1, 'Candy', '60001'),
        (2, 'Mandy', '70001'),
        (3, 'Randy', '80001'),
        (4, 'Andy', '89991')
    ) AS t (id, name, zip_code);

    CREATE TABLE zip_codes AS
    SELECT * FROM (
        VALUES
        ('60001', 'North'),
        ('70001', 'South'),
        ('80001', 'East'),
        ('90001', 'West')
    ) AS t (code, region);
    ```

## JOIN Problems

The best time to use a `JOIN` is when we need to show a column from another table which correlates to a column from a previously declared table.

*(JOIN diagram)*

**Problem**: The Sales Team would like a list of people's names with their regions. Can you create a query correlating each person to their zip code's `region`?

!!! success "Solution"
    ```sql
    SELECT people.id,
           people.name,
           people.zip_code,
           zip_codes.region
    FROM people
    JOIN zip_codes ON zip_codes.code = people.zip_code;
    ```

| id | name  | zip_code | region |
|----|-------|----------|--------|
| 1  | Candy | 60001    | North  |
| 2  | Mandy | 70001    | South  |
| 3  | Randy | 80001    | East   |

**Problem**: Can you list all the people who belong to an unknown zip code?

!!!
success "Solution"

```sql
SELECT people.id,
       people.name,
       people.zip_code,
       zip_codes.region
FROM people
LEFT JOIN zip_codes ON zip_codes.code = people.zip_code
WHERE zip_codes.code IS NULL;
```

id | name | zip_code | region
-- | ---- | -------- | ------
4  | Andy | 89991    |

### JOIN Notes

- `INNER JOIN` and `JOIN` are interpreted identically. The latter saves typing 6 characters.
- `INNER JOIN` is great for finding correlation, but bad for finding unmatched records.
- `LEFT JOIN` tells the query engine we want to return all data from the `LEFT`, or earlier, tables even when the correlated value doesn't exist.
- `RIGHT JOIN` is the opposite of `LEFT`. It retains rows from tables mentioned later in the query.
- Mixing `LEFT JOIN` with `RIGHT JOIN` in the same query should be avoided.
- The `WHERE` condition states we only want the uncorrelated rows: the `people` missing a `zip_codes` record.

## UNION Problems

The `UNION` clause allows us to combine 2 or more unrelated result sets into a single uniform result set.

*(UNION diagram)*

**Problem** The search department wants a list of all keywords for their search index. Return a list of keywords from both the people and the zip code tables.

!!! success "Solution"
    ```sql
    SELECT name AS keyword FROM people
    UNION
    SELECT region FROM zip_codes;
    ```

| keyword |
| --------- |
| East |
| North |
| South |
| Mandy |
| Candy |
| Randy |
| Andy |
| West |

**Problem** The search department likes the results, but wants to be able to correlate each row back to its source table and row.

!!!
success "Solution"

```sql
SELECT 'people' AS source,
       id::varchar AS key,
       name AS keyword
FROM people
UNION
SELECT 'zip_codes', code, region
FROM zip_codes
ORDER BY 1, 2, 3;
```

source     | key   | keyword
-----------|-------|---------
people     | 1     | Candy
people     | 2     | Mandy
people     | 3     | Randy
people     | 4     | Andy
zip_codes  | 60001 | North
zip_codes  | 70001 | South
zip_codes  | 80001 | East
zip_codes  | 90001 | West

### UNION Notes

- The `UNION` clause requires its subqueries to return the same number of columns with compatible types. This is why the `::varchar` cast was added.
- `UNION` defaults to only returning distinct rows between the result sets. If you want all rows regardless of duplicates, try `UNION ALL`.
- `ORDER BY` with `UNION` works on the final results. When ordering is needed within a subquery, use parentheses. Ex: `UNION (SELECT ... ORDER BY ... LIMIT 1)`

## Bonus Problem

Return the people with invalid zip codes __without__ using `JOIN`. Hints:

- `EXCEPT` is the inverse of a `UNION`. `EXCEPT` removes the results from the source query instead of appending them.
- `IN` can be used as a matching clause in a `WHERE` filter.

!!! success "Solution"
    ```sql
    SELECT people.*
    FROM people
    WHERE zip_code IN (
        SELECT zip_code FROM people
        EXCEPT
        SELECT code FROM zip_codes
    );
    ```

id | name | zip_code
---|------|---------
4  | Andy | 89991

## Closing

!!! success "Clean Up"
    ```sql
    DROP TABLE people;
    DROP TABLE zip_codes;
    ```

`JOIN` and `UNION` are an essential part of a SQL user's tool belt. On a day-to-day basis, `JOIN` is used more often than `UNION`, so we would recommend understanding it first. `UNION` gets its power when we realize our data is more alike than originally intended. Both clauses should be withheld at dinner parties and other social events.

Happy SQL-ing!

---

# Move to MkDocs
Date: 2018-05-04
URL: https://ddrscott.github.io/blog/2018/move-to-mkdocs/
Tags: blog

I've upgraded to [MkDocs][] from [Octopress][].
The [Octopress][] installation I've been using was from 2014 (4 years old) and was missing features for search, long-form writing, multi-part posts, and books.

!!! note "http://www.mkdocs.org/"
    MkDocs is a **fast**, **simple** and **downright gorgeous** static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

## Features

To see all the features of [MkDocs][] and [mkdocs-material] jump to their respective sites. Here are the features I really care about:

*(MkDocs screenshot)*

*(MkDocs search)*

[octopress]: http://octopress.org/
[mkdocs]: http://www.mkdocs.org/
[pandoc]: http://pandoc.org/
[mkdocs-material]: https://squidfunk.github.io/mkdocs-material/

## Transition

The migration process wasn't as easy as I'd hoped. I wanted all the old posts to move over, but there wasn't an easy path for my content prior to 2014. That's because some of those posts are in pure HTML and [MkDocs][] only wants to deal in Markdown. I tried [Pandoc][], but that created Markdown which isn't compatible with [MkDocs][]. It'll take some time before all that old stuff gets through. Fortunately, because it's _so_ old, most of the material isn't relevant anymore and gets very few hits.

[MkDocs][] doesn't build the site hierarchy automatically like [Octopress][]. [Octopress][] stores all articles in a flat directory with file names prefixed with a date. [MkDocs][] uses the file name as part of the URL, so I needed to rename the [Octopress][] files and put them in a directory structure to match the URL format I had in the original site.
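The renaming itself is easy to script. Here's a minimal sketch of the mapping (hypothetical — the `mkdocs_path` helper is my own name, and it simply mirrors the directory listings that follow):

```python
import re
from pathlib import Path

# Octopress names posts like 2016-05-28-negative-modulo.markdown;
# the MkDocs layout wants docs/blog/<year>/<slug>.markdown instead.
POST_RE = re.compile(r"^(\d{4})-\d{2}-\d{2}-(.+)$")

def mkdocs_path(octopress_name: str) -> Path:
    """Map an Octopress post filename to its MkDocs location."""
    match = POST_RE.match(octopress_name)
    if not match:
        raise ValueError(f"unexpected post name: {octopress_name}")
    year, slug = match.groups()
    return Path("docs") / "blog" / year / slug

print(mkdocs_path("2016-05-28-negative-modulo.markdown"))
# docs/blog/2016/negative-modulo.markdown
```

Looping over `source/_posts` with `Path.glob('*.markdown')` and `shutil.move` would finish the job.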
**Octopress Posts Directory**:

```
├── source
│   ├── _posts
│   │   ├── 2016-05-28-negative-modulo.markdown
│   │   ├── 2016-07-11-photography-lightening-talk.markdown
│   │   ├── 2016-07-26-ansi-codes-with-character.markdown
│   │   ├── 2017-02-07-how-to-get-better-at-anything.markdown
│   │   ├── 2017-03-08-what-the-sql-lateral.markdown
│   │   ├── 2017-03-15-what-the-sql-recursive.markdown
│   │   ├── 2017-03-22-what-the-sql-window.markdown
│   │   ├── 2017-04-10-vim-send-text.markdown
│   │   ├── 2017-04-13-base16-shell.markdown
│   │   ├── 2017-06-01-gnu-screen.markdown
│   │   ├── 2017-06-12-fzf-dictionary.markdown
│   │   ├── 2018-03-04-getting-rusty-with-vim.markdown
│   │   ├── 2018-03-12-stream-stats-in-rust.markdown
│   │   └── 2018-03-22-blog-setup.markdown
```

**MkDocs Docs Directory**:

```
├── docs
│   ├── blog
│   │   ├── 2016
│   │   │   ├── negative-modulo.markdown
│   │   │   └── yank-without-jank.markdown
│   │   ├── 2017
│   │   │   ├── base16-shell.markdown
│   │   │   ├── fzf-dictionary.markdown
│   │   │   ├── gnu-screen.markdown
│   │   │   ├── how-to-get-better-at-anything.markdown
│   │   │   ├── vim-send-text.markdown
│   │   │   ├── what-the-sql-lateral.markdown
│   │   │   ├── what-the-sql-recursive.markdown
│   │   │   └── what-the-sql-window.markdown
│   │   └── 2018
│   │       ├── blog-setup.markdown
│   │       ├── getting-rusty-with-vim.markdown
│   │       ├── move-to-mkdocs.markdown
│   │       └── stream-stats-in-rust.markdown
```

After getting the files to the correct directory, the site hierarchy needs to be configured in `mkdocs.yml`. That file contains much more than the page structure. You'll need to read more about it from their [documentation][mkdocs].

**mkdocs.yml**:

```yaml
# .. stuff before this section ..
pages:
- Home: index.md
- Projects: projects.md
- '2018':
  - 'Move to MkDocs': blog/2018/move-to-mkdocs.markdown
  - 'A Rustic Journey Through Stream Stats': blog/2018/stream-stats-in-rust.markdown
  - 'Getting Rusty with Vim': blog/2018/getting-rusty-with-vim.markdown
  - 'Dev Blog Tools :: A Quick Tour of My Setup': blog/2018/blog-setup.markdown
- '2017':
  - 'Base16 Shell': blog/2017/base16-shell.markdown
  - 'How to Get Better At Anything': blog/2017/how-to-get-better-at-anything.markdown
  - 'FZF + WordNet = Dictionary': blog/2017/fzf-dictionary.markdown
  - 'GNU Screen': blog/2017/gnu-screen.markdown
  - 'What the SQL?!? Lateral Joins': blog/2017/what-the-sql-lateral.markdown
  - 'What the SQL?!? WINDOW': blog/2017/what-the-sql-window.markdown
  - 'What the SQL?!? Recursive': blog/2017/what-the-sql-recursive.markdown
  - 'Vim Send Text': blog/2017/vim-send-text.markdown
# .. more stuff ..
```

Once `mkdocs.yml` is configured we can run:

```sh
# start a web server to preview the site
mkdocs serve

# or generate the site files for deployment
mkdocs build

# or deploy to GitHub Pages
mkdocs gh-deploy
```

## Conclusion

Here's a breakdown of the pros and cons that I've experienced so far:

### Pros

+ Full site search without using an external service (Google, Algolia, etc.)
+ Site hierarchy.
+ Automatic table of contents on all pages.
+ Various code blocks and annotations.
+ Less magic.

### Cons

+ Older posts don't translate well, because they have to be in Markdown format.
+ [MkDocs][] is a Python project, a language I'm not familiar with, yet.
+ Less magic.

Hopefully this is the catalyst to more content. Stay tuned!

Questions? Comments? Trolls?!? Let me know!

---

# Dev Blog Tools :: A Quick Tour of My Setup
Date: 2018-03-22
URL: https://ddrscott.github.io/blog/2018/blog-setup/
Description: Tools I use to create posts, screenshots, code snippets, gifs, screencasts, and other interesting things.
I've been asked to share my blogging setup a few times, so in the spirit of keeping things [DRY][], it's time to make a post about it.

**TL;DR** -- Mac, iTerm, NeoVim, LICEcap, Octopress, Base16, Input Font, Skitch, OBS

> **WARNING** This is an atypical post. I normally prefer to go over a single feature, but this time I'm going to under-explain a lot of features. Here's a short list of typical posts in case chaos is not your thing:
>
> + https://ddrscott.github.io/blog/2017/vim-send-text/
> + https://ddrscott.github.io/blog/2017/what-the-sql-lateral/
> + https://ddrscott.github.io/blog/2016/negative-modulo/

## Computer

I do all my writing and coding on a MacBook Pro Retina 15 inch. It's pretty maxed out, but I don't think it needs to be that way. The only stats I really care about are the size and the clarity of the screen.

## Terminal

I like to live in a terminal. I choose [iTerm2][] for its split panes and independent font size per pane. At times I wish it was as fast as the native Terminal.app, but Terminal.app doesn't support splits. I've been told many times that [tmux][] supports splits, too, but I can't change the text size independently between splits. I'd also rather use [GNU screen][screen] anyway.

> Old timers like old things™

## Editor

[NeoVim][] is always loaded in one of my terminal panes. It's the most efficient editor for me. I switched away from Vim 7 because it couldn't do background jobs. Prior to Vim I used [RubyMine][] and [IntelliJ][] which were great for code completion and navigating projects, but felt heavy for notes, free-form writing, and editing system files. I'll save the rest of the Vim sermon for another post.

When composing posts, I use [vim-markdown][] to get syntax highlighting, folding, TOC, and other goodies.

My entire NeoVim config can be found in its [Github repository][config-nvim]. I don't recommend folks use it outright; borrow parts of it and slowly integrate them into your own setup. One size doesn't fit all.
It barely fits me!

## Colors

I lived in [Solarized Dark](http://ethanschoonover.com/solarized) for many years. It is so commonplace in development shops it became the [new green screen][] of the 70's and 80's. So when folks see yellow and green highlights against an off-black chalkboard beaming from my terminal, there's pause and self-reflection. They wonder how they were nestled into a monotonous monoculture. They ponder when they traded emotional delight for ocular comfort. They realize depression encroached silently on them like mold in a damp attic. But I digress…

Here's how Base16 describes itself:

> An architecture for building themes based on carefully chosen syntax highlighting using a base of sixteen colours. Base16 provides a set of guidelines detailing how to style syntax and how to code a builder for compiling base16 schemes and templates.
>
> -- https://github.com/chriskempson/base16

*(Base16 Ocean colors)*

[base16-ocean][] is the color scheme I use. (It's scheme #74; type `j` or `k` to change the theme.) The scheme is perfectly in sync between shell and Vim due to [base16-shell][] and [base16-vim][]. I wrote a [longer post about using Base16 a while back](/blog/2017/base16-shell/).

## Font

I've been using the same code font for as long as I can remember: Input Mono Condensed.

*(Input Font samples)*

> Input is a flexible system of fonts designed specifically for code by David Jonathan Ross. It offers both monospaced and proportional fonts, all with a large range of widths, weights, and styles for richer code formatting.
>
> -- Font Bureau

I enjoy the fancy 'a' and 'g' characters along with the easy-to-distinguish 'l', '1', and 'I'. (Helvetica can't do it justice.) It also has several different character width options so I can squeeze more code into one eye shot.

*(Input Font letters)*

Speaking of shots…

## Screenshots

I have a few ways of taking screenshots and it's mainly the Mac way:

+ To snap a small portion of the screen, I use `⇧+⌘+4`, then select region to snap.
+ To snap a window with the shadow, I use `⇧+⌘+4`, then `space` and select a window. [Apple's support page][apple1] has the gruesome details on both. When I want to annotate a screenshot, I use [Evernote's Skitch App][skitch]. The featured image was created using a combination of OSX screenshot and then editing in [Skitch][skitch]. ## Animated Gifs and Videos For short demos, I like to use [LICEcap][] to record an animated Gif. I wouldn't use Gifs for anything longer than a few seconds since it doesn't support video playback controls without extra magic. Here's an example [LICEcap][] Gif: And here's a video of how I produced it:
For full-screen video recording, I use [OBS Studio][obs]. (You might have noticed me stopping the recording at the end of the video. Should have used a hotkey.) It produces small file sizes and has a lot of features including webcam overlays, filters, transforms, etc. It deserves a whole book of its own.

> **Note to Self**
>
> 1. write book about OBS Studio
> 2. retire

## Blog Generator

I use [Octopress][octopress1] to generate the static HTML pages you're reading now. I haven't updated it since 2014 ([SHA 71e4d40b][octopress2]) and I'm terrified to do so. The setup Just Works™. I write in [Markdown][] and it does the rest. The source code for this exact post is [here](https://github.com/ddrscott/octopress/blob/master/source/_posts/2018-03-22-blog-setup.markdown).

That being said, I'm strongly considering switching to [Mkdocs][] to have more structure and better search capabilities built-in.

## Closing

You made it to the end! I hope at least one of these will benefit your daily computing life.

Questions? Comments? Trolls?!? Let me know!
[Hacker News](https://news.ycombinator.com/item?id=16648381) **Links** + iTerm2 -- https://www.iterm2.com/ + Tmux -- https://github.com/tmux/tmux/wiki + GNU Screen -- https://www.gnu.org/software/screen/ + RubyMine -- https://www.jetbrains.com/ruby/ + IntelliJ -- https://www.jetbrains.com/idea/ + Vim -- https://www.vim.org + NeoVim -- https://neovim.io/ + vim-markdown -- https://github.com/plasticboy/vim-markdown + My NeoVim Config -- https://github.com/ddrscott/config-nvim + base16-shell -- https://github.com/chriskempson/base16-shell + base16-vim -- https://github.com/chriskempson/base16-vim + base16-ocean -- http://chriskempson.com/projects/base16/ + Input Font -- http://input.fontbureau.com/ + Skitch -- https://evernote.com/products/skitch + LICEcap -- https://www.cockos.com/licecap/ + OBS Studio -- https://obsproject.com/ + Octopress -- http://octopress.org/ + Mkdocs -- http://www.mkdocs.org/ **Edits** + added Hacker News link [DRY]: https://en.wikipedia.org/wiki/Don%27t_repeat_yourself [iTerm2]: https://www.iterm2.com/ [tmux]: https://github.com/tmux/tmux/wiki [screen]: https://www.gnu.org/software/screen/ [rubymine]: https://www.jetbrains.com/ruby/ [intellij]: https://www.jetbrains.com/idea/ [vim]: https://www.vim.org [NeoVim]: https://neovim.io/ [vim-markdown]: https://github.com/plasticboy/vim-markdown [config-nvim]: https://github.com/ddrscott/config-nvim [new green screen]: https://en.wikipedia.org/wiki/Monochrome_monitor#/media/File:IBM_PC_5150.jpg [base16-shell]: https://github.com/chriskempson/base16-shell [base16-vim]: https://github.com/chriskempson/base16-vim [base16-ocean]: http://chriskempson.com/projects/base16/ [input_font]: http://input.fontbureau.com/ [apple1]: https://support.apple.com/en-us/HT201361 [skitch]: https://evernote.com/products/skitch [licecap]: https://www.cockos.com/licecap/ [obs]: https://obsproject.com/ [octopress1]: http://octopress.org/ [octopress2]: 
https://github.com/imathis/octopress/tree/71e4d40ba7aef73da65936bc9a77e432609811b2
[mkdocs]: http://www.mkdocs.org/
[markdown]: https://en.wikipedia.org/wiki/Markdown

---

# A Rustic Journey Through Stream Stats
Date: 2018-03-17
URL: https://ddrscott.github.io/blog/2018/stream-stats-in-rust/
Tags: rust
Description: How I made a CLI program in Rust which combines tee, wc -l, and watch.

After playing [Guessing Game](/blog/2018/getting-rusty-with-vim/) from the [Rust Book](https://doc.rust-lang.org/book/first-edition/guessing-game.html) a few times, it was time to make something a little more substantial. We're going to create `stream_stats`, a CLI program which prints throughput statistics from `stdin` while redirecting through `stdout`. Think `tee` + `wc -l` + `watch` all at the same time.

**TL;DR** - `cargo install stream_stats`

Here is a quick demo of the program:

*(Stream Stats demo gif)*

Today, we'll build this program up in 6 smallish steps. The minimum requirements of this program were live feedback, as seen in the demo, _and_ minimal impact on overall performance.

## Step 1 - Reproducing `cat` Inefficiently

The first step is to replicate `cat`. We'll do it as demonstrated by Rust's own [documentation](https://doc.rust-lang.org/std/io/struct.Stdin.html#method.read_line).

```rust
use std::io;

fn main() {
    let mut buffer = String::new();
    while io::stdin().read_line(&mut buffer).unwrap() > 0 {
        print!("{}", buffer);
        buffer.clear();
    }
}
```

> I'm using `unwrap` to keep our program short and sweet.

Save the code as `stream_stats.rs` and build it using `rustc -O stream_stats.rs`. This will compile the program into `stream_stats`. We can now run the program with `./stream_stats < stream_stats.rs` or `cat stream_stats.rs | ./stream_stats`. This should output the source code we just wrote.

The program is sufficient for small streams, but will perform horribly on large files.
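Part of why step 1 is slow: each `read_line` call on `io::stdin()` re-acquires the handle's internal lock. A small, hedged aside (not from the original post — the generic `echo` helper is my own name): locking stdin once up front removes that per-line overhead, and has the nice side effect of making the loop testable against any `BufRead` source.

```rust
use std::io::{self, BufRead, Write};

// Copy lines from any buffered reader to a writer, like `cat`.
fn echo<R: BufRead, W: Write>(mut reader: R, writer: &mut W) -> io::Result<()> {
    let mut buffer = String::new();
    while reader.read_line(&mut buffer)? > 0 {
        writer.write_all(buffer.as_bytes())?;
        buffer.clear();
    }
    Ok(())
}

fn main() {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    // Lock stdin once instead of re-acquiring it inside every read_line call.
    echo(stdin.lock(), &mut stdout).unwrap();
}
```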
## Step 2 - Reproducing `cat` Efficiently with Buffering

> It can be excessively inefficient to work directly with a Read instance. For example, every call to read on TcpStream results in a system call. A BufReader performs large, infrequent reads on the underlying Read and maintains an in-memory buffer of the results.
>
> -- https://doc.rust-lang.org/std/io/struct.BufReader.html

Let's add some buffering to increase performance and get it near the speed of `cat`. Replace the contents of `stream_stats.rs` with the following, recompile, and run the program.

```rust
use std::io::{self, BufRead, BufReader, BufWriter, Write};

static READ_BUF_SIZE: usize = 1024 * 1024;

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        buffer.clear();
    }
    writer.flush().unwrap();
}
```

The exact difference is viewable on Github. Here's a one-liner to help with the build/run cycle:

```sh
rustc -O ./stream_stats.rs && ./stream_stats < stream_stats.rs
```

For a few extra lines, we get a lot of performance. There are ways to get even more performance, but it won't be worth the code complexity at this time.

## Step 3 - Count the Lines

We're ready to start counting lines. We'll introduce a `struct` to hold a start time and line count.
```rust
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;

static READ_BUF_SIZE: usize = 1024 * 1024;

struct Stats {
    started: Instant,
    lines: usize,
}

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    let mut stats = Stats {
        started: Instant::now(),
        lines: 0,
    };
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        stats.lines += 1;
        buffer.clear();
    }
    writer.flush().unwrap();
    eprintln!("lines: {}, {:?}", stats.lines, stats.started.elapsed());
}
```

Again the exact difference is viewable on Github.

## Step 4 - Write to `/dev/tty`

Using `eprintln!` is easy, but bad practice for non-error output. The next step is moving the output to `/dev/tty`. As a reminder, we're also not using `println!` because we're reserving it for the original content piped from `stdin`.

```rust
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;

static READ_BUF_SIZE: usize = 1024 * 1024;

struct Stats {
    started: Instant,
    lines: usize,
    tty: File,
}

impl Stats {
    fn new(tty: &str) -> Stats {
        Stats {
            started: Instant::now(),
            lines: 0,
            tty: OpenOptions::new()
                .write(true)
                .append(true)
                .open(tty)
                .expect("Cannot open tty for writing!"),
        }
    }
}

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    let mut stats = Stats::new("/dev/tty");
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        stats.lines += 1;
        buffer.clear();
    }
    writer.flush().unwrap();
    writeln!(
        stats.tty,
        "lines: {}, {:?}",
        stats.lines,
        stats.started.elapsed()
    ).expect("Could not write to tty!");
}
```

Exact difference is viewable on Github.

## Step 5 - Beautify Stats Output

The display logic is going to get a little more complex.
We want to move the string formatting code to a `fmt::Display` trait. We'll also add the kilobytes to the displayed stats.

```rust
use std::fmt;
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::time::Instant;

static READ_BUF_SIZE: usize = 1024 * 1024;
static CLEAR_LINE: &str = "\x1B[1G\x1B[2K";

struct Stats {
    started: Instant,
    lines: usize,
    bytes: usize,
    tty: File,
}

impl Stats {
    fn new(tty: &str) -> Stats {
        Stats {
            started: Instant::now(),
            lines: 0,
            bytes: 0,
            tty: OpenOptions::new()
                .write(true)
                .append(true)
                .open(tty)
                .expect("Cannot open tty for writing!"),
        }
    }
}

impl fmt::Display for Stats {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let elapsed = self.started.elapsed();
        let seconds: f64 = elapsed.as_secs() as f64 + elapsed.subsec_nanos() as f64 * 1e-9;
        if seconds == 0.0 {
            return write!(f, "");
        }
        let kb = self.bytes as f64 / 1024 as f64;
        let kb_per_sec = kb / seconds;
        let lines_per_sec = self.lines as f64 / seconds;
        write!(
            f,
            "{}{:.1} sec | {:.0} kb [ {:.1}/s ] | {} lines [ {:.0}/s ]",
            CLEAR_LINE, seconds, kb, kb_per_sec, self.lines, lines_per_sec
        )
    }
}

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    let mut stats = Stats::new("/dev/tty");
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        stats.lines += 1;
        stats.bytes += &buffer.len();
        buffer.clear();
    }
    writer.flush().unwrap();
    writeln!(&stats.tty, "{}", &stats).expect("Could not write to tty!");
}
```

Exact difference is viewable on Github.

## Step 6 - Display the stats 10 times per second

We're finally at the most useful part of the program: viewing the stats while the stream is still going. For this task, we introduce a thread which loops forever, sleeping a little and waking to output the stats.
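The shape of that reporter thread is easy to see in miniature. Here is a Python sketch (not the post's code): a background thread wakes on an interval and prints shared counters while the main loop keeps streaming.

```python
import threading
import time

# Shared counters: updated by the main loop, read by the reporter thread.
stats = {"lines": 0, "bytes": 0}

def report(interval=0.1):
    # Wake 10x per second and overwrite the status line with fresh stats.
    while True:
        time.sleep(interval)
        print(f"\r{stats['lines']} lines, {stats['bytes']} bytes", end="", flush=True)

threading.Thread(target=report, daemon=True).start()

# Stand-in for the stdin copy loop.
for line in ["hello world\n"] * 100_000:
    stats["lines"] += 1
    stats["bytes"] += len(line)

print(f"\rfinal: {stats['lines']} lines, {stats['bytes']} bytes")
```

Python lets the two threads share `stats` without ceremony; Rust makes the sharing explicit, which is where the `Arc` and `AtomicUsize` discussion comes in.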
Because of the thread, we need to use `Arc` to safely tell Rust another thread is going to have a pointer to the stats object. To be honest, I don't fully understand why I need to use `AtomicUsize`. I tried keeping the plain `usize` variables, but got errors regarding mutability. If someone out there can remove the `AtomicUsize` without introducing `unsafe`, please let me know! Here's the final code listing:

```rust
use std::fmt;
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, BufWriter, Write};
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread::{self, sleep};
use std::time::{Duration, Instant};

static READ_BUF_SIZE: usize = 1024 * 1024;
static CLEAR_LINE: &str = "\x1B[1G\x1B[2K";
static UPDATE_INTERVAL_MS: u64 = 100;

struct Stats {
    started: Instant,
    lines: AtomicUsize,
    bytes: AtomicUsize,
    tty: File,
}

impl Stats {
    fn new(tty: &str) -> Stats {
        Stats {
            started: Instant::now(),
            lines: AtomicUsize::new(0),
            bytes: AtomicUsize::new(0),
            tty: OpenOptions::new()
                .write(true)
                .append(true)
                .open(tty)
                .expect("Cannot open tty for writing!"),
        }
    }

    fn add(&self, buffer: &Vec<u8>) {
        self.lines.fetch_add(1, Ordering::Relaxed);
        self.bytes.fetch_add(buffer.len(), Ordering::Relaxed);
    }
}

impl fmt::Display for Stats {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let elapsed = self.started.elapsed();
        let seconds: f64 = elapsed.as_secs() as f64 + elapsed.subsec_nanos() as f64 * 1e-9;
        if seconds == 0.0 {
            return write!(f, "");
        }
        let bytes = self.bytes.load(Ordering::Relaxed) as f64;
        let lines = self.lines.load(Ordering::Relaxed) as f64;
        let kb = bytes / 1024 as f64;
        let kb_per_sec = kb / seconds;
        let lines_per_sec = lines / seconds;
        write!(
            f,
            "{}{:.1} sec | {:.0} kb [ {:.1}/s ] | {:.0} lines [ {:.0}/s ]",
            CLEAR_LINE, seconds, kb, kb_per_sec, lines, lines_per_sec
        )
    }
}

fn main() {
    let mut reader = BufReader::with_capacity(READ_BUF_SIZE, io::stdin());
    let mut writer = BufWriter::new(io::stdout());
    let mut buffer = vec![];
    let
stats = Arc::new(Stats::new("/dev/tty"));
    let stats_clone = stats.clone();
    thread::spawn(move || loop {
        sleep(Duration::from_millis(UPDATE_INTERVAL_MS));
        write!(&stats_clone.tty, "{}", &stats_clone).expect("Could not write to tty!");
    });
    while reader.read_until(b'\n', &mut buffer).unwrap() > 0 {
        writer.write(&buffer).unwrap();
        stats.add(&buffer);
        buffer.clear();
    }
    writer.flush().unwrap();
    writeln!(&stats.tty, "{}", &stats).expect("Could not write to tty!");
}
```

Exact difference is viewable on Github.

## Closing Thoughts

I personally learned a lot assembling these steps and wish I had done this _before_ publishing the `cargo` [crate](https://github.com/ddrscott/stream_stats) of the same name. Any suggestions, comments, and corrections on this post or the final crate are welcome. https://github.com/ddrscott/stream_stats

Thanks for learning with me!

---

# Getting Rusty with Vim

Date: 2018-03-04
URL: https://ddrscott.github.io/blog/2018/getting-rusty-with-vim/
Tags: rust, vim

After dabbling in Go and Crystal, I figured I'd give Rust a try. Of course I used Vim along the way. Here are some notes I compiled after my first session.

## Vim Setup

There are 2 excellent Vim plugins which play nice with Rust. First is https://github.com/rust-lang/rust.vim which provides:

> ... Rust file detection, syntax highlighting, formatting, Syntastic integration, and more.

It has nearly 1k stars, one of which is from me, and it's triple the stars of `rust-mode` for Emacs. The second plugin is https://github.com/racer-rust/vim-racer which provides omni-complete and jump to definition. Both features are good enough that I don't need to use ctags. I've in fact overridden several default Vim mappings with `vim-racer` implementations:

```vim
au FileType rust nmap <C-]> <Plug>(rust-def)
au FileType rust nmap <C-w><C-]> <Plug>(rust-def-vertical)
au FileType rust nmap <C-w>} <Plug>(rust-def-split)
au FileType rust nmap <S-k> <Plug>(rust-doc)
```

## Rust Experience

The featured image is an implementation of a number guessing game.
The game is from the Rust Tutorial Guide at https://doc.rust-lang.org/book/first-edition/guessing-game.html. I followed the guide sentence by sentence, line by line, and everything worked without additional troubleshooting sessions. Good job, @rustlang! I massaged the code a little more to fool around and came up with the code in the featured screen shot. The source is available in this [gist](https://gist.github.com/ddrscott/991a329b7f1c1f7682da5e4c24cdecc5). It's not the most exciting code I've ever written, but possibly the most painless of the new languages I've tried. When I came across some confusing language decisions, I posted a tweet about it:

@rustlang responded quickly with insightful information. They're totally getting the Raving Fan Award this weekend!

## Conclusion

Rust is worth pursuing with or without Vim. The feedback from the compiler and runtime errors is clear. The [racer-rust](https://github.com/racer-rust/vim-racer) completion utility provides IDE super powers. And finally, I hear it's a pretty good language, too. https://www.rust-lang.org

---

# FZF + WordNet = Dictionary

Date: 2017-06-12
URL: https://ddrscott.github.io/blog/2017/fzf-dictionary/

`FZF + WordNet = Dictionary`. FZF is a fuzzy finding command line tool. WordNet is a dictionary structured for developers. When married together, we can get a snappy dictionary to help us find just the right word for any occasion.

## Install Required Programs

Before making our new shell function, let's install the required programs.

1. https://github.com/junegunn/fzf
2. http://wordnetweb.princeton.edu/perl/webwn

These directions are for Mac OS X with `homebrew` installed. If you're on a different system, read the docs from the sites above to get the programs for your operating system.

```sh
brew install fzf
brew cask install xquartz
brew install wordnet
```

## FZF

FZF stands for Fuzzy Finder.
It is a program which enables the user to filter a set of lines from standard in and feed those lines back to standard out. A basic example is: `find . | fzf`. This presents a list of all files in the current working directory and prompts the user for input. As you type letters, the list will narrow, keeping only the items matching the search criteria. After selecting an entry from the list, the chosen line or lines are printed to standard out. It provides a nifty argument `--preview` which can execute a program and display its output as an aside in the terminal. We'll write more about FZF in the future.

## WordNet

> WordNet is a large lexical database of English. Nouns, verbs, adjectives and
> adverbs are grouped into sets of cognitive synonyms (synsets), each expressing
> a distinct concept. Synsets are interlinked by means of conceptual-semantic
> and lexical relations. The resulting network of meaningfully related words and
> concepts can be navigated with the browser. WordNet is also freely and
> publicly available for download. WordNet's structure makes it a useful tool
> for computational linguistics and natural language processing.
>
> WordNet superficially resembles a thesaurus, in that it groups words together
> based on their meanings. However, there are some important distinctions.
> First, WordNet interlinks not just word forms—strings of letters—but specific
> senses of words. As a result, words that are found in close proximity to one
> another in the network are semantically disambiguated. Second, WordNet labels
> the semantic relations among words, whereas the groupings of words in
> a thesaurus does not follow any explicit pattern other than meaning
> similarity.

Using WordNet we can find information similar to what is in a dictionary and thesaurus combined. Example output of looking up `happy` is:

```text
% wn happy -over

Overview of adj happy

The adj happy has 4 senses (first 2 from tagged texts)

1.
(37) happy -- (enjoying or showing or marked by joy or pleasure; "a happy smile"; "spent many happy days on the beach"; "a happy marriage")
2. (2) felicitous, happy -- (marked by good fortune; "a felicitous life"; "a happy outcome")
3. glad, happy -- (eagerly disposed to act or to be of service; "glad to help")
4. happy, well-chosen -- (well expressed and to the point; "a happy turn of phrase"; "a few well-chosen words")
```

`xquartz` is needed since the WordNet package also works in GUI mode. The GUI app has a useful interface and may be more intuitive if you don't mind clicking around. The GUI app can be started with `wnb`, but we're not here for GUI stuff. Let's move on to making our terminal script.

## Shell Script

The following 3 functions can be used individually and are helpful all on their own. Add these directly into your shell profile, or put them in a separate file and source it from the profile.

`fold` is normally a standard command. By default it adds newlines when text overflows the terminal, but it doesn't do it in an easy-to-read fashion. The default behavior can break in the middle of a word and assumes a terminal width of 80 columns. Our `fold` function breaks at spaces and passes the whole terminal width to it when no other arguments are provided.

```sh
# Default `fold` to screen width and break at spaces
function fold {
  if [ $# -eq 0 ]; then
    /usr/bin/fold -w $COLUMNS -s
  else
    /usr/bin/fold $*
  fi
}
```

`spell` is the FZF portion of our script. This fuzzy matches the built-in Mac dictionary with a preview window containing the WordNet overview of the selected word.

```sh
# Use `fzf` against system dictionary
function spell {
  cat /usr/share/dict/words | fzf --preview 'wn {} -over | fold' --preview-window=up:60%
}
```

The `dic` script uses `spell` to help find a word then outputs WordNet's definition.

```sh
# Lookup definition of word using `wn $1 -over`.
# If $1 is not provided, we'll use the `spell` command to pick a word.
#
# Requires:
#   brew install wordnet
#   brew install fzf
function dic {
  if [ $# -eq 0 ]; then
    wn `spell` -over | fold
  else
    wn $1 -over | fold
  fi
}
```

Here's another demo of the `dic` function:

FZF Dictionary Demo 2

## Conclusion

Gluing programs together with `fzf --preview` is fun. Let us know what other recipes you come up with in the comments below.

---

# GNU Screen

Date: 2017-06-01
URL: https://ddrscott.github.io/blog/2017/gnu-screen/

> Screen is a full-screen window manager that multiplexes a physical terminal
> between several processes, typically interactive shells.

TL;DR - Screen keeps your ssh sessions alive on a host.

## Installation

Most servers have `screen` installed already. If they don't, it can be installed via `apt-get install screen`, `yum install screen`, or `brew install screen`. Sorry Windows, try Remote Desktop.

## Startup

Get a terminal on a remote host (or local) then run `screen`:

```sh
screen
```

If you're not brave, try `man screen` to read more about it first. Once `screen` has started, you'll want to remember `C-a ?`. That is how you get the screen options menu. `C-a` is typed literally: hold `CTRL` and press `a`. To quit the `screen` app, type `exit`. To keep `screen` running, type `C-a d` to detach from the program. To reattach to that session try `screen -x`.

## Options

There are tons of options and they're best found by reading the `man` page or Googling `gnu screen shortcuts`. Here are some of my favorites.

### Startup Flags

- `screen -DDR`. Force others off the current session and reattach yourself.
- `screen -x`. Reattach yourself, but allow others to stay in. This is cooperative mode. Good for pairing and much faster than GUI screen sharing.

### Control Keys

- `C-a c`. Create a "tab" to have multiple sessions.
- `C-a C-a`. Toggle to previous session.
- `C-a n`. Switch to the next session.
- `C-a a`. Send a literal `C-a` back to the shell.

### Config File and Pretty Colors

It's easy to get lost in screen without a status line. So creating this file in your home directory will help.
**~/.screenrc**

```text
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]'
```

This should give you a pretty statusline at the bottom of your terminal. Here's what it looks like:

GNU Screen Statusline

## TMUX

A strong competitor to `screen` is `tmux`. It has a more modern code base and is actively maintained. The reasons I personally don't use it are habit and that it's not installed everywhere. `screen` just works for my workflow.

## References

- https://www.gnu.org/software/screen/
- http://www.pixelbeat.org/lkdb/screen.html
- http://aperiodic.net/screen/quick_reference

---

# Base16 Shell

Date: 2017-04-13
URL: https://ddrscott.github.io/blog/2017/base16-shell/

After many years using the excellent Solarized color scheme, it has started to feel stale. Sometimes I think the dark blueish tint brings down my mood. Other times, I wonder what life could be like if I stared at more cheerful colors. Thus starts my farewell to Solarized, and hello to Base16.

From Base16's [Github README](https://github.com/chriskempson/base16):

> Base16 provides carefully chosen syntax highlighting and a default set of
> sixteen colors suitable for a wide range of applications. Base16 is not a
> single theme but a set of guidelines with numerous implementations.

Which means after integrating with Base16 once, I'll have access to an unlimited supply of themes in the future!

## Installation

Base16 has perfect iTerm and shell integration. Once the repo was installed locally, I called `base16_ocean` and was greeted by a brand new palette. No iTerm tweaking, no downloading this other thing and importing stuff into iTerm. It was literally 2 steps performed in the shell, then picking a theme. Here's what you do. (FYI. This is pretty much copy/paste from their repo.)

```sh
# 1.
# clone the repo to `~/.config/base16-shell`
git clone https://github.com/chriskempson/base16-shell.git ~/.config/base16-shell

# 2. update ~/.bashrc or ~/.zshrc
cat >> ~/.zshrc <<'SH'
BASE16_SHELL=$HOME/.config/base16-shell/
[ -n "$PS1" ] && [ -s $BASE16_SHELL/profile_helper.sh ] && eval "$($BASE16_SHELL/profile_helper.sh)"
SH
```

After you're done with those steps, start a new terminal session or source the file, and start choosing a theme. Try `base16_ocean` to see what I'm seeing. Try `base16_<TAB>` to see what other options you have available. To preview what they look like before making a choice, go to their website: https://chriskempson.github.io/base16/.

## Vim Integration

Install the plugin from https://github.com/chriskempson/base16-vim. Add the following to your `.vimrc`:

```vim
if filereadable(expand("~/.vimrc_background"))
  let base16colorspace=256
  source ~/.vimrc_background
endif
```

`base16-shell` commands create the `~/.vimrc_background` file every time a `base16_*` alias is used. This allows Vim to always stay synchronized with the shell, which is AWESOME!

## Conclusion

After cycling through every one of the user-created themes, I've settled on `base16_ocean` as my new home. I may get tired of it, I may not, but either way I'm just a shell command away from changing. Indecision has never been so easy.

---

# Vim Send Text

Date: 2017-04-10
URL: https://ddrscott.github.io/blog/2017/vim-send-text/
Tags: vim

Vim Send Text Demo

After pairing with some Sublime users, I noticed a neat feature. Or more accurately, they were rubbing it in my face that their cute editor was better than mine. The feature was [SendText](https://github.com/wch/SendText). Well, I couldn't let Sublime users have all the fun, and apparently neither could a few other people.

## History

There have been a few other implementations of this feature. These implementations sent the text to a screen or tmux split. Since I don't use either, I couldn't use them a la carte.
- https://github.com/vim-scripts/tslime.vim
- https://github.com/jpalardy/vim-slime
- https://github.com/ervandew/screen

This next implementation was good. Its only flaw, IMHO, was its mappings and naming. The name "ISlime2" is impossible for me to type on the first try. The mappings overlapped my existing mappings. [ISlime2](https://github.com/matschaffer/vim-islime2) did all the hard AppleScript work and provides the Vim function to pass into the AppleScript.

Enter [vim-sendtext](https://github.com/ddrscott/vim-sendtext). [vim-sendtext](https://github.com/ddrscott/vim-sendtext) is a fork of [ISlime2](https://github.com/matschaffer/vim-islime2). My fork removes all the mappings, exposes useful internal functions, and adds recommended mappings to the README.md.

## Recommended Mappings

```vim
" Send current line
nnoremap <Leader>i :SendTextCurrentLine<CR>

" Send in/around text object - operation pending
nnoremap <Leader>i :set opfunc=sendtext#iTermSendOperator<CR>g@

" Send visual selection
vnoremap <Leader>i :<C-u>call sendtext#iTermSendOperator(visualmode(), 1)<CR>

" Move to next line then send it
nnoremap <Leader>ij :SendTextNextLine<CR>

" Move to previous line then send it
nnoremap <Leader>ik :SendTextPreviousLine<CR>
```

## Vim Operator Pending

One of the main reasons to use Vim is operator-pending mappings. It's at the heart of `vip`, `dip`, `ciw`, etc. [vim-sendtext](https://github.com/ddrscott/vim-sendtext) provides an operator-pending function so we can logically do `{SEND}ap`, `{SEND}ip`, `{SEND}if`, etc. The identical function works in visual mode to help build confidence in our text object targets. To read more about operator-pending functions and how to create them try:

```vim
:h map-operator
```

## Conclusion

Hope [vim-sendtext](https://github.com/ddrscott/vim-sendtext) can remove some feature envy from Sublime. Happy console hacking!

---

# What the SQL?!? Window

Date: 2017-03-22
URL: https://ddrscott.github.io/blog/2017/what-the-sql-window/
Tags: sql

Today's "What the SQL?!?" features the keyword `WINDOW`.
This clause allows us to elegantly compute values across sets of related rows without collapsing them the way `GROUP BY` does. Please note, our target database is PostgreSQL. These examples may work with other databases, but might need some massaging to get them to work properly. Search online for the specific vendor's documentation if errors pop up. Try searching for "WINDOW queries $DB_VENDOR_NAME". Not all database vendors support the keyword `WINDOW`.

## Create Sample Data

```sql
DROP TABLE IF EXISTS sample_moves;

CREATE TABLE sample_moves AS
SELECT column1::int AS id,
       column2::varchar AS name,
       column3::varchar AS address,
       column4::date AS moved_at
FROM (
  VALUES
    (1, 'Alice'  , '1 Main St', '2017-01-01'),
    (2, 'Bob'    , '2 Main St', '2017-02-01'),
    (3, 'Cat'    , '2 Main St', '2017-03-01'),
    (4, 'Dan Sr' , '3 Main St', '1970-04-01'),
    (5, 'Dan Jr' , '3 Main St', '2001-04-01'),
    (6, 'Dan 3rd', '3 Main St', '2017-04-01')
) as t;

CREATE INDEX ON sample_moves(address);

SELECT * FROM sample_moves;
```

#### Results:

| id | name    | address   | moved_at
|----|---------|-----------|------------
| 1  | Alice   | 1 Main St | 2017-01-01
| 2  | Bob     | 2 Main St | 2017-02-01
| 3  | Cat     | 2 Main St | 2017-03-01
| 4  | Dan Sr  | 3 Main St | 1970-04-01
| 5  | Dan Jr  | 3 Main St | 2001-04-01
| 6  | Dan 3rd | 3 Main St | 2017-04-01

## Life Without Windows

A quick poem...

> Eyes big and wide,
> nothing seen inside.
> Feeling around
> nothing abound.
> This thing's wet,
> toxic I bet.
> Closing my eyes,
> still can't rest.
> Having a window,
> would be best.
### How many people live at each address?

Using a standard `GROUP BY` with `COUNT`, we consolidate the records and count how many rows belong to each address.

> **Tip**: `COUNT(1)` and `COUNT(*)` are equivalent in PostgreSQL, so use whichever reads better.

```sql
SELECT address,
       COUNT(1) total
FROM sample_moves
GROUP BY address
ORDER BY address;
```

#### Results:

| address   | total
|-----------|-------
| 1 Main St | 1
| 2 Main St | 2
| 3 Main St | 3

### How many people live with each person?

Enter subquery land. Life without windows is not exciting.

```sql
SELECT *,
  (
    SELECT
      -- everyone at the address, minus the person
      COUNT(1) - 1
    FROM sample_moves t2
    WHERE t2.address = t1.address
  ) AS others
FROM sample_moves t1;
```

#### Results:

| id | name    | address   | moved_at   | others
|----|---------|-----------|------------|--------
| 1  | Alice   | 1 Main St | 2017-01-01 | 0
| 2  | Bob     | 2 Main St | 2017-02-01 | 1
| 3  | Cat     | 2 Main St | 2017-03-01 | 1
| 4  | Dan Sr  | 3 Main St | 1970-04-01 | 2
| 5  | Dan Jr  | 3 Main St | 2001-04-01 | 2
| 6  | Dan 3rd | 3 Main St | 2017-04-01 | 2

### `JOIN` works, too

```sql
SELECT t1.*, t2.others
FROM sample_moves t1
JOIN (
  SELECT address, COUNT(1) - 1 as others
  FROM sample_moves
  GROUP BY address
  ORDER BY address
) t2 USING (address);
```

### And so does `JOIN LATERAL`

```sql
SELECT t1.*, t2.others
FROM sample_moves t1
JOIN LATERAL (
  SELECT address, COUNT(1) - 1 as others
  FROM sample_moves sub
  WHERE sub.address = t1.address
  GROUP BY address
  ORDER BY address
) t2 ON true;
```

### That's nice, but who moved in first?

```sql
SELECT *,
  (
    SELECT COUNT(1) - 1
    FROM sample_moves t2
    WHERE t2.address = t1.address
  ) AS others,
  (
    SELECT name
    FROM sample_moves t3
    WHERE t3.address = t1.address
    ORDER BY moved_at ASC
    LIMIT 1
  ) AS first_person
FROM sample_moves t1;
```

## Wait, I thought this was about windows?!?

The keyword `OVER` is the gateway drug into `WINDOW` functions. Using `OVER` with parentheses is an inline window.
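If you'd like to poke at an inline window without a Postgres instance handy, SQLite (3.25+) understands `OVER` too. Here is a sketch using Python's built-in `sqlite3` with a slice of the sample data:

```python
import sqlite3

# Recreate a slice of sample_moves in memory and ask the inline-window
# version of the "how many others live with each person" question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sample_moves (id INT, name TEXT, address TEXT, moved_at TEXT)")
con.executemany(
    "INSERT INTO sample_moves VALUES (?, ?, ?, ?)",
    [
        (1, "Alice", "1 Main St", "2017-01-01"),
        (2, "Bob",   "2 Main St", "2017-02-01"),
        (3, "Cat",   "2 Main St", "2017-03-01"),
    ],
)

rows = con.execute(
    """
    SELECT name, COUNT(1) OVER (PARTITION BY address) - 1 AS others
    FROM sample_moves
    ORDER BY name
    """
).fetchall()
print(rows)  # [('Alice', 0), ('Bob', 1), ('Cat', 1)]
```

Note how every input row survives in the output; the window only adds a column, unlike a `GROUP BY`.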
The `PARTITION BY` keyword gives similar functionality to `GROUP BY` and `JOIN ... USING` all in one power-packed statement. It can never reduce the number of records in a result set, which is the same behavior expected of a correlated subquery. `PARTITION BY` is treated the same as the traditional `GROUP BY`. The `ORDER BY` also has the same behavior as its use in a standard query.

```sql
SELECT *,
  (count(1) OVER (PARTITION BY address)) - 1 AS others,
  first_value(name) OVER (PARTITION BY address ORDER BY moved_at) AS first_moved
FROM sample_moves;
```

#### Results

| id | name    | address   | moved_at   | others | first_moved
|----|---------|-----------|------------|--------|-------------
| 1  | Alice   | 1 Main St | 2017-01-01 | 0      | Alice
| 2  | Bob     | 2 Main St | 2017-02-01 | 1      | Bob
| 3  | Cat     | 2 Main St | 2017-03-01 | 1      | Bob
| 4  | Dan Sr  | 3 Main St | 1970-04-01 | 2      | Dan Sr
| 5  | Dan Jr  | 3 Main St | 2001-04-01 | 2      | Dan Sr
| 6  | Dan 3rd | 3 Main St | 2017-04-01 | 2      | Dan Sr

A picture with arrows is worth a thousand words:

Window SQL with arrows

## That doesn't look very DRY. Finally, a `WINDOW`

The `WINDOW` keyword allows us to alias the options of the `OVER` clause, namely the expression `(...)`, between and including the parentheses. In the following example we add the use of `RANGE` to provide additional direction to the windowing clause.
```sql
SELECT *,
  (count(1) OVER w) - 1 AS others,
  first_value(name) OVER w AS first_moved,
  last_value(name) OVER w AS last_moved
FROM sample_moves
WINDOW w AS (
  PARTITION BY address
  ORDER BY moved_at
  RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
);
```

#### Results

| id | name    | address   | moved_at   | others | first_moved | last_moved
|----|---------|-----------|------------|--------|-------------|------------
| 1  | Alice   | 1 Main St | 2017-01-01 | 0      | Alice       | Alice
| 2  | Bob     | 2 Main St | 2017-02-01 | 1      | Bob         | Cat
| 3  | Cat     | 2 Main St | 2017-03-01 | 1      | Bob         | Cat
| 4  | Dan Sr  | 3 Main St | 1970-04-01 | 2      | Dan Sr      | Dan 3rd
| 5  | Dan Jr  | 3 Main St | 2001-04-01 | 2      | Dan Sr      | Dan 3rd
| 6  | Dan 3rd | 3 Main St | 2017-04-01 | 2      | Dan Sr      | Dan 3rd

```sql
-- Previous and Next Record
SELECT *,
  (count(1) OVER w) - 1 AS others,
  first_value(name) OVER w AS first_moved,
  last_value(name) OVER w AS last_moved,
  lag(id) OVER (ORDER BY id) AS prev_id,
  lead(id) OVER (ORDER BY id) AS next_id
FROM sample_moves
WINDOW w AS (
  PARTITION BY address
  ORDER BY moved_at
  RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
)
ORDER BY address;
```

#### Results

| id | name    | address   | moved_at   | others | first_moved | last_moved | prev_id | next_id
|----|---------|-----------|------------|--------|-------------|------------|---------|---------
| 1  | Alice   | 1 Main St | 2017-01-01 | 0      | Alice       | Alice      |         | 2
| 2  | Bob     | 2 Main St | 2017-02-01 | 1      | Bob         | Cat        | 1       | 3
| 3  | Cat     | 2 Main St | 2017-03-01 | 1      | Bob         | Cat        | 2       | 4
| 4  | Dan Sr  | 3 Main St | 1970-04-01 | 2      | Dan Sr      | Dan 3rd    | 3       | 5
| 5  | Dan Jr  | 3 Main St | 2001-04-01 | 2      | Dan Sr      | Dan 3rd    | 4       | 6
| 6  | Dan 3rd | 3 Main St | 2017-04-01 | 2      | Dan Sr      | Dan 3rd    | 5       |

## List Window Functions

Here is a list from [Postgres docs](https://www.postgresql.org/docs/9.3/static/functions-window.html) of all the window functions. In addition to these, *any regular aggregate function* can be used within a window.
| Function | Description
| -------- | -----------
| row_number() | number of the current row within its partition, counting from 1
| rank() | rank of the current row with gaps; same as row_number of its first peer
| dense_rank() | rank of the current row without gaps; this function counts peer groups
| percent_rank() | relative rank of the current row: (rank - 1) / (total rows - 1)
| cume_dist() | relative rank of the current row: (number of rows preceding or peer with current row) / (total rows)
| ntile() | integer ranging from 1 to the argument value, dividing the partition as equally as possible
| lag() | returns value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead return default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
| lead() | returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead return default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
| first_value() | returns value evaluated at the row that is the first row of the window frame
| last_value() | returns value evaluated at the row that is the last row of the window frame
| nth_value() | returns value evaluated at the row that is the nth row of the window frame (counting from 1); null if no such row

## References

- Postgres Window Tutorial: https://www.postgresql.org/docs/9.3/static/tutorial-window.html
- Postgres Window Functions: https://www.postgresql.org/docs/9.3/static/functions-window.html
- Postgres Window Syntax: https://www.postgresql.org/docs/9.3/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS

---

# What the SQL?!?
Recursive

Date: 2017-03-15
URL: https://ddrscott.github.io/blog/2017/what-the-sql-recursive/
Tags: sql

Today's "What the SQL?!?" features the keyword `RECURSIVE`. This clause allows us to elegantly select results from the previous results from the previous results from the previous results... Please note, our target database is PostgreSQL. These examples may work with other databases, but might need some massaging to get them to work properly. Search online for the specific vendor's documentation if errors pop up. Try searching for "RECURSIVE queries $DB_VENDOR_NAME". Not all database vendors support the keyword `RECURSIVE`.

## Fibonacci Sequence

According to [Wikipedia](https://en.wikipedia.org/wiki/Fibonacci_number):

> In mathematics, the Fibonacci numbers are the numbers in the following integer
> sequence, called the Fibonacci sequence, and characterized by the fact that
> every number after the first two is the sum of the two preceding ones:
>
> 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 ...

## SQL Solution

Our SQL solution will make use of the `RECURSIVE` [CTE](https://www.postgresql.org/docs/9.3/static/queries-with.html) keyword.

```sql
WITH RECURSIVE t(i, fi, fib) AS (
  SELECT 1, 0::NUMERIC, 1::NUMERIC
  UNION ALL
  SELECT i + 1, fib, fi + fib
  FROM t
  WHERE i < 10
)
SELECT i, fib FROM t
```

## The Ins and Outs

Here are some inline colorful comments to explain the sections:

annotated SQL

Maybe arrows will help a little more with the flow of data:

Data Flow

## Fibonacci Results

When you run the query, you'll get the following results:

```
  i | fib
----+-----
  1 |   1
  2 |   1
  3 |   2
  4 |   3
  5 |   5
  6 |   8
  7 |  13
  8 |  21
  9 |  34
 10 |  55
(10 rows)
```

If you want to see the results for a higher number, update `i < 10` to a higher value. If you go above `i < 793`, Postgres gives up and returns `NaN`, which means `Not a Number`, which means the computed value is larger than your computer can handle and still treat like a number.
Sorry, get a new computer or work with numbers less than 166 digits long.

## A Real World Example with Hierarchical Data

The Fibonacci sequence is nice and all, but you have real data concerns. You're thinking, "Show me the DATA!". So here's the data...

```sql
-- Build `sample_people` table
CREATE TABLE sample_people AS
SELECT column1::int     AS id,
       column2::varchar AS name,
       column3::int     AS parent_id
FROM (
  VALUES (0, 'Root' , null),
         (1, 'Alice', 0),
         (2, 'Bob'  , 1),
         (3, 'Cat'  , 1),
         (4, 'Dan'  , 3),
         (5, 'Evan' , 0),
         (6, 'Frank', 5)
) as t
;

SELECT * FROM sample_people;
--  id | name  | parent_id
-- ----+-------+-----------
--   0 | Root  |
--   1 | Alice |         0
--   2 | Bob   |         1
--   3 | Cat   |         1
--   4 | Dan   |         3
--   5 | Evan  |         0
--   6 | Frank |         5
```

Our `sample_people` table represents a person by name, and that person may have a parent. The parent of all the parents is `Root`.

And finally our recursive query to get a nice display of the hierarchy.

```sql
WITH RECURSIVE tree
  -- `tree` is the table alias.
  -- It must be used as part of the `UNION` statement.
AS (
  -- 1) Initialize the table with the top level rows.
  --    Anything whose parent is `Root` is a top level parent. Is that apparent?
  SELECT 0 AS level,       -- 2) Set the level to 0.
         sample_people.*   -- 3) Return the initial row
  FROM sample_people
  WHERE parent_id = 0      -- 4) Top level rows have `Root` (id 0) as their parent.

  UNION ALL                -- 5) Union all the parents with their children.

  SELECT tree.level + 1,   -- 6) Increment the level every time we loop.
         sample_people.*   -- 7) Return the current row - the child row.
  FROM tree                -- 8) `tree` is populated with the previous results.
                           --    Every loop gets a new record from current result.
  JOIN sample_people
    ON sample_people.parent_id = tree.id
)
SELECT repeat(' ', level * 4) || name AS display
FROM tree
ORDER BY level, name
;
--  display
-- -------------
--  Alice
--  Evan
--      Bob
--      Cat
--      Frank
--          Dan
-- (6 rows)
```

## Bait and Switch

`RECURSIVE` is not actually recursive. It isn't a function calling itself. Sorry, not sorry.
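To make that concrete, here is the Fibonacci CTE from above re-expressed as a plain loop. This is a rough ruby sketch of the semantics only, not how Postgres actually executes it:

```ruby
# Rough sketch of the CTE's behavior: the seed SELECT provides the
# first (i, fi, fib) row, and each pass of the loop is one UNION ALL step.
rows = []
i, fi, fib = 1, 0, 1              # seed row: SELECT 1, 0::NUMERIC, 1::NUMERIC
loop do
  rows << [i, fib]                # every generated row lands in the final result
  break unless i < 10             # the WHERE i < 10 guard
  i, fi, fib = i + 1, fib, fi + fib
end
rows.last  # => [10, 55]
```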
It's much closer to a `while` loop. Here's what Postgres has to say about it:

> Note: Strictly speaking, this process is **iteration** not recursion, but RECURSIVE
> is the terminology chosen by the SQL standards committee. [emphasis added]

## Closing

So the next time you try to crawl a hierarchy of data, we hope `RECURSIVE` comes to mind. It's a great way to save round trips to the database and query what is needed based on the data's structure. Think of all the nested subqueries we can save together!

## References

- Postgres WITH Queries: https://www.postgresql.org/docs/9.3/static/queries-with.html
- Wikipedia Fibonacci number: https://en.wikipedia.org/wiki/Fibonacci_number

---

# What the SQL?!? Lateral Joins

Date: 2017-03-08
URL: https://ddrscott.github.io/blog/2017/what-the-sql-lateral/
Tags: sql

Today's "What the SQL?!?" features the keyword `LATERAL`. A prerequisite to understanding lateral joins is understanding regular joins and subqueries. I'll explain those briefly to show how `LATERAL` can simplify a complicated SQL query.

Please note, our target database is PostgreSQL. These examples may work with other databases, but might need some massaging to get them to work properly. Search online for the specific vendor's documentation if errors pop up. Try searching for "lateral joins $DB_VENDOR_NAME". Not all database vendors support the keyword `LATERAL`.

## A Problem to Solve

We have a table with system uptimes. The table records a start timestamp and an end timestamp. If the service is still running, the end timestamp is left null because it hasn't ended.

We want a query to display an overview of this data. Our final solution will return a row per day and 24 columns containing an uptime percentage for each hour in the day. It will look like the following.

```
  cal_date  | hour_0 | hour_1 | hour_2 | hour_3 | ... | hour_21 | hour_22 | hour_23
------------+--------+--------+--------+--------+-----+---------+---------+---------
 2017-03-01 |      0 |   0.75 |   0.25 |      0 | ... |       0 |       0 |       0
 2017-03-02 |      0 |      0 |      0 |      0 | ... |       1 |       1 |       1
(2 rows)
```

Please note we'll use `...` to abbreviate some of the results. All queries are schema independent and should be copy/paste-able into any `psql` session.

## Sample Uptime Data

The sample uptime data is derived from a virtual table built from the following query:

```sql
SELECT *
FROM (
  VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
         ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
         ('2017-03-02 19:00:00-06', null)
) AS t(start_ts, end_ts)
```

The data looks like:

```plain
      start_ts       |       end_ts
---------------------+---------------------
 2017-03-01 01:15:00 | 2017-03-01 02:15:00
 2017-03-01 08:00:00 | 2017-03-01 20:00:00
 2017-03-02 19:00:00 |
(3 rows)
```

We want to plot the time against a time sliced table representing all the effective hours in the uptime window. We'll make use of another virtual table to build up all the time slices:

```sql
SELECT start_ts,
       start_ts + interval '1 hour' AS end_ts
FROM generate_series('2017-03-01'::date,
                     '2017-03-03'::timestamp - interval '1 hour',
                     interval '1 hour'
) AS t(start_ts)
```

Here we make use of PostgreSQL's [generate_series](https://www.postgresql.org/docs/9.3/static/functions-srf.html) to return all the hours in a time range. The data looks like:

```plain
      start_ts       |       end_ts
---------------------+---------------------
 2017-03-01 00:00:00 | 2017-03-01 01:00:00
 2017-03-01 01:00:00 | 2017-03-01 02:00:00
 2017-03-01 02:00:00 | 2017-03-01 03:00:00
 2017-03-01 03:00:00 | 2017-03-01 04:00:00
 -- ... many more rows ...
 2017-03-02 22:00:00 | 2017-03-02 23:00:00
 2017-03-02 23:00:00 | 2017-03-03 00:00:00
(48 rows)
```

## Left Join

We use a left join to glue together overlapping time ranges between these two data sets. We want all the data on the `LEFT` side in the `FROM` clause to return regardless of an uptime record existing within its time slice.
```sql
SELECT *
FROM (
  -- build virtual table of all hours between
  -- a date range
  SELECT start_ts,
         start_ts + interval '1 hour' AS end_ts
  FROM generate_series(
         '2017-03-01'::date,
         '2017-03-03'::timestamp - interval '1 hour',
         interval '1 hour'
  ) AS t(start_ts)
) AS cal
LEFT JOIN (
  -- build virtual table of uptimes
  SELECT *
  FROM (
    VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
           ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
           ('2017-03-02 19:00:00-06', null)
  ) AS t(start_ts, end_ts)
) AS uptime
  ON cal.end_ts > uptime.start_ts
 AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
```

The result set shows we have some variety in our sample data, with 3 slices of uptime and 3 slices of downtime.

```plain
      start_ts       |       end_ts        |      start_ts       |       end_ts
---------------------+---------------------+---------------------+---------------------
 2017-03-01 00:00:00 | 2017-03-01 01:00:00 |                     |
 2017-03-01 01:00:00 | 2017-03-01 02:00:00 | 2017-03-01 01:15:00 | 2017-03-01 02:15:00
 2017-03-01 02:00:00 | 2017-03-01 03:00:00 | 2017-03-01 01:15:00 | 2017-03-01 02:15:00
 2017-03-01 03:00:00 | 2017-03-01 04:00:00 |                     |
 ...
 2017-03-01 07:00:00 | 2017-03-01 08:00:00 |                     |
 2017-03-01 08:00:00 | 2017-03-01 09:00:00 | 2017-03-01 08:00:00 | 2017-03-01 20:00:00
 ...
 2017-03-01 20:00:00 | 2017-03-01 21:00:00 | 2017-03-01 08:00:00 | 2017-03-01 20:00:00
 2017-03-01 21:00:00 | 2017-03-01 22:00:00 |                     |
 ...
 2017-03-02 18:00:00 | 2017-03-02 19:00:00 |                     |
 2017-03-02 19:00:00 | 2017-03-02 20:00:00 | 2017-03-02 19:00:00 |
 ...
 2017-03-02 23:00:00 | 2017-03-03 00:00:00 | 2017-03-02 19:00:00 |
(48 rows)
```

If we try without the `LEFT` clause, we'll only see 20 rows containing the up slices.
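The `ON` clause is a standard interval-overlap test: a calendar slice matches an uptime record when the slice ends after the uptime starts, and the slice begins on or before the uptime ends (with a null end treated as "now"). A small ruby sketch of the same predicate, with names that are mine and purely illustrative:

```ruby
require 'time'

# Same shape as the join condition:
#   cal.end_ts > uptime.start_ts AND cal.start_ts <= coalesce(uptime.end_ts, now)
# A nil uptime end means "still running", so treat it as the current time.
def overlaps?(cal_start, cal_end, up_start, up_end, now: Time.now)
  cal_end > up_start && cal_start <= (up_end || now)
end

cal_slice = [Time.parse('2017-03-01 01:00'), Time.parse('2017-03-01 02:00')]
uptime    = [Time.parse('2017-03-01 01:15'), Time.parse('2017-03-01 02:15')]

overlaps?(*cal_slice, *uptime)  # => true, the 1 o'clock slice touches the uptime
```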
## Time to compute some timing

Let's add some times and sensible column names, and replace the `*`:

```sql
SELECT
  -- will use `first_ts` and `last_ts` to calculate uptime duration
  CASE WHEN uptime.start_ts IS NOT NULL
       THEN greatest(uptime.start_ts, cal.start_ts)
  END AS first_ts,
  least(cal.end_ts, uptime.end_ts) AS last_ts,
  date_trunc('day', cal.start_ts)::date AS cal_date,
  extract(hour from cal.start_ts) AS cal_hour,
  extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
FROM (
  -- build virtual table of all hours between
  -- a date range
  SELECT start_ts,
         start_ts + interval '1 hour' AS end_ts
  FROM generate_series('2017-03-01'::date,
                       '2017-03-03'::timestamp - interval '1 hour',
                       interval '1 hour'
  ) AS t(start_ts)
) AS cal
LEFT JOIN (
  -- build virtual table of uptimes
  SELECT *
  FROM (
    VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
           ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
           ('2017-03-02 19:00:00-06', null)
  ) AS t(start_ts, end_ts)
) AS uptime
  ON cal.end_ts > uptime.start_ts
 AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
```

```plain
      first_ts       |       last_ts       |  cal_date  | cal_hour | cal_seconds
---------------------+---------------------+------------+----------+-------------
                     | 2017-03-01 01:00:00 | 2017-03-01 |        0 |        3600
 2017-03-01 01:15:00 | 2017-03-01 02:00:00 | 2017-03-01 |        1 |        3600
 2017-03-01 02:00:00 | 2017-03-01 02:15:00 | 2017-03-01 |        2 |        3600
                     | 2017-03-01 04:00:00 | 2017-03-01 |        3 |        3600
                     | 2017-03-01 05:00:00 | 2017-03-01 |        4 |        3600
                     | 2017-03-01 06:00:00 | 2017-03-01 |        5 |        3600
                     | 2017-03-01 07:00:00 | 2017-03-01 |        6 |        3600
                     | 2017-03-01 08:00:00 | 2017-03-01 |        7 |        3600
 2017-03-01 08:00:00 | 2017-03-01 09:00:00 | 2017-03-01 |        8 |        3600
 ...
 2017-03-01 20:00:00 | 2017-03-01 20:00:00 | 2017-03-01 |       20 |        3600
                     | 2017-03-01 22:00:00 | 2017-03-01 |       21 |        3600
 ...
                     | 2017-03-02 19:00:00 | 2017-03-02 |       18 |        3600
 2017-03-02 19:00:00 | 2017-03-02 20:00:00 | 2017-03-02 |       19 |        3600
 ...
 2017-03-02 23:00:00 | 2017-03-03 00:00:00 | 2017-03-02 |       23 |        3600
(48 rows)
```

## Subquery, Subquery, What's the Worry?

SQL is all about nested subqueries. It's hard to escape without creating views, but who has time to look up that [syntax](https://www.postgresql.org/docs/9.3/static/sql-createview.html) _and_ get their [DBA's](https://imgflip.com/i/1kzzyn) permission to run the DDL?!?

Let's add some duration times to the result set. We'll use a traditional subquery for it.

```sql
SELECT
  -- calculate uptime seconds
  coalesce(
    extract(epoch FROM age(last_ts, first_ts)),
    0
  ) AS up_seconds,
  *
FROM (
  SELECT
    -- will use `first_ts` and `last_ts` to calculate uptime duration
    CASE WHEN uptime.start_ts IS NOT NULL
         THEN greatest(uptime.start_ts, cal.start_ts)
    END AS first_ts,
    least(cal.end_ts, uptime.end_ts) AS last_ts,
    date_trunc('day', cal.start_ts)::date AS cal_date,
    extract(hour from cal.start_ts) AS cal_hour,
    extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
  FROM (
    -- build virtual table of all hours between
    -- a date range
    SELECT start_ts,
           start_ts + interval '1 hour' AS end_ts
    FROM generate_series('2017-03-01'::date,
                         '2017-03-03'::timestamp - interval '1 hour',
                         interval '1 hour'
    ) AS t(start_ts)
  ) AS cal
  LEFT JOIN (
    -- build virtual table of uptimes
    SELECT *
    FROM (
      VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
             ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
             ('2017-03-02 19:00:00-06', null)
    ) AS t(start_ts, end_ts)
  ) AS uptime
    ON cal.end_ts > uptime.start_ts
   AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
) t1
```

```plain
 up_seconds
------------
          0
       2700
        900
          0
          0
 ...
       3600
(48 rows)
```

Without the subquery we'd be getting into even more nested function calls, and we would have to double-compute values or lose visibility into the intermediate steps. We could have calculated `up_seconds` directly in the first query which introduced `first_ts` and `last_ts`.
That would look like this:

```sql
SELECT
  coalesce(
    extract(epoch FROM age(
      least(cal.end_ts, uptime.end_ts),
      CASE WHEN uptime.start_ts IS NOT NULL
           THEN greatest(uptime.start_ts, cal.start_ts)
      END
    )),
    0
  ) AS up_seconds
FROM
--- ...
```

It's not for the weak of stomach, but frankly speaking, neither is the subquery...

## Enough Nesting, `LATERAL` Join, Save Me!

Lateral joins can give us the best of both worlds: reduced subquery nesting and traceable computed values. We're going to take the initial computed values like `first_ts` and `last_ts`, move them to a virtual table, then `JOIN LATERAL` so they can get their own table alias. We'll do it again for `up_seconds`, using `first_ts` and `last_ts` from its sibling table.

```sql
SELECT t2.up_seconds
FROM (
  -- build virtual table of all hours between
  -- a date range
  SELECT start_ts,
         start_ts + interval '1 hour' AS end_ts
  FROM generate_series('2017-03-01'::date,
                       '2017-03-03'::timestamp - interval '1 hour',
                       interval '1 hour'
  ) AS t(start_ts)
) AS cal
LEFT JOIN (
  -- build virtual table of uptimes
  SELECT *
  FROM (
    VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
           ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
           ('2017-03-02 19:00:00-06', null)
  ) AS t(start_ts, end_ts)
) AS uptime
  ON cal.end_ts > uptime.start_ts
 AND cal.start_ts <= coalesce(uptime.end_ts, current_timestamp)
JOIN LATERAL (
  SELECT
    -- will use `first_ts` and `last_ts` to calculate uptime duration
    CASE WHEN uptime.start_ts IS NOT NULL
         THEN greatest(uptime.start_ts, cal.start_ts)
    END AS first_ts,
    least(cal.end_ts, uptime.end_ts) AS last_ts,
    date_trunc('day', cal.start_ts)::date AS cal_date,
    extract(hour from cal.start_ts) AS cal_hour,
    extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
) t1 ON true
JOIN LATERAL (
  -- calculate uptime seconds for the time slice
  SELECT coalesce(
    extract(epoch FROM age(last_ts, first_ts)),
    0
  ) AS up_seconds
) t2 ON true
```

This gives us the same results, but without the deep nesting.
```plain
 up_seconds
------------
          0
       2700
        900
          0
          0
       3600
 ...
       3600
(48 rows)
```

What's great about this strategy is we can quickly choose which columns to see as we build up the query.

```sql
SELECT t2.up_seconds ...
-- or --
SELECT t2.*, t1.*
```

Let's build up the final calculation using the same strategy:

```sql
SELECT t2.*, t3.*
FROM ...
JOIN LATERAL (
  -- calculate percentage between uptime seconds and available seconds
  -- within the time slice
  SELECT up_seconds / cal_seconds AS up_pct
) t3 ON true
```

```plain
 up_seconds | up_pct
------------+--------
          0 |      0
       2700 |   0.75
        900 |   0.25
          0 |      0
 ...
       3600 |      1
(48 rows)
```

## Plot the Hours

Now we have all the computed data we need. Let's plot it as a cross tab (but not actually use [`crosstab`](https://www.postgresql.org/docs/9.3/static/tablefunc.html)). We'll need to consolidate the long list of data by `cal_date`, pivot `cal_hour` into a column, and use `up_pct` as the value. In case of overlapping uptimes we'll be pessimists and choose the lowest, or `min`, uptime percentage.
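That pivot is plain conditional aggregation: one aggregate per output column, each `CASE` letting only the matching hour through. A miniature of the same reshaping in ruby, with made-up sample rows purely for illustration:

```ruby
# [cal_date, cal_hour, up_pct] tuples, one per time slice
rows = [
  ['2017-03-01', 0, 0.0],
  ['2017-03-01', 1, 0.75],
  ['2017-03-01', 1, 0.5],   # pretend overlap: two values for the same hour
  ['2017-03-02', 1, 1.0],
]

# GROUP BY cal_date, then per hour keep the lowest value --
# the pessimist's choice described above.
pivot = rows.group_by { |date, _, _| date }.transform_values do |slices|
  slices.group_by { |_, hour, _| hour }
        .transform_values { |hs| hs.map { |_, _, pct| pct }.min }
end

pivot['2017-03-01']  # => {0 => 0.0, 1 => 0.5}
```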
The final query looks like:

```sql
SELECT cal_date,
       min(CASE WHEN cal_hour = 0 THEN up_pct END) AS hour_0,
       min(CASE WHEN cal_hour = 1 THEN up_pct END) AS hour_1,
       min(CASE WHEN cal_hour = 2 THEN up_pct END) AS hour_2,
       min(CASE WHEN cal_hour = 3 THEN up_pct END) AS hour_3,
       min(CASE WHEN cal_hour = 4 THEN up_pct END) AS hour_4,
       min(CASE WHEN cal_hour = 5 THEN up_pct END) AS hour_5,
       min(CASE WHEN cal_hour = 6 THEN up_pct END) AS hour_6,
       min(CASE WHEN cal_hour = 7 THEN up_pct END) AS hour_7,
       min(CASE WHEN cal_hour = 8 THEN up_pct END) AS hour_8,
       min(CASE WHEN cal_hour = 9 THEN up_pct END) AS hour_9,
       min(CASE WHEN cal_hour = 10 THEN up_pct END) AS hour_10,
       min(CASE WHEN cal_hour = 11 THEN up_pct END) AS hour_11,
       min(CASE WHEN cal_hour = 12 THEN up_pct END) AS hour_12,
       min(CASE WHEN cal_hour = 13 THEN up_pct END) AS hour_13,
       min(CASE WHEN cal_hour = 14 THEN up_pct END) AS hour_14,
       min(CASE WHEN cal_hour = 15 THEN up_pct END) AS hour_15,
       min(CASE WHEN cal_hour = 16 THEN up_pct END) AS hour_16,
       min(CASE WHEN cal_hour = 17 THEN up_pct END) AS hour_17,
       min(CASE WHEN cal_hour = 18 THEN up_pct END) AS hour_18,
       min(CASE WHEN cal_hour = 19 THEN up_pct END) AS hour_19,
       min(CASE WHEN cal_hour = 20 THEN up_pct END) AS hour_20,
       min(CASE WHEN cal_hour = 21 THEN up_pct END) AS hour_21,
       min(CASE WHEN cal_hour = 22 THEN up_pct END) AS hour_22,
       min(CASE WHEN cal_hour = 23 THEN up_pct END) AS hour_23
FROM (
  -- build virtual table of all hours between
  -- a date range
  SELECT start_ts,
         start_ts + interval '1 hour' AS end_ts
  FROM generate_series('2017-03-01'::date,
                       '2017-03-03'::timestamp - interval '1 hour',
                       interval '1 hour'
  ) AS t(start_ts)
) AS cal
LEFT JOIN (
  -- build virtual table of uptimes
  SELECT *
  FROM (
    VALUES ('2017-03-01 01:15:00-06'::timestamp, '2017-03-01 02:15:00-06'::timestamp),
           ('2017-03-01 08:00:00-06', '2017-03-01 20:00:00-06'),
           ('2017-03-02 19:00:00-06', null)
  ) AS t(start_ts, end_ts)
) AS uptime
  ON cal.end_ts > uptime.start_ts
 AND cal.start_ts <= coalesce(uptime.end_ts,
current_timestamp)
JOIN LATERAL (
  SELECT
    -- will use `first_ts` and `last_ts` to calculate uptime duration
    CASE WHEN uptime.start_ts IS NOT NULL
         THEN greatest(uptime.start_ts, cal.start_ts)
    END AS first_ts,
    least(cal.end_ts, uptime.end_ts) AS last_ts,
    date_trunc('day', cal.start_ts)::date AS cal_date,
    extract(hour from cal.start_ts) AS cal_hour,
    extract(epoch from age(cal.end_ts, cal.start_ts)) AS cal_seconds
) t1 ON true
JOIN LATERAL (
  SELECT coalesce(
    extract(epoch FROM age(last_ts, first_ts)),
    0
  ) AS up_seconds
) t2 ON true
JOIN LATERAL (
  -- calculate percentage between uptime seconds and available seconds
  -- within the time slice
  SELECT up_seconds / cal_seconds AS up_pct
) t3 ON true
GROUP BY cal_date
```

```plain
  cal_date  | hour_0 | hour_1 | hour_2 | hour_3 | ... | hour_23
------------+--------+--------+--------+--------+-----+---------
 2017-03-01 |      0 |   0.75 |   0.25 |      0 | ... |       0
 2017-03-02 |      0 |      0 |      0 |      0 | ... |       1
(2 rows)
```

## More than CTE and Cross Join

This example only scratches the surface of `LATERAL`'s superpowers. On the surface, `LATERAL` can do the things CTEs, cross joins, and `WINDOW` functions can do. PostgreSQL describes `LATERAL` as:

> Subqueries appearing in FROM can be preceded by the key word LATERAL.
> This allows them to reference columns provided by preceding FROM items.
> (Without LATERAL, each subquery is evaluated independently and so cannot
> cross-reference any other FROM item.)

TL;DR - `LATERAL` allows subqueries to reference earlier tables.

## References

- [Postgres Lateral Joins](https://www.postgresql.org/docs/9.6/static/queries-table-expressions.html#QUERIES-LATERAL)

---

# How to Get Better At Anything

Date: 2017-02-07
URL: https://ddrscott.github.io/blog/2017/how-to-get-better-at-anything/

*[image: Better Title]*

## RTFM

*[image: RTFM WTFM]*

* WTFM, Write The Flip'n Manual
* Do a lightning talk about it :)

## Make it Fun

*[image: Surprised Koala Trophy]*

* Gamification
* Trophy anyone?
* If it can't be made fun, make it fun.
## Practice

*[image: I've practiced a lot]*

* Slowly with intent
* Pressure cycles: none, some, actual, intolerable.
* The practice of practice deserves its own talk.

## Better not good

*[image: Good to Better Chart]*

* Good is the killer of better.
* People tend to stop because good is unreachable.
* People tend to stop after they're good enough.
* If we aim to get better, we'll eventually be better than good.
* Progressive goals. OMG! I forgot goal setting!

## Learn, don't Memorize

*[image: One does not simply memorize PI]*

* Learning is understanding
* Understanding is connecting the new thing to an old thing.
* Computers memorize. You're not a computer.

## Learn with others

*[image: Google Hug]*

* With the Internet you're never alone and always alone.
* Actually talk with people!

## Cheat

*[image: One does not simply memorize PI]*

* If you're not cheating, you're not trying.
* Reverse engineer the cheat, don't actually cheat!

## How I got better at Vim

* **RTFM** - `:help`
* **WTFM** - Blogging, figure out other people's problems :/
* **Practice** - Wrote stuff that wasn't needed immediately, my own notes.
* **Don't Memorize** - Vim has grammar like English. Oooh... Reconnection!
* **Learn with Others** - We have vim users, teach them (but don't let them know you're only one step ahead)
* **Cheat** - Look at other's Vimscript to figure out that cool thing they did.

## What do you want to get better at next?!?

* lightning talks?
* Shuffling cards? (@devin)
* [SQL?](http://www.postgresql.org/docs/9.6/static/index.html)
* [Drawing?](https://www.amazon.com/Drawing-Right-Side-Brain-Definitive/dp/1585429201)
* [Piano?](http://www.pianofundamentals.com/book/en/chapter_1)

---

# ANSI Codes with Character

Date: 2016-07-26
URL: https://ddrscott.github.io/blog/2016/ansi-codes-with-character/

This was a lightning talk given to the office about ANSI Escape Codes. Most of the time, all 5 minutes of it, was spent explaining the code snippets.
[Wiki about ANSI Codes](https://en.wikipedia.org/wiki/ANSI_escape_code)

## What is an ANSI code?

ANSI Escape Codes are a nearly universal means of embedding display options in computer terminals. `\e[` is how to tell the terminal we're giving it a command instead of just outputting text. What does that mean?!? Let the examples do the talking.

## Color Examples

```bash
echo -e "\e[2J\e[32m It's not easy being green \e[0m"
echo -e "\e[2J\e[31m Apples are red \e[0m"
```

```ruby
(30..37).each{|i| puts "i: \e[#{i}m #{i} \e[0m"}
```

## Position Examples

```ruby
x=`tput cols`.to_i
y=`tput lines`.to_i
loop do
  print "\e[s"                       # Save current cursor position
  print "\e[#{rand(y)};#{rand(x)}H"  # move to row/column
  print "💩"                         # print POOP!
  print "\e[u"                       # restore position
  sleep(rand)
end
```

```bash
# Ruby oneliner troll
ruby -e 'x=`tput cols`.to_i; y=`tput lines`.to_i; loop {print "\e[s\e[#{rand(y)};#{rand(x)}H💩\e[u"; sleep(rand)}'
```

## ANSI-nine Examples

```bash
# Poop-field
curl -s https://raw.githubusercontent.com/ddrscott/ansinine/master/stars | ruby

# Smoke
curl -s https://raw.githubusercontent.com/ddrscott/ansinine/master/fire.rb | ruby
```

---

# Photography Refactored

Date: 2016-07-11
URL: https://ddrscott.github.io/blog/2016/photography-lightening-talk/

Someone convinced me to do a lightning talk about photography. Here's the slide deck.
---

# PSA: Vim Modulo '%' Returns Negative Numbers!

Date: 2016-05-28
URL: https://ddrscott.github.io/blog/2016/negative-modulo/
Tags: vim

Surprise! Vim has the same modulo bug as Javascript. Some say it's not a bug, but if Ruby and Google Calculator are wrong, I don't want to be right.

* Vim, `:echo -10 % 3` returns `-1`
* Javascript, `-10 % 3` returns `-1`
* Ruby/IRB, `-10 % 3` returns `2` **-- my expectation is here**

## Solution

Add this function somewhere in your Vimscript and throw away `%`.

```vim
" ((n % m) + m) % m or ((-10 % 3) + 3) % 3 returns 2
function! s:mod(n,m)
  return ((a:n % a:m) + a:m) % a:m
endfunction
```

I hope this saves someone some time somewhere out there. It's an hour I'll never get back, but I'm happy to give back.

**References**

* https://www.google.com/#q=-10+%25+3
* http://vimdoc.sourceforge.net/htmldoc/eval.html#expr6
* http://math.stackexchange.com/questions/519845/modulo-of-a-negative-number/519856
* https://en.wikipedia.org/wiki/Modulo_operation
* http://stackoverflow.com/questions/4467539/javascript-modulo-not-behaving

---

# Vim Side Search: Making Search Fun Again

Date: 2016-05-27
URL: https://ddrscott.github.io/blog/2016/side-search/
Tags: vim

The `quickfix` feature is nice, but it doesn't give enough context around the search term, which leads me to use `ag` from the terminal and switch back and forth between programs. I do this search dance every day and I've had it! There must be a better way!

## Problem

*[image: Quickfix for help]*

Look at the `quickfix` window above. It spends most of its space showing the file name of the hit, then the remainder is spent on the text around it. In projects using [Rails Engines](http://guides.rubyonrails.org/engines.html) with deeply nested directory structures, this often leaves me with just a bunch of paths in the `quickfix`.

## Solution A - The Unix Way

Some may argue Vim isn't supposed to do search.
Vim rightly delegates to the [Unix philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) by allowing an external program to do its searching. Let's try that for this solution using `grep`, `ack`, and `ag`.

*[image: grep vs ack vs ag]*

We've run the 3 separate programs (normally, I would only use `ag`), then browsed the results to see if there's an interesting file. At this point I start using my handy-dandy mouse to scroll around, precisely highlight the path of interest, copy, and type `vim `. Intuitive? Yes. Fast? No!

Maybe I should use `tmux` or `screen` so I don't need to mouse around, but trying to select a path is still pretty slow for me and requires more cognitive load than I have patience for. After all, I'm trying to concentrate on a refactor or something, not on how to open a bunch of files. Should I practice more? Yes. Will I? No!

We're going to use `ag` from now on, since it's faster than `ack` and has prettier output than `grep`. I really really really tried to get `grep` to output for humans, but couldn't figure it out.

## Solution B - Vim without Quickfix

Let Vim do some work for us.

```
vim `ag --ignore=\*.{css,scss} -l help` +'/help'
```

What's that?!? Open Vim passing the result of the `ag` command. `ag` is run with some file exclusions, `-l` for only file names, and `help` as the search term. `+'/help'` tells Vim to immediately start searching for 'help'.

After all that, Vim should have started with a bunch of buffers. View them with `:ls`. Take notice of the buffer numbers to see how many files were found. Use `n` and `N` to jump through search matches in the file. Use `:bn` to go to the next buffer and start hitting `n` again to cycle through the changes. If the number of files is small enough, you may be able to use `:ball` to open every buffer in its own window.

That's a lot of work to jump through changes. Good thing the `quickfix` exists.

## Solution C - Quickfix

This is here for posterity.
`quickfix` DOES make cycling through changes easier than Solution B, but as I stated in the intro, it doesn't give the context that we want.

[Thoughtbot has a pretty good article](https://robots.thoughtbot.com/faster-grepping-in-vim) about how to set up Vim to use `ag`. Once you do that, you can `:grep help` to get the following output:

*[image: Quickfix for help]*

Use `:cnext`, `:cprev`, `:cfirst`, `:clast` to go to the next, previous, first, and last `quickfix` result respectively. Map those to keys to make it easier to navigate.

```
nnoremap [q :cprev<CR>
nnoremap ]q :cnext<CR>
nnoremap [Q :cfirst<CR>
nnoremap ]Q :clast<CR>
```

Get more help about `quickfix` using `:help quickfix`. Cry after realizing even `:help quickfix` can't show more context. I'll be here when you're done.

## Solution D - Side Search Plugin

So how do we get the best of both worlds? How do we enter the land of a thousand wives/husbands? How do we get `ag` output and quick navigation?

For me, it was writing a plugin. For you, it's using it. https://github.com/ddrscott/vim-side-search

After installing the plugin using your favorite package manager, you'll have access to the following functionality:

*[image: Side Search Demo]*

Things to notice:

- `ag` output is in a buffer with additional syntax highlighting!
- `n` and `N` used to jump to matches. Regular Vim navigation works, too!
- `` and `` used to open change and jump to change!
- Number of matches shown in the buffer name!
- I use too many exclamation points!!!

The plugin's `README` has more details.

## Closing

I've been using this plugin ever since its inception and don't know where I'd be without it. It gets some inspiration from [fugitive's](https://github.com/tpope/vim-fugitive) `:Gstatus` mode/buffer, and I wish there were more plugins that added functionality from `stdout` instead of transforming it into a different format. Unix tool makers spend a lot of time thinking about their output. Let's use it to our advantage.
I've learned a lot creating this plugin and plan to write about it in a future post.

Do you love it or hate it? Have more ideas or issues for Side Search? Hit me up on [Github](https://github.com/ddrscott/vim-side-search).

### References

- [The Silver Searcher](https://github.com/ggreer/the_silver_searcher) by Geoff Greer
- `man ag`
- `man grep`
- `man tmux`
- `man screen`
- `:help quickfix`
- [Faster Grepping in Vim](https://robots.thoughtbot.com/faster-grepping-in-vim) by Thoughtbot
- [The number 12](https://www.google.com/search?q=the+number+12)

### In The News

- [Hacker News](https://news.ycombinator.com/item?id=11787085)
- [Reddit](https://www.reddit.com/r/vim/comments/4lbdur/vim_side_search_plugin_making_search_fun_again/)

---

# Sensible Horizontal Scroll in Vim

Date: 2016-05-05
URL: https://ddrscott.github.io/blog/2016/sidescroll/
Tags: vim

Sometimes it's the little things that make a big difference, and this is about as small as it can get. Occasionally, I hold down `l`, `w`, or `e` to view long lines which have disappeared off the window. It's a bad habit, and the penalty always ruins my concentration. But after I found this setting, I'm free to cursor around like an innocent child unaware of death.

*TL;DR* -- `set sidescroll=1`

## Problem

When `set wrap` is off, otherwise known as `set nowrap`, and a line is longer than the window can handle, you'll need to scroll to see more of the line. `{x}zl` and `{x}zh` will scroll the screen right and left respectively. That's a lot to remember just to see some more text. Which leads me to hold down `w` or `e` to get it done, followed by janky behavior when the cursor gets to the edge of the window. The default behavior of revealing more text is 1/2 a window width at a time. This abrupt jump throws off my fragile concentration.

*[image: Demo Sidescroll Off]*

## Solution A

Turn on word wrapping. `set wrap`. Boring, but effective.
You might also want to make word wrapping look nicer. I do that with the following settings.

```vim
set breakindent
set breakindentopt=sbr
" I use a unicode curly arrow with a space
set showbreak=↪>\ 
```

This of course doesn't solve the problem if, in fact, we want wrapping off.

## Solution B

```vim
set sidescroll=1
```

This simple setting makes Vim behave like every other plain editor. It will incrementally scroll one character at a time to reveal more text as needed.

*[image: Demo Sidescroll On]*

Here's the help doc to clear things up:

```plain
'sidescroll' 'ss'       number  (default 0)
                        global
        The minimal number of columns to scroll horizontally.  Used only when
        the 'wrap' option is off and the cursor is moved off of the screen.
        When it is zero the cursor will be put in the middle of the screen.
        When using a slow terminal set it to a large number or 0.  When using
        a fast terminal use a small number or 1.  Not used for "zh" and "zl"
        commands.

'sidescrolloff' 'siso'  number  (default 0)
                        global
        The minimal number of screen columns to keep to the left and to the
        right of the cursor if 'nowrap' is set.  Setting this option to a
        value greater than 0 while having |'sidescroll'| also at a non-zero
        value makes some context visible in the line you are scrolling in
        horizontally (except at beginning of the line).  Setting this option
        to a large value (like 999) has the effect of keeping the cursor
        horizontally centered in the window, as long as one does not come too
        close to the beginning of the line.
        Example: Try this together with 'sidescroll' and 'listchars' as
                 in the following example to never allow the cursor to move
                 onto the "extends" character:

                 :set nowrap sidescroll=1 listchars=extends:>,precedes:<
                 :set sidescrolloff=1
```

Seems like the default was intended for a "slow terminal". If you're using a slow terminal while editing a large amount of unwrapped text, I'd recommend getting a computer from this millennium and enabling `sidescroll`.
Also note that a sensible example is shown in the `sidescrolloff` section.

## Off Topic...

It's interesting to study all the decisions made due to slow terminals. Try `:help slow-terminal` for a quick look, and try `:helpgrep slow` to see way more mentions. Use `:help helpgrep` if you didn't know about `helpgrep` :)

## Closing

I'm sure you're wondering why so many words were written for a single setting. Similar to my previous post about [Yank without Jank](/blog/2016/yank-without-jank/), these unexpected janky behaviors cause anxiety. Anxiety that usually can't be identified or resolved in the heat of a coding session, but is there, wading in the weeds, ready to pounce at your next stray keystroke. As a student of Vim, I want to identify and resolve these issues so I can get back to why I like Vim: using the dot operator.

---

# Ruby Abuse: How Not to Write Ruby, But Still Have Fun

Date: 2016-04-30
URL: https://ddrscott.github.io/blog/2016/ruby-abuse/

```ruby
alias λ lambda

sadd   = λ {|ns, a| [*ns, a].sort }
hsort  = λ {|h| Hash[h.sort] }
hadd   = λ {|h, n, g| h.merge(g => sadd.(h[g], n)) }
school = λ {|gs| School.new(gs) }

School = Struct.new(:gs) do
  define_method :add,     λ {|n, g| (school . (hsort . (hadd . (to_hash, n, g)))) }
  define_method :to_hash, λ {| | gs || {} }
  define_method :grade,   λ {|g| to_hash[g] || [] }
end
```

---

# Making a Window Submode in Vim

Date: 2016-04-29
URL: https://ddrscott.github.io/blog/2016/making-a-window-submode/
Tags: vim

I found a plugin that is changing my Vim-tire life! This [plugin](https://github.com/kana/vim-submode) is so awesome it should be built into default Vim.

What does the [plugin](https://github.com/kana/vim-submode) do? It enables the creation of new submodes. Why would a person want *more* modes?!? Isn't dealing with modes the main deterrent for new Vim users? Aren't Normal, Insert, Command-line, Visual, Select, and Operator-pending enough? (Did I miss one?)

Let's try out a new submode and see what happens.
## Problem

Window commands are prefixed with `<C-w>`. Want to create a horizontal split? Try `<C-w>s`. Didn't mean to do that and want a vertical split instead? `<C-w>q<C-w>v`. Want to resize the vertical split? `50<C-w>>`. Too wide? Narrow it with `5<C-w><`. Move back to the other window? `<C-w>p` or `<C-w>w`. Are your fingers getting tired? After I get the windows just right using default mappings my fingers are crying for mercy.

Here's a short list of common default window commands:

```vim
" Change window focus
{n}<C-w>h   move cursor left {n} windows
{n}<C-w>l   move cursor right {n} windows
{n}<C-w>j   move cursor down {n} windows
{n}<C-w>k   move cursor up {n} windows

" Move window
<C-w>H      move window far left
<C-w>L      move window far right
<C-w>J      move window far bottom
<C-w>K      move window far top

" Change size
{n}<C-w>+   increase height by {n} rows
{n}<C-w>-   decrease height by {n} rows
{n}<C-w><   decrease width by {n} columns
{n}<C-w>>   increase width by {n} columns
<C-w>|      maximize width
<C-w>_      maximize height
<C-w>=      equalize sizes
```

For a comprehensive list of window commands try `:help windows.txt`.

## Solution A

The most common solution to window-command-itis is to map other keys to these common actions so as to bake in the `<C-w>` prefix.

From [spf13-vim](https://github.com/spf13/spf13-vim/blob/3.0/.vimrc):

```vim
map <C-J> <C-W>j<C-W>_
map <C-K> <C-W>k<C-W>_
map <C-L> <C-W>l<C-W>_
map <C-H> <C-W>h<C-W>_
" Note: They go one extra by maximizing the height after entering the split.
```

From [Thoughtbot](https://robots.thoughtbot.com/vim-splits-move-faster-and-more-naturally):

```vim
nnoremap <C-J> <C-W><C-J>
nnoremap <C-K> <C-W><C-K>
nnoremap <C-L> <C-W><C-L>
nnoremap <C-H> <C-W><C-H>
```

This has been the accepted solution for most, but it takes away so many convenient keys. And in some cases, it even overrides default behaviour. `<C-L>` (redraw), I miss you. `<C-H>`, isn't that also `<BS>`? Guess I won't be using you either.

## Solution B - Submode to the Rescue

This entire solution depends on [kana/vim-submode](https://github.com/kana/vim-submode.git). I consider it one of Japan's national treasures along with ninjas and ramen.

Unfortunately, Kana's example use of submodes is a little underwhelming: undo/redo using `g-` and `g+`.
I agree with the author that using `g-` and `g+` is not convenient, and using `g++++-++-+` is easier, but the solution for that was simply `u` and `<C-r>`.

I feel a better application for a new submode is window management. Imagine if resizing a split was `<C-w>++++++++` or `<C-w>------=->>>>>>>><>`, or changing cursor location was `<C-w>hjlll`, or moving was `<C-w>HjKLkjh`. Imagine no more!

First, install the plugin. If you're not sure how to install a plugin, try [junegunn/vim-plug](https://github.com/junegunn/vim-plug). Next, add the following to your `$MYVIMRC`.

```vim
" A message will appear in the message line when you're in a submode
" and stay there until the mode has exited.
let g:submode_always_show_submode = 1

" We're taking over the default <C-w> setting. Don't worry, we'll do
" our best to put back the default functionality.
call submode#enter_with('window', 'n', '', '<C-w>')
" Note: <C-c> will also get you out of the mode without this mapping.
" Note: <C-c> also behaves as <Esc>
call submode#leave_with('window', 'n', '', '<Esc>')

" Go through every letter
for key in ['a','b','c','d','e','f','g','h','i','j','k','l','m',
\           'n','o','p','q','r','s','t','u','v','w','x','y','z']
  " Maps lowercase, uppercase and <C-key>
  call submode#map('window', 'n', '', key, '<C-w>' . key)
  call submode#map('window', 'n', '', toupper(key), '<C-w>' . toupper(key))
  call submode#map('window', 'n', '', '<C-' . key . '>', '<C-w>' . '<C-' . key . '>')
endfor

" Go through symbols. Sadly, '|', not supported in submode plugin.
for key in ['=','_','+','-','<','>']
  call submode#map('window', 'n', '', key, '<C-w>' . key)
endfor

" Old way, just in case.
nnoremap <Leader>w <C-w>
```

After `:source $MYVIMRC`, you'll have a glorious new submode in Vim. You can see I named it *window* mode. Can you guess how to get into *window* mode? `<C-w>`, the normal prefix used to do any `wincmd`. If this is too drastic, feel free to change line #7 to something else. Just replace `<C-w>` with a different normal mapping.

Let's give it a test drive.
*Demo: window mode in action*

I know you can't see what keys I'm pressing, but I guarantee I only pressed `<C-w>` once. I also didn't have to remember any new key bindings. The hesitation in the demo is the resistance to hitting `<C-w>` every time, which I'll get over soon enough.

## Bonus Mappings

But wait, there's more! In case I haven't provided enough tips for one post, here are the overrides I have in `$MYVIMRC` to make windowing even better.

```vim
" I don't like <C-w>q; <C-w>c won't exit Vim when it's the last window.
call submode#map('window', 'n', '', 'q', '<C-w>c')
call submode#map('window', 'n', '', '<C-q>', '<C-w>c')

" \ sets the width to 80 columns; pipe (<C-w>|) by default
" maximizes the width.
call submode#map('window', 'n', '', '\', ':vertical resize 80<CR>')

" Resize faster
call submode#map('window', 'n', '', '+', '3<C-w>+')
call submode#map('window', 'n', '', '-', '3<C-w>-')
call submode#map('window', 'n', '', '<', '10<C-w><')
call submode#map('window', 'n', '', '>', '10<C-w>>')
```

## Rainbows without Unicorns

While learning this new way of windowing, there have been a few negatives:

1. I forget that I'm in window mode and get disoriented when I think I'm moving the cursor within a buffer, but it jumps around to other splits.
2. For one-off window commands, I have to hit an extra key to get out of window mode or wait for the timeout.
3. When I use someone else's computer, I'm useless.

I think most of these annoyances will go away with time, and the benefits over time in keystroke savings are non-trivial. As for #3, regardless of submodes, the brain freeze will never go away, because no one thinks as strangely as me, and that's a Good Thing™.

## Thanks

Shout-out to Kana Natsuno, @kana1, http://whileimautomaton.net/, https://github.com/kana . None of this awesomeness would be possible without https://github.com/kana/vim-submode . She makes some totally sweet plug-ins. Check out her stuff. You won't regret it!

Let me know what you think. Am I crazy? What other things deserve a submode? Hit me up in the comments below!
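One mitigation for annoyance #2: vim-submode exposes timeout options. The option names below are from my reading of its help — they're an assumption on my part, so double-check with `:help submode` in your install:

```vim
" Don't leave the submode on timeout; stay until an explicit exit key...
let g:submode_timeout = 0

" ...or keep the timeout but make it more forgiving (milliseconds):
" let g:submode_timeoutlen = 1000
```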
Thanks for reading!

---

# Yank Without Jank

Date: 2016-04-25
URL: https://ddrscott.github.io/blog/2016/yank-without-jank/
Tags: vim

For all the great things Vim has to offer, it still has some inconsistencies with basic editors that I simply can't unlearn. One of these nasties is moving the cursor after a visual yank.

Go ahead, try it: `vipy`. Where's your cursor? Where did you expect it to be located?

When you're in a boring editor and do `shift-down-down-down` and copy, where's your cursor? Where did you expect it to be located?

This janky behaviour always throws me off for a moment, then I compose myself, do a `` `> `` to jump to the end of my selection, and `p`. There must be a better way!

## Solution A

Rebind `y` to do exactly what we did above:

```vim
vnoremap y y`>
```

This worked and I lived with it for a few minutes, but it still wasn't perfect. I noticed when I did a line select using capital `V` the cursor would still move. The vertical motion was perfect, but horizontal motion was still jarring.

*Demo: yank with `` `> ``*

## Solution B

Let's try using marks to keep things in place:

```vim
vnoremap y myy`y
vnoremap Y myY`y
```

The capital `Y` mapping is just in case we want to do a line-wise yank from a character-wise selection.

*Demo: yank with mark `y`*

Like a well trained dog, the cursor stays even though you yank it.

**BONUS** This snippet also takes over the `y` marker, so you can manually `` `y `` at a later time to continue yanking where you left off. This is great when you're moving a lot of stuff around and want to pick up where you last were. You can also change the mark to capitals in the binding so it spans buffers, too.

## Closing

I've been using this setting for a while and noticed my blood pressure is way down. No more yank anxiety means I'm a step closer to editing utopia! Let me know how this goes for you in the comments below.

## Updates from Comments

Commenter @Krzysztof noticed Solution B wasn't allowing the user to specify the target register.
He was awesome enough to update the solution. Here it is:

```vim
vnoremap <expr> y "my\"" . v:register . "y`y"
```

I've updated `$MYVIMRC` and it works great. Thanks @Krzysztof for being awesome!

---

# Vim Toggle Movement: I Just Want to Go Home

Date: 2016-04-14
URL: https://ddrscott.github.io/blog/2016/vim-toggle-movement/
Tags: vim

I have a problem with the `^` key. I need its functionality, but its proximity is too far for either of my stubby index fingers. No vimrc change can physically move it closer to me, but I have found a way to move its functionality to another key. A key which already knows how to go home. An alternate home. A home where my heart isn't. Enough drama, what's the problem?!?

## The Problem

In my daily coding, I have a deep-seated need to go to the first non-blank character of a line. The only key that Vim provides for that functionality is `^`, the hardest key to reach from the home row. A much more comfortable key to reach is `0`, but that shoots us past the first non-blank character all the way to the left edge of the window. `<Home>` is the ugly step child of either option since it's even harder to reach and takes us to the first column, too.

In case you don't believe me, here's what the Vim documentation says:

```text
0			To the first character of the line.  |exclusive|
			motion.

<Home>			To the first character of the line.  |exclusive|
			motion.  When moving up or down next, stay in same
			TEXT column (if possible).  Most other commands stay
			in the same SCREEN column.  <Home> works like "1|",
			which differs from "0" when the line starts with a
			<Tab>.

							*^*
^			To the first non-blank character of the line.
			|exclusive| motion.
```

Why can't I have a key that is easy to reach and takes me to the first non-blank?!?

*Venn diagram: home, caret, and 0*

I could swap the functionality of `0` and `^`:

```vim
nnoremap 0 ^
nnoremap ^ 0
```

This still forces me to reach for `^` when I need to get to that left edge. There must be a better way!
## The Solution

Let's give `0` some super toggling powers. When I hit it the first time, I want it to be like `^`. If I hit it again, I want it to finish its travels and go to the first column.

### Solution A

```vim
function! ToggleHomeZero()
  let pos = getpos('.')
  execute "normal! ^"
  if pos == getpos('.')
    execute "normal! 0"
  endif
endfunction

nnoremap 0 :call ToggleHomeZero()<CR>
```

This gets us exactly to the center of the Venn diagram's heart: **Easy to reach + First non-blank character + First column = Rainbow Colored Unicorn!**

### Solution B

After enjoying staring at the function for a while, I realized we could add super toggling powers to other movements. Let's extract the `normal` commands into arguments and share the love with other keys!

```vim
function! ToggleMovement(firstOp, thenOp)
  let pos = getpos('.')
  execute "normal! " . a:firstOp
  if pos == getpos('.')
    execute "normal! " . a:thenOp
  endif
endfunction

" The original caret 0 swap
nnoremap 0 :call ToggleMovement('^', '0')<CR>

" How about ; and ,
nnoremap ; :call ToggleMovement(';', ',')<CR>
nnoremap , :call ToggleMovement(',', ';')<CR>

" How about H and L
nnoremap H :call ToggleMovement('H', 'L')<CR>
nnoremap L :call ToggleMovement('L', 'H')<CR>

" How about G and gg
nnoremap G :call ToggleMovement('G', 'gg')<CR>
nnoremap gg :call ToggleMovement('gg', 'G')<CR>
```

## Conclusion

`ToggleMovement` is the gift that keeps on giving! What other movement can we add to the list? Let me know in the comments below.

---

# BS to the Black Hole

Date: 2016-04-13
URL: https://ddrscott.github.io/blog/2016/bs-to-the-black-hole/
Tags: vim

![BS to Black Hole](../assets/2016/blackhole_bs.png)

First post in 2 years. Sorry to keep you waiting. I've been playing with Vim again, more specifically NeoVim https://neovim.io/, and this time I think it's going to stick.

## The Problem

Sometimes, I want to delete text without worrying about blowing away the `unnamed` register.
This can be done by prefixing a normal or visual delete with `"_`, but that's an awkward dance for my pinky and ring finger. Go ahead, try it. You'll feel like you're in junior high again.

## Solution #1

Set up a single key to do that `"_` thing for me. My naive approach was to add the following:

```vim
nnoremap <BS> "_
vnoremap <BS> "_
```

This was fine for 32.1 seconds of usability testing. It did the job, but what came after a `"_` was usually a `dw` or `db` operator. Uh oh, I said the "o" word. That means I have to make an `opfunc`. (Who makes these rules?!?)

## Solution #2

So what is this operator going to let us do? How about `iw` or `ap` or `v{motion around something you hate}`? If any of those seem awesome, here's how to get in on the hot action!

```vim
" Add to your .vimrc or init.vim or vim.after or :e $MYVIMRC
func! BlackHoleDeleteOperator(type)
  if a:type ==# 'char'
    execute 'normal! `[v`]"_d'
  elseif a:type ==# 'line'
    execute 'normal! `[V`]"_d'
  else
    execute 'normal! `<' . a:type . '`>"_d'
  endif
endf

" Map to <BS> because it's underworked in Vim.
nnoremap <BS> :set opfunc=BlackHoleDeleteOperator<CR>g@
vnoremap <BS> :<C-u>call BlackHoleDeleteOperator(visualmode())<CR>
```

## How Does it Work?

+ `opfunc` is best explained in Vim help. Use `:help opfunc` and follow the tags with `<C-]>` until clarity is achieved.
+ `:help normal` - evaluates the following characters as if they were typed.
+ `:help marks` - page down a bit to get the list of automatic marks based on last positions of various changes, jumps, and actions.
+ http://learnvimscriptthehardway.stevelosh.com/chapters/33.html - seriously, this guy does a lot better job explaining than me. Learn it the hard way first, ask questions later.

## Closing

Thanks for getting this far. Do you have a better mapping for `<BS>`? Do you have a more creative solution than typing `"_` to access the black hole register? Let me know by commenting, or share this post with someone who does.
---

# Tech Stack 2014 Edition

Date: 2014-05-09
URL: https://ddrscott.github.io/blog/2014/tech-stack-2014-edition/

I've built a lot of things, but the most interesting thing is usually what I'm currently working on or have built recently. With that in mind, I thought I'd take a moment to reflect on the technology stack of my current project.

## Overview

The service provided is a mobile eBook store. End users can find, buy, and read books in their browser and on mobile app platforms like iOS and Android.

## Frameworks

The framework selection was very organic and was not chosen all at once. I had an extensive background in Rails, so it made sense to start there and integrate with other systems via RubyGem plugins when vanilla Rails/database didn't fit.

### Ruby on Rails

I've been using this since version 1, and never looked back. Rails and the Ruby language "get" me. They think the way I do – they complete me. Prior to Ruby I was using Java and writing wrappers that were very similar to Ruby and ActiveRecord/Support, so to find a language and framework that did all that and more was a dream come true. The discovery was in 2005, which makes it 9 years old. That's like 50 tech years.

That being said, I still have not found another language or framework that does things more intuitively than Ruby or Rails. Maybe I'm an old dog that can't learn new tricks. Maybe I'm stuck in my ways. Regardless, I still feel productive, and I believe my human clients agree.

### Solr / Sunspot

We started using SQL search and quickly ran into performance walls for the variety of queries we needed to perform, so we needed something else. Lucene is the industry standard in searching. Solr puts a wrapper around Lucene that allows for dynamic index field additions and provides more search options. Sunspot integrates all that into ActiveRecord. It took maybe 10 minutes for us to try out Sunspot on our current catalog models, and it made the user search experience faster.
The thing we miss by not having database queries is table joins. Joins are available in Solr, but have a similar performance hit as the DB. The workaround for joins was to have a wide data set.

#### Example Sunspot Model Config

```ruby
class Node
  belongs_to :title
end

class Title
  has_many :nodes
  has_many :prices

  searchable do
    text :name
    text :contributors
    text :snippet

    integer :nodes, multiple: true do solr_nodes; end
    string :countries, multiple: true do solr_countries; end
    string :currencies, multiple: true do solr_currencies; end

    Price.select(:country, :currency).uniq.each do |price|
      double "#{price.country}_#{price.currency}" do solr_country_currency(price); end
    end
    # ... many other fields
  end

  def solr_countries
    prices.pluck(:country)
  end

  def solr_currencies
    prices.pluck(:currency)
  end

  def solr_country_currency(price)
    prices.where(country: price.country, currency: price.currency).first.try(:price)
  end
end

class Price
  belongs_to :title
end
```

#### Example Sunspot Search

```ruby
Title.search do
  full_text('cats')
  with(:US_USD).less_than(1.0)
end
```

The Sunspot site has many examples: http://sunspot.github.io/

### Redis

http://redis.io describes itself as:

> Redis is an open source, BSD licensed, advanced key-value store.
> It is often referred to as a data structure server since keys can contain
> strings, hashes, lists, sets and sorted sets.

To me this is a remote shared hash, array, stack, and queue. All atomic, and almost as fast as local memory structures.

Here's how I'm using it:

* Rails.cache for response caching
* Resque queues to offload long jobs to a background process.
* System-wide mutex via the `setnx` and `hsetnx` commands
* Anything that would require a DB lock, I would prefer to let Redis handle the lock. This one concept has allowed my RDBMS to stay small and lean.
* ... Check out their documentation for more ideas.

Every time I read through the docs I come up with 10 more things I could take off my RDBMS. If only I could do it over again.
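The `setnx` mutex pattern above is simple enough to sketch in plain Ruby. This is my illustration, not code from the project: `FakeRedis` is a hypothetical in-memory stand-in for the real client (with a live server you'd call `setnx`/`del` on a `redis-rb` connection instead, ideally with an expiry so a crashed worker can't hold the lock forever):

```ruby
# In-memory stand-in that mimics Redis SETNX ("SET if Not eXists") semantics.
class FakeRedis
  def initialize
    @store = {}
  end

  # Sets the key only if it does not exist; true means we "won" the race.
  def setnx(key, value)
    return false if @store.key?(key)
    @store[key] = value
    true
  end

  def del(key)
    @store.delete(key)
  end
end

# Run the block only if we acquire the lock; always release afterwards.
# Returns true if the block ran, false if someone else held the lock.
def with_lock(redis, key)
  return false unless redis.setnx(key, Process.pid)
  begin
    yield
  ensure
    redis.del(key)
  end
  true
end
```

Because `SETNX` is atomic on the server, two processes calling it with the same key can never both get `true` — which is exactly why it can replace a DB lock.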
### Couch DB

Couch is our dump of raw contents. We only use 20% of Couch's features because the map/reduce portions of the system took too long for our data set. I'm sure there are better ways to use it, but in the end, one of its main features — how it indexes and aggregates data — is something we can't use. Maybe in the future, but we've already moved on.

The feature we do use, though, is a version-controlled document store. We take in data from various external sources, convert it to JSON, add a few state-tracking fields, and store it in Couch. Couch makes the document retrieval fast and painless.

### Apache/Passenger

I should be using Nginx, but haven't quite gotten around to figuring it out. Apache is fast enough for our needs right now. Passenger is the gold standard in managing multiple Ruby web processes. Nothing too exciting here. It just works.

### Node.js

http://nodejs.org/

This is where the system starts to Frankenstein. We use Node.js for Socket.io.

### Socket.io

http://socket.io/

Most of the cool factor of our system revolves around the use of Socket.io to bring realtime server push to all our front ends: browser and apps. When we have a flash sale or special offer, web sockets deliver.

---

# Octopress to the Rescue

Date: 2014-04-16
URL: https://ddrscott.github.io/blog/2014/octopress-to-the-rescue/

*Octopress logo*

After nearly a year without Posterous, I've finally gotten around to migrating to another system. This time around it's a static blog builder: http://octopress.org/

Octopress is a wrapper around Jekyll, which is a utility for creating a static blogging site. A blog that can be hosted on anything that can serve files: Apache, AWS, Dropbox, etc...

## Q: Why not use WordPress, Tumblr, Blogger?!?

Those cater to non-developers, and their editors are not the one I use every day: my IDE. Octopress lets me write my articles the same way I write code: plain text. It uses Markdown or any other HTML generator I configure.
Static pages are easier for me to deploy when I have to switch hosting providers. Hopefully GitHub sticks around for a while, but if they don't, I can rest assured I can have the articles hosted somewhere else faster than DNS propagation.

```ruby
puts "I can use code snippets"
# And they will be nicely formatted.

maybe_one_day do
  use(CodePen) or use(jsFiddle)
end
```

The initial theme I selected is https://github.com/sevenadrian/MediumFox. It's clean, simple, and pretty much what I would have *tried* to make if I had time. They seem to have omitted some features from the classic theme, but as with all open source projects, it is easy to remedy.

Time to do my first deploy... then convert all the Posterous archives.