DOGE

“Good morning. On top of your current duties, your team is taking over providing a vital service to hundreds of millions of people.”

“How will we do that on top of the existing work?”

“Efficiency.”

“OK. Completely different domain; the people who know all the ins and outs of this service, and software and stuff, when do they train us?”

“Oh, we fired them. You’re going to be very efficient.”

“Well, why didn’t you just not fire the people already knowledgeable and doing this work?”

“We put daily numbers on a website. Firing people makes big numbers today.”

“Won’t we end up just expanding this office with a similar number of people except who won’t know what they’re doing? Is this going to work?”

“Look, I’m 23 making six figures. Elon and I will be back at SpaceX soon enough and if shit breaks Trump is going to blame you and Democrats and make some splashy deportations and it’s going to be fine.”

[Nervous laughter] “Oh good!”

“Oh, not for you. We’re going to slander you while you collapse under an impossible workload, but fine for me. SpaceX is getting some big contracts soon.”

“But we’ll save some money, right?”

“Possibly [deep drag on a cigarette] but most of that is going to the 1% in a big tax cut.”

“Cool I guess. This seems kinda cynical and cruel to the thousands who got fired and millions depending on these services.”

“If it makes you feel better, these particular victims are imaginary because this is a writing exercise. We don’t exist either.”

“Oh, huh…” [puts on basketball shorts, catches ball thrown from out of frame, shoots, crowd cheers]

Installing a Second HTTPS Service in DDev

Let’s say you have a DDev setup with https://example.ddev.site, but you need a second secure URL for a separate service. Normally you’d handle this with additional_hostnames, but then you’d still need a nonstandard port like 8443, because port 443 across all the hostnames goes to the web container. There is a way, though, to get https://host2.ddev.site (on port 443) served by another DDev service:

First, in your new service’s docker-compose file, manually set it up to serve port 443 on the new hostname:

    environment:
      # This configures traefik to map host2.ddev.site:443 specifically
      # to this container's port 8080. Because we don't want the web container
      # handling this hostname, we do NOT include it in additional_hostnames.
      VIRTUAL_HOST: host2.ddev.site
      HTTP_EXPOSE: 80:8080
      HTTPS_EXPOSE: 443:8080

In DDev’s config.yaml, do not place host2 in additional_hostnames; that would put “host2.ddev.site” into the web container’s VIRTUAL_HOST. I recommend a comment to point this out:

# We _are_ using the hostname host2.ddev.site in the project, BUT we cannot
# list it here because then the web container would handle it.
additional_hostnames: []
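Putting the pieces together, a minimal docker-compose.host2.yaml might look like the sketch below. Only the environment keys come from the snippet above; the service name host2, the nginx:alpine image, the exposed port, and the labels are assumptions modeled on typical DDev add-on compose files, so adjust them to your actual service.

```yaml
services:
  host2:
    container_name: ddev-${DDEV_SITENAME}-host2
    image: nginx:alpine  # assumption: use whatever image your second service needs
    labels:
      # Standard DDev labels so the container is managed with the project
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "8080"
    environment:
      # traefik maps host2.ddev.site:443 to this container's port 8080
      VIRTUAL_HOST: host2.ddev.site
      HTTP_EXPOSE: 80:8080
      HTTPS_EXPOSE: 443:8080
```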

✅ Now you have two HTTPS services:

  • example.ddev.site:443 -> web container port 80
  • host2.ddev.site:443 -> host2 container port 8080

Allowing https://host2.ddev.site to work inside another container

This isn’t essential, but in my case I needed to use the same URL both in the browser and inside a third container, site3, running Alpine Linux.

In the site3 Dockerfile, install ca-certificates, so we can later install the self-signed DDev cert:

RUN apk add ca-certificates

In docker-compose.site3.yaml, map the DNS and add a volume mount with the cert:

    external_links:
      # Set up host2.ddev.site to reach the main traefik router
      - ddev-router:host2.ddev.site
    volumes:
      - ".:/mnt/ddev_config"
      # Allow copying mkcert/rootCA.pem into the container
      - ddev-global-cache:/mnt/ddev-global-cache
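For context, those external_links and volumes fragments belong inside the site3 service definition. A sketch, assuming site3 is built from the Dockerfile mentioned above (the build settings and labels are placeholders):

```yaml
services:
  site3:
    container_name: ddev-${DDEV_SITENAME}-site3
    build:
      context: .
      dockerfile: Dockerfile.site3  # assumption: the site3 Dockerfile from above
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
    external_links:
      # host2.ddev.site resolves to the main traefik router
      - ddev-router:host2.ddev.site
    volumes:
      - ".:/mnt/ddev_config"
      # Makes mkcert/rootCA.pem available for the post-start hooks
      - ddev-global-cache:/mnt/ddev-global-cache
```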

In config.yaml, add post-start hooks to install the cert:

    hooks:
      post-start:
        - exec: cp /mnt/ddev-global-cache/mkcert/rootCA.pem /usr/local/share/ca-certificates/my-cert.crt
          service: site3
        - exec: update-ca-certificates
          service: site3

Using the DDev cert with node

If site3 uses Node, you’ll need to select the OpenSSL cert store instead of the built-in one.

node --use-openssl-ca script.js

If using pm2:

NODE_OPTIONS="--use-openssl-ca" pm2 start ...

Quick Thoughts on “AI Music”

The tech is fascinating but the trajectory of it being used to screw artists is already clear. Hopefully artists will lawyer up and get their works out of training data.

Large artist unions should be mandated access to freely and deeply test public models for the purposes of detecting IP in the training data. If you’re making money providing a model, be prepared to show your papers on its training; trust needs to be earned.

Commercial models should be taxed, with revenue supporting ongoing development of tools for smaller artists to protect themselves from being ripped off.

If this effort goes well for artists, I generally expect sound quality of music generation to improve but “humanness” of the public models to sink. Bad news for companies wanting nearly free anodyne music for commercial use, but arguably better for art.

Where do the real prices of services like Udio land after investors stop footing the bills? Do these VC-subsidized toys ignite the creative spirit of everyday non-artists to get into the game of actual music creation? What are we missing out on by having a world where very few people make music?

Restarting a node service on macOS boot

This was a pain to get right, so here’s what I landed on:

  1. Open System Settings > Energy Saver
  2. Turn on Start up automatically after a power failure
  3. Under Privacy & Security > Full Disk Access, add the executable /usr/sbin/cron
  4. Log in as root: sudo su
  5. Install pm2 globally: npm i -g pm2
  6. Don’t get trapped in vim: export EDITOR=nano
  7. Copy the PATH to clipboard: echo "PATH=$PATH" | pbcopy
  8. Edit crontab: crontab -e
  9. Paste in the PATH
  10. Create cron entries using the full path for file/directory references (you don’t need it for executables)

Example root cron:

PATH=/Users/steve/.bun/bin:/usr/local/bin:...

@reboot pm2 start /full/path/to/script.js

This creates a PM2 service for the root user. You can only see/remove its entries when logged in as root:

sudo su

# list services
pm2 list

# remove 1st entry
pm2 del 0