In a lot of projects, this is usually done via the README: it tells you which runtime dependencies to install and how to run the relevant commands.

This can get harder to maintain as things change, the project grows, and complexity increases.

I see two parts to automate here: actually setting up the environment, and running the application or orchestrating multiple apps.

How do you address this?

top 23 comments
[–] thelonelyghost@infosec.pub 18 points 1 year ago
  • direnv
  • nixpkgs (with flakes enabled)
  • committed flake.nix and flake.lock
  • Makefile or several shell scripts in bin/ for common tasks

(explainer)
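A minimal sketch of how those pieces fit together (the system string and package choices are illustrative assumptions, not from the comment):

```nix
# flake.nix (committed together with flake.lock)
{
  description = "Project dev environment";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";                 # adjust per platform
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        # example tools only; put the project's real toolchain here
        packages = [ pkgs.nodejs pkgs.gnumake ];
      };
    };
}
```

The `.envrc` is then just (with nix-direnv installed):

```sh
use flake
```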

[–] wesker@lemmy.sdf.org 16 points 1 year ago* (last edited 1 year ago) (2 children)

My life changed once I discovered dev containers in VS Code. Basically developing within a bootstrapped Docker container.
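For anyone who hasn't seen it, a minimal `.devcontainer/devcontainer.json` looks roughly like this (the image, command, and extension are just example values):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "my-project",
  // any image or Dockerfile works; this one is an example
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

VS Code then offers to reopen the folder inside that container, so everyone ends up with the same toolchain.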

[–] fuckspez@programming.dev 4 points 1 year ago (1 children)

We're doing the same. It's been a life changer for us.

[–] __init__@programming.dev 5 points 1 year ago

Dev containers are the shit. We did the readme-instructions approach at my last job: it took new hires like a full day to get set up, propagating changes was a nightmare, and shit was always going wrong. We use dev containers now. Everyone gets the exact same version of everything with almost zero opportunity to screw it up. If anything gets messed up, it's fixed by a rebuild almost every time.

[–] r1veRRR@feddit.de 2 points 1 year ago (2 children)

How's the filesystem performance? Whenever I've mounted something into a Docker container, the performance has suffered. For example, things like npm/mvn builds suddenly take way longer.

[–] __init__@programming.dev 1 points 1 year ago

It depends on how you set it up. You’re going to take a performance hit using a bind mount. The docs recommend putting your workspace into an actual docker volume for better performance, but I haven’t tried that myself cause so far the bind mount has performed “good enough” for me.
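If anyone wants to try the volume route, the devcontainer.json spec has settings for it; a rough sketch (the image, volume, and path names are placeholders):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // keep the source tree in a named Docker volume instead of a bind mount
  "workspaceMount": "source=myproject-src,target=/workspaces/myproject,type=volume",
  "workspaceFolder": "/workspaces/myproject"
}
```

There's also a "Clone Repository in Container Volume" command in VS Code that sets this up for you.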

[–] wesker@lemmy.sdf.org 1 points 1 year ago

In my experience it's greatly dependent on how much memory you dedicate to Docker. I've never run any performance benchmarks, this is just anecdotal.

[–] abhibeckert@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

I use "Dev Containers" in Visual Studio Code.

  • Step 1 - open the root directory of your project in Visual Studio Code
  • Step 2 - say yes when prompted to launch the dev container
  • Step 3 - there is no Step 3.

You do need to create various config files/etc, but it's pretty straightforward and only needs to be done once for each project. Those config files are all kept in version control.

The dev container setup generally matches the production system pretty closely. Depending on the project, that ranges from dead simple to exceedingly complex, with several servers all of which are built, configured and launched automatically when you open the project. If you need to change a container you just edit the config file and run a command to rebuild all of them. It's surprisingly fast, thanks to caching at every step that avoids rebuilding the entire environment.
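For the multi-server case, the dev container can be driven by a compose file instead of a single image; a rough sketch (the service names and compose file path are assumptions):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "my-stack",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  // bring up the rest of the stack alongside the app container
  "runServices": ["app", "db", "cache"]
}
```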

[–] bahmanm@lemmy.ml 8 points 1 year ago (1 children)

I work primarily on the JVM & the projects (personal/corporate) I work w/ can be summarised as below:

  1. Building & running the repo is done on the host using a build tool such as Gradle or SBT.
  2. The external dependencies of the repo, such as Redis, are managed via a docker-compose.yml.
  3. The README contains a short list of commands for the different tasks in (1).

However, one approach that I've always been fond of (& apply/advocate wherever I can) is to replace (3) w/ a Makefile containing a bunch of standard targets shared across all repos, eg test, integration-test. Each Makefile is then thinly customised to fit its particular repo.

This has proven to be very helpful wrt cognitive load (and also CI/CD pipelines): ALL projects, regardless of the toolchain, use the same set of commands, namely

  • make test
  • make integration-test
  • make compose-up
  • make run

In short (quoting myself here):

Don't repeat yourself. Make Make make things happen for you!
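As a sketch, for a Gradle repo the thin Makefile might look like this (the Gradle task names, in particular integrationTest, are assumptions about the repo's setup):

```make
# Makefile: the same target names in every repo, thin wrapper around the toolchain
# (recipe lines must be indented with a tab)
.PHONY: test integration-test compose-up run

test:
	./gradlew test

integration-test:
	./gradlew integrationTest

compose-up:
	docker compose up -d

run:
	./gradlew run
```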

[–] r1veRRR@feddit.de 2 points 1 year ago (1 children)

How do you manage JVM versions? We have many older projects that use 8, and some newer ones using 17, for example.

[–] bahmanm@lemmy.ml 1 points 1 year ago

I've been using sdkman for about a decade now and am totally pleased w/ it. It does a very good job of managing JDK versions for you and much more, eg SBT, Gradle, Scala, Groovy, Leiningen, SpringBoot, ...

Now, technically you could use sdkman in your CI/CD pipeline too but I'd find it a strong smell. I've always used dedicated images pre-configured for a particular JDK version in the pipeline.
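For reference, the day-to-day sdkman usage is roughly this (the version identifier is only an example; `sdk list java` shows what's actually available):

```sh
sdk install java 17.0.9-tem   # one-time install of a JDK build

# .sdkmanrc at the repo root pins the version for that project, e.g.:
#   java=17.0.9-tem

sdk env            # inside the repo: switch to the pinned version
sdk env install    # install whatever .sdkmanrc requires, if missing
```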

[–] dzamlo@programming.dev 6 points 1 year ago

There is also https://devenv.sh/ which is based on Nix.

Nix/NixOS with flakes, direnv, and lorri.
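For anyone curious, devenv is configured through a devenv.nix in the repo; a minimal sketch (the package and language choices are arbitrary examples):

```nix
# devenv.nix
{ pkgs, ... }: {
  packages = [ pkgs.git ];

  languages.javascript.enable = true;

  enterShell = ''
    echo "dev shell ready"
  '';
}
```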

[–] tuto193@lemmy.world 5 points 1 year ago

Which programming language(s) do you have in mind? Many already have built-in support for this (Go, Rust, Nim), while others have external tools you could use (Python -> Poetry). Otherwise, if you want a "fast" (easy to understand) solution, a shell script might be fine.

If you want some real power, you could use the Nix package manager (as other comments already stated). It's easy to install, but you need to learn how to use it; with that, you can easily share dev environments.

[–] kicksystem@lemmy.world 5 points 1 year ago

Nix! And then you can also auto-create your CI/CD environment from Nix. They share a common source, which eliminates whole classes of problems. It's super fast and very awesome when it works, but it has a learning curve. Worth the effort though.
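As a sketch of the shared-source idea, a CI job can simply run inside the same flake dev shell as local development (the action versions and the make target are assumptions):

```yaml
# .github/workflows/ci.yml
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v27
      - run: nix develop --command make test
```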

[–] neopenguin@lemmy.world 3 points 1 year ago

I like using vagrant and/or docker to provision the environments, when possible.

[–] karlhungus@lemmy.ca 3 points 1 year ago* (last edited 1 year ago)

For my local team: generally a container (Docker) for local dev. My team uses Go, so sometimes a Makefile without Docker is enough. For other teams, I mostly see Docker.

For multiple apps this can get more complicated; docker compose or Skaffold is what I generally reach for (my team is responsible for k8s clusters, so Skaffold is pretty natural). I've seen other teams use Garden.

HashiCorp makes something called Waypoint, which I've never used. Nix seems to be well liked as well.
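For the docker compose case, orchestrating a couple of local apps is just one file; a rough sketch (service names, images, and ports are placeholders):

```yaml
# docker-compose.yml
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    depends_on:
      - db
  web:
    build: ./web
    ports:
      - "3000:3000"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
```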

[–] Michal@programming.dev 3 points 1 year ago

Docker compose to get started quickly. Docker can be slow on Windows though, so other methods are also documented.

[–] radau@lemmy.dbzer0.com 1 points 1 year ago

Typically Lando, which is a frontend for docker-compose that makes it easier for users unfamiliar with Docker to spin up their environments.
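For context, Lando drives everything from a .lando.yml in the repo root; a rough sketch based on its lamp recipe (the recipe and version values are examples, not a tested config):

```yaml
# .lando.yml, which Lando translates into the underlying docker-compose setup
name: my-app
recipe: lamp
config:
  php: '8.2'
  webroot: .
  database: mysql:8.0
```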

[–] 0x0@programming.dev 0 points 1 year ago

Well-documented bash scripts.
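Something along these lines, presumably (the tools and steps are placeholders):

```bash
#!/usr/bin/env bash
# setup.sh: one-shot, documented environment setup
set -euo pipefail

# 1. Fail early if a required tool is missing.
for tool in git docker node; do
  command -v "$tool" >/dev/null || { echo "missing: $tool" >&2; exit 1; }
done

# 2. Install project dependencies.
npm ci

# 3. Create a local config from the template on first run.
[ -f .env ] || cp .env.example .env

echo "Done. Run 'npm start' to launch the app."
```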

[–] Kissaki@feddit.de -1 points 1 year ago (1 children)

Orchestration and containerization are heavy dependencies. I prefer few and simple requirements, especially on the environment.

That only works well with tech that has a defined or prevalent environment. Then it's a matter of keeping docs up to date, like any doc.

Using small scripts where necessary, and splitting off non-central dev workflows, helps keep things simple and focused.

[–] cyclohexane@lemmy.ml 1 points 1 year ago (1 children)

Containerization is only heavy outside of Linux, and orchestration only makes sense when manual orchestration becomes too tedious (it's easy to orchestrate a single app).

Keeping docs for those things is very troublesome imo. You can't feasibly account for everyone's different environment, C library, their system's package manager and how it may package software differently than yours, and the endless array of things they may have already installed that may affect your app in some way. Sure, it's not super common, but it's hell when it does happen.

But I suppose if your use case is very simple, like "just have nodejs installed and run npm start" then sure. But things can get ugly very easily.

[–] Kissaki@feddit.de 1 points 1 year ago* (last edited 1 year ago)

For toolchains like Rust, Go, C#, or TypeScript/Node.js, how would "things get ugly very fast" when making the toolchain an env dependency?