I think a majority of the rants about Terraform I read are written from the perspective of someone managing inherently ephemeral infrastructure - things that are easily disposed of and reprovisioned quickly. The author of such a critique is likely managing an application stack on top of an account that someone else has provided them, a platform team maybe. CDK probably works for you in this case.
Now, if you belong to that platform team and have to manage the state of tens of thousands of "pet" resources that you can't just nuke and recreate using the CDK (because some other team depends on their availability), then Terraform is the best thing since sliced bread: it manages state and drift, and the declarative nature of the DSL is desirable.
Horses for courses.
> Horses for courses.
Along with YMMV, these are the two most important things we need to keep in mind. With the plethora of technologies and similar tools out there, we generally read the tin superficially but not the manual, and then declare "This is bollocks!".
Every tool is targeted at a specific use and thrives in specific scenarios. Calling a tool bad at something it wasn't designed for is akin to getting angry at your mug because it doesn't work as well upside down [0].
[0]: https://i.redd.it/mcfym6oqx5p11.jpg
> If you really do think that Terraform is code, then go try and make multiple DNS records for each random instance ID based on a dynamic number of instances. Correct me if I'm wrong, but I don't think you can do that in Terraform.
It depends on where the source of dynamism is coming from, but yes you can do this in Terraform. You get the instances with data.aws_instances, feed it into aws_route53_record with a for_each, and you're done. Maybe you need to play around with putting them into different modules because of issues with dynamic state identifiers, but it's not remotely the most complicated Terraform I've come across.
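For the record, a minimal sketch of that pattern. The tag filter, the zone lookup, and the naming scheme are all illustrative; only the overall shape (data source feeding a `for_each`) is the point:

```hcl
# Discover the dynamic set of instances, e.g. by tag.
data "aws_instances" "app" {
  instance_tags = {
    Role = "app"
  }
}

# Hypothetical hosted zone lookup.
data "aws_route53_zone" "main" {
  name = "example.com."
}

# Map instance IDs to private IPs so for_each gets stable string keys.
locals {
  app_instances = zipmap(data.aws_instances.app.ids, data.aws_instances.app.private_ips)
}

# One A record per instance ID.
resource "aws_route53_record" "app" {
  for_each = local.app_instances

  zone_id = data.aws_route53_zone.main.zone_id
  name    = "${each.key}.${data.aws_route53_zone.main.name}"
  type    = "A"
  ttl     = 300
  records = [each.value]
}
```

Because the keys come from a data source rather than from resources created in the same plan, `for_each` can resolve them at plan time, which sidesteps the usual "value not known until apply" complaint.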
That's a separate question from whether or not it's a good idea. Terraform is a one-shot CLI tool, not a daemon, and it doesn't provide auto-reconciliation on its own (although there are daemons like Terraform Enterprise / TerraKube that will run Terraform on a schedule for you and thus provide auto-reconciliation). Stuff like DNS records for Kubernetes ingress is much better handled by external-dns, which itself is statically present in a Kubernetes cluster and therefore might be more properly installed with Terraform.
I'm quite happy with CDK[0].
My experience is only with the main AWS CloudFormation-based version of CDK. There is also CDK for Terraform, which supports any resource that Terraform supports, though some of what I'm about to say doesn't apply to that version.
What I like about CDK is that you can write real code, and it supports a wide range of languages, though TypeScript is the best experience.
Provided that you don't use any of the `fromLookup` type functions, you can run and test the code without needing any actual credentials to your cloud provider.
CDK essentially compiles your code into a CloudFormation template; you can run the build without credentials, then deploy the built CloudFormation template separately.
You don't need to worry about your Terraform server crashing halfway through a deployment, because CloudFormation runs the actual deployment.
[0]: https://github.com/aws/aws-cdk
I ditched Terraform years ago and just interact with the raw cloud provider SDKs now. It's much easier to evolve actual code over the long term and deal with the weird edge cases that come up when you're not beholden to the straitjacket that is configuration masquerading as code.
Oh yeah, and we can write tests for all that provisioning logic too.
I’ve been thinking about this for a long time. But doesn’t it bring a host of other issues? For example, I need to update instance RAM from 4 to 8 GB, but how do I know whether the instance exists or should be created? I need to make a small change; how do I know which parts of my scripts to run?
You write code to do these things. If there's a requirement for you to be able to do such a thing, make it a feature, implement it with tests, and voila: no different from any other feature or bug you work on, is it?
How are you handling creating multiple resources in parallel, or rolling back changes after an unsuccessful run?
Not OP, but for rolling back we just… revert the change to the setup_k8s_stuff.py script !
In practice it’s a module that integrates with quite a large number of things in the monolith because that’s one of the advantages of Infrastructure as Actual Code: symbols and enums and functions that have meaningful semantics in your business logic are frequently useful in your infrastructure logic too. The Apples API runs on the Apples tier, the Oranges API runs on the Oranges tier, etc. etc.
People call me old-fashioned (“it’s not the 1990s any more”), but when I deploy something it’s a brand new set of instances to which traffic gets migrated. We don’t modify in place with anything clever, and I imagine reverting changes in a mutable environment is indeed quite hard to get right (is that what you’re hinting at?).
> I imagine reverting changes in a mutable environment is indeed quite hard to get right (and what you are hinting at?)
I guess you're not managing any databases then? Because you can't just treat those immutably; you have to manage the database in place.
One thing that annoys me is the inconsistency between mutable "data" resources and everything else.
Something that would be nice would be the rough equivalent of the deployment slots used in Azure App Service, but for everything else too. So you could provision a "whole new resource" and then atomically switch traffic over to it.
You can express this in Terraform, it's just a little more contrived. You release your changes as Terraform modules (a module in and of itself doesn't do anything, it's like a library/package), then your Terraform workspace instantiates both a "blue" module and a "green" module, at different versions, with DNS / load balancing resources depending on both modules and switching between either blue or green.
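Sketched out, with made-up module sources and a hypothetical `dns_name` module output; weighted Route 53 records are one of several ways to do the switch:

```hcl
# Two instantiations of the same service module, pinned at different versions.
module "blue" {
  source  = "app.terraform.io/acme/service/aws"
  version = "1.4.0"
}

module "green" {
  source  = "app.terraform.io/acme/service/aws"
  version = "1.5.0"
}

# Weighted routing shifts traffic between blue and green; flip the
# weights (or move them gradually) to cut over.
resource "aws_route53_record" "blue" {
  zone_id        = var.zone_id
  name           = "service.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "blue"
  records        = [module.blue.dns_name]

  weighted_routing_policy {
    weight = 100
  }
}

resource "aws_route53_record" "green" {
  zone_id        = var.zone_id
  name           = "service.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "green"
  records        = [module.green.dns_name]

  weighted_routing_policy {
    weight = 0
  }
}
```

The nice property is that "deploy" becomes bumping one module version and "rollback" becomes flipping two weights, both of which show up in a plan before anything changes.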
Terraform added tests somewhat recently: https://developer.hashicorp.com/terraform/language/tests
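As of Terraform 1.6 these live in `.tftest.hcl` files; a tiny sketch, where the bucket resource and the naming rule are invented for illustration:

```hcl
# tests/naming.tftest.hcl
run "bucket_name_is_prefixed" {
  # Assertions can run against a plan, so no real infrastructure is touched.
  command = plan

  assert {
    condition     = startswith(aws_s3_bucket.logs.bucket, "acme-")
    error_message = "Log bucket names must start with the acme- prefix."
  }
}
```

You then execute the suite with `terraform test` from the module directory.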
I agree that the SDK is better for many use cases. I do like Terraform for static resources like AWS VPCs, networking, S3 buckets, etc.
>> Wait, there's something here that I'm not getting. Why are you compiling the code to WebAssembly instead of just running it directly on the server?
> Well, everything's a tradeoff. Let's imagine a world where you run the code on the server directly.
> If you're using a language like Python, you need to have the Python runtime and any dependencies installed. This means you have to incur the famous wrath of pip (pip hell is a real place and you will go there without notice). If you're using a language like Go, you need to have either the Go compiler toolchain installed or prebuilt binaries for every permutation of CPU architecture and OS that you want to run your infrastructure on. This doesn't scale well.
> One of the main advantages of using WebAssembly here is that you can compile your code once and then run it anywhere that has a WebAssembly runtime, such as with the yoke CLI or with Air Traffic Controller.
At this point, why not use a proper runtime like the JVM or .NET?
Then one could also easily use reasonable languages like C#, Java, or Kotlin.
> At this point, why not use a proper runtime like JVM or .Net?
Because then you are forced to only use managed languages?
Ahh, good point.
I guess Rust (and maybe other unmanaged languages) can be compiled to WebAssembly?
Yes:
Go: https://go.dev/blog/wasi
Rust: https://github.com/bytecodealliance/wasmtime/blob/main/docs/...
.net: https://devblogs.microsoft.com/dotnet/extending-web-assembly...
https://logandark.net/calc is C++ compiled to WebAssembly using Emscripten. Back from I think 2018.
These days Rust is practically the poster child of compiling to WebAssembly because it's so easy. Most WASM content I see is actually about Rust.
Looks promising, but it starts with a (justified) rant about Terraform and then goes into how to replace Helm.
I am confused. Can yoke be used to create and manage infrastructure or just k8s resources?
Indeed. This isn't really a replacement for Terraform unless you're only using Terraform to manage k8s resources, which probably isn't most people currently using Terraform.
Author here. It's mainly for k8s resources, but if you install operators like external-dns or something like Crossplane into your cluster, you can manage infra too.
> into your cluster
I guess the point is: what if you don't have a cluster.
OK, that makes sense. A better Helm would be nice. timoni.sh is getting better and better, but CUE is a big hurdle.
Unfortunately, I'm not a big fan of the yaml-hell that crossplane is either.
But as a Terraform replacement systeminit.com is still the strongest looking contender.
It’s just a dunk on terraform to promote yet another K8s provisioning thing.
This seems like a great approach that sits between using the SDK directly and a DSL/YAML. My experience has been that most of the people configuring these systems don’t know how to code, and configuration languages are their gateway. Most never venture past configuration, which is why YAML is so widely used and it's difficult to get any traction outside of it. I think Terraform adopted some patterns that have been around for a long time (remember the Chef vs Puppet discussion from a decade ago) and it massively helped with adoption. CUE seems a step up from Terraform (you can use `cue vet` for type checking, even if CRDs are not yet supported all the way) but tracking seems to be low, as it’s hard for non-programmers to grasp. Maybe Claude will help move all the people who don’t want to manage these systems with code to something even simpler than YAML, and open the door to real infra as code for the rest.
> My experience has been that most of the people configuring these systems don’t know how to code, and configuration languages is their gateway
I don't really disagree but this is such a pessimistic, NIH-syndrome viewpoint. Feel free to look at the code for any of the major Terraform providers. There's a lot of production-hardened, battle-tested Go code that's dealing with the idiosyncrasies of the different cloud APIs. They are an incredibly deep abstraction. Terraform also implicitly builds a DAG to run operations in the right order. Comparing writing HCL to writing straight Go code with the AWS SDK, the HCL code has something like an order of magnitude fewer lines of code. It absolutely makes sense to use Terraform / HCL instead of writing straight Go code.
Yeah, don’t really understand the sentiment here. I’ve been programming for 20 years and actively use Terraform and CUE at work. I actually write a lot of Go code for our platform, but I’ve never once thought it’d be a good idea to just start calling APIs directly.
But doesn't codeless "infrastructure as code" kind of smell like cargo-cult practice? I mean, there might be places where having your infrastructure defined as data is a really good thing, but at least in my work I keep hitting roadblocks where I really wish I was writing actual logic in a modern scripting language rather than trying to make data look like code and code look like data, which is what a lot of devops tutorials seem to be teaching.
> traction seems to be low when referring to CUE. Autocorrect issue
> If you're using a language like Go, you need to have either the Go compiler toolchain installed or prebuilt binaries for every permutation of CPU architecture and OS that you want to run your infrastructure on. This doesn't scale well.
This is exactly the approach that Terraform takes. Both Terraform and its providers are written in Go, which is a great language for this purpose because of GoReleaser and the ease of compiling to different architectures and OSes. It scales just fine.
Did the author talk to any senior Terraform practitioners before building this?
> If you really do think that Terraform is code, then go try and make multiple DNS records for each random instance ID based on a dynamic number of instances. Correct me if I'm wrong, but I don't think you can do that in Terraform.
It's possible a few ways. I prefer modules, and this LLM answer describes an older way with count and for_each.
It's always possible that some incantation of the problem space has a gotcha that needs a workaround, but I doubt it would be a blocker.
https://www.perplexity.ai/search/if-you-really-do-think-that...
From the website:
> New tools like CUE, Jsonnet, PKL, and others have emerged to address some of the shortcomings of raw YAML configuration and templating, inspiring new K8s package managers such as timoni. However, it is yoke’s stance that these tools will always fall short of the safety, flexibility, and power of building your packages from code.
The never-ending debate continues between configuration languages and traditional languages. I don't know if the industry will ever standardize in this area.
Speaking of IaC: I have an existing GCP project with some basic infra (service accounts, Cloud Run jobs, Cloud Build scripts, and databases). What is the best tool to _import_ all of this into IaC? The only real tool I’ve found is Terraformer. I have no dog in the race regarding tooling, e.g. whether my output is Pulumi, Terraform, or just straight YAML; I’m just looking to “codify” it.
Any suggestions from experience?
Just go with plain Terraform.
You can check the docs for the GCP provider to see if the resources you want to manage are "importable" into the Terraform state file; they usually are, and you'll see a section at the bottom of each resource's documentation page showing you how to do this, e.g. https://registry.terraform.io/providers/hashicorp/google/lat...
Your process will be:
1. Write TF configuration approximating what you think is deployed
2. Import all your resources into the state file
3. Run a `terraform plan ...` to show what Terraform wants to change about your resources (including creating any you missed or changing/recreating any your config doesn't match)
4. Correct your TF configuration to reflect the differences from 3.
5. Go to 3; repeat until you get a "No changes" plan, or the only remaining changes are ones you actually want TF to make (adding tags, for example)
6. Run `terraform apply`
and optionally...
7. Set up your CI/automation to run `terraform plan` regularly and report "drift" via some means - stuff that has been changed about your resources outside of Terraform's management.
I put a lot of stock in this last step, because small, incremental change is the cornerstone of platform management. If you want to make a change and come to find there's a huge amount of other stuff you have to correct as well, your change isn't small any more.
You don't need to write all the tf upfront for existing resources.
Use `import` blocks in a .tf file (I like to just call it imports.tf) and run `terraform plan -generate-config-out=imported.tf`.
That will dump the TF resources. The generated config often requires a little adjustment, but it's a huge time saver.
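For anyone who hasn't seen the config-driven import workflow (Terraform 1.5+), the shape is roughly this; the resource addresses and IDs below are invented, and the exact import ID format varies per resource type (it's documented on each resource's page):

```hcl
# imports.tf -- one import block per existing resource
import {
  to = google_service_account.ci
  id = "projects/my-project/serviceAccounts/ci@my-project.iam.gserviceaccount.com"
}

import {
  to = google_storage_bucket.artifacts
  id = "my-project-artifacts"
}
```

Running `terraform plan -generate-config-out=imported.tf` then emits matching `resource` blocks for anything imported but not yet written down, instead of making you hand-write the config first.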
> If you really do think that Terraform is code, then go try and make multiple DNS records for each random instance ID based on a dynamic number of instances. Correct me if I'm wrong, but I don't think you can do that in Terraform.
You’re wrong. You can do that with Terraform.
You can also provision stuff that isn’t just k8s.
> This is not code. This is configuration.
I don't think those two things are mutually exclusive.
IMO HCL is absolutely code, as are HTML, CSS, JSON, and YAML.
It isn't a full programming language, and I often wish it was, but I wouldn't say it isn't code.
If you really do think that Terraform is code, then go try and make multiple DNS records for each random instance ID based on a dynamic number of instances. Correct me if I'm wrong, but I don't think you can do that in Terraform.
Great take.
Except it's not, because their example is trivially easy and common in Terraform.
I was 100% for infra as code, since it gives devs more freedom to get what they need. Then the startup went from 50 to 100 to 1000 people, and people just needed to get stuff done, usually the exact same thing over and over. So we migrated to a custom DSL, which is much easier to standardize, lint, review, and read. I think when you don't know what you need, code is better for flexibility; when the domain is sorted, a DSL.
No, Nix is "infrastructure as code, but actually".
The downside is that now you have to code in Nix.
This is not code. This is configuration.
FWIW we've been working on letting you declare data in YSH, a new Unix shell.
So you can arbitrarily interleave code and data, with the same syntax. The config dialect is called "Hay" - Hay Ain't YAML.
Here's a demo based on this example: https://github.com/oils-for-unix/blog-code/blob/main/hay/iac...
It looks almost the same as HCL (although I think this was convergent evolution, since I've actually never used Terraform):

    # this is YSH code!
    echo 'hello world'

    Data aws_route53_zone cetacean_club {
      name = 'cetacean.club.'
    }

    Resource aws_route53_record A {
      zone_id = data.aws_route53_zone.cetacean_club.zone_id
      name    = "ingressd.$[data.aws_route53_zone.cetacean_club.name]"
      type    = 'A'
      ttl     = "300"
    }

And then the stdout of this config "program" is here - https://github.com/oils-for-unix/blog-code/blob/main/hay/out...

It can be serialized to JSON, or post-processed and then serialized.

---

Then I show that you can wrap Resource in a for loop, as well as parameterize it with a "proc" (procedure):

    make-resource (12)
    make-resource (34)

    if (true) {
      make-resource (500)
    }

This is all still in progress, and could use feedback, e.g. on GitHub. (This demo runs, but it relies on a recent bug fix.)

The idea is not really to make something like Terraform, but rather to make a language with metaprogramming powerful enough to make your own "dialects", like Terraform.
---
I wrote a doc about Hay almost 3 years ago - Hay - Custom Languages for Unix Systems - https://oils.pub/release/0.27.0/doc/hay.html
Comments - https://lobste.rs/s/phqsxk/hay_ain_t_yaml_custom_languages_f...
At that time, Oils was a slow Python prototype, but now it's fast C++! So it's getting there.
The idea of Oils is shell+Python+JSON+YAML, squished together in the same language. So this works by reflection and function calls, not generating text ("Unix sludge"). No Go templates generating YAML, etc.