I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
What you’re seeing is just a repeat of the same old thing. It used to be Ruby, Clojure, Scala, Go, Rust; now it’s Zig.
When the author mentioned manually wiring up a PATH variable with gushing excitement I finally knew what I was up against. Holy fuck. Somebody please introduce the poor soul to UNIX because I’d rather someone pitches a tent over a cool NetBSD function or something. That would be a nerd article worth getting a box of Kleenex for.
You can switch one out for any of the others and the article would be the same.
I’m holding out for that one fucking moron who writes the next “Why I’m still using Lua in 2025” and we find out the punch line is he’s making 10M ARR off of something so dumb it shouldn’t even be possible.
In my opinion the biggest issue of Zig is that it doesn't allow attaching data to errors. The error data can only be passed via a side channel, which is inconvenient and ENCOURAGES TOOL DEVELOPERS TO NOT PASS ERROR DATA, which greatly increases debugging difficulty.
Sometimes there are 100 things that can possibly go wrong. With error data you can easily tell exactly which thing went wrong. But with an error code alone you just know "something is wrong, don't know what exactly".
See: https://github.com/ziglang/zig/issues/2647#issuecomment-1444...
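For anyone unfamiliar: a Zig error is literally just a named tag in an error set, with no payload slot. A rough sketch (names made up for illustration):

```zig
const std = @import("std");

// An error set is a list of bare tags. There is nowhere to attach a
// path, an errno value, or any other payload.
const LoadError = error{ AccessDenied, FileBusy, NotFound };

fn load(path: []const u8) LoadError!void {
    _ = path;
    // The callee can only pick a tag; *which* file and *why* must
    // travel through some side channel.
    return LoadError.AccessDenied;
}

pub fn main() void {
    load("config.txt") catch |err| switch (err) {
        error.AccessDenied => std.debug.print("denied, but to what?\n", .{}),
        else => std.debug.print("failed: {s}\n", .{@errorName(err)}),
    };
}
```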
> I just spent way longer than I should have to debugging an issue of my project's build not working on Windows given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
The "correct" way is highly context dependent with the added proviso that Zig assumes a low-level systems context.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
Interestingly, I just read an article from matklad (who works a lot with Zig) talking about the benefits of splitting up error codes and error diagnostics, and the pattern of using a diagnostic sink to provide human-readable diagnostic information:
https://matklad.github.io/2025/11/06/error-codes-for-control...
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
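The shape of that pattern in Zig terms, as I understand it (all names here are hypothetical, not an actual std API):

```zig
const std = @import("std");

// Hypothetical diagnostics "sink": the error code drives control flow,
// while a caller-supplied struct carries the human-readable context.
const Diagnostics = struct {
    path: ?[]const u8 = null,
    detail: ?[]const u8 = null,
};

fn openConfig(path: []const u8, diag: ?*Diagnostics) error{AccessDenied}!void {
    if (diag) |d| {
        d.path = path;
        d.detail = "file is locked by another process";
    }
    return error.AccessDenied;
}

pub fn main() void {
    var diag: Diagnostics = .{};
    openConfig("C:\\proj\\build.zig", &diag) catch |err| {
        std.debug.print("{s}: {s} ({s})\n", .{
            @errorName(err),
            diag.path orelse "<unknown>",
            diag.detail orelse "no detail",
        });
    };
}
```

Callers that don't care just pass `null` and pay nothing.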
People are working on this. std.zon is generally considered to be a good example of how to handle errors and diagnostics, though it's an area of active exploration. The plan is to eventually collect all the good patterns people have come up with and (1) publish them in a collection, and (2) update std to actually use them.
I know that Zig doesn't allow attaching data to errors for valid reasons. If error data contains an interior pointer then it can easily cause memory safety problems. Zig doesn't have a borrow checker or ownership system to prevent that.
https://github.com/ziglang/zig/issues/2647#issuecomment-2670...
This seems kinda contrived. In practice that "ERROR DATA" tends not to exist. Unexpected errors almost never originate within the code in question. In basically all cases that "ERROR DATA" is just recapitulating the result of a system call, and the OS doesn't have any data to pass.
And even if it did, interpreting the error generally doesn't work by going over the attached data with a microscope. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
That's really cool actually. Now that AI is a little more commonly available for developer tooling, I feel like it's easier than ever to learn any programming language, since you can braindrain the model.
The standard models are pretty bad at Zig right now since the language is so new and changes so fast. The entire language spec is available in one html file though, so you can have a little better success feeding that in for context.
> The entire language spec is available in one html file though so you can have a little better success feeding that for context.
This is what I've started doing for every library I use. I go to their Github, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
The article doesn't answer the question, it's all just about "the basics of zig" (there is nothing cool about manually editing environment variables on Windows with 8 labeled steps (and 5 preliminary steps missing))
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
I totally vibe with the intro but then the rest of the article just goes on to showcase bits of Zig.
I feel what is missing is an explanation of how each feature is so cool compared to other languages.
As a language nerd, I find Zig's syntax just so cool. It doesn’t feel the need to adhere to any conventions and seems to solve problems in the most direct and simple way.
An example of this is how you declare a label versus refer to one: by moving the colon to one end or the other, it's instantly clear which form you're looking at.
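Concretely (syntax as of recent Zig versions; the data is just for illustration):

```zig
const std = @import("std");

pub fn main() void {
    const values = [_]u32{ 3, 7, 8, 5 };

    // `blk:` (colon at the end) declares a label;
    // `:blk` (colon at the front) refers to it.
    const first_even = blk: {
        for (values) |v| {
            if (v % 2 == 0) break :blk v; // break *with a value* out of the block
        }
        break :blk 0;
    };
    std.debug.print("{d}\n", .{first_even}); // 8

    // Same shape for loops:
    outer: for (values) |a| {
        for (values) |b| {
            if (a + b == 12) break :outer; // jump straight out of both loops
        }
    }
}
```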
And then there are the runtime promises, such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax. It’s just more Zig, run during compilation.
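For instance, what needs dedicated generics machinery elsewhere is just an ordinary function from types to types, evaluated at compile time:

```zig
const std = @import("std");

// "Generics": a regular function, run at comptime, that returns a type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swapped(self: @This()) @This() {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

pub fn main() void {
    const p = Pair(u32){ .first = 1, .second = 2 };
    const q = p.swapped();
    std.debug.print("{d} {d}\n", .{ q.first, q.second }); // 2 1
}
```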
I was also curious what direction the article was going to take. The showcase is cool, and the features you mentioned are cool. But for me, Zig is cool because all the pieces simply fit together with essentially no redundancy or overloading. You learn the constructs and they just compose as you expect. There's one feature I'd personally like added, but there's nothing actually _missing_. Coding in it quickly felt like using a tool I'd used for years, and that's special.
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
> Coding in it quickly felt like using a tool I'd used for years, and that's special.
That's been my exact experience too. I was surprised how fast I felt confident writing Zig code. I only started using it a month ago, and already I've made it to 5000 lines of a custom Tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures in the L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
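For the curious, the enum-to-lookup-table trick can be as small as this (a sketch; builtin names like `@intFromEnum` have shifted between Zig versions):

```zig
const std = @import("std");

const Op = enum { add, sub, mul };

// Built entirely at compile time: a table indexed by the enum's integer value.
const symbols = blk: {
    var t: [std.meta.fields(Op).len][]const u8 = undefined;
    t[@intFromEnum(Op.add)] = "+";
    t[@intFromEnum(Op.sub)] = "-";
    t[@intFromEnum(Op.mul)] = "*";
    break :blk t;
};

pub fn main() void {
    std.debug.print("{s}\n", .{symbols[@intFromEnum(Op.mul)]}); // *
}
```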
The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this, etc).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
I think this is a major mistake for Zig's target adoption market - low level programmers trying to use a better C.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
It’s incredibly silly but I hate Zig's identifier policy. Mixing snake case and camel case for functions is cursed.
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
I've tried writing a similar post, but I think it's a bit difficult to sound convincing when talking about why Zig is so pleasant. it's really not any one thing. it's a lot of well made, pragmatic decisions that don't sound significant on their own. they just culminate in a development experience that feels pleasantly unique.
a few of those decisions seem radical, and I often disagreed with them... but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
I've heard good things about Zig. I want to pick it up and experiment with it but at ~2% market share I find it hard to justify spending the time to learn and master it right now. It's usually much easier to find the time to learn a new language if there is a project (work or open source) that is also using it.
I like the idea of the `defer` keyword - you can have automatic cleanup at the end of the scope but you have to make it obvious you are doing so, no hidden execution of anything (unlike C++ destructors).
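A small sketch of what that looks like in practice:

```zig
const std = @import("std");

fn fileSize(dir: std.fs.Dir, path: []const u8) !u64 {
    const file = try dir.openFile(path, .{});
    // Cleanup is written at the point of acquisition, but runs when the
    // scope exits -- on every path, including the `try` error below.
    defer file.close();

    const stat = try file.stat();
    return stat.size;
}
```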
There's at least one thing Zig does better than Rust: the Zig compiler for Windows can be downloaded, unzipped, then used without admin rights. Rust needs MSVC, which cannot be installed without admin rights. It is said that Rust on Windows can use Cygwin, but I cannot make it work even with AI help.
Zig being able to (cross)compile C and C++ feels very similar to how UV functions as a drop in replacement for pip/pip-tools. Seems like a fantastic way to gain traction in already established projects.
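For example, cross-compiling C is a single flag (`-target`), since Zig bundles the headers and libc for its targets (exact target triples depend on your Zig version):

```shell
# On an x86_64 dev box, build a 64-bit ARM Linux binary from plain C,
# statically linked against the bundled musl:
zig cc -target aarch64-linux-musl -o hello hello.c

# The same flag works for Zig code:
zig build-exe -target x86_64-windows-gnu hello.zig
```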
also i'm starting to wonder about how one would write a refactored monolith kernel for Framework open-hardware laptops in Zig, and also porting s6 process management to Zig!
> Probably the most incredible virtue of Zig compiler is its ability to compile C code. This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique.
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
Yes, very rare and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs through multiple avenues to protect their interests!
From helicoptering folks onto steering committees to indoctrinating young CS majors.
If I had the ability to downvote a comment yet, I'd downvote you. If you're going to spout conspiracy-theory-sounding stuff, at least provide some evidence for your claims!
> I can’t think of any other language in my 45 years long career that surprised more than Zig.
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
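A small illustration of reflection standing in for an interface (hypothetical example):

```zig
const std = @import("std");

// A comptime "interface check": any type with an `area` method qualifies.
// Reflection plus partial evaluation replaces a trait/interface declaration;
// a bad type is a compile error, not a runtime one.
fn totalArea(shapes: anytype) f64 {
    var sum: f64 = 0;
    inline for (shapes) |s| {
        if (!@hasDecl(@TypeOf(s), "area"))
            @compileError("expected a type with an `area` method");
        sum += s.area();
    }
    return sum;
}

const Square = struct {
    side: f64,
    fn area(self: @This()) f64 {
        return self.side * self.side;
    }
};

pub fn main() void {
    const shapes = .{ Square{ .side = 2 }, Square{ .side = 3 } };
    std.debug.print("{d}\n", .{totalArea(shapes)}); // 4 + 9 = 13
}
```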
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction required for a smooth program evolution that they need. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
This is the real answer (amongst other goodness) - this one is well executed and differentiated
Every language at scale needs a preprocessor (look at the “use server” and “use gpu” silliness happening in TS) - why is it not the same as the language you use?
Great comment! I agree about comptime, as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language. It's probably the biggest "killer feature" of the language.
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
Thing is, having a good JIT gives you the performance of partial evaluation pretty much automatically (at the cost of less predictability), as compilation occurs at runtime, so the distinction between compile-time and runtime largely disappears. E.g., in Java, a reflective call will eventually be compiled by the JIT into a direct call; virtual dispatch will also be compiled into direct dispatch or even inlined (when appropriate) etc..
Try out Nim: it has powerful comptime/metaprogramming and automatic memory management, is statically typed, and is as easy to program in as Python or JavaScript while still allowing low-level stuff.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
Is the inline testing good in practice? I do like the clear proximity and scope of the code being tested but I can also imagine trying to cram in all the unit tests and mocking and logging and such.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
"This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique."
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
Zig defaults to statically linking musl when targeting Linux, so the output will not be very interesting unless you target dynamic musl, or glibc, or FreeBSD/NetBSD.
For a language that’s so low level and performance focused, I’m surprised that it has those extra io and allocator arguments to functions. Isn’t that creating code bloat and runtime overhead?
Regarding runtime overhead, I'd assume you would still need an io implementation either way; Zig is just showing it to you explicitly instead of hiding it behind the std lib.
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
Yeah thing is it's usually better to have allocator in particular defined as a parameter so that you can use the testing allocator in your tests to detect memory leaks, double frees, etc. And then you use more optimal allocators for release mode.
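e.g., a minimal version of that pattern (which doubles as a demo of Zig's in-file tests):

```zig
const std = @import("std");

// The allocator is an explicit parameter, so tests can swap in
// std.testing.allocator, which fails the test on leaks and double frees.
fn repeat(allocator: std.mem.Allocator, byte: u8, n: usize) ![]u8 {
    const buf = try allocator.alloc(u8, n);
    @memset(buf, byte);
    return buf;
}

test "repeat does not leak" {
    const buf = try repeat(std.testing.allocator, 'z', 4);
    defer std.testing.allocator.free(buf); // forget this and the test fails
    try std.testing.expectEqualStrings("zzzz", buf);
}
```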
the answer I've seen when it has been brought up before is that (for allocators) there is not a practical impact on performance -- allocating takes way more time than the virtual dispatch does, so it ends up being negligible. for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a VTable, and the impact on binary size is pretty minimal. you're also not really creating more than a couple of allocators in an application (typically a general purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
for IO, which is new and I have not actually used yet, here are some relevant paragraphs:
The new Io interface is non-generic and uses a vtable for dispatching function calls to a concrete implementation. This has the upside of reducing code bloat, but virtual calls do have a performance penalty at runtime. In release builds the optimizer can de-virtualize function calls but it’s not guaranteed.
...
A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).
It's more of an in-between of C and Rust than Go, as it is a systems language with no built-in garbage collector for memory management. It has a lot of memory safety features, but it's not as memory safe as Rust. However, it avoids a lot of the complexity of Rust, like implicit macro expansion, managing lifetimes, generics, and a complex trait system. It also compiles much more compactly than Rust, in my experience.
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
Well, it's insanely simple, insanely fast, often more performant than Rust with lower resource usage, with first class C-interop and cross-compiling out of the box. It's easily my favorite language now, with Go being a close second. Both are opinionated and have a standard formatter that makes Zig code instantly readable when you see it, similar to Go. Rust was once interesting, but it's firmly in macro hell territory now, just like Swift, with concealed execution paths aplenty, and neither cross-compiles out of the box.
>often more performant than Rust with lower resource usage
[citation needed]
If we are to trust this page [0] Rust beats Zig on most benchmarks. In the Techempower benchmarks [1] Rust submissions dominate the TOP, while Zig is... quite far.
Several posts which I've seen in the past about Zig beating Rust by 3x or so all turned out to be based on low-quality Rust code with performance pitfalls, like measuring the performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
> I can’t think of any other language in my 45 years long career that surprised more than Zig. I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs, in my opinion. To say it’s merely a language to replace C or C++, it’s a huge understatement.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
> One may wonder how the compiler discovers the variable type. The type in this case is *inferred* by the initialization.
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
> Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
> Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
The only popular language I can think of without it is C (prior to C23). If you want to include Fortran and Ada, that would be three, but these are all very old languages. All modern systems languages have type deduction for variable declarations.
This. Is Zig an interesting language? Yes, sure. But “a totally new way to write programs”? No, I don’t see a single feature that isn't found in other programming languages.
I feel like the article didn't really hit on the big ones: comptime functions, no hidden control flow, elegant defaults, safe buffers, etc.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes and generics and macros and a complex trait system. Zig is in that class of programming languages like C, C++, and Rust, and unlike Golang, C#, Java, Python, JS, etc that have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there is any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust; it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, and implicit memory allocation that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
Nothing against (or for) Zig, but the article author seems unfamiliar with other modern languages in common use... imagine if they saw Swift or Rust. Their mind would be utterly, utterly blown.
I'm afraid this article kinda fails at at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (Invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, Typescript, many ML-style languages), labeled breaks, functions that are not globally public by default...
It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.
On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.
What you’re seeing is just a repeat of the same old thing. It used to be Ruby Clojure Scala Go Rust now it’s Zig.
When the author mentioned manually wiring up a PATH variable with gushing excitement I finally knew what I was up against. Holy fuck. Somebody please introduce the poor soul to UNIX because I’d rather someone pitches a tent over a cool NetBSD function or something. That would be a nerd article worth getting a box of Kleenex for.
You can switch one out for any of the others and the article would be the same.
I’m holding out for that one fucking moron who writes the next “Why I’m still using Lua in 2025” and we find out the punch line is he’s making 10M ARR off of something so dumb it shouldn’t even be possible.
In my opinion the biggest issue of Zig is that it doesn't allow attaching data to error. The error can only be passed via side channel, which is inconvenient and ENOURAGES TOOL DEVELOPERS TO NOT PASS ERROR DATA, which greatly increase debugging difficulty.
Somethings there are 100 things that possibly go wrong. With error data you can easily know which exact thing is wrong. But with error code you just know "something is wrong, don't know which exactly".
See: https://github.com/ziglang/zig/issues/2647#issuecomment-1444...
> I just spent way longer than I should have to debugging an issue of my project's build not working on Windows given that all I had to work with from the zig compiler was an error: AccessDenied and the build command that failed. When I finally gave up and switched to rewriting and then debugging things through Node the error that it returned was EBUSY and the specific path in question that Windows considered to be busy, which made the problem actually tractable ... I think the fact that even the compiler can't consistently implement this pattern points to it perhaps being too manual/tedious/unergonomic/difficult to expect the Zig ecosystem at large to do the same
The "correct" way is highly context dependent with the added proviso that Zig assumes a low-level systems context.
In this context, adding data to an error may be expedient but 1) it has a non-trivial overhead on average and 2) may be inadvisable in some circumstances due to system state. I haven't written any systems in Zig yet but in low-level high-performance C++20 code bases we basically do the same thing when it comes to error handling. The conditional late binding of error context lets you choose when and where to do it when it makes sense and is likely to be safe.
A fundamental caveat of systems languages is that expediency takes a back seat to precision, performance, and determinism. That's the nature of the thing.
Interestingly, I just read an article from matklad (who works a lot with Zig) talking about the benefits of splitting up error codes and error diagnostics, and the pattern of using a diagnostic sink to provide human-readable diagnostic information:
https://matklad.github.io/2025/11/06/error-codes-for-control...
Honestly I was quite convinced by that, because it kind of matches my own experiences that, even when using complex `Error` objects in languages with exceptions, it's still often useful to create a separate diagnostics channel to feed information back to the user. Even for application errors for servers and things, that diagnostics channel is often just logging information out when it happens, then returning an error.
People are working on this. std.zon is generally considered to be a good example of how to handle errors and diagnostics, though it's an area of active exploration. The plan is to eventually collect all the good patterns people have come up with and (1) publish them in a collection, and (2) update std to actually use them.
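For anyone who hasn't seen the pattern, here's a rough sketch of the "error code plus diagnostics side channel" idea. The names (`Diagnostics`, `parse`) are hypothetical, not the actual std.zon API; the point is that the error union stays a bare code while rich context flows through a caller-provided struct:

```zig
const std = @import("std");

const Diagnostics = struct {
    byte_offset: usize = 0,
    message: []const u8 = "",
};

// The error set stays a plain code; rich context goes through the
// optional out-parameter instead of being attached to the error itself.
fn parse(src: []const u8, diag: ?*Diagnostics) error{InvalidSyntax}!void {
    if (std.mem.indexOfScalar(u8, src, '!')) |i| {
        if (diag) |d| {
            d.byte_offset = i;
            d.message = "unexpected '!'";
        }
        return error.InvalidSyntax;
    }
}

pub fn main() void {
    var diag: Diagnostics = .{};
    parse("oh no!", &diag) catch |err| {
        std.debug.print("{s}: {s} at byte {d}\n", .{
            @errorName(err), diag.message, diag.byte_offset,
        });
    };
}
```

Callers that don't care just pass `null` and pay nothing, which is presumably why this shape keeps getting rediscovered in the ecosystem.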
I know that Zig doesn't allow attaching data to errors for valid reasons. If error data contains an interior pointer then it can easily cause memory safety problems. Zig doesn't have a borrow checker or ownership system to prevent that.
https://github.com/ziglang/zig/issues/2647#issuecomment-2670...
> I know that Zig doesn't allow attaching data to error for good reasons. If error data contains interior pointer then it causes memory safety problem
IMO this is not a good reason at all.
Changed to "valid reasons"
This is annoying. It’s because errors were designed to be a bitset and not have pointers. I would also prefer that they were a `union(enum)`.
We are free to do that as a return type like `Result(T)` and just forgo using `try`, but yeah, I wish this was in there.
This seems kinda contrived. In practice that "ERROR DATA" tends not to exist. Unexpected errors almost never originate within the code in question. In basically all cases that "ERROR DATA" is just recapitulating the result of a system call, and the OS doesn't have any data to pass.
And even if it did, interpreting the error generally doesn't work by going over attached data with a microscope. You got an error from a write. What does the data contain? The file descriptor? Not great, since you really want to know the path to the file. But even then, it turns out it doesn't really matter, because what really happened was the storage filled up due to a misbehaving process somewhere else.
"Error data" is one of those conceits that sounds like a good idea but in practice is mostly just busy work. Architect your systems to fail gracefully, don't fool yourself into pretending you can "handle" errors in clever ways.
I think you've skipped over all the cases where knowing the filename is actually helpful? It's true that sometimes it isn't.
Also, a line number is often helpful, which is why compilers include it. Some JSON parsers omit that, which is annoying.
Serializing error data to text and then dumping that in a log can be pretty useful.
A neat little thing I like about Zig is one of the options for installing it is via PyPI like this: https://pypi.org/project/ziglang/
Which means you don't even have to install it separately to try it out via uvx, if you have uv installed already. For anyone not familiar: you can bundle arbitrary software as Python wheels. Can be convenient in cases like this!
For this sort of stuff I find micromamba / pixi a better way of managing packages, as opposed to the pip / uv family of tools.
Pixi, Conan, or Nix— all better choices than abusing the Python ecosystem to ship arbitrary executables.
That's really cool actually. Now that AI is a little more commonly available for developer tooling, I feel like it's easier than ever to learn any programming language, since you can braindrain the model.
The standard models are pretty bad at Zig right now since the language is so new and changes so fast. The entire language spec is available in one html file though so you can have a little better success feeding that for context.
> The entire language spec is available in one html file though so you can have a little better success feeding that for context.
This is what I've started doing for every library I use. I go to their Github, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
reinventing nix but worse.
The article doesn't answer the question; it's all just "the basics of Zig" (there is nothing cool about manually editing environment variables on Windows in 8 labeled steps, with 5 preliminary steps missing)
and the actual cool stuff is missing:
> with its concept of compile time execution, unfortunately not stressed enough in this article.
indeed
I totally vibe with the intro, but then the rest of the article goes on to showcase bits of Zig.
I feel what is missing is why each feature is so cool compared to other languages.
As a language nerd zig syntax is just so cool. It doesn’t feel the need to adhere to any conventions and seems to solve the problems in the most direct and simple way.
An example of this is declaring a label versus referring to a label. By moving the colon to either end of the name, it's instantly understood which form the label is.
And then there is the runtime promises such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax; it's just more Zig, run during compilation.
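To illustrate the label symmetry for anyone who hasn't seen it, here's a small sketch: the colon goes after the name when declaring (`blk:`) and before it when referring (`:blk`):

```zig
const std = @import("std");

test "labeled blocks" {
    // Declaring a label: colon AFTER the name (`blk:`).
    const x = blk: {
        var sum: usize = 0;
        for (0..10) |i| {
            // Referring to a label: colon BEFORE the name (`:blk`).
            if (i == 5) break :blk sum;
            sum += i;
        }
        break :blk sum;
    };
    // Loop stopped at i == 5, so x == 0 + 1 + 2 + 3 + 4.
    try std.testing.expectEqual(@as(usize, 10), x);
}
```

The same `break :label value` form also works for labeled loops, so blocks and loops share one mechanism rather than needing separate syntax.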
I was also curious what direction the article was going to take. The showcase is cool, and the features you mentioned are cool. But for me, Zig is cool is because all the pieces simply fit together with essentially no redundancy or overloading. You learn the constructs and they just compose as you expect. There's one feature I'd personally like added, but there's nothing actually _missing_. Coding in it quickly felt like using a tool I'd used for years, and that's special.
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
> Coding in it quickly felt like using a tool I'd used for years, and that's special.
That's been my exact experience too. I was surprised how fast I felt confident in writing zig code. I only started using it a month ago, and already I've made it to 5000 lines in a custom tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures on L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
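The "lookup table from an enum" trick mentioned above can be even shorter than 20 lines. A hedged sketch (the `Opcode`/`stack_effect` names are made up for illustration); the table is built by ordinary Zig code that simply runs at compile time:

```zig
const std = @import("std");

const Opcode = enum(u8) { halt, push, add };

// A lookup table computed entirely at compile time -- just normal Zig
// executing during compilation, no separate macro language involved.
const stack_effect: [3]i8 = blk: {
    var t: [3]i8 = undefined;
    t[@intFromEnum(Opcode.halt)] = 0;
    t[@intFromEnum(Opcode.push)] = 1;
    t[@intFromEnum(Opcode.add)] = -1;
    break :blk t;
};

test "comptime table" {
    try std.testing.expectEqual(@as(i8, -1), stack_effect[@intFromEnum(Opcode.add)]);
}
```

Because the table is a `const` baked into the binary, there's no runtime initialization cost at all.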
Yeah, the real strength of Zig isn't what's there, but what isn't.
out of curiosity, what feature do you want?
The feature I want is multimethods -- function overloading based on the runtime (not compile time) type of all the arguments.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP dispatch that depends only on the type of one special argument (self, this, etc.).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
I think this is a major mistake for Zig's target adoption market - low level programmers trying to use a better C.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
matklad did it justice in his post here, in my opinion
https://matklad.github.io/2025/08/09/zigs-lovely-syntax.html
It’s incredibly silly, but I hate Zig's identifier policy. Mixing snake case and camel case for functions is cursed.
That said, amazing effort, progress and results from the ecosystem.
Bursting on the scene with amazing compilation dx, good allocator (and now io) hygiene/explicitness, and a great build system (though somewhat difficult to ramp on). I’m pretty committed to Rust but I am basically permanently zig curious at this point.
I've tried writing a similar post, but I think it's a bit difficult to sound convincing when talking about why Zig is so pleasant. it's really not any one thing. it's a culmination of a lot of well made, pragmatic decisions that don't sound significant on their own. they just culminate in a development experience that feels pleasantly unique.
a few of those decisions seem radical, and I often disagreed with them.. but quite reliably, as I learned more about the decision making, and got deeper into the language, I found myself agreeing with them afterall. I had many moments of enlightenment as I dug deeper.
so anyways, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. if you find it fits for you, awesome! luckily, if it doesn't, there's plenty of options these days (I still would like to spend some quality time with Odin)
+1 for Odin. I wrote a little game in it last year and found it delightful.
I've heard good things about Zig. I want to pick it up and experiment with it but at ~2% market share I find it hard to justify spending the time to learn and master it right now. It's usually much easier to find the time to learn a new language if there is a project (work or open source) that is also using it.
https://survey.stackoverflow.co/2025/technology
I like the idea of the `defer `keyword - you can have automatic cleanup at the end of the scope but you have to make it obvious you are doing so, no hidden execution of anything (unlike c++ destructors).
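A minimal sketch of what that looks like in practice (the file name is hypothetical): the cleanup sits right next to the acquisition, visibly in the source, and runs on every exit path from the scope:

```zig
const std = @import("std");

fn readConfig(allocator: std.mem.Allocator) ![]u8 {
    const file = try std.fs.cwd().openFile("config.txt", .{});
    // Cleanup is declared where the resource is acquired, but runs at
    // scope exit -- including early returns via `try` below.
    defer file.close();

    return try file.readToEndAlloc(allocator, 1 << 20);
}
```

Unlike a C++ destructor, nothing happens unless you wrote the `defer` line, so a reader never has to wonder what invisible code runs at the closing brace.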
There's at least one way in which Zig is better than Rust: the Zig compiler for Windows can be downloaded, unzipped, and then used without admin rights. Rust needs MSVC, which cannot be installed without admin rights. It is said that Rust on Windows can use Cygwin, but I cannot make it work even with AI help.
Have you tried the GNU toolchain? IIRC rustup provides the option to use it instead of the MSVC toolchain during the initial installation.
You should check out cargo-zigbuild which makes use of zig for cross compiling rust projects. https://github.com/rust-cross/cargo-zigbuild
Zig being able to (cross)compile C and C++ feels very similar to how UV functions as a drop in replacement for pip/pip-tools. Seems like a fantastic way to gain traction in already established projects.
I love Zig. Never tried to write it though =).
I just use it as a cross-compiler for my Nim[0] programs.
[0] - https://nim-lang.org
Strangler fig
zig is blowing my mind so much too this week i'm so excited to learn and contribute more!!
have you checked out `river` the Wayland window compositor written in Zig? https://codeberg.org/river/river
also i'm starting to wonder about how one would write a refactored monolith kernel for Framework open-hardware laptops in Zig, and also porting s6 process management to Zig!
> Probably the most incredible virtue of Zig compiler is its ability to compile C code. This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique.
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
> Isn't cross compilation very, very ordinary?
Working cross compilation out of the box any-to-any still isn't.
Yes, very rare and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs through multiple avenues to protect their interests!
From helicoptering folks onto steering committee and indoctrination of young CS majors.
If I had the ability to downvote a comment yet, I'd downvote you. If you're going to spout conspiracy-theory-sounding stuff, at least provide some evidence for your claims!
This comment deserves a [citation needed] visible from geosynchronous orbit.
> I can’t think of any other language in my 45 years long career that surprised more than Zig.
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
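A tiny sketch of the "comptime replaces generics" point: a generic type is just an ordinary function from `type` to `type`, evaluated during compilation (the `Pair` name here is illustrative, not from any library):

```zig
const std = @import("std");

// No dedicated generics syntax: a "generic" is a plain function that
// takes a type parameter and returns a freshly built struct type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swap(self: *@This()) void {
            std.mem.swap(T, &self.first, &self.second);
        }
    };
}

test "Pair" {
    var p = Pair(i32){ .first = 1, .second = 2 };
    p.swap();
    try std.testing.expectEqual(@as(i32, 2), p.first);
}
```

And because `Pair` is just a function, you can branch on `T` with reflection inside it, which is where interfaces-via-comptime come from.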
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction required for a smooth program evolution that they need. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
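For the curious, the arena support is about as low-ceremony as it gets; a sketch of the standard-library `ArenaAllocator`, where a single `deinit` releases everything allocated through it:

```zig
const std = @import("std");

test "arena frees everything at once" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    // One call tears down every allocation made below -- no per-object frees.
    defer arena.deinit();

    const a = arena.allocator();
    const buf = try a.alloc(u8, 64);
    const copy = try a.dupe(u8, "hello");
    _ = buf;
    _ = copy;
}
```

Since any `std.mem.Allocator` can back the arena, swapping the whole memory strategy of a subsystem is a one-line change at the call site.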
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
This is the real answer (amongst other goodness) - this one is well executed and differentiated
Every language at scale needs a preprocessor (look at the “use server” and “use gpu” silliness happening in TS) - why is it not the same as the language you use?
Great comment! I agree about comptime, as a Rust programmer I consider it one of the areas where Zig is clearly better than Rust with its two macro systems and the declarative generics language. It's probably the biggest "killer feature" of the language.
I agree.
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
Thing is, having a good JIT gives you the performance of partial evaluation pretty much automatically (at the cost of less predictability), as compilation occurs at runtime, so the distinction between compile-time and runtime largely disappears. E.g., in Java, a reflective call will eventually be compiled by the JIT into a direct call; virtual dispatch will also be compiled into direct dispatch or even inlined (when appropriate) etc..
Try out Nim: it has powerful comptime/metaprogramming, is statically typed, has automatic memory management, and is as easy to program as Python or JavaScript while still allowing low-level stuff.
For me it'd be hard to go back to languages that don't have all that. Only swift comes close.
perhaps mojo might be your cup of tea ?
Is the inline testing good in practice? I do like the clear proximity and scope of the code being tested but I can also imagine trying to cram in all the unit tests and mocking and logging and such.
Does the feature end up feeling unused, dominating app code with test code, or do people end up finding a happy medium?
To the author -- code samples as images are great for syntax highlighting, but I wanted to play with the examples and.. got stuck trying to copy the content.
(also expected tesseract to do a bit better than this:
"This associated with the ability to cross-compile code to be run in another architecture, different than the machine where it is was originally compiled, is already something quite different and unique.")
Perhaps I'm missing something but this is utterly routine. It even has the name used here: Cross-compiling.
Is there a decent native GUI library for Zig yet? I don't want to use bloated toolkits like GTK and Qt.
I like the simplicity and speed of Rust's eGUI. Something similar for Zig would be amazing.
I would like to see the output of the:
Zig defaults to statically linking musl when targeting Linux, so the output will not be very interesting unless you target dynamic musl, or glibc, or FreeBSD/NetBSD.
For a language that’s so low level and performance focused, I’m surprised that it has those extra io and allocator arguments to functions. Isn’t that creating code bloat and runtime overhead?
Regarding runtime overhead, I'd assume you would still need an io implementation; it's just shown to you explicitly instead of being hidden behind the std lib.
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
Yeah thing is it's usually better to have allocator in particular defined as a parameter so that you can use the testing allocator in your tests to detect memory leaks, double frees, etc. And then you use more optimal allocators for release mode.
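A sketch of why that matters (`joinWords` is a made-up example function): `std.testing.allocator` fails the test if anything allocated through it isn't freed, so threading the allocator parameter through your code gets you leak detection for free in tests:

```zig
const std = @import("std");

// Any function that allocates takes the allocator as a parameter...
fn joinWords(allocator: std.mem.Allocator) ![]u8 {
    return std.mem.join(allocator, " ", &.{ "hello", "world" });
}

test "no leaks" {
    // ...so tests can inject std.testing.allocator, which reports
    // leaks and double-frees when the test ends.
    const result = try joinWords(std.testing.allocator);
    defer std.testing.allocator.free(result);
    try std.testing.expectEqualStrings("hello world", result);
}
```

In release builds you pass a general-purpose or arena allocator to the same function; the code under test doesn't change.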
the answer I've seen when it has been brought up before is that (for allocators) there is not a practical impact on performance -- allocating takes way more time than the virtual dispatch does, so it ends up being negligible. for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a VTable, and the impact on binary size is pretty minimal. you're also not really creating more than a couple of allocators in an application (typically a general purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
for IO, which is new and I have not actually used yet, here are some relevant paragraphs:
https://kristoff.it/blog/zig-new-async-io/
https://github.com/ziglang/zig/issues/23367
io and allocator objects each only contain 4 pointers or so. They are very fast to wire up and don't create much overhead at all.
Is it cool? It seems to sit in a no-man's-land between Rust and Go. Not sure what the unique use case for Zig is.
It's more of an in-between C and Rust than Go as it is a systems language with no built-in garbage collector for memory management. It has a lot of memory safety features, but it's not as memory safe as Rust. However, it avoids a lot of the complexity of Rust like implicit macro expansion, managing lifetimes, generics and complex trait system, etc. It also compiles much more compactly than Rust, in my experience.
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
We could not have written TigerBeetle, at least not the way it is, without Zig:
https://tigerbeetle.com/blog/2025-10-25-synadia-and-tigerbee...
Well, it's insanely simple, insanely fast, often more performant than Rust with lower resource usage, with first-class C interop and cross-compiling out of the box. It's easily my favorite language now, with Go being a close second. Both are opinionated and have a standard formatter that makes Zig code instantly readable when you see it, similar to Go. Rust was once interesting, but it's firmly in macro hell territory now, just like Swift, with concealed execution paths aplenty, and neither cross-compiles out of the box.
>often more performant than Rust with lower resource usage
[citation needed]
If we are to trust this page [0] Rust beats Zig on most benchmarks. In the Techempower benchmarks [1] Rust submissions dominate the TOP, while Zig is... quite far.
Several posts which I've seen in the past about Zig beating Rust by 3x or such all turned to be based on low quality Rust code with some performance pitfalls like measuring performance of writing into stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges which are known to be problematic from the performance perspective.
[0]: https://programming-language-benchmarks.vercel.app/rust-vs-z...
[1]: https://www.techempower.com/benchmarks/
As far as I can tell from my outsider perspective, Rust might be used instead of C++, Zig instead of C, and Go instead of Java.
not being a direct competitor to either of these already existing languages is exactly why it is interesting!
That inline test syntax is pretty cool; where does it come from?
> I can’t think of any other language in my 45 years long career that surprised more than Zig. I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs, in my opinion. To say it’s merely a language to replace C or C++, it’s a huge understatement.
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
> One may wonder how the compiler discovers the variable type. The type in this case is *inferred* by the initialization.
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
> Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
> Type inference has left academia and proliferated into mainstream languages for so many years that I almost forgot that it's a feature worth mentioning.
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
The only popular language I can think of is C (prior to C23). If you want to include Fortran and Ada, that would be three, but these are all very old languages. All modern system languages have type deduction for variable declarations.
C++ added auto 14 years ago. Swift had it since day 1 back in 2014 if I remember right. What else is there?
Compilation speed — OCaml, Go, D, C#, Java
“Low-level” languages — Rust, C++, D
This. Is Zig an interesting language? Yes sure. But “a totally new way to write programs”? No, I don’t see a single feature that is not found in any other programming languages.
I feel like the article didn't really hit on the big ones: comptime functions, no hidden control flow, elegant defaults, safe buffers, etc.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes and generics and macros and a complex trait system. Zig is in that class of programming languages like C, C++, and Rust, and unlike Golang, C#, Java, Python, JS, etc that have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there's is any significant different here between zig, C and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust; it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, implicit memory allocation, etc. that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
Of which, perhaps, the author isn't aware? Perhaps the author has very narrow experience in programming languages.
Or it's hyperbolic.
>Perhaps the author has very narrow experience in programming languages.
I got that impression as well.
He's impressed about types being optional because they can be inferred.
That's ... hardly a novelty ...
Winning at chess is more "avoid gigantic blunders" than "make brilliant moves".
Zig feels like one of the few programming languages that mostly just avoids gigantic blunders.
I have some beefs with some decisions, but none of them that are an immutable failure mode that couldn't be fixed in a straightforward manner.
Move Zig. For great justice!
Nothing against (or for) Zig, but the article author seems unfamiliar with other modern languages in common use... imagine if they saw Swift or Rust. Their mind would be utterly, utterly blown.
What is Zig?
what comes before Zag ?