“I looked at the compiler’s output and thought: this cannot be mine. I couldn’t believe what had happened. I am happier than I have been in 20 years.”

This is a long read — 2.5 years don’t compress easily. If this is not the right moment, bookmark it and come back. I believe it is worth your time.


Prologue: A Seed Planted Thirty Years Ago

In the late 1980s and early 1990s, I was a student, and I built my first compiler. It was a modest academic project, based on materials from ETH Zürich. While it was small, something happened to me during that work — I fell in love with the intricate puzzle of transforming human-readable code into something a machine could execute.

Then life happened. Career. Responsibilities. Decades passed.

By 2023, I had been working in IT for over thirty years. I was successful by most measures. But I felt empty. My profession no longer provided what I craved: innovation. I had always seen myself as a creative person, and that creative part of me was starving.

I didn’t know it yet, but I was searching for something. And I was about to find it.


Chapter 1: The Spark

It started with a small book.

I had begun studying compiler construction again — partly out of nostalgia, partly out of a nameless hunger for something new. I worked through materials from McGill University, from a Danish university, from the Hasso Plattner Institute in Germany, and of course from ETH Zürich.

Then I discovered a thin volume about PL/0, a minimalist teaching language created by Niklaus Wirth.

Who Was Niklaus Wirth?

Niklaus Wirth (1934–2024) was one of the giants of computer science. A Swiss professor at ETH Zürich, he created languages that shaped how generations of programmers think: Pascal (1970), Modula-2 (1978), and Oberon (1987). He won the Turing Award — the “Nobel Prize of Computing” — in 1984.

His philosophy was radical simplicity. He believed that understanding comes from building things that are complete but small. Not toy projects that cut corners, but real systems stripped to their essence.

PL/0 was his teaching language for compiler construction. It had only a handful of keywords, no complex features — but it contained the complete essence of a compiler: lexical analysis, parsing, symbol tables, code generation. Everything a compiler needs, nothing more.

When I opened that book in late 2023, something clicked. I didn’t just see a teaching exercise — I saw a path. A dream began to form: what if I could build a compiler that generates real machine code? Not a toy. A real compiler.

I chose Go as my implementation language and VS Code as my editor. I created an empty directory and started typing.

A few weeks later — on January 1, 2024 — Niklaus Wirth died quietly at his home in Switzerland. He was 89 years old.

I did not know then how deeply connected those two events would become for me. The man whose work had inspired me to begin was gone, just as I had taken my first steps. There is something both sorrowful and motivating about that coincidence that I have not entirely worked out. What I know is this: I will not let his work be forgotten. More on that at the end of this story.


Chapter 2: Two Traditions

As I dove deeper into compiler construction, I realized I was stepping into a story much larger than myself — a story about two competing visions of what programming should be.

The European tradition: In 1970, at ETH Zürich, Wirth created Pascal. Clarity above all. Strong typing. Structured programming. Code that reads almost like prose. Pascal became the teaching language of choice for decades. Millions of students — including me — learned to program with it.

The American tradition: Two years later, at Bell Labs, Dennis Ritchie created C. Efficiency above all. Direct hardware access. Minimal abstraction. C was designed to rewrite UNIX. It wasn’t meant to be pretty; it was meant to be powerful.

Here is the uncomfortable truth: C won. Despite all the elegance, despite all the educational value — C became the foundation of modern computing. Linux, macOS, PostgreSQL, nginx, your car’s firmware — all C or C++. When you are building an operating system kernel, you need control over every byte.

As I studied compiler construction in 2023, I found myself caught between these two worlds. I loved the clarity of structured, readable code. But I respected C’s power. Could there be a bridge? A language with structured readability and C’s efficiency?

I didn’t have an answer yet. But the question lodged itself in my mind and would not leave.


Chapter 3: Building Piece by Piece

I worked on the compiler alongside my regular job, stealing hours where I could. Piece by piece, it grew.

The scanner came first — the component that reads raw characters and groups them into tokens. Sounds simple; it’s not. Comments, string literals with escape sequences, Unicode, error recovery — I spent weeks on the scanner alone.

The parser came next — building structure from the token stream, understanding that 42 + x * 3 means “multiply x by 3, then add 42.” I implemented a recursive descent parser, where each grammar rule becomes a function. Elegant when it works, maddening when it doesn’t.

The symbol table tracked every declaration. The Abstract Syntax Tree and the Semantic Analyzer were what separated a serious compiler from a simple one.

But there was a problem. I didn’t trust myself to write a machine code generator for AMD64 from scratch. The x86-64 instruction set is enormous, baroque, filled with historical baggage from the 1970s. So I took a different approach: I built a CPU emulator. My compiler would generate code for this virtual machine, and I could test everything without touching real hardware.

The emulator grew more sophisticated. Its output began to look suspiciously like real Intel assembly code.

I didn’t realize it then, but I was setting the stage for something unexpected.


Chapter 4: The Night Everything Changed

I remember that night vividly. My emulator generated output that looked like this:

    mov     rax, 42
    push    rax
    mov     rax, [rbp-8]
    pop     rbx
    add     rax, rbx

The assembly shown above is from those early days — a naive stack-based approach. The code Mica emits today is far more sophisticated.

This was almost valid Intel x86-64 assembly. My emulator executed this internally, simulating a CPU. But the syntax was so close to real assembly that I wondered: what if I just fed it to a real assembler?

I installed GCC on my Ubuntu machine. I took the assembly output from my compiler and fed it to the GNU Assembler. Then I linked it with the GNU linker.

I ran the resulting binary. It worked.

I ran another example. It worked. I ran all my test programs. They all worked.

I sat there in the middle of the night, staring at my screen, trying to process what had just happened. My compiler — my hobby project built from an empty directory — was generating real, executable Linux binaries.

That was Spring 2025. That was the breakthrough.


Chapter 5: A New Name, A New Identity

After that night, everything accelerated. I refactored the compiler fundamentally — implementing a proper AMD64 code emitter, restructuring the entire architecture. And I realized: this wasn’t PL/0 anymore. This was something new.

Following the tradition of Linus Torvalds, I derived a name from my own first name, Michael: Mica. Beyond the personal connection, mica is also a mineral — known for its layered structure, its transparency, its resilience.

The question that had been growing in my mind came roaring back: clarity and systems control — together?

The answer was the System V AMD64 ABI.

I could have invented my own calling convention. Many academic compilers do. Instead, I decided to implement full compliance with the standard Linux calling convention. The payoff: zero-overhead C interoperability. A Mica binary and a C library binary are the same kind of object. They link together directly. No adapter layer. No runtime bridge. No reimplementation required. Every C library on the system became, in principle, a Mica library.

I then implemented DWARF v5 debug information — the latest standard — enabling professional source-level debugging in GDB and VS Code. This is rare for any language, let alone a solo project.


Chapter 6: Into the Valley

Summer 2025. Autumn 2025. The darkest period.

Functions and procedures existed, but they couldn’t take parameters. No arguments. Nothing passed in, nothing passed out. This might sound minor. It’s not. Parameter passing is the backbone of any useful language.

For Mica, it was especially hard because of nested functions. Like Wirth’s original work, Mica supports functions defined inside other functions, with access to the enclosing scope. This requires static links — pointers chaining stack frames together so inner functions can reach outer variables. The compiler must pass links implicitly alongside regular arguments, generate code following link chains at arbitrary depth, and make all of this work with recursion — while remaining ABI-compatible with C.
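A toy model of the mechanism, in Go — not the compiler's actual representation, just the idea. Each activation record carries a static link to the frame of its lexically enclosing function, and reaching an outer variable means following one link per level of nesting:

```go
package main

import "fmt"

// frame is a toy activation record: a static link to the enclosing
// function's frame, plus the frame's own local variables.
type frame struct {
	staticLink *frame
	locals     map[string]int
}

// lookup follows `depth` static links, then reads the variable —
// roughly what the generated code does with a chain of loads.
func lookup(f *frame, depth int, name string) int {
	for i := 0; i < depth; i++ {
		f = f.staticLink
	}
	return f.locals[name]
}

func main() {
	// Three levels of nesting: outer > middle > inner.
	outer := &frame{locals: map[string]int{"x": 10}}
	middle := &frame{staticLink: outer, locals: map[string]int{"y": 20}}
	inner := &frame{staticLink: middle, locals: map[string]int{"z": 30}}

	// From inner's frame: z is local (depth 0), y one level up, x two.
	fmt.Println(lookup(inner, 0, "z")) // 30
	fmt.Println(lookup(inner, 1, "y")) // 20
	fmt.Println(lookup(inner, 2, "x")) // 10
}
```

The hard part in a real compiler is that the static link is invisible in the source: the compiler must thread it through every call site, keep it distinct from the dynamic call chain under recursion, and do so without breaking the C-compatible register layout.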

The compiler stopped working. Not partially — completely. Every compilation ended in a crash. For weeks, I couldn’t produce a single working binary. I had lost control of my own creation.

I want to be honest: there were moments when I sat in front of my computer and felt like I was failing. Thirty years of IT experience, and I couldn’t make this work.

The parameter passing crisis was a turning point in more ways than one. It forced a decision I had been avoiding for months.


Chapter 7: First Encounters with AI — The Hard Way

For a long time, I had resisted using AI tools for serious code work. My concerns were real: code quality. Loss of control. Software engineering is not just about producing code that compiles — it is about producing code that is correct, maintainable, and reflects the architectural decisions of someone who understands what they are building.

But by mid-2025, the pain was too great. I was one person with a full-time job, a personal life, a growing codebase, and a vision that was moving faster than I alone could carry it. I could not stop — I am full of ideas and I have a goal now written in a roadmap. But I could not continue as before either.

So I began. Slowly. Cautiously. Skeptically.

The first months were genuinely frustrating.

Without consistently giving the AI deep context about Mica’s architecture, design principles, and established conventions, the generated code was wrong in ways that were hard to articulate. It compiled. Sometimes it even ran. But it felt foreign. It did not reflect how I think about code. The naming was off. The structure was off. The code solved the immediate problem but violated the spirit of the surrounding system in subtle ways that only became apparent later.

I hated those sessions.

I kept asking myself: is this actually helping, or am I creating a maintenance problem I’ll spend months untangling? I rejected large amounts of what was generated. I rewrote sections entirely. Some weeks, the AI collaboration felt like it cost me more time than it saved. The code that came back felt like it had been written by a capable stranger who had skimmed the documentation but didn’t understand what I was trying to build.

What was missing, I eventually understood, was context. Not technical context alone — the kind of context that conveys not just what the code needs to do, but what the code needs to be. How it should feel. What principles it must not violate. Where it fits in a system that has been built by one person with very specific values over two years.

That took months to figure out. Not weeks. Months.

I slowly found that AI collaboration worked reliably in specific modes:

Documentation and comments. Giving a complex piece of code to an AI and asking it to write documentation — not invent documentation, but describe what was actually there — proved remarkably effective. The AI often captured the intent in prose better than I could in the moment of writing the code.

Code reviews. “Here is this function. What is wrong with it?” When the AI had enough context, it found real issues: edge cases I had missed, type consistency problems I had been living with, assumptions that could silently break.

Specification translation. The System V AMD64 ABI has hundreds of pages. DWARF v5 is complex enough to be a small book. Aggregate type passing rules — how structs containing mixed float and integer fields must be split across registers — run to seventeen pages of rules alone. Using an AI as an intelligent reader and translator of these documents, asking it to explain specific rules or apply them to specific cases, was enormously useful.

Assembly code completions. Mica needs to emit correct machine code for every combination of operation and type: int8, int16, int32, int64, uint8, float32, float64. That is dozens of combinations for arithmetic alone. Asking an AI to complete tables of instruction patterns, once a few examples were established, was one of the first places where code generation felt genuinely safe — the patterns were regular, the correctness was verifiable, and the risk of hidden errors was low.

Root cause analysis. “Here is a stack trace. Here is the surrounding code. Here is what I expected to happen. What went wrong?” AI assistants often proved better at this than I was in moments of deep frustration.

What I learned over those nine months is that AI is not a shortcut. It is a tool that requires enormous investment of context and judgment to use well. The frustrating sessions were a product of the time — AI services were still maturing, and I was still learning how to work with them. Both sides of that collaboration had room to grow. And learning that took time I did not have, and patience I had to force myself to find.


Chapter 8: December

December 2025.

I had been working toward something ambitious: transforming the single-file compiler into a full multi-file compilation platform. Cross-compilation-unit symbol resolution. A global namespace registry. Import and export across source files. Static library creation. The ability to build real programs from multiple Mica files, each compiled independently, linked together as proper ELF objects.

This was not a feature. It was a fundamental architectural rework of the entire compiler.

I was also deeply frustrated. The parameter passing rework had been relentless, and the complexity had simply kept growing. The codebase had expanded enormously across 2025. The volume of source — the compiler itself, the standard library, the VS Code extension, the tutorials, the documentation that all needed to stay consistent — no longer fit comfortably in one person’s head.

I remember sitting at my desk in December 2025 with tears in my eyes. Not from sadness, exactly. From the particular frustration of being on the edge of losing control of something you have poured yourself into for two years.

The multi-file rework had broken things I had thought were solid. Regressions appeared in code that had passed tests for months. Error messages pointed in wrong directions. The compiler, which had felt like mine — which I could navigate in my sleep — had become a stranger again.

I was on the verge of stopping. Not giving up on the vision. But stopping, and accepting that Mica would never become what I had hoped.

I did not stop.


Chapter 9: 4.0.0 — Telling No One

In January 2026, I made version 4.0.0 a reality.

The codebase had more than tripled compared to where it was a year earlier. Multi-file compilation worked. Cross-compilation-unit imports worked. Static library creation worked. The compiler could build real, multi-file Mica programs and link them cleanly. It was an extremely hard release to achieve — perhaps the hardest single milestone in the project.

I told four people. Three of them were friends who are not technical. They nodded, smiled, said encouraging things. I don’t think any of them understood what had actually happened. But they saw that something mattered to me, and they supported that. That is not nothing. In a project this solitary, that kind of quiet emotional support is what keeps you going when there is no one else watching.

Around this same time, my wife and I formally founded Mica Development UG. This was a decision about commitment as much as about structure. The project had grown beyond a hobby.

There is something I have not written before: there were people who reached out to me during this period who wanted to use Mica for real development work. Actual developers. People who had seen enough to be genuinely interested.

I discouraged them. I took no money. I was not certain the project would survive long enough to be useful, and I would not have people depending on a compiler that might not make it. Some decisions about honesty are not complicated, even when the pressure is high. I could not take responsibility for other people’s work when I did not yet trust my own.

That is where version 4.0.0 lived: between private pride and honest uncertainty.


Chapter 10: The Wall

Some obstacles you see coming from a distance. Then there are walls — the ones you only discover when you walk straight into them.

A general-purpose programming language needs aggregate types. Records. Arrays. Structures within structures. Packed memory layouts. The ability to pass a ten-field record through three function calls, return it by value, classify it correctly under the ABI, annotate it with debug information, and behave consistently across every combination of nesting and packing.

The System V AMD64 ABI has seventeen pages of rules for aggregate passing alone. Small structs go in registers. Large ones go in memory with a hidden pointer passed as the first argument. Mixed structs with float and integer fields get split across register classes. Packed records compress the field layout, removing alignment gaps — which changes every offset calculation downstream. And then there is the recursion: structs containing arrays of structs.
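The headline cases can be sketched in a few lines of Go — a gross simplification of the real classification algorithm (it ignores alignment, fields straddling eightbyte boundaries, and most of those seventeen pages), offered only to show the flavor:

```go
package main

import "fmt"

// class is one of the simplified System V AMD64 parameter classes.
type class int

const (
	INTEGER class = iota // eightbyte goes in a general-purpose register
	SSE                  // eightbyte goes in a vector register
	MEMORY               // whole aggregate passed via hidden pointer
)

func (c class) String() string {
	return [...]string{"INTEGER", "SSE", "MEMORY"}[c]
}

// field is a flattened struct member: size in bytes, float or not.
type field struct {
	size    int
	isFloat bool
}

// classify sketches the headline rules: aggregates over 16 bytes go
// to memory; smaller ones are split into 8-byte chunks ("eightbytes"),
// and an eightbyte is SSE only if every field inside it is floating
// point. Real classification handles far more cases than this.
func classify(fields []field) []class {
	total := 0
	for _, f := range fields {
		total += f.size
	}
	if total > 16 {
		return []class{MEMORY}
	}
	classes := []class{}
	off := 0
	cur := SSE
	for _, f := range fields {
		if !f.isFloat {
			cur = INTEGER // one integer field taints the eightbyte
		}
		off += f.size
		if off >= 8 {
			classes = append(classes, cur)
			off -= 8
			cur = SSE
		}
	}
	if off > 0 {
		classes = append(classes, cur)
	}
	return classes
}

func main() {
	// { int32; int32; float64 }: first eightbyte INTEGER, second SSE.
	fmt.Println(classify([]field{{4, false}, {4, false}, {8, true}}))
	// A 24-byte struct exceeds two eightbytes: MEMORY.
	fmt.Println(classify([]field{{8, false}, {8, false}, {8, false}}))
}
```

Even this toy version shows why the feature is expensive: the classification decision changes the generated code at every call site, for every aggregate shape.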

I ran the numbers in my head. The semantic analysis alone would take months. The code generation would take months more. Testing every combination would take… I could not complete the estimate.

I was one person. I had a day job. I had a life. The math did not work — alone.

But something was about to change.


Chapter 11: A Coincidence of Time and Pain

There are moments when the timing of external events aligns in a way that feels almost designed. I am cautious about that kind of thinking. But I also know what January and February 2026 felt like when they arrived.

The AI services I had been using — cautiously, frustratedly, incrementally — made a step change in their capabilities. Not incremental improvement. A visible, meaningful shift in what they could do with complex, context-heavy work. I had been using them for six to nine months by then. I knew what they could and couldn’t do. I could see the difference clearly.

I made a decision: invest properly. I subscribed to the highest-quality tiers from both OpenAI and Anthropic. It is not cheap. For a solo engineer with a day job, it is a real monthly cost. But I have a vision and a goal I cannot abandon. The question was not whether I could afford it. The question was whether I could afford not to.

What followed was a transition that I now describe as moving from AI assistance to AI co-development.

The difference is not about the AI. It is about me.

AI assistance is using an AI to help with a specific task: explain this concept, find this bug, write this boilerplate. The AI is a sophisticated tool. You remain the sole programmer.

AI co-development is different. It is a collaboration model where I bring the vision, the architecture, the design principles, and every judgment call — and the AI brings tireless implementation capacity, broad technical knowledge, and sometimes ideas I hadn’t considered. I review every line. I reject what doesn’t fit. I push back when I disagree. But I am no longer carrying the implementation burden completely alone.

The AI services I work with most closely are Claude Opus (Anthropic) and ChatGPT Codex (OpenAI). Both have become partners I trust within clearly understood limits. They work when I ask them to. They don’t have bad days. They adapt when I give better context. What I have now is not what I had in the frustrating sessions of mid-2025. Those sessions failed because I didn’t know yet how to collaborate. Now I do.


Chapter 12: A New Kind of Partnership

January 2026. I was working on peephole optimization — post-generation cleanup that removes inefficiencies from assembled code. Redundant mov instructions. Dead stores. Back-to-back push/pop pairs that cancel out. I had planned perhaps three passes. I thought of them as optional polish.

Instead, I described the problem in detail: Mica’s instruction representation, the properties I cared about, the patterns I had seen in the generated assembly. Because the context was rich and the problem was well-defined, what came back was also well-structured.

The implementation covered a uniform interface for optimization passes — a contract that every pass would satisfy, so they could be composed, measured, and extended without touching each other. Passes ran in sequence with statistics tracked after each, so improvement could be measured rather than assumed. The first passes followed naturally from the patterns I had described, and edge cases I had noted were accounted for.
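A sketch of what such a contract can look like, in Go — the types and pass names here are illustrative, not Mica's actual interfaces. Every pass satisfies the same interface, so passes compose in sequence and report their own statistics:

```go
package main

import "fmt"

// instr is a toy assembly instruction: an opcode and one operand.
type instr struct {
	op, arg string
}

// pass is the uniform contract: every optimization pass takes code,
// returns transformed code plus a count of removed instructions.
type pass interface {
	Name() string
	Run(code []instr) (out []instr, removed int)
}

// pushPopElim removes adjacent "push X; pop X" pairs, which cancel.
type pushPopElim struct{}

func (pushPopElim) Name() string { return "push/pop elimination" }

func (pushPopElim) Run(code []instr) ([]instr, int) {
	out := []instr{}
	removed := 0
	for i := 0; i < len(code); i++ {
		if i+1 < len(code) &&
			code[i].op == "push" && code[i+1].op == "pop" &&
			code[i].arg == code[i+1].arg {
			i++ // skip the pop as well; the pair is a no-op
			removed += 2
			continue
		}
		out = append(out, code[i])
	}
	return out, removed
}

func main() {
	code := []instr{
		{"mov", "rax"}, {"push", "rax"}, {"pop", "rax"}, {"add", "rbx"},
	}
	// Passes run in sequence; statistics are tracked after each, so
	// improvement is measured rather than assumed.
	for _, p := range []pass{pushPopElim{}} {
		var n int
		code, n = p.Run(code)
		fmt.Printf("%s: removed %d instructions\n", p.Name(), n)
	}
	fmt.Println(len(code)) // 2
}
```

Because each pass only knows the shared contract, adding an eighteenth pass never requires touching the other seventeen.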

I reviewed every line. I asked questions. I pushed back on structural choices I disagreed with. The result was mine — designed by me, reviewed by me, integrated by me — but I was no longer writing every line from scratch.

We built seventeen optimization passes. In weeks.

I stood back and looked at what we had built, and I felt something I had not expected: unease. Not about the code quality — it was clean, carefully designed, reviewed line by line. The unease was philosophical. Was this still my compiler?

I sat with that question for a while. And then I arrived at an answer.

Every line passed through my eyes. I accepted nothing I did not understand. I rejected suggestions I disagreed with. The architecture remained mine — the design principles, the direction, the values. The AI was not replacing me. It was multiplying me.

The question is not whether AI contributed code. The question is whether I remained responsible for the outcome. Whether I understood it. Whether I could debug it, defend it, extend it.

The answer to all of those was: completely yes.


Chapter 13: Spectra — Naming the Light

Around this same time, I did something that surprised me with how much it mattered: I gave the intermediate language a name.

Every serious compiler has an intermediate representation — a language that sits between the source code a programmer writes and the machine instructions a CPU executes. Mica’s had been nameless for years. “The IL.” An implementation detail. Technically accurate. Completely uninspiring.

The name I chose was Spectra.

When a prism intercepts white light, it doesn’t destroy it — it reveals it. White light looks uniform, monolithic, simple. The prism shows you that it is not. It is composed of countless frequencies, each with its own energy, each occupying its own position across the spectrum. The prism makes visible what was always there, hidden inside apparent simplicity.

Spectra does the same thing to a Mica program.

A source line like result := (base + offset) * 2 looks simple. One assignment, two variables, one constant. But Spectra reveals what this actually is: a load of base, a load of offset, an addition producing a temporary, a literal 2, a multiplication producing another temporary, a store into result. Six distinct operations, each typed, each with a named operand, each with a defined position in the activation record.

    m1.1:int32 = load v1.1:int32
    m1.2:int32 = load v1.2:int32
    m1.3:int32 = add m1.1:int32, m1.2:int32
    m1.4:int32 = literal 2:int32
    m1.5:int32 = multiply m1.3:int32, m1.4:int32
    store m1.5:int32, v1.3:int32

Every temporary has a name. Every operation is explicit. Every type is annotated. A spectrum of atomic operations, decomposed from the apparent unity of the source.

Spectra defines 37 operations — arithmetic, comparison, logical, memory, field selection, array indexing, set operations, control flow, and function calls. It carries enough information for the emitter to produce correct machine code, and enough structure for optimization passes to reason about what the program does — not just what instructions it emits.


Chapter 14: The Lego System

By February 2026, the AI co-development model had settled into a rhythm I could describe in one word: Lego.

Lego bricks come in standard shapes with standard connectors. They don’t know what you’re building. They only know how they fit together. The architecture — the structure, the purpose, the vision — is yours. The bricks are the implementation.

When I needed a new feature, I could describe the shape of the missing brick — what it needed to accept, what it needed to produce, which existing bricks it needed to connect to. The AI could draft that brick, in Go, following the established patterns of the codebase. I reviewed it, tested it, corrected it, integrated it.

The loop became fast. Faster than anything I had experienced working alone.

But here is what made it work — and this surprised me more than anything else: the quality of the existing code determined the quality of what could be added. When the interfaces were clean, new bricks fit cleanly. When the architecture was clear, extensions were clear. When the phase boundaries were explicit, new phases slotted in without disturbing the others.

The discipline I had applied over two years of solo development — the strict separation of phases, the habit of never letting complexity accumulate without a forcing function to clean it up — suddenly paid dividends I had not anticipated. A good architecture doesn’t just organize existing code. It makes future code better.

The aggregate type implementation that I had estimated would take most of the year — records, arrays, packed variants, ABI-compliant passing and returning — was no longer impossible. It was a set of bricks to build. One at a time. Connected to existing bricks. Tested as each piece arrived.


Chapter 15: The Test Fortress

In the middle of all this, the test harness grew into something I had not planned — and could not imagine working without.

The dark months of 2025 had taught me what it costs to work without a complete safety net. I had been there. I would not go back.

The harness that exists today has four layers, each validating a different truth about the compiler.

Execution tests compile a Mica program, link it, run it, and compare the output against expected text — byte for byte. These tests prove that the complete pipeline is correct. Not individual phases — the whole thing, end to end.

Error tests verify that invalid programs are rejected correctly. Not just rejected — rejected with the right diagnostic, at the right source location, with the right message text. A compiler that accepts correct programs is half a compiler. One that also precisely explains why incorrect programs are wrong is something a programmer can actually rely on.

IL tests capture the Spectra intermediate language output and check it against patterns. When I add a new expression form, I verify that it lowers to exactly the Spectra IL I intended. Not approximately — exactly.

Assembly tests do the same for generated machine code. Does a signed 64-bit multiply produce imul? Does the prologue save the right callee-saved registers?

Across these four layers: 538 test cases.
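The core of an execution test is small; the value is in running it hundreds of times, automatically, after every change. A sketch in Go, with `echo` standing in for a freshly compiled binary — the function names are illustrative, not the harness's real API:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runExecutionTest runs a binary and compares its stdout byte for
// byte against the expected text. In the real harness the command
// would be the binary that the compiler and linker just produced.
func runExecutionTest(cmd string, args []string, expected []byte) error {
	out, err := exec.Command(cmd, args...).Output()
	if err != nil {
		return fmt.Errorf("binary failed to run: %w", err)
	}
	if !bytes.Equal(out, expected) {
		return fmt.Errorf("output mismatch:\n got: %q\nwant: %q", out, expected)
	}
	return nil
}

func main() {
	// Stand-in for a compiled test program that prints "hello".
	if err := runExecutionTest("echo", []string{"hello"}, []byte("hello\n")); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("PASS")
}
```

Byte-for-byte comparison is deliberately unforgiving: a trailing newline or a one-character formatting drift fails the test, which is exactly the sensitivity an end-to-end pipeline check needs.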

I think about what this harness would have meant in the dark months of 2025. When the parameter passing implementation broke everything. When I spent days tracing through crashes, debugging in the dark. With this harness, every regression would have surfaced within seconds. The feedback loop that used to consume days now takes seconds of automated comparison.

The test fortress does not prevent bugs. Nothing does. But it makes bugs impossible to hide for long — and it gives you the courage to make large changes, because you know immediately whether something broke.


Chapter 16: Closing the Aggregate

March 2026. The thing I had believed was impossible.

Aggregate types — records, arrays, packed records, packed arrays, nested combinations of all of the above — are fully implemented in Mica. End to end. Scanner through DWARF v5 debug information. They work.

Not “work for simple cases.” They work.

Full field selection through multi-level selector chains. Multi-dimensional array indexing. Pointer-to-record access with automatic lowering. Pass by value. Return by value. The System V AMD64 ABI classification for aggregates with mixed integer and float fields — the seventeen-page set of rules that I had once looked at with dread. Packed layouts. DWARF type descriptions that GDB can read and display with correct field names, correct byte offsets, correct nesting.

There is a specific moment I want to record. Not the final green test run — those are wonderful but they come after the work. The moment I want to record is smaller.

I was testing a function that takes a record with four fields — two int32, one float64, one uint8 — and returns a different record with two int64 fields. The first time I ran it, the output was wrong. A field was off by four bytes.

Two years ago, I would have spent days on this. In March 2026, I added an IL test for that exact function, saw the field offset that was wrong in the Spectra output, located the layout calculation code in forty minutes, fixed the off-by-four, ran the full test suite.

Green.

The moment when you have enough infrastructure that debugging a genuinely hard problem feels manageable — that is a different kind of milestone than shipping a feature. It means the architecture has matured enough to support itself.


Chapter 17: Coming Out of Silence

I am more than sixty years old. I started this project — the project of my life — and for most of the time I have been building it, I have been silent.

Not just quiet. Silent. No public posts. No progress updates. No discussion of how I was working. A handful of friends who supported me emotionally without understanding what I was doing. Three or four people I could tell about 4.0.0 who would nod and smile and not understand and keep caring anyway.

Part of the silence was practical: the compiler was not ready. There was nothing to announce.

But part of the silence was something else. For a long time, I felt a kind of guilt about the collaboration with AI. Was I being honest when I described this as my compiler? Was I lying, in some sense, by building something this large while accepting help I had not asked for a year earlier?

I have worked through that question carefully. My answer is: no. Mica is mine.

Every design decision is mine. Every architectural judgment is mine. I review every line that enters the codebase. I reject what doesn’t fit, what I don’t understand, what violates the principles I have been building toward. The AI services I work with don’t have a vision for Mica. I do.

But I also want to be honest about what it now takes to build something like this. Since January 2026, I no longer think of myself purely as a solo developer. I think of myself as the lead of a development team with very unusual members — ones who are available on Sunday mornings, in the middle of the night, at whatever pace the problem demands, without impatience, without ego. I still review and approve every single line. I still control every direction. But I am not carrying the full implementation weight alone anymore.

The volume of work had grown beyond what one person can hold. The compiler, the standard library, the VS Code extension, the tutorials, the documentation that must all stay consistent — it had grown to a size that requires a team. I found a way to scale myself while keeping control. That is the honest version of the story.

I am writing this now because I believe there is a version of this conversation — about AI co-development, about what it means for software authorship, about how a single engineer can take on an ambitious long-range project in the second half of life — that is worth having honestly. The Internet has plenty of fear and plenty of hype. I have neither. I have nine months of hard experience, a lot of frustration, and a compiler I am proud of.

That is why I am coming out of silence.


Chapter 18: For Niklaus Wirth

I need to end this story with something personal.

Professor Niklaus Wirth died on January 1, 2024. Quietly, at home in Switzerland. He was 89 years old. Most of the world did not notice. The technology press gave him a few polite lines. The language design community paid its respects. And then the news cycle moved on, as it always does.

I noticed.

Wirth had been my role model since my student days. He shaped how I think about software: that clarity is not a luxury, that complexity is not proof of sophistication, that a system should be comprehensible to the person who built it. His languages — and more than his languages, his philosophy — gave a generation of computer science students a foundation worth standing on. I was one of them.

I never met him personally. We had no correspondence. He did not know I existed. But I feel connected to him through his work in a way that is difficult to explain — the way any student can feel connected to a teacher who shaped their thinking without ever being in the same room.

I fear he is being forgotten. Not maliciously — just through the ordinary drift of time and fashion. The work he did is genuinely at risk of being swallowed by complexity, by the relentless appetite for the new. Young developers today may never have encountered his ideas. That troubles me.

Mica is my answer to that.

Not a recreation of his languages. Not a nostalgic exercise. Mica is a continuation of the underlying philosophy — that a language should be readable, that its semantics should be explicit, that the programmer should be able to understand what the machine is doing — but carried forward into 2026 and beyond. Into systems programming. Into direct C interoperability. Into a trajectory toward AI-native compiler semantics that Wirth could not have imagined but that, I believe, follows naturally from the foundation he established.

Mica is my way of paying respect to that work — and my modest attempt to carry it forward.


Where the Story Doesn’t End

Mica today is a complete, tested, native compiler for Linux x86_64. It generates real ELF binaries with DWARF v5 debug information. It has a 9-phase compilation pipeline, 538 automated test cases, an intermediate language named Spectra, seventeen peephole optimization passes, and a contract-based C interoperability system that can reach any C library on the system — directly, with full type checking, with no wrapper layer and no reimplementation.
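The article does not reproduce a contract, so purely as an illustration of the idea, a JSON contract declaring a single libc function might resemble the sketch below. The schema, field names, and type names here are invented for this example; they are not Mica's actual contract format.

```json
{
  "library": "libc.so.6",
  "functions": [
    {
      "name": "puts",
      "parameters": [
        { "name": "s", "type": "cstring" }
      ],
      "returns": "int32"
    }
  ]
}
```

The point of such a contract is that the compiler can type-check every call against the declared signature and emit a direct System V AMD64 call, with no wrapper code generated or maintained by hand.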

The road ahead is longer than the road behind.

Language completion. Variable initialization. Extended case with label ranges. The heap model — new, dispose, dynamic arrays, deterministic cleanup. Coroutines. A concurrency substrate built around nested procedures and functions rather than free-floating threads.

Standard library. String functions, numeric functions, file I/O — the complete surface a developer expects. POSIX contracts. Linux API contracts. The ability to reach any operating system API from structured Mica source, because the goal was never to build a private ecosystem — it was to open the one that already exists.

Optimizer. The peephole optimizer is a beginning. SSA transformation turns Spectra into a form where serious global optimization becomes tractable. Register allocation improves. Range analysis follows. Bounds-check elimination follows from that. The optimizer will grow from pattern matching into something that reasons about what programs do.
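The kind of local rewriting a peephole pass performs can be sketched in Go, the compiler's implementation language. Everything below is illustrative: the instruction type, field names, and patterns are invented for this sketch and are not Spectra's actual 37-operation set.

```go
package main

import "fmt"

// Instr is a minimal three-address-style instruction,
// invented here for illustration only.
type Instr struct {
	Op   string // e.g. "mul", "add", "mov"
	Dst  string
	Src1 string
	Src2 string
}

// peephole scans a linear instruction stream and rewrites small local
// patterns: x*1 and x+0 become moves, and self-moves are dropped.
// Real peephole optimizers typically repeat until a fixed point.
func peephole(in []Instr) []Instr {
	out := make([]Instr, 0, len(in))
	for _, ins := range in {
		switch {
		case ins.Op == "mul" && ins.Src2 == "1":
			out = append(out, Instr{Op: "mov", Dst: ins.Dst, Src1: ins.Src1})
		case ins.Op == "add" && ins.Src2 == "0":
			out = append(out, Instr{Op: "mov", Dst: ins.Dst, Src1: ins.Src1})
		case ins.Op == "mov" && ins.Dst == ins.Src1:
			// mov x, x is a no-op: drop it.
		default:
			out = append(out, ins)
		}
	}
	return out
}

func main() {
	prog := []Instr{
		{Op: "mul", Dst: "t1", Src1: "a", Src2: "1"},
		{Op: "add", Dst: "t2", Src1: "t1", Src2: "0"},
		{Op: "mov", Dst: "t2", Src1: "t2"},
	}
	for _, ins := range peephole(prog) {
		fmt.Printf("%s %s <- %s\n", ins.Op, ins.Dst, ins.Src1)
	}
	// Prints:
	// mov t1 <- a
	// mov t2 <- t1
}
```

What SSA adds on top of this is scope: a peephole pass only sees a small window of adjacent instructions, while SSA form lets the optimizer reason about a value's entire lifetime across the whole function.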

Platform. The ARM64 backend. Eventually, macOS. Both architectures production-tested with full debug information.

And further still: the AI-native direction that gives Mica its long-range purpose. Native vector and matrix types. Compile-time shape checking. CPU SIMD lowering. Compiler-native automatic differentiation. The connection between structured, readable, explicitly-typed source code and the numerical work that AI and machine learning demand.

None of that is today. But the foundation is solid enough to hold it.

I hope Mica finds its way into broad usage. I hope it gathers a community built on respect and openness — people who want to understand what their compiler is doing, people who believe that clarity and power are not opposites, people who want to carry a tradition forward without being trapped by its past.

That community does not exist yet.

This is its first invitation.


Technical Summary (March 2026)

Paradigm               Statically typed, compiled, procedural
Philosophy             Structured clarity. C control. Born for AI.
Type System            Strong, ABI-aware; integers, floats, booleans, characters, enumerations, subranges, sets, records, arrays, pointers, files
Aggregate Types        Records, arrays, packed variants — full ABI classification and DWARF v5 coverage
C Interop              Zero-overhead, System V AMD64 ABI, JSON contract model
Intermediate Language  Spectra (37-operation typed three-address code)
Optimizer              17 peephole passes; SSA planned for 4.6
Compiler               75,545 lines of Go, zero external dependencies
Test Harness           538 cases across execution, error, IL, and assembly layers
Target                 Linux x86_64, System V AMD64 ABI, ELF + DWARF v5
Debugging              DWARF v5, GDB, VS Code

Timeline

1980s–90s     First compiler (student project, ETH Zürich / PascalS)
     ↓
30 years      Career in IT
     ↓
Late 2023     Rediscovery of PL/0 — the journey begins
     ↓
Jan 1, 2024   Niklaus Wirth dies, aged 89
     ↓
Feb 2024      v1.0.0 — First working compiler
     ↓
Mar 2024      v2.0.0 — AST architecture, semantic analysis
     ↓
Spring 2025   THE BREAKTHROUGH — Native x86_64 code generation works
     ↓
Mid-2025      First AI collaboration attempts — a long, frustrating road begins
     ↓
Aug 2025      v3.0.1 — Full x86_64 support, DWARF v5 debugging
     ↓
Late 2025     Parameter passing crisis — the darkest stretch
     ↓
Dec 2025      Multi-file rework — the hardest month
     ↓
Jan 2026      v4.0.0 — Multi-file, tripled codebase; Mica Development UG founded
     ↓
Jan–Feb 2026  AI co-development model crystallizes; Spectra IL named
     ↓
Jan–Feb 2026  17 peephole optimizer passes; test harness crosses 500 cases
     ↓
Mar 14, 2026  v4.5.0 — Control flow complete, library API reset, process library
     ↓
2026          String model, SSA, heap model, stdlib surface, POSIX contracts
     ↓
2027          ARM64, complete language, production-grade optimizer
     ↓
2028          AI-native: vector types, shape checking, SIMD, autodiff

If you have a dream you have been putting off — something that seems too big, too late, too impossible — I hope this story encourages you to start anyway. The wall you are afraid of might be the one that teaches you something you cannot learn any other way.